this post was submitted on 07 Apr 2025
-18 points (35.0% liked)

Technology

What paid work might remain for human beings to do if we approach a world where AI is able to perform all economically useful tasks more productively than human beings? In this paper, I argue that the answer is not ‘none at all.’ In fact, there are good reasons to believe that tasks will still remain for people to do, due to three limits: ‘general equilibrium limits,’ involving tasks in which labor has the comparative advantage over machines (even if it does not have the absolute advantage); ‘preference limits,’ involving tasks where human beings might have a taste or preference for an un-automated process; and ‘moral limits,’ involving tasks with a normative character, where human beings believe they require a ‘human in the loop’ to exercise their moral judgment. In closing, I consider the limits to these limits as AI gradually, but relentlessly, becomes ever-more capable.

[–] Telorand@reddthat.com 7 points 1 week ago (1 children)

Not going to read most of this paper, because it reads like a freshman thesis, and it fundamentally oversells or misunderstands the existing limits on AI.

> In closing, I consider the limits to these limits as AI gradually, but relentlessly, becomes ever-more capable.

The AI technofascists building these systems have explicitly said they've hit a wall. They're having to invest in their own power plants just to run these models. They have scores of racks of GPUs, so they're dependent on the silicon market. AI isn't becoming "ever more capable"; it's merely pushing the limits of what they have left.

And all the while, these projects are still propped up almost entirely by venture capital. They're an answer to a problem nobody is having.

> Put another way, if the leaders of the AI companies are right in their predictions, and we do build AGI in the short- to medium-term, will these limits be able to withstand such remarkable progress?

Again, the leaders are doing their damnedest to convince investors that this stuff will pay off one day. The reality is that they have yet to do anything close to that, and investors are going to get tired of pumping money into something that doesn't return on that investment.

AI is not some panacea that will magically make ultracapitalists more wealthy, and the sooner they realize that, the sooner we can all move on—like we did with the Metaverse and blockchain.

[–] masterspace@lemmy.ca 0 points 1 week ago* (last edited 1 week ago) (1 children)

> The AI technofascists building these systems have explicitly said they've hit a wall. They're having to invest in their own power plants just to run these models. They have scores of racks of GPUs, so they're dependent on the silicon market. AI isn't becoming "ever more capable"; it's merely pushing the limits of what they have left.

While I agree that this paper sounds like a freshman thesis, I think you're betraying your own lack of knowledge here.

Because no, they haven't said they've hit a wall. And while there are reasons to be skeptical of the brute-force scaling approach a lot of companies are taking, those companies are doing it because they have massive amounts of capital: scaling is an easy way to spend capital to improve your model's results while your researchers figure out how to build better models, leaving you in a better market position when the next breakthrough or advancement happens.

The reasoning models of today, like o1 and Claude 3.7, are substantially more capable than the faster models that predate them. You can argue that the resource/speed trade-off isn't worth it, but they're also the very first generation of models that try to integrate LLMs into a more logical reasoning framework.

This is on top of broader uses of AI that are rapidly becoming more capable. The fuzzy pattern matching techniques that underpin these models have literally already revolutionized fields like protein structure prediction, all the result of a single targeted DeepMind project.

The techniques behind AI let computers solve whole new classes of problems that weren't possible before; dismissing that is just putting your head in the sand.

And yes, companies are still dependent on silicon and energy, which is why they're vertically integrating and starting to try to produce those on their own. That's not a sign that they see AI as a waste of time.

[–] VerticaGG@lemmy.blahaj.zone 0 points 1 week ago* (last edited 1 week ago) (1 children)

Whatever way you slice it, "AI" at this point in time is a gross waste of precious resources, and it betrays the utter contempt the Owning class harbors for the Other and the Worker.

It's really not a "whole new class" of doing things. Save for a gimmick for improv troupes to do a one-off, in practice it amounts to "why pay for workers when you can automate machines to pass off an unreliable result as a substitute, then send dissatisfied customers into support-line limbo," devaluing the labor while offering a "platinum plan" to wealthy clients willing to be extorted to talk to a human from your business.

STEM examples? Maybe. But be real: that's not the majority of what's wrong, and even where it is, the models are trained on socially prejudiced data sets.

[–] masterspace@lemmy.ca 2 points 1 week ago* (last edited 1 week ago) (1 children)

Bruh, I get how you feel, but your complaints are with capitalism, not algorithms that are wildly better than previous ones at fuzzy pattern matching.

Here is an example of how AI has already literally revolutionized science through one targeted project:

https://m.youtube.com/watch?v=P_fHJIYENdI

This work won the Nobel prize in chemistry.

And my best friend literally did his PhD in protein crystallography, is at MIT doing a protein structural analysis Post Doc, and the work of the new AI based protein structural predictions has literally completely changed the direction of their lab's research, basically overnight.

Because yes, AI algorithms literally are able to solve a new class of problems. It's literally what this old pre-LLM xkcd is talking about: https://xkcd.com/1425/ and while it asks for confirmation of a 'bird', identifying photos of, say, cancer is the exact same problem from an algorithmic standpoint, as are a huge number of other fuzzy pattern matching problems.

Yeah, there are a lot of dumb tech bros overhyping AI, and a lot of giant corporations that care about using it for literally nothing but getting personally richer. But you're going to be misinformed in the other direction about its genuine usefulness if you read nothing but AI doomer blogs from people who don't actually bother to use or understand the technology.

[–] VerticaGG@lemmy.blahaj.zone 0 points 1 week ago* (last edited 1 week ago) (1 children)

Capitalism is merely a child of hierarchical domination. https://www.youtube.com/watch?v=-1ZK2-viyAo

Real shit, I'm glad you're able to find a few diamonds in the rough. BUT, from the fashy techbros you mentioned to corpo-wide mainstream forcefeeding of it, it's absolutely a net negative.

Truly, I love new tech. Always have. I wanna love AI... but as things stand I come to the inevitable conclusion that it is tossing gas on the fires of the climate crisis, of social and economic inequity, and, sweet summer child, soooo much more. I'm far from a doomer. Pull your head out your ass.

"bruh"

[–] masterspace@lemmy.ca 0 points 1 week ago* (last edited 1 week ago) (1 children)

> Real shit, I'm glad you're able to find a few diamonds in the rough. BUT, from the fashy techbros you mentioned to corpo-wide mainstream forcefeeding of it, it's absolutely a net negative.

In what way are they causing more harm than they were with crypto, or with gamification, or with social media, or with whatever tech fad came before that?

The point is that tech bros and con men have always existed and have always shilled overhyped shit. That's a reality of the world we live in, not a new invention of AI.

And by "few diamonds in the rough", I assume you mean a literal entirely new class of problems that computers were unable to solve before?

> Truly, I love new tech. Always have. I wanna love AI... but as things stand I come to the inevitable conclusion that it is tossing gas on the fires of the climate crisis, of social and economic inequity, and, sweet summer child, soooo much more. I'm far from a doomer.

Just because you bookend your doomer statement with 'I love tech' and 'I'm far from a doomer' doesn't make it not a doomer statement. You literally open it by saying your pessimistic conclusion is inevitable.

[–] VerticaGG@lemmy.blahaj.zone 1 points 5 days ago (1 children)

lmao, refusing to drink your technocratic poor excuse for a "utopian" vision does not make one a doomer

idc if you watch it. this is me not granting you the engagement

[–] masterspace@lemmy.ca 0 points 5 days ago

Those who can't express something simply don't understand it.

Figure out how to make your point in less than 45 minutes or don't bother making it.