[–] bampop@lemmy.world 2 points 5 days ago (2 children)

I think the author was quite honest about the weak points in his thesis, by drawing comparisons with cars, and even with writing. Cars come at great cost to the environment, to social contact, and to the health of those who rely on them. And maybe writing came at great cost to our mental capabilities though we've largely stopped counting the cost by now. But both of these things have enabled human beings to do more, individually and collectively. What we lost was outweighed by what we gained. If AI enables us to achieve more, is it fair to say it's making us stupid? Or are we just shifting our mental capabilities, neglecting some faculties while building others, to make best use of the new tool? It's early days for AI, but historically, cognitive offloading has enhanced human potential enormously.

[–] joel_feila@lemmy.world 2 points 5 days ago (1 children)

Well, the slide rule was a form of cognitive offloading, but only barely; you still had to know how to use it and which formula to apply. Moving to the pocket calculator just changed how you did the arithmetic; it didn't really increase how much thinking we offloaded.

But this is something different. With infinite-content algorithms making the next choice of what we watch, and people blindly trusting whatever an LLM says, we are now offloading not just a complex task like the square root of 55, but "what do I want to watch?" and "how do I know this is true?"

[–] bampop@lemmy.world 2 points 5 days ago (1 children)

I agree that it's on a whole other level, and it poses challenging questions as to how we might live healthily with AI, to get it to do what we don't benefit from doing, while we continue to do what matters to us. To make matters worse, this is happening in a time of extensive dumbing down and out of control capitalism, where a lot of the forces at play are not interested in serving the best interests of humanity. As individuals it's up to us to find the best way to live with these pressures, and engage with this technology on our own terms.

[–] joel_feila@lemmy.world 2 points 5 days ago (1 children)

> how we might live healthily with AI, to get it to do what we don’t benefit from doing,

Agreed, that is our goal, but one issue I have is AI not paying for its training data. Also, and this is the biggest one: what benefits me is not what benefits the people owning the AI models.

[–] bampop@lemmy.world 2 points 5 days ago

> What benefits me is not what benefits the people owning the ai models

Yep, that right there is the problem

The article agrees with you; it's just a caution against over-use. LLMs are great for many tasks, just make sure you're not short-changing yourself. I use them to automate annoying tasks, and I avoid them when I need to actually learn something.