this post was submitted on 28 Jun 2025
532 points (94.3% liked)

Technology


We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.
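The "guesses which letter and word will come next" description is next-token prediction. A toy sketch of the idea, assuming nothing about any particular model: a bigram frequency table stands in for the trained network (real systems predict tokens, not letters, and use billions of learned weights rather than counts), and "writing" is just repeated sampling of a likely continuation.

```python
import random
from collections import Counter, defaultdict

# A tiny corpus; a real LLM is trained on terabytes of text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: this bigram table stands in for the trained weights.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_token(prev):
    """Sample the next word in proportion to how often it followed `prev` in the corpus."""
    counts = following[prev]
    if not counts:  # word never seen with a successor; fall back to a random word
        return random.choice(corpus)
    tokens, weights = zip(*counts.items())
    return random.choices(tokens, weights=weights)[0]

# "Writing" is just repeatedly guessing what comes next.
token = "the"
output = [token]
for _ in range(6):
    token = next_token(token)
    output.append(token)
print(" ".join(output))
```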

This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).

Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.

https://archive.ph/Fapar

top 50 comments
[–] Bogasse@lemmy.ml 7 points 2 hours ago

The idea that RAGs "extend their memory" is also complete bullshit. We literally just finally built a working search engine, but instead of giving it a nice interface we only let chatbots use it.
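For what it's worth, a minimal sketch of what a RAG pipeline boils down to, with illustrative names rather than any particular framework's API: an ordinary keyword search over documents, with the hits pasted into the prompt.

```python
# Illustrative RAG loop: step 1 is a plain search engine,
# step 2 just prepends the hits to the question.

DOCUMENTS = [
    "The warranty covers parts and labour for 24 months.",
    "Returns are accepted within 30 days with a receipt.",
    "Support is available Monday to Friday, 9am to 5pm.",
]

def retrieve(query, docs, k=2):
    """Rank documents by naive keyword overlap (a stand-in for BM25 or embeddings)."""
    q_words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query, docs):
    """The 'extended memory' is literally just retrieved text pasted above the question."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How long is the warranty?", DOCUMENTS))
```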

[–] Sorgan71@lemmy.world -3 points 38 minutes ago (1 children)

The machinery needed for human thought is certainly a part of AI. At most you can only claim it's not intelligent because intelligence is a specifically human trait.

[–] Zacryon@feddit.org 4 points 34 minutes ago

We don't even have a clear definition of what "intelligence" is. Yet a lot of people are claiming that they themselves are intelligent while AI models are not.

[–] aceshigh@lemmy.world 10 points 5 hours ago (3 children)

I’m neurodivergent, and I’ve been working with AI to help me learn about myself and how I think. It’s been exceptionally helpful. A human wouldn’t have been able to help me because I don’t use my senses or emotions like everyone else, and I didn’t know it... AI excels at mirroring and support, which was exactly what was missing from my life. I can see how this could go very wrong with certain personalities…

[–] PushButton@lemmy.world 3 points 1 hour ago* (last edited 1 hour ago)

That sounds fucking dangerous... You really should consult a HUMAN expert about your problem, not an algorithm made to please the interlocutor...

[–] Snapz@lemmy.world 13 points 2 hours ago

This is very interesting... because the general observation is that AI is convincing to non-experts in whatever field it's speaking about. So in your specific case, you're essentially saying that you aren't an expert on yourself, which is why the AI's assessment is convincing to you. Not trying to upset you; it's genuinely fascinating how that theory holds true here as well.

[–] biggerbogboy@sh.itjust.works 4 points 4 hours ago (1 children)

Are we twins? I do the exact same thing, and have for around a year now. I've also found it pretty helpful.

[–] Liberteez@lemm.ee 3 points 1 hour ago

I did this for a few months when it was new to me, and still go to it when I am stuck pondering something about myself. I usually move on from the conversation by the next day, though, so it's just an inner dialogue enhancer

[–] psycho_driver@lemmy.world 7 points 7 hours ago (1 children)

Hey, AI helped me stick it to the insurance man the other day. I was futzing around with coverage amounts on one of the major insurance companies' websites pre-renewal, trying to get the best rate, and it spit up a NaN renewal amount for our most expensive vehicle. It let me go through with the renewal for less than $700, and it now says I'm paid in full for the six-month period. It's been days now with no follow-up... I'm pretty sure AI snuck that one through for me.

[–] laranis@lemmy.zip 7 points 6 hours ago

Be careful... If you get in an accident I guaran-god-damn-tee you they will use it as an excuse not to pay out. Maybe after a lawsuit you'd see some money but at that point half of it goes to the lawyer and you're still screwed.

[–] bbb@sh.itjust.works 10 points 8 hours ago (2 children)

This article is written in such a heavy ChatGPT style that it's hard to read. Asking a question and then immediately answering it? That's AI-speak.

[–] JackbyDev@programming.dev 3 points 2 hours ago

Asking a question and then immediately answering it? That's AI-speak.

HA HA HA HA. I UNDERSTOOD THAT REFERENCE. GOOD ONE. 🤖

[–] sobchak@programming.dev 8 points 8 hours ago (1 children)

And excessive use of em-dashes, which is the first thing I look for. He does say he uses LLMs a lot.

[–] bbb@sh.itjust.works 10 points 5 hours ago* (last edited 5 hours ago) (2 children)

"…" (Unicode U+2026 Horizontal Ellipsis) instead of "..." (three full stops), and using them unnecessarily, is another thing I rarely see from humans.

Edit: Huh. Lemmy automatically changed my three full stops to the Unicode character. I might be wrong on this one.
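For anyone who wants to verify which characters a post actually contains rather than eyeballing it, a small Python check (the sample string is just illustrative):

```python
import unicodedata

# The characters in question: the single-character ellipsis and the dash family vs plain ASCII.
for ch in ["…", "—", "–", "-", "."]:
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")

# Telling "…" (U+2026) apart from "..." (three full stops) in a piece of text:
sample = "Am I… AI?"
print("\u2026" in sample)  # True: the single-character ellipsis is present
print("..." in sample)     # False: no run of three ASCII full stops
```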

[–] mr_satan@lemmy.zip 2 points 2 hours ago (1 children)

Am I… AI? I do use ellipses and (what I now see are) en dashes for punctuation. Mainly because they are longer than hyphens and look better in a sentence. The em dash looks too long.

However, that's on my phone. On a normal keyboard I use 3 periods and 2 hyphens instead.

[–] Sternhammer@aussie.zone 2 points 32 minutes ago

I’ve long been an enthusiast of unpopular punctuation—the ellipsis, the em-dash, the interrobang‽

The trick to using the em-dash is not to surround it with spaces, which tend to break up the text visually. So, this feels good—to me—whereas this — feels unpleasant. I learnt this approach from reading typographer Erik Spiekermann's book, *Stop Stealing Sheep & Find Out How Type Works*.

[–] sqgl@sh.itjust.works 2 points 4 hours ago

Edit: Huh. Lemmy automatically changed my three full stops to the Unicode character.

Not on my phone it didn't. It looks as you intended it.

[–] mechoman444@lemmy.world 6 points 10 hours ago* (last edited 10 hours ago) (1 children)

In that case let's stop calling it AI, because it isn't, and use its correct abbreviation: LLM.
