Try this on your friends: make up an idiom, walk up to them, say it without context, and then ask "meaning?" and see how they respond.
Pretty sure most of mine will just make up a bullshit response and go along with what I'm saying unless I give them more context.
There are genuinely interesting limitations to LLMs and the newer reasoning models, and I find it interesting to see what we can learn from them. This is just ham-fisted robo-gotcha journalism.
My friends would probably say something like "I've never heard that one, but I guess it means something like ..."
The problem is, these LLMs give no indication when they're making stuff up versus when they're repeating an incontrovertible truth. Lots of people don't understand the limitations of things like Google's AI summary*, so they will trust these false answers. Harmless here, but often not.
* I'm not counting the little disclaimer, because we've been taught to ignore small print from being faced with so much of it.
Ok, but the point is that lots of people would just say something and then figure out if it's right later.
Quite frankly, you sound like a middle school teacher getting hysterical about Wikipedia being wrong sometimes.
LLMs are already being used for policy making, business decisions, software creation and the like. The issue is bigger than summarisers, and "hallucinations" are a real problem when they lead to real decisions and real consequences.
If you can't imagine why this is bad, maybe read some Kafka or watch some Black Mirror.
Lmfao. Yeah, ok, let's get my predictions from the depressing show dedicated to being relentlessly pessimistic at every single decision point.
And yeah, like I said, you sound like my hysterical middle school teacher claiming that Wikipedia will be society's downfall.
Guess what? It wasn't. People learned that the tool was error-prone and came up with strategies to use it while correcting for potential errors.
Like at a fundamental, technical level, components of a system can be error-prone but still be useful overall. Quantum calculations have inherent probabilities and errors in them, yet for some types of problems a quantum computer is still so much faster that you can run the same calculation 100 times, average out the results to remove the outlying errors, and get to the right answer far sooner than a classical computer would.
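Here's a minimal sketch of that repeat-and-aggregate idea in Python. To be clear, noisy_compute is a made-up stand-in for any unreliable step (a quantum shot, a flaky sensor, an LLM answer), not a real quantum API:

```python
import random

def noisy_compute(true_answer: int, error_rate: float = 0.2) -> int:
    # Stand-in for one run of an error-prone computation (e.g. one
    # quantum "shot"): right ~80% of the time, an outlier otherwise.
    if random.random() < error_rate:
        return true_answer + random.choice([-1, 1])
    return true_answer

def repeated_compute(true_answer: int, shots: int = 100) -> int:
    # Run the unreliable step many times and keep the most common
    # result, so the scattered errors wash out.
    results = [noisy_compute(true_answer) for _ in range(shots)]
    return max(set(results), key=results.count)

print(repeated_compute(42))  # almost always prints 42
```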
Computer chips in satellites and on the space station are constantly having random bits of memory flipped by cosmic rays, but they still work fine because they use error-correcting (ECC) RAM, which stores extra check bits it can use to detect and repair those flips.
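Real ECC hardware does this in silicon, but here's a toy Hamming(7,4) code in Python that shows the same trick: 3 extra parity bits protect 4 data bits, so any single flipped bit can be located and flipped back.

```python
def hamming_encode(d: list[int]) -> list[int]:
    # Encode 4 data bits as a 7-bit codeword, laid out (1-indexed)
    # as: p1 p2 d1 p3 d2 d3 d4.
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4  # parity over positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4  # parity over positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4  # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming_correct(c: list[int]) -> list[int]:
    # Recompute the parities; the syndrome spells out the 1-indexed
    # position of a single flipped bit (0 means no error).
    p1, p2, d1, p3, d2, d3, d4 = c
    syndrome = ((p1 ^ d1 ^ d2 ^ d4)
                + 2 * (p2 ^ d1 ^ d3 ^ d4)
                + 4 * (p3 ^ d2 ^ d3 ^ d4))
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1  # flip the corrupted bit back
    return [c[2], c[4], c[5], c[6]]  # recover the 4 data bits

codeword = hamming_encode([1, 0, 1, 1])
codeword[4] ^= 1                  # a "cosmic ray" flips one bit
print(hamming_correct(codeword))  # still prints [1, 0, 1, 1]
```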
And at a super high level, some of my friends and coworkers are more reliable than others. That doesn't mean the less reliable ones aren't helpful; it just means I have to take what they say with a grain of salt.
Designing for error correction is a thing, and people are perfectly capable of doing so in their personal lives.
And this is why humans are bad: a tool is neither good nor bad. Sure, a tool can consume a huge amount of resources to develop only to be completely obsolete in a year, but only humans (so far) have the ability (and stupidity) to be both in charge of millions of lives and to trust a bunch of lithographed rocks to set tariff rates for uninhabited islands (and the rest of the world).
My friends aren't burning up the planet just to come up with that useless response though.
Yes, they literally are. Or maybe you haven't heard of human-caused climate change?
You dumb
It highlights the fact that these LLMs refuse to say "I don't know", which essentially means we cannot rely on them for any factual reporting.
But a) they don't refuse; most will tell you if you prompt them well, and b) you cannot rely on them as the sole source of truth, but an information machine can still be useful if it's right most of the time.
So, you have friends who are as stupid as an AI. Got it. What's your point?
Yeah, mine would say, "What you talkin' 'bout, Willis?"