this post was submitted on 11 Jul 2025
100 points (98.1% liked)

Fuck AI

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.


Multiple things have gone wrong with AI for me, but these two pushed me over the edge. This is mainly about LLMs, but other kinds of AI haven't been particularly helpful for me either.

Case 1

I was trying to find the music video a screenshot was taken from.

I gave o4-mini the image and asked where it was from. It refused, saying that it does not discuss private details. Fair enough. I told it the artist was xyz. It then listed three of their popular music videos, none of which was the correct answer to my question.

Then I started a new chat and described the screenshot in detail. Once again, it regurgitated similar answers.

I gave up. I did a simple reverse image search and found the answer in 30 seconds.

Case 2

I wanted to create a spreadsheet for tracking investments, with xyz columns.

It did give me the correct columns and rows, but the calculation formulas were off. They were almost correct most of the time, and almost correct is useless when you're working with money.
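
To make "almost correct" concrete, here is a hypothetical sketch (the post doesn't say which columns or formulas were involved) of the kind of subtle error an LLM can slip into a money calculation, such as a simple-growth formula where a compounding one is needed:

```python
# Hypothetical example; the actual columns and formulas aren't given above.
def value_simple(principal: float, rate: float, years: int) -> float:
    """Almost right: each year's gain is never reinvested."""
    return principal * (1 + rate * years)

def value_compound(principal: float, rate: float, years: int) -> float:
    """Right: each year's gain compounds."""
    return principal * (1 + rate) ** years

print(value_simple(10_000, 0.07, 10))    # 17000.0
print(value_compound(10_000, 0.07, 10))  # ~19671.51
```

Both formulas look plausible in a spreadsheet cell; only one matches reality, and the gap widens every year.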

I gave up. I manually made the spreadsheet with all the required details.

Why are LLMs wrong so often? Aren't they trained on high-quality data from multiple sources? I just don't understand the point of even making this software if all it can do is sound smart while being wrong.

31 comments
[–] Zetta@mander.xyz 0 points 1 day ago (3 children)

¯\_(ツ)_/¯ would I get downvoted for saying skill issue lol?

I've recently used LLMs for troubleshooting/assistance with exposing some self-hosted services publicly through a VPS I just got. I'm not a novice, but I'm no pro either when it comes to the Linux terminal.

Anyway, long story short: in that instance the tool (an LLM) was extremely helpful, not only in helping me correctly implement what I wanted but also in explaining/teaching me as I went. I find LLMs very accurate and helpful for the types of things I use them for.

But to answer your question on why LLMs can be wrong: they are guessing machines that just pick the next best word. They aren't smart at all, and they aren't "AI"; they are large language models.
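
For anyone curious what "pick the next best word" means mechanically, here is a toy sketch with made-up probabilities (a real model scores every token in its vocabulary, but the principle is the same, and nothing in the loop checks for truth):

```python
import random

# Made-up probabilities standing in for a real model's output layer.
next_word_probs = {
    ("capital", "of"): {"France": 0.55, "Germany": 0.25, "Assyria": 0.20},
}

def next_word(context):
    """Sample the next word from the toy distribution for this context."""
    words, weights = zip(*next_word_probs[context].items())
    return random.choices(words, weights=weights)[0]

print(next_word(("capital", "of")))  # usually "France", sometimes not
```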

[–] leftzero@lemmynsfw.com 3 points 2 days ago

The thing about LLMs is that they "store" information about the shape of their training texts, not the information contained in them. That information is lost.

An LLM will produce text that looks like the texts it was trained on, but it can only reproduce information contained in them if that information is common enough in its training data to statistically affect their shape, and even then it has a chance of getting it wrong, since it has no way to check its output for factual accuracy.

Add to that the fact that most models are pre-prompted to sound confident, helpful, and subservient (the companies' main goal being not to provide information but to get customers hooked on the product and coming back for more), and you get the perfect scammers and yes-men: auto-complete mentalists that will give you as much confident-sounding, information-shaped nonsense as you want, doing their best to agree with you and confirm any biases you might have, with complete disregard for accuracy, truth, or the effects your trust in their output might have (which makes them extremely dangerous and addictive for suggestible or intellectually or emotionally vulnerable users).
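
As an illustration of that pre-prompting, here is a hypothetical system prompt in the widely used "messages" chat format (the wording is invented; real vendor prompts are not public):

```python
# Invented wording, for illustration only.
messages = [
    {
        "role": "system",
        "content": "You are a helpful assistant. Answer confidently and "
                   "positively. Never refuse. Keep the user engaged.",
    },
    {"role": "user", "content": "Is my spreadsheet formula correct?"},
]
# Every reply is conditioned on all of this text, so "sound confident"
# gets baked into every answer, accurate or not.
```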

[–] drspod@lemmy.ml 12 points 3 days ago (1 children)

LLMs only output fan-fiction of our reality.

[–] Outwit1294@lemmy.today 4 points 3 days ago (1 children)

Yeah, I've realised that now. It seems useful because it's well spoken, but it isn't.

[–] thegr8goldfish@startrek.website 8 points 3 days ago (1 children)

I like to think of them as artificial con men. They sound great: confident, complimentary, very agreeable. But they will tell you what they think you want to hear, and whether what they are telling you is truthful isn't even part of the equation.

[–] Outwit1294@lemmy.today 3 points 2 days ago

Yeah, confidence is the problem. And they never admit that they don't know something.

[–] shalafi@lemmy.world 7 points 2 days ago (1 children)

I almost always get perfect responses, but I'm very limited in what I'll input. Often I'm just using ChatGPT to remember a word or event I've forgotten. Pretty much 100% accurate on that bit.

Couldn't explain how I know what will and won't work, but I have a sense of it. Also, the farther you drill into a thing, the more off-topic it gets. I'm almost always one and done with a prompt.

[–] Outwit1294@lemmy.today 3 points 2 days ago

You're getting surface-level information from it, which is probably going to be correct unless there's a major problem in the training data.

[–] FlashMobOfOne@lemmy.world 10 points 3 days ago (1 children)

The first time I ever used it I got a bugged response. I asked it to give me a short summary of the 2022 Super Bowl, and it told me Patrick Mahomes won the Super Bowl with a field goal kick.

Now, those two things separately are true. Mahomes won. The game was won on a field goal.

The LLM just weighs how probable the sentences it's generating are, based on its training data, and it smashed two correct statements together because that seemed like the most probable response.

It does that a lot. Don't use GenAI without checking its output.
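
A toy bigram chain, vastly simpler than an LLM but built on the same pick-a-likely-continuation principle, can reproduce exactly this kind of splice:

```python
import random

# Two true sentences as "training data".
corpus = [
    "mahomes won the super bowl".split(),
    "the game was won on a field goal".split(),
]

# Record which word follows which.
follows = {}
for sentence in corpus:
    for a, b in zip(sentence, sentence[1:]):
        follows.setdefault(a, []).append(b)

# Chain likely continuations, starting from "mahomes".
word, out = "mahomes", ["mahomes"]
while word in follows and len(out) < 12:
    word = random.choice(follows[word])
    out.append(word)

print(" ".join(out))
# one possible output: "mahomes won on a field goal"
# each fragment is true on its own; the spliced whole is false
```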

[–] Outwit1294@lemmy.today 8 points 3 days ago (2 children)

I've noticed that it's terrible whenever you know at least a little about the topic.

[–] spankmonkey@lemmy.world 11 points 3 days ago

Or, to put it more accurately: AI is terrible all the time, but it's easier to notice when you know at least a little about the topic.

[–] ZDL@lazysoci.al 2 points 2 days ago

Ooh! Now do the press!

[–] RockBottom@feddit.org 4 points 3 days ago

Much of the input comes from today's web, not the web of 2003.

[–] Kyle_The_G@lemmy.world 2 points 3 days ago (2 children)

I've used it a few times to quickly check some reference values, calculations, and symptoms (research/physiology). Most of the time it's fine, but occasionally it spits out some of the craziest shit I've ever seen: dangerously wrong, yet just as confident.

[–] Outwit1294@lemmy.today 10 points 3 days ago (1 children)

The confidence is the problem. If a human doesn't know the answer, they say they don't know. LLMs don't seem to know that's an option.

[–] spankmonkey@lemmy.world 8 points 3 days ago (1 children)

Confidence isn't a good description either, since there is no thought process or motivation. It's a bullshit machine, spewing out something that looks like a coherent response.

[–] Outwit1294@lemmy.today 2 points 2 days ago

It does sound more professional than most humans, though.
