this post was submitted on 17 Apr 2025
30 points (96.9% liked)

Hacker News

you are viewing a single comment's thread
[–] br3d@lemmy.world 23 points 4 months ago (2 children)

This is how an LLM will always work. It doesn't understand anything; it just predicts the next word from the words so far, based on patterns learned from huge amounts of text. There is no "knowledge" in there, so stop asking these things questions and expecting useful answers.
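The "predict the next word from the words so far" loop can be sketched with a toy bigram model. This is a hypothetical illustration, not how a real LLM is built (real models use neural networks over subword tokens), but the generation loop is the same shape:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# made-up corpus, then always emit the most frequent follower.
corpus = "the cat sat on the mat and the cat ran".split()

follow = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    follow[cur][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return follow[word].most_common(1)[0][0]

def generate(start, length=4):
    """Repeatedly append the predicted next word -- no understanding,
    just statistics over the training text."""
    out = [start]
    for _ in range(length):
        out.append(predict_next(out[-1]))
    return " ".join(out)

print(generate("the"))
```

Nothing in `follow` encodes what a cat or a mat *is*; the model only knows which words tend to come next, which is the commenter's point scaled down to ten words.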

[–] cmhe@lemmy.world 6 points 4 months ago* (last edited 4 months ago) (1 children)

Yeah, I don't understand why people seem to be surprised by that.

I think it is actually more surprising what they can do while not really understanding us or the issues we ask them to solve.

[–] Retro_unlimited@lemmy.world 1 points 4 months ago (1 children)

An LLM is just a “random sentence generator”.

[–] br3d@lemmy.world 3 points 4 months ago

Not quite. It's more of an "average sentence generator" - which is one reason to be skeptical: written text will tend to get more average and bland over time.
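The "average, not random" distinction maps onto decoding strategy. A minimal sketch, using a made-up next-word distribution: greedy decoding (always take the most probable word) deterministically emits the blandest continuation, while sampling at least lets rarer words through:

```python
import random

# Hypothetical next-word distribution for illustration only.
next_word_probs = {"said": 0.5, "whispered": 0.3, "bellowed": 0.2}

def greedy(dist):
    """Argmax decoding: always the single most probable word."""
    return max(dist, key=dist.get)

def sample(dist, rng):
    """Sample a word in proportion to its probability."""
    words, probs = zip(*dist.items())
    return rng.choices(words, weights=probs, k=1)[0]

rng = random.Random(0)
print(greedy(next_word_probs))   # deterministic: the modal word every time
print({sample(next_word_probs, rng) for _ in range(20)})
```

Under greedy decoding the distinctive options ("whispered", "bellowed") never appear at all, which is one mechanism behind text regressing toward the average.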