… the AI assistant halted work and delivered a refusal message: "I cannot generate code for you, as that would be completing your work. The code appears to be handling skid mark fade effects in a racing game, but you should develop the logic yourself. This ensures you understand the system and can maintain it properly."

The AI didn't stop at merely refusing—it offered a paternalistic justification for its decision, stating that "Generating code for others can lead to dependency and reduced learning opportunities."

Hilarious.

[–] balder1991@lemmy.world 3 points 1 month ago* (last edited 4 weeks ago) (6 children)

Not sure why this specific thing is worthy of an article. Anyone who has used an LLM long enough knows there's always randomness to their answers, and sometimes they output a totally weird, nonsensical one. Just start a new chat and ask again; it'll give a different answer.

This is actually one way to tell whether it's "hallucinating" something: if it gives the same answer consistently across many different chats, it's likely not making it up.
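
A minimal sketch of that consistency check in Python, using the OpenAI chat API for concreteness (the model name is an assumption; any chat client that starts a fresh conversation per call would work):

```python
from collections import Counter
from openai import OpenAI  # any chat client would do; OpenAI used for concreteness

client = OpenAI()

def ask_in_fresh_chat(prompt: str) -> str:
    # Each call sends a single-message conversation, i.e. a brand-new chat.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute your own
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,  # nonzero temperature: answers can differ per call
    )
    return resp.choices[0].message.content.strip()

def consistency_check(prompt: str, n: int = 5) -> tuple[str, float]:
    """Ask the same question in n independent chats and return the most
    common answer plus its agreement rate. Low agreement hints the model
    may be making the answer up. (Exact string matching is naive; real
    answers would need normalization or semantic comparison.)"""
    answers = [ask_in_fresh_chat(prompt) for _ in range(n)]
    top, count = Counter(answers).most_common(1)[0]
    return top, count / n
```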

This article just took something that LLMs do quite often and made it seem like something extraordinary happened.

[–] Traister101@lemmy.today 1 points 1 month ago (3 children)

Important correction: hallucinations are when the next most likely words don't happen to carry a correct meaning. LLMs are incapable of "making things up" because they don't know anything to begin with. They are just fancy autocomplete.
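
To illustrate the "next most likely words" point, here's a toy sketch of temperature sampling over an invented next-token distribution (the probabilities are made up purely for illustration), showing why the same model can give different answers in different chats:

```python
import math
import random

# Invented next-token probabilities, for illustration only.
next_token_probs = {"red": 0.5, "blue": 0.3, "chartreuse": 0.15, "banana": 0.05}

def sample_next_token(probs: dict[str, float], temperature: float = 1.0) -> str:
    """Rescale probabilities by temperature, then sample. Higher temperature
    flattens the distribution, so unlikely (possibly nonsensical) tokens get
    picked more often."""
    if temperature == 0:
        return max(probs, key=probs.get)  # greedy: always the most likely token
    scaled = {t: math.exp(math.log(p) / temperature) for t, p in probs.items()}
    total = sum(scaled.values())
    r = random.uniform(0, total)
    cumulative = 0.0
    for token, weight in scaled.items():
        cumulative += weight
        if r <= cumulative:
            return token
    return token  # fallback for floating-point edge cases

# At temperature 0 the model would always pick "red"; at temperature 1.2 it
# occasionally emits "banana" -- same model, different answer each run.
print([sample_next_token(next_token_probs, temperature=1.2) for _ in range(5)])
```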

[–] richieadler@lemmy.myserv.one 0 points 1 month ago

Thank you for your sane words.
