this post was submitted on 19 Jul 2025

Hacker News


Posts from the RSS Feed of HackerNews.

The feed sometimes contains ads and posts that have been removed by the mod team at HN.

founded 10 months ago
[–] lvxferre@mander.xyz 7 points 5 days ago

To add to this further:

If someone asks you a question, and you answer it with "here's what ChatGPT says: [insert output]", there are two possibilities.

One of them is that the person does not want LLM output. In that case, you're shitting on their consent.

But if the person does want LLM output, it's still bad - you're basically telling them "I assume you're too stupid to ask the bot directly, but thankfully even filth like you has someone like ME! to spoonfeed it."

It is different if you have the technical expertise necessary to call out the LLM's bullshit on that topic. But then you aren't just parroting the slop, you're fixing it into non-slop.