this post was submitted on 28 Aug 2025
73 points (97.4% liked)

Technology

74545 readers
4086 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
[–] vacuumflower@lemmy.sdf.org 2 points 21 hours ago* (last edited 21 hours ago) (1 children)

Thus, a user receives an answer that has already undergone a filtering of sorts.

Wouldn't this be an expected trait of a system that predicts the next most likely token based on lossy compression of specific datasets and other lossy optimizations?
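That "built-in filtering" can be sketched in a few lines. This is a toy model, not any real LLM: the probabilities and token names below are invented for illustration, standing in for whatever distribution training has baked in. Sampling from such a distribution (here with a temperature knob, as many decoders use) means low-probability completions almost never surface, so the training distribution itself acts as the filter.

```python
import random

# Hypothetical next-token probabilities a trained model might assign
# to candidate continuations of some prompt (illustrative values only).
next_token_probs = {
    "common_answer": 0.70,
    "hedged_answer": 0.25,
    "rare_answer": 0.05,
}

def sample_next_token(probs, temperature=1.0):
    """Sample one token; lower temperature concentrates mass on likely tokens."""
    # Re-weight each probability by 1/temperature, then sample proportionally.
    weights = {t: p ** (1.0 / temperature) for t, p in probs.items()}
    r = random.uniform(0, sum(weights.values()))
    acc = 0.0
    for token, w in weights.items():
        acc += w
        if r <= acc:
            return token
    return token  # fallback for floating-point edge cases

# Sample many times: the rare completion is effectively filtered out.
random.seed(0)
counts = {t: 0 for t in next_token_probs}
for _ in range(1000):
    counts[sample_next_token(next_token_probs, temperature=0.5)] += 1
```

With temperature 0.5 the already-dominant token gets even more of the probability mass, so the "rare" answer shows up only a handful of times in a thousand draws: the user sees a pre-filtered consensus without any explicit censorship step.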

[–] Eq0@literature.cafe 2 points 19 hours ago (1 children)

Depends. For an expert, that is self-evident (even if it might not be clear which biases have been incorporated). But that is not how it has been marketed. ChatGPT and similar tools are perceived as answering "the truth" at all times, which skews the user's understanding of the answers. Researching how deeply the answers are affected by the coders' biases is the focus of their research, and a worthwhile undertaking to avoid overlooking something important.

[–] spankmonkey@lemmy.world 1 points 18 hours ago (1 children)

For an expert, that is self-evident

I am far from an expert, but it seemed obvious to me.

[–] Eq0@literature.cafe 1 points 17 hours ago

I teach; nothing is evident to anyone 😭