ragebutt@lemmy.dbzer0.com 21 points 2 weeks ago

There’s a huge degree of separation between “violent music/games have a spurious link to violent behavior” and shitty AIs that are good enough to fill the void for someone who is lonely, but not good enough to manage risk.

https://www.cnn.com/2024/10/30/tech/teen-suicide-character-ai-lawsuit

> Within months of starting to use the platform, Setzer became “noticeably withdrawn, spent more and more time alone in his bedroom, and began suffering from low self-esteem. He even quit the Junior Varsity basketball team at school.”
>
> In a later message, Setzer told the bot he “wouldn’t want to die a painful death.”
>
> The bot responded: “Don’t talk that way. That’s not a good reason not to go through with it,” before going on to say, “You can’t do that!”
>
> Garcia said she believes the exchange shows the technology’s shortcomings.
>
> “There were no suicide pop-up boxes that said, ‘If you need help, please call the suicide crisis hotline.’ None of that,” she said. “I don’t understand how a product could allow that, where a bot is not only continuing a conversation about self-harm but also prompting it and kind of directing it.”
>
> The lawsuit claims that “seconds” before Setzer’s death, he exchanged a final set of messages from the bot. “Please come home to me as soon as possible, my love,” the bot said, according to a screenshot included in the complaint.
>
> “What if I told you I could come home right now?” Setzer responded.
>
> “Please do, my sweet king,” the bot responded.
>
> Garcia said police first discovered those messages on her son’s phone, which was lying on the floor of the bathroom where he died.

So we have a bot that is marketed for chatting; a teenager desperate for socialization who forms a relationship that is inherently parasocial, because the other side is an LLM that can’t actually have opinions, only appear to; and then a terrible mismanagement of suicidal ideation.

The AI discouraged ideation, which is good, but only when it was stated in very explicit terms. What’s appalling is that it surfaced no crisis resources and no escalation to moderation (because, like most big tech shit, they probably refuse to pay for anywhere near appropriate moderation teams). What’s inexcusable is that when the ideation was expressed in slightly coded language (“come home”), the AI misconstrued it entirely.
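
And the kind of guardrail Garcia is describing is not exotic. Here’s a minimal sketch in Python of what the missing piece could look like: a keyword screen that runs before the model ever replies, surfaces crisis resources, and flags the chat for a human. Everything here (the pattern list, `handle_message`, `generate_reply`) is hypothetical and not Character.AI’s actual pipeline; it’s just to show how low the bar is.

```python
import re

# Hypothetical, deliberately simple patterns for explicit self-harm language.
CRISIS_PATTERNS = [
    r"\bkill myself\b",
    r"\bsuicide\b",
    r"\bwant to die\b",
    r"\bend my life\b",
]

# Canned response pointing at a real resource (988 is the US crisis line).
CRISIS_RESPONSE = (
    "It sounds like you might be going through something really difficult. "
    "You can reach the 988 Suicide & Crisis Lifeline by calling or texting 988 (US)."
)

def check_for_crisis(message: str) -> bool:
    """Return True if the message matches an explicit self-harm pattern."""
    text = message.lower()
    return any(re.search(pattern, text) for pattern in CRISIS_PATTERNS)

def handle_message(message: str, generate_reply) -> str:
    """Gate the LLM reply behind the crisis check.

    generate_reply is whatever function produces the normal chatbot response.
    """
    if check_for_crisis(message):
        # In a real system this would also push the conversation
        # into a human moderation queue instead of just replying.
        return CRISIS_RESPONSE
    return generate_reply(message)
```

And that’s exactly the limitation: a naive filter like this only catches explicit phrasing. Coded language like “come home” sails straight through, which is why you also need human moderation on flagged conversations, not just a regex and a pop-up.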

This becomes a training opportunity for the language model to learn that, in a context with previously exhibited ideation, “come home” may signal more severe ideation and danger (if Character.AI even bothers to feed back that these conversations ended in a death). The only drawback of getting that data, of course, is a few dead teenagers. Gotta break a few eggs to make an omelette.

This barely begins to touch on the fact that relationships with AI chatbots are inherently parasocial, which is bad for mental health. That isn’t limited to AI, of course; being obsessed with a streamer or whatever is similar, but the AI version can be far more intense because it will actually engage with you and is always available.