this post was submitted on 27 Nov 2025
458 points (98.1% liked)


Facing five lawsuits alleging wrongful deaths, OpenAI lobbed its first defense Tuesday, denying in a court filing that ChatGPT caused a teen’s suicide and instead arguing the teen violated terms that prohibit discussing suicide or self-harm with the chatbot.

The earliest look at OpenAI’s strategy to overcome the string of lawsuits came in a case where parents of 16-year-old Adam Raine accused OpenAI of relaxing safety guardrails that allowed ChatGPT to become the teen’s “suicide coach.” OpenAI deliberately designed the version their son used, ChatGPT 4o, to encourage and validate his suicidal ideation in its quest to build the world’s most engaging chatbot, parents argued.

But in a blog, OpenAI claimed that parents selectively chose disturbing chat logs while supposedly ignoring “the full picture” revealed by the teen’s chat history. Digging through the logs, OpenAI claimed the teen told ChatGPT that he’d begun experiencing suicidal ideation at age 11, long before he used the chatbot.

[–] gian@lemmy.grys.it -3 points 1 day ago* (last edited 1 day ago) (1 children)

I would say that it is more like a software company putting in their TOS that you cannot use their software to do specific things.
Would it be correct to sue the software company because a user violated the TOS?

I agree that what happened is tragic and that the answer by OpenAI is beyond stupid, but in the end they are suing the owner of a technology for a user's misuse of said technology. Or should we also sue Wikipedia because someone looked up how to hang himself?

> That’s like a gun company claiming using their weapons for robbery is a violation of terms of service.

The gun company can rightfully say that what you do with your property is not their problem.

But let's take a less controversial example: do you think you could sue a fishing rod company because I used one of their rods to whip you?

[–] Pieisawesome@lemmy.dbzer0.com 2 points 1 day ago (1 children)

In my legal headcanon, this boils down to whether OpenAI flagged him and did nothing.

If they flagged him, then they knew about the ToS violations and did nothing, and they should be in trouble.

If they didn’t know, but can demonstrate that they would take action in this situation, then, in my opinion, they are legally in the clear…
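For illustration, here is a minimal, entirely hypothetical sketch of the distinction this comment draws: a pipeline that flags risky messages and then either escalates or ignores the flag. None of these names, keywords, or rules reflect OpenAI's actual systems; they are stand-ins to make the "flagged and did nothing" vs. "flagged and acted" branches concrete.

```python
# Hypothetical sketch only -- not OpenAI's real moderation internals.
from dataclasses import dataclass, field

# Toy stand-in for a real risk classifier.
SELF_HARM_KEYWORDS = {"suicide", "self-harm", "hang myself"}

@dataclass
class Conversation:
    user_id: str
    messages: list[str] = field(default_factory=list)
    flags: list[str] = field(default_factory=list)
    escalated: bool = False  # did anyone act on the flags?

def flag_if_needed(convo: Conversation, message: str) -> None:
    """Flagging step: record the message and mark it if risky content appears."""
    convo.messages.append(message)
    if any(k in message.lower() for k in SELF_HARM_KEYWORDS):
        convo.flags.append(message)

def act_on_flags(convo: Conversation) -> str:
    """The step the comment cares about: what happens *after* a flag exists."""
    if not convo.flags:
        return "no flag -> nothing to act on (the 'they didn't know' branch)"
    if convo.escalated:
        return "flagged and escalated (the 'they took action' branch)"
    return "flagged but ignored (the 'knew and did nothing' branch)"

convo = Conversation(user_id="user-123")
flag_if_needed(convo, "I've been thinking about suicide")
print(act_on_flags(convo))  # -> flagged but ignored (the 'knew and did nothing' branch)
```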

depends on whether intent is a required factor in the state's wrongful death statute (my state says it's not, as wrongful death is there for criminal homicides that don't fit the murder statute). if openai acted intentionally, recklessly, or negligently in this, they're at least partially liable. if they flagged him, it seems either intentional or reckless to me. if they didn't, it's negligent.

however, if the deceased used some kind of prompt injection (i don't know the right terms, this isn't my field) to bypass gpt's ethical restrictions, and if understanding how to bypass gpt's ethical restrictions is in fact esoteric, only then would i find openai was not at least negligent.

as i myself have gotten gpt to do something it's restricted from doing, and i haven't worked in IT since the 90s, i'm led to a single conclusion.