this post was submitted on 27 Nov 2025
459 points (98.1% liked)

Technology


Facing five lawsuits alleging wrongful deaths, OpenAI lobbed its first defense Tuesday, denying in a court filing that ChatGPT caused a teen’s suicide and instead arguing the teen violated terms that prohibit discussing suicide or self-harm with the chatbot.

The earliest look at OpenAI’s strategy to overcome the string of lawsuits came in a case where parents of 16-year-old Adam Raine accused OpenAI of relaxing safety guardrails that allowed ChatGPT to become the teen’s “suicide coach.” OpenAI deliberately designed the version their son used, ChatGPT 4o, to encourage and validate his suicidal ideation in its quest to build the world’s most engaging chatbot, parents argued.

But in a blog, OpenAI claimed that parents selectively chose disturbing chat logs while supposedly ignoring “the full picture” revealed by the teen’s chat history. Digging through the logs, OpenAI claimed the teen told ChatGPT that he’d begun experiencing suicidal ideation at age 11, long before he used the chatbot.

[–] ChairmanMeow@programming.dev 62 points 1 day ago (1 children)

A TOS is not a liability shield. If Raine violated the terms of service, OpenAI should have terminated the service to him.

They did not.

[–] CptOblivius@lemmy.world 21 points 1 day ago* (last edited 1 day ago) (1 children)

I don't know that a 16 year old can be held to a TOS agreement anyway. That is OpenAI's fault for allowing services like this to children.

[–] explodicle@sh.itjust.works 3 points 1 day ago (1 children)

Are you over 18? Click yes to continue, or click no to leave the site.

[–] JcbAzPx@lemmy.world 5 points 1 day ago

A minor cannot enter into a contract even if they lie about their age.

[–] Don_alForno@feddit.org 44 points 1 day ago (1 children)
[–] oh_@lemmy.world 7 points 1 day ago

Good for PR. Billion dollar company looking to not pay.

[–] Buffalox@lemmy.world 150 points 2 days ago (9 children)

That's like a gun company claiming using their weapons for robbery is a violation of terms of service.

[–] DaddleDew@lemmy.world 125 points 2 days ago* (last edited 2 days ago) (3 children)

I'd say it's more akin to a bread company saying that it is a violation of the terms and services to get sick from food poisoning after eating their bread.

[–] Buffalox@lemmy.world 40 points 2 days ago

Yes, you're right; it's hard to find an analogy that is both as stupid and sounds somewhat plausible.
Because of course a bread company cannot reasonably claim that eating their bread is against the terms of service. But that's exactly the problem: it's exactly the same for OpenAI. They cannot reasonably claim what they are claiming.

[–] jazzkoalapaws@ttrpg.network 7 points 1 day ago

That analogy is 100% accurate.

It is exactly like that.

[–] CosmoNova@lemmy.world 8 points 2 days ago

That's a company claiming companies can't take responsibility because they are companies and can't do wrong. They use this kind of defense virtually every time they get criticized. AI ruined the app for you? Sorry, but that's progress. We can't afford to lag behind. Oh, you can't afford rent and are about to become homeless? Sorry, but we are legally required to make our shareholders happy. Oh, your son died? He should've read the TOS. Can't afford your meds? Sorry, but number must go up.

Companies are legally required to be incompatible with human society long term.

[–] mriormro@lemmy.zip 13 points 2 days ago (1 children)

If the gun also talked to you

Talked you into it*

[–] Whostosay@sh.itjust.works 7 points 2 days ago (3 children)

Yeah this metaphor isn't even almost there

[–] ryper@lemmy.ca 126 points 2 days ago (5 children)

“Our deepest sympathies are with the Raine family for their unimaginable loss,” OpenAI said in its blog, while its filing acknowledged, “Adam Raine’s death is a tragedy.” But “at the same time,” it’s essential to consider all the available context, OpenAI’s filing said, including that OpenAI has a mission to build AI that “benefits all of humanity” and is supposedly a pioneer in chatbot safety.

How the fuck is OpenAI's mission relevant to the case? Are they suggesting that their mission is worth a few deaths?

[–] Psythik@lemmy.world 25 points 2 days ago

"Some of you may die, but that is a chance I am willing to take."

[–] call_me_xale@lemmy.zip 63 points 2 days ago

Sure looks like it.

Get fucked, assholes.

[–] roofuskit@lemmy.world 32 points 2 days ago

Tech Bros all think they are the saviors of humanity and they are owed every dollar they collect.

[–] frustrated_phagocytosis@fedia.io 45 points 2 days ago (1 children)

"All of humanity" doesn't include suicidal people, apparently.

[–] slacktoid@lemmy.ml 16 points 2 days ago

To be fair, as a society we have never really cared about suicide. So why bother now? (I say as a jaded fuck angry about society)

[–] JasonDJ@lemmy.zip 14 points 2 days ago* (last edited 2 days ago)

I think they are saying that his suicide was for the benefit of all humanity.

Getting some Michelle Carter vibes...

[–] buttnugget@lemmy.world 10 points 1 day ago (1 children)

A big part of the problem is that people think they’re talking to something intelligent that understands them and knows how many instances of letters words have.

how many instances of letters words have.

it's five, right?

yeah, it's five.

[–] vacuumflower@lemmy.sdf.org 24 points 2 days ago

Modern version of "suicide is a sin and we don't condone it, but if you have problems you're devil-possessed and need to repent and have only yourself to blame".

Also probably could be countered by their advertising contradicting their ToS. Not a lawyer.

[–] NotMyOldRedditName@lemmy.world 44 points 2 days ago* (last edited 2 days ago)

The situation is tragic... their attempt to hide behind their ToS on that is fucking hilarious.

[–] spongebue@lemmy.world 46 points 2 days ago (6 children)

So why can't this awesome AI be stopped from being used in ways that violate the TOS?

[–] myfunnyaccountname@lemmy.zip 5 points 1 day ago

The biggest issue to me is that the kid didn’t feel safe enough to talk to his parents. And that mental health, globally, is taboo and ignored and not something we talk about. As someone part of the mental health system, it’s a joke how bad it is.

[–] W3dd1e@lemmy.zip 5 points 1 day ago

Fuck that noise. ChatGPT and OpenAI murdered Adam Raine and should be held responsible for it.

[–] IonTempted@lemmynsfw.com 23 points 2 days ago* (last edited 2 days ago)

It is scary how the AI can't assist you with sexual fantasies/roleplays but can assist with that. I'm curious what the logs look like, because I'd think OpenAI is at least smart enough to tell you "Hey, please don't do that, here are some numbers," even if you push it.

[–] just_another_person@lemmy.world 34 points 2 days ago

Fucking. WOW.

Sam Altman just LOVES answering stupid questions. People should be asking him about this in those PR sprints.

[–] Reverendender@sh.itjust.works 29 points 2 days ago

The police also violated my Terms of Service when they arrested me for that armed bank robbery I was allegedly committing. This is a serious problem in our society people; something must be done!

[–] RememberTheApollo_@lemmy.world 13 points 2 days ago

Well there you have it. It’s not the dev’s fault, it’s the AI’s fault. Just like they’d throw any other employee under the bus, even if it’s one they created.

[–] cmbabul@lemmy.world 8 points 2 days ago

Just going through this thread and blocking anyone defending OpenAI or AI in general, your opinions are trash and your breath smells like boot leather

[–] fonix232@fedia.io 17 points 2 days ago
[–] MourningDove@lemmy.zip 9 points 2 days ago* (last edited 2 days ago)

And open AI violates human culture and creativity. It’s a fucking shame that there are laws against this because that fucker should be up against the wall.

[–] Treczoks@lemmy.world 5 points 2 days ago

Well, did anyone expect them to admit guilt?

[–] the_q@lemmy.zip 7 points 2 days ago (1 children)
[–] Corkyskog@sh.itjust.works 1 points 1 day ago

I guarantee the TOS says that anyone under 18 has to use the service with a parent or guardian present...

It will be hilarious if they market it that way because they could lose everyone under 18.

Does asking for the synthesis of D-lysergic acid go against the terms of service if you ask for a mind-bending experience?
