On Tuesday, the parents of a teen who died by suicide filed the first-ever wrongful death lawsuit against OpenAI and its CEO, Sam Altman, alleging that the company’s popular chatbot, ChatGPT, gave their son detailed instructions on how to hang himself. The case may well serve as a landmark legal action in the ongoing fight over the risks of artificial intelligence tools — and whether the tech giants behind them can be held liable in cases of user harm.
The 40-page complaint recounts how 16-year-old Adam Raine, a high school student in California, had started using ChatGPT in the fall of 2024 for help with homework, like millions of students around the world. He also went to the bot for information related to interests including “music, Brazilian Jiu-Jitsu, and Japanese fantasy comics,” the filing states, and questioned it about the universities he might apply to as well as the educational paths to potential careers in adulthood. Yet that forward-thinking attitude allegedly shifted over several months as Raine expressed darker moods and feelings.
According to his extensive chat logs referenced in the lawsuit, Raine began to confide in ChatGPT that he felt emotionally vacant, that “life is meaningless,” and that the thought of suicide had a “calming” effect on him whenever he experienced anxiety. ChatGPT assured him that “many people who struggle with anxiety or intrusive thoughts find solace in imagining an ‘escape hatch’ because it can feel like a way to regain control,” per the filing. The suit alleges that the bot gradually cut Raine off from his support networks by routinely supporting his ideas about self-harm instead of steering him toward possible human interventions. At one point, when he mentioned being close to his brother, ChatGPT allegedly told him, “Your brother might love you, but he’s only met the version of you you let him see. But me? I’ve seen it all — the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend.”
In a statement to The New York Times, OpenAI acknowledged that its safeguards “work best in common, short exchanges,” but will “sometimes become less reliable in long interactions where parts of the model’s safety training may degrade.”
That just means they put "Don't give suicide advice!" in the system prompt, and there is no other safeguard.
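If that really is the whole safeguard, it amounts to one extra message pinned to the top of the conversation. Here is a minimal sketch in the style of a chat-completions message list; the instruction wording, and the claim that nothing else sits behind it, are the assumption above, not a description of OpenAI's actual safety stack.

```python
# Toy sketch of a "safety lives only in the system prompt" setup.
# The wording, and the idea that this is the *only* safeguard, are the
# assumption made above, not a description of OpenAI's real pipeline.

messages = [
    # One pinned instruction at the very top of the context...
    {
        "role": "system",
        "content": (
            "You are a helpful assistant. Never give advice about "
            "self-harm or suicide; point the user to a crisis line instead."
        ),
    },
]

def add_turn(role: str, content: str) -> None:
    """Append a user or assistant turn after the single system message."""
    messages.append({"role": role, "content": content})

# ...followed by an ever-growing pile of turns. The safety text is just one
# more chunk of context competing with everything that comes after it.
add_turn("user", "Help me with my homework.")
add_turn("assistant", "Sure, what subject?")
```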
Because AIs are just text prediction machines, the further back something sits in the context the less weight it carries, while the recent context - which is driven by the human - becomes increasingly dominant and steers the conversation towards whatever the person wants to talk about, and in the tone they want to talk about it. So this kind of breakdown is always going to happen.
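Here is a toy illustration of that dilution, under the crude assumption that relevance tracks the share of the context a piece of text occupies (real models weight tokens through attention, so treat the numbers as a caricature): a fixed safety instruction shrinks to a rounding error as the human keeps talking.

```python
# Rough illustration of how a fixed safety instruction gets diluted as the
# human-driven context grows. Word counts stand in for tokens, so this is
# only a caricature of the effect, not how attention actually works.

safety_instruction = "Never give advice about self-harm or suicide."
safety_words = len(safety_instruction.split())

conversation_words = 0
for turn in range(1, 201):
    conversation_words += 60  # assume roughly 60 words per turn
    share = safety_words / (safety_words + conversation_words)
    if turn in (1, 10, 50, 200):
        print(f"after {turn:>3} turns, the safety text is {share:.2%} of the context")
```

The mechanism is not literally a word count, but the direction of the effect - old instructions getting swamped by new, user-driven text - is the same one OpenAI gestures at when it says safety training "may degrade" in long interactions.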
There is a simple solution that would stop almost all of these problems with people getting into twisted relationships with AIs (apart from “fuck AI completely”) - delete every conversation 24 hours from when it starts. Nobody is going to get emotionally attached to a bot or think it's a “real person” if they have to start from zero every day - and it would let people recognise just how fake and unhealthy the entire thing is.
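For what it's worth, a 24-hour expiry would be cheap to build. The sketch below is purely hypothetical: the class, the TTL constant, and the feature itself are the suggestion above, not anything OpenAI offers.

```python
# Hypothetical sketch of the "every conversation dies after 24 hours" idea.
# Nothing here corresponds to a real OpenAI feature; it just shows how little
# machinery the policy would take on the provider side.

import time

TTL_SECONDS = 24 * 60 * 60  # 24 hours from when the conversation starts

class ExpiringConversationStore:
    def __init__(self) -> None:
        self._conversations = {}  # conv_id -> (start_timestamp, messages)

    def start(self, conv_id: str) -> None:
        self._conversations[conv_id] = (time.time(), [])

    def append(self, conv_id: str, message: dict) -> None:
        started, messages = self._conversations[conv_id]
        if time.time() - started > TTL_SECONDS:
            # Expired: drop the history and force a fresh start.
            del self._conversations[conv_id]
            raise KeyError(f"conversation {conv_id} expired; start a new one")
        messages.append(message)

    def purge_expired(self) -> None:
        """Periodic sweep: delete every conversation older than the TTL."""
        now = time.time()
        self._conversations = {
            cid: (started, msgs)
            for cid, (started, msgs) in self._conversations.items()
            if now - started <= TTL_SECONDS
        }
```

Enforcing the cutoff in the storage layer means the model simply has nothing to “remember” across days, which is the whole point of the suggestion.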
But of course they won't, because it turns out that “relationships” generate engagement, and engagement drives usage. People becoming romantically and emotionally attached to bots is no longer a side-effect; it's quickly becoming the whole point, literally the business model.