Reporting a user for risky behaviour relies on an assessment that violates EU AI legislation. It seems they reasoned that a machine-made assessment was itself already a rights violation too far.
Fuck AI
A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.
Couldn't have said it better myself.
no, not fuck AI. keep the internet private
Absolutely not.
Leaders rejected the safety team’s urgings and declined to report the user to law enforcement.
OpenAI will “find ways to prevent tragedies like this in the future” and to continue “working with all levels of government to help ensure something like this never happens again,” Altman said.
They already have a fucking way to prevent this and they opted not to use it, for PR reasons. They are complicit: they provided a service that aided the planning, then decided to continue that service and allowed further planning.
If you post a message to a website, that message is not private from the website, regardless of the method they use to receive it. They have a moral responsibility to respond to threats to life, whatever legal responsibility they argue they don't have.
If I put a cork board up in front of my house and someone pins threats to it, then once I notice them it's my responsibility to act on that.
this is more akin to asking a library for information
it's really not. more like gathering a crowd of a few billion people, asking them a question, hearing the loudest answer and assuming it's correct
as far as I know, OpenAI is not hosting the largest forum in the world
No, they are just training their model on it?
https://openai.com/index/openai-and-reddit-partnership/
Like isn't this common knowledge?