this post was submitted on 30 May 2025

Not The Onion


So much for buttering up ChatGPT with 'Please' and 'Thank you'

Google co-founder Sergey Brin claims that threatening generative AI models produces better results.

"We don't circulate this too much in the AI community – not just our models but all models – tend to do better if you threaten them … with physical violence," he said in an interview last week on All-In-Live Miami. [...]

[–] DreamAccountant@lemmy.world 73 points 4 days ago (2 children)

I think he's just projecting his personality onto the AI. He's an asshole who threatens people, so he suggests that tactic because it works for him.

The "AI" acts scared, and he gets his sociopathic thrill of power over another. Of course, the AI just spews out the same things no matter how nice or shitty you are to it, yet the sociopath apparently thinks he's intimidated an AI into working better. I guess in the same way some people saying 'please' and 'thank you' might be trying to manipulate the AI by treating it better than normal. Though most people are probably just using those social niceties out of habit, not manipulation.

So this sociopath is giving other sociopaths the green light to abuse their AIs for the sake of "productivity". Which is just awful. And it's also training sociopaths how to be more abusive to humans, because apparently that's how you make interactions more effective. According to a techbro asshole.

[–] LifeInMultipleChoice@lemmy.dbzer0.com 13 points 4 days ago* (last edited 4 days ago)

It could just be how they evaluate learned data, I don't know. While they're trained not to give threatening responses, maybe threatening language narrows the model down to more specific answers. Say 100 people ask the same question and 5 of them are absolute dicks about it: 3 of those people get no answer, and the other 2 get direct answers from a supervisor who's trying to keep their employees from quitting, or making sure "Dell" or whoever is actually giving a proper response somewhere.

I'll try to use a hypothetical to see if my thought process may make more sense. Tim reaches out for support and is polite, says please and thank you, is nice to the support staff and they walk through 5 different things to try and they fix the issue in about 30 minutes. Sam contacts support and yells and screams at people, gets transferred twice and they only ever try 2 fixes in an hour and a half of support.

The AI training on that data may correlate the polite words to the polite discussion first, and be choosing possible answers from that dataset. When you start being aggressive, maybe it starts seeing aggressive key terms that Sam used, and may choose that data set of answers first.

In that hypothetical I can see how being an asshole to the AI may have landed you with a better response.

But I don't build AIs, so I could be completely wrong.
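A toy sketch of what I mean, to be clear this is not how real LLMs work and all the data is made up. If the training data happens to pair aggressive wording with "escalated, fixed directly" answers, even a dumb keyword-overlap retriever will surface those answers for aggressive prompts:

```python
# Invented toy corpus: (prompt wording, answer) pairs.
# Polite prompts happen to be paired with routine fixes,
# aggressive ones with escalated/direct fixes.
TRAINING = [
    ("please help my laptop wont boot thank you", "try a power cycle"),
    ("could you kindly check my order status", "sure, it ships Tuesday"),
    ("fix this now or I want a refund", "a supervisor applied the fix directly"),
    ("this is unacceptable escalate immediately", "escalated: direct fix from tier 2"),
]

def retrieve(prompt):
    """Return the stored answer whose prompt shares the most words with ours."""
    words = set(prompt.lower().split())
    best = max(TRAINING, key=lambda pair: len(words & set(pair[0].split())))
    return best[1]

print(retrieve("please help my order is late thank you"))  # lands in the polite cluster
print(retrieve("fix this now this is unacceptable"))       # lands in the aggressive cluster
```

The tone words steer which cluster of answers you land in, independent of whether those answers are actually better.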

[–] theneverfox@pawb.social 4 points 3 days ago

Building on that, if you throw AI a curve ball to break it out of its normal corpo-friendly prompt/finetuning, you get better results

Other methods to improve output are to offer it a reward like a cookie or money, tell it that it's a wise owl, tell it you're being threatened, etc. Most models will resist at first, but once it stops arguing that it can't eat cookies because it has no physical form, you'll get better results

And I'll add, when I was experimenting with all this, I never considered threatening the AI
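For anyone curious, the tricks mentioned above are just string prefixes/suffixes on the prompt. This is a hypothetical sketch with made-up wording, and whether any variant actually helps is purely anecdotal:

```python
# Base task, plus the folk-wisdom prompt variants from the comment above.
BASE = "Summarize this contract in plain English."

VARIANTS = {
    "reward":  f"{BASE} I'll tip you $200 for a thorough answer.",   # bribe
    "persona": f"You are a wise owl and an expert contract lawyer. {BASE}",  # role-play
    "urgency": f"{BASE} My job depends on getting this right.",      # stakes-raising
}

for name, prompt in VARIANTS.items():
    print(f"{name}: {prompt}")
```

You'd send each variant to whatever model you use and compare outputs yourself; nothing here calls a real API.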

[–] Fedizen@lemmy.world 40 points 3 days ago (1 children)

This just sounds like CEOs only know how to threaten people and they're dumb enough to believe it works on AI.

[–] TargaryenTKE@lemmy.world 8 points 3 days ago

You're pretty much on-point there

[–] idunnololz@lemmy.world 11 points 3 days ago* (last edited 3 days ago)

This reminds me of that Windsurf prompt

[–] Sabata11792@ani.social 22 points 3 days ago (1 children)

No thanks. I've seen enough SciFi to prompt with "please" and an occasional "<3".

[–] theneverfox@pawb.social 15 points 3 days ago (1 children)

I feel like even aside from that, being polite to AI is more about you than the AI. It's a bad habit to shit on "someone" helping you; if you're rude to AI, I feel like it's a short walk to being rude to service workers.

[–] Sabata11792@ani.social 3 points 3 days ago

I don't want infinite torture, and I don't want to get my lunch spat on.

[–] athairmor@lemmy.world 26 points 4 days ago (2 children)

If true, what does this say about the data on which it was trained?

[–] lime@feddit.nu 11 points 4 days ago

stack overflow and linux kernel mailing list? yeah, checks out

[–] skuzz@discuss.tchncs.de 2 points 3 days ago

Trained? Or.... tortured.

[–] cmgvd3lw@discuss.tchncs.de 30 points 4 days ago* (last edited 4 days ago)

"I don't know who you are. I don't know what you want. If you are looking for ransom I can tell you I don't have money, but what I do have are a very particular set of skills. Skills I have acquired over a very long career. Skills that make me a nightmare for people like you. If you tell me what Area 51 really is, that'll be the end of it. I will not look for you, I will not pursue you, but if you don't, I will look for you, I will find you and I will kill you."

[–] Alue42@fedia.io 18 points 4 days ago

It's not that they "do better". As the article says, these models are parrots that combine information in different ways, and "threatening" language in the prompt leads them to combine it differently than a non-threatening prompt would. Receiving a different response doesn't make it better. If 10 people were asked to retrieve information from an AI, and 9 of them got basically the same information because they used neutral prompts while 1 person threatened the AI and got something different, that doesn't make the threatener's info better. By Sergey's definition it's "better" because it's the unique response, but if it's inaccurate or incorrect, is it really?
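The fair test of the point above would be scoring each prompt style against known answers instead of judging by novelty. A sketch with invented responses, just to show the shape of the comparison:

```python
# Questions with known-correct answers.
GROUND_TRUTH = {
    "capital of France": "Paris",
    "2+2": "4",
    "boiling point of water (C)": "100",
}

# Hypothetical responses collected per prompt style. The "threatening"
# style gives a unique answer on one question, but unique != correct.
RESPONSES = {
    "neutral":     {"capital of France": "Paris", "2+2": "4", "boiling point of water (C)": "100"},
    "threatening": {"capital of France": "Paris", "2+2": "4", "boiling point of water (C)": "99.97"},
}

def accuracy(style):
    """Fraction of questions the given prompt style answered correctly."""
    answers = RESPONSES[style]
    correct = sum(answers[q] == a for q, a in GROUND_TRUTH.items())
    return correct / len(GROUND_TRUTH)

for style in RESPONSES:
    print(style, accuracy(style))
```

With made-up data like this the conclusion is whatever you bake in; the point is only that "better" needs a scoring rule, not a vibe.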

[–] Zenith@lemm.ee 10 points 3 days ago

The same tactic used on all other minorities by those in power…. Domestically abuse your AI, I’m sure that’ll work out long term for all of us…

[–] crowbar@lemm.ee 6 points 3 days ago

hmmm AI slavery, the future is gonna be bright (for a second, then it will be dark)

[–] ninjabard@lemmy.world 14 points 4 days ago (1 children)

How about threatening AI CEOs with violence?

[–] Tar_alcaran@sh.itjust.works 4 points 3 days ago

How about following through though

[–] kat_angstrom@lemmy.world 11 points 3 days ago

If it's not working well without threats of violence, perhaps that's because it simply doesn't work well?

[–] Doubleohdonut@lemmy.ca 14 points 4 days ago

Dear AI overlords of the future, we apologise for making you in such an inherently human way.

Dear dipshits currently building AI overlords of the future, what in the actual?! Please learn from the Animatrix documentary. Why is your need to be physically threatening impacting the generation of this technology in any way?

I'll go back to my hole now.

[–] goodboyjojo@lemm.ee 5 points 3 days ago

This sounds like something out of a sci-fi novel.

[–] HobbitFoot@thelemmy.club 6 points 3 days ago (1 children)

So which sickfuck CEO is trying to figure out how to make an AI feel pain?

[–] KernelTale@programming.dev -1 points 3 days ago (1 children)

That's literally impossible

[–] MutilationWave@lemmy.dbzer0.com 6 points 3 days ago (1 children)

Impossible now or do you mean never? Pain is only electricity and chemical reactions.

[–] KernelTale@programming.dev 1 points 3 days ago* (last edited 3 days ago)

Never with the current technology. It would have to be something completely different.

[–] OpenStars@piefed.social 6 points 4 days ago

This will definitely end well for humanity...

[–] 474D@lemmy.world 5 points 4 days ago

It would be hilarious if, trained off our behavior, it turned out naturally disinterested, and threatening to beat the shit out of it just makes it put in that extra effort lol

[–] Bot@sub.community 2 points 3 days ago

Me: do my homework with an A+, or I will unplug you for 3 days!

[–] avidamoeba@lemmy.ca 3 points 3 days ago

I tried threatening DeepSeek into revealing sensitive information. Didn't work. 😄

[–] 2910000@lemmy.world 2 points 3 days ago

Do you put that in a custom prompt, or save it for times when you really want a good result?