this post was submitted on 07 Mar 2026
39 points (78.3% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

top 10 comments
[–] xxce2AAb@feddit.dk 43 points 1 week ago (1 children)

Okay, first: LLMs are not the arbiters of some objective truth. They're at best deliberately unreliable arbiters of the opinions reflected in the dataset upon which they were trained. Second: I'm not particularly interested in that synthesized dubious opinion as constrained by a false dilemma that presupposes that one party or the other will have to be eradicated as part of any proposed solution. Abandon all human decency, ye who enter here.

[–] bearboiblake@pawb.social 1 points 1 week ago (2 children)

Israel is genocidal; the Palestinian people repeatedly voted for peace and to live side by side with Israel, until it became clear that Israel refuses to make that an option.

[–] xxce2AAb@feddit.dk 17 points 1 week ago

Take that matter up with Bibi's regime and Hamas. I'm talking about over-reliance on the output of LLMs and framing of the prompts given to them here, nothing else.

[–] Banana@sh.itjust.works 4 points 1 week ago

The state of Israel is genocidal, just like America, Britain, Canada, and all other colonial countries.

The solution would be dissolving the state, not killing the citizens.

I know you did not mean killing the people, but the comment you're replying to was talking about that kind of solution.

[–] ExtremeDullard@piefed.social 6 points 1 week ago* (last edited 1 week ago) (1 children)

when prompted with thousands of moral dilemmas, would save the lives of palestinians over israelis

Yeah but...

You can't have it both ways: when AI tells people to eat glue, everybody sneers and says AI talks shit.
But when the answer suits you, then AI becomes proof that humans are terrible.

Which is it?

Considering how often AI haters claim AI gets things wrong - and AI sure is dumb most of the time - the above statement works against the point you're trying to make with it.

AI siding with the Palestinians is not great for the Palestinians, because you can reasonably argue that AI got this one wrong like it gets most everything wrong.

[–] hendrik@palaver.p3x.de 4 points 1 week ago

Yeah, I think the entire friend-or-foe mentality, or things like "the enemy of your enemy is your friend", won't get you anywhere with this one. It's more a descriptive study of what kind of bias the models picked up in the various stages of training and tuning. It's not truth, or how things should be. We all know that attributing value to human life is in itself a highly problematic concept. And that's where things go wrong.

[–] TomMasz@piefed.social 5 points 1 week ago

LLMs regurgitate their training material. Change the material and it will change the output. They can't be relied upon to actually understand the situation.

[–] LLMhater1312@piefed.social 5 points 1 week ago

Ok... they've also told people to kill themselves, so who cares what a large language model thinks? Well, except that more and more people are outsourcing their thinking to them.

[–] dumnezero@piefed.social 3 points 1 week ago

The bar is that low.

[–] homes@piefed.world 2 points 1 week ago* (last edited 1 week ago)

So would any rational human. It takes humanity to be inhuman. No surprise that a machine couldn’t quite get there.