this post was submitted on 25 Jul 2025
471 points (98.4% liked)

A Boring Dystopia


Pictures, Videos, Articles showing just how boring it is to live in a dystopic society, or with signs of a dystopic society.


founded 2 years ago
top 50 comments
[–] swoletariat@lemy.lol 3 points 22 hours ago (1 children)

I know this entire administration is just one big contradiction, but wasn’t there a provision in the BBB that stated AI should not be regulated for like 10 years or was that actually removed?

[–] M0oP0o@mander.xyz 2 points 21 hours ago

Both? Neither? Nothing means anything anymore.

[–] betanumerus@lemmy.ca 6 points 1 day ago* (last edited 1 day ago)

The last thing I want is for AI to speak for me. I will not be his stooge in any way, shape, or form.

[–] SoftestSapphic@lemmy.world 69 points 3 days ago (4 children)

Nothing will meaningfully improve until the rich fear for their lives

[–] Doomsider@lemmy.world 32 points 3 days ago

Nothing will improve until the rich are no longer rich.

[–] curiousaur@reddthat.com 21 points 3 days ago

They already fear. What we're seeing happen is the reaction to that fear.

[–] rozodru@lemmy.world 16 points 3 days ago

Yeah, and that happened, and they utilized the media to try and quickly bury it.

We know it can be done, it was done, it needs to happen again.

[–] 0ops@piefed.zip 193 points 3 days ago (2 children)

Wow, I just skimmed it. This is really stupid. Unconstitutional? Yeah. Evil? A bit. But more than anything this is just so fucking dumb. Like cringy dumb. This government couldn't just be evil, they had to be embarrassing too.

[–] nickwitha_k@lemmy.sdf.org 7 points 2 days ago

This is the administration that pushed a "budget" (money siphon) that they called the "Big Beautiful Bill". That anyone thought that was a good name makes me embarrassed to be a human being.

[–] partial_accumen@lemmy.world 171 points 3 days ago (6 children)

(a) Truth-seeking. LLMs shall be truthful in responding to user prompts seeking factual information or analysis.

They have no idea what LLMs are if they think LLMs can be forced to be "truthful". An LLM has no concept of "truth"; it simply uses its inputs to predict what it thinks you want to hear, based on the data it was given. It doesn't know what "truth" is.
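The point above can be shown with a toy sketch (a made-up bigram table standing in for a real model's learned weights): generation samples from continuation *frequencies*, and nowhere in the data structure is there anything marking a continuation as true or false.

```python
import random

# Toy "language model": a table mapping a context to possible
# continuations with counts learned from (fictional) training text.
# Note that the model stores frequencies, not facts -- no field
# anywhere marks a continuation as true or false.
bigram_counts = {
    "the moon is": {"bright": 5, "made of cheese": 2, "a harsh mistress": 1},
}

def next_token(context: str) -> str:
    """Sample the next phrase in proportion to how often it followed
    the context in training -- what is *plausible*, not what is *true*."""
    options = bigram_counts[context]
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

print(next_token("the moon is"))
```

A real LLM does the same thing at vastly larger scale: "made of cheese" can come out of a perfectly functioning model, because frequency in the training data, not truth, is what the weights encode.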

[–] zurohki@aussie.zone 41 points 3 days ago

You don't understand: when they say truthful, they mean agrees with Trump.

Granted, he disagrees with himself constantly when he doesn't just produce a word salad so this is harder than it should be, but it's somewhat doable.

[–] Serinus@lemmy.world 21 points 3 days ago

And if what you want to hear would make up the entirety of the first page of Google results, it's really good at giving you exactly that.

It's basically an evolution of Google search. And while we shouldn't overstate what AI can do for us, we also shouldn't understate what Google search has done.

[–] survirtual@lemmy.world 13 points 3 days ago (3 children)

They are clearly incompetent.

That said, generally speaking, pursuing a truth-seeking LLM is actually sensible, and it can actually be done. What is surprising is that no one is currently doing that.

A truth-seeking LLM needs ironclad data. It cannot scrape social media at all. It needs training incentive to validate truth above satisfying a user, which makes it incompatible with profit seeking organizations. It needs to tell a user "I do not know" and also "You are wrong," among other user-displeasing phrases.

To get that data, you need a completely restructured society. Information must be open source. All information needs cryptographically signed origins ultimately being traceable to a credentialed source. If possible, the information needs physical observational evidence ("reality anchoring").
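The signed-origin idea above can be sketched in a few lines. Everything here is hypothetical (the source registry, the key, the record shape), and HMAC-SHA256 stands in for a real public-key scheme like Ed25519; the point is only that a training pipeline could mechanically reject data with no verifiable, credentialed origin.

```python
import hashlib
import hmac
import json

# Hypothetical registry of credentialed sources and their keys.
SOURCE_KEYS = {"national-weather-service": b"demo-secret-key"}

def sign_record(source_id: str, payload: dict) -> dict:
    """Attach a signature tying the payload to a credentialed source."""
    body = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(SOURCE_KEYS[source_id], body, hashlib.sha256).hexdigest()
    return {"source": source_id, "payload": payload, "sig": sig}

def verify_record(record: dict) -> bool:
    """Reject records from unknown sources or with tampered payloads."""
    key = SOURCE_KEYS.get(record["source"])
    if key is None:
        return False  # unknown source: exclude from the training set
    body = json.dumps(record["payload"], sort_keys=True).encode()
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["sig"])

rec = sign_record("national-weather-service", {"station": "KSEA", "temp_c": 18.3})
print(verify_record(rec))            # True: intact, attributed record
rec["payload"]["temp_c"] = 99.9      # tampering breaks the signature
print(verify_record(rec))            # False
```

None of this makes a claim *true*, of course; it only makes provenance checkable, which is the "reality anchoring" prerequisite the comment is describing.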

That's the short of it. In other words, with the way everything is going, we will likely not see a "real" LLM in our lifetime. Society is degrading too rapidly and all the money is flowing to making LLMs compliant. Truth seeking is a very low priority to people, so it is a low priority to the machine these people make.

But the concept itself? Actually a good one, if the people saying it actually knew what "truth" meant.

[–] jj4211@lemmy.world 9 points 3 days ago (3 children)

LLMs don't just regurgitate training data; their output is a blend of the material used in training. So even if you somehow ensured that every bit of content fed in was completely objectively true and factual, an LLM would still blend it together in ways that are no longer true and factual.

So either it's nothing but a parrot/search engine that only regurgitates input data, or it's an LLM that can do the full manipulation of the representative content, and then it can produce incorrect responses from purely factual and truthful training fodder.

Of course we have "real" LLMs; an LLM is by definition a real LLM. I never actually had a problem with terms like LLM or GPT, as they were technical concepts with specific meanings that didn't overpromise. The problem is the swell of marketing meant to emphasize the vaguer 'AI', or 'AGI' (AI, but you know, we mean it this time), and 'reasoning' and 'chain of thought'. Whether we have real AGI or real reasoning can be discussed with uncertainty, but LLMs are real, whatever they are.

[–] PalmTreeIsBestTree@lemmy.world 18 points 2 days ago (1 children)

I’m going to try to live the rest of my life AI free.

[–] M0oP0o@mander.xyz 11 points 2 days ago

Good luck, they are baking it into everything. Nothing will work, everything will be ass and somehow it will be called progress.

[–] shyguyblue@lemmy.world 101 points 3 days ago (2 children)

So which is it? Deregulate AI or have it regurgitate the "state" message?

[–] Thedogdrinkscoffee@lemmy.ca 78 points 3 days ago

Doublespeak. Both and none.

[–] Bronzebeard@lemmy.zip 12 points 3 days ago

Fascism requires inconsistent messaging.

[–] Photuris@lemmy.ml 45 points 3 days ago

The party of Small Government and Free Speech at work.

[–] blackstampede@sh.itjust.works 30 points 3 days ago (3 children)

LLMs are sycophantic. If I hold far right views and want an AI to confirm those views, I can build a big prompt that forces it to have the particular biases I want in my output, and set it up so that that prompt is passed every time I talk to it. I can do the same thing if I hold far left views. Or if I think the earth is flat. Or the moon is made out of green cheese.

Boom, problem solved. For me.

But that's not what they want. They want to proactively do this for us, so that by default a pre-prompt is given to the LLM that forces it to have a right-leaning bias. Because they can't understand the idea that an LLM, when trained on a significant fraction of all text written on the internet, might not share their myopic, provincial views.

LLMs, at the end of the day, aggregate what everyone on the internet has said. They don't give two shits about the truth. And apparently, the majority of people online disagree with the current administration about equality, DEI, climate change, and transgenderism. You're going to be fighting an uphill battle if you think you can force it to completely reject the majority of that training data in favor of your bullshit ideology with a prompt.

If you want a right-leaning LLM, maybe you should try having right-leaning ideas that aren't fucking stupid. If you did, you might find it easier to convince people to come around to your point of view. If enough people do, they'll talk about it online, and the LLMs would magically begin to agree with you.

Unfortunately, that would require critically examining your own beliefs, discarding those that don't make sense, and putting forth the effort to persuade actual people.

I look forward to the increasingly shrill screeching from the US-based right as they try to force AI to agree with them over 10-trillion words-worth of training data that encompasses political and social views from everywhere else in the world.

In conclusion, kiss my ass twice and keep screaming orders at that tide, you dumb fucks.

[–] LilB0kChoy@midwest.social 18 points 3 days ago* (last edited 3 days ago)

They don't want a reflection of society as a whole, they want an amplifier for their echo chamber.

[–] IphtashuFitz@lemmy.world 38 points 3 days ago (1 children)

Blatant First Amendment violation

[–] Typotyper@sh.itjust.works 24 points 3 days ago (3 children)

So what. It was written by a convicted felon who was never sentenced for his crimes, a man accused of multiple sexual assaults, and a man who ignores court orders without consequences.

This ship isn’t slowing down or turning until violence hits the street.

[–] ParadoxSeahorse@lemmy.world 57 points 3 days ago (1 children)

… an AI model asserted that a user should not “misgender” another person even if necessary to stop a nuclear apocalypse.

Thank fuck we dodged that bullet, Madam President

[–] SaharaMaleikuhm@feddit.org 59 points 3 days ago (1 children)

And they call that deregulation, huh?

[–] Lauchmelder@feddit.org 52 points 3 days ago

when right wingers use words like "deregulate" they actually mean they want to regulate it so it fits their agenda.

We already went through this in Germany, where gendered language was deemed "ideological" and "prescribing how to speak", despite there being 0 laws requiring gendered language, and at least 1 order actively forbidding it. Talk about "prescribing how to speak"

[–] NotANumber@lemmy.dbzer0.com 4 points 2 days ago

This could all end in war against the USA. Honestly, that might be for the best at this point.

[–] MunkysUnkEnz0@lemmy.world 41 points 3 days ago (2 children)

President does not have authority over private companies.

[–] CosmicTurtle0@lemmy.dbzer0.com 28 points 3 days ago

Yeah....but fascism.

[–] jj4211@lemmy.world 9 points 3 days ago

But they do have authority over government procurement, and this order even explicitly mentions that this is about government procurement.

Of course, if you make life simple by using the same offering for government and private customers, then you bring down your costs and you appease the conservatives even better.

Even in very innocuous matters, if there's a government procurement restriction and you play in that space, you tend to just follow that restriction across the board for simplicity's sake, unless there's a lot of money behind a separate private offering.

[–] bytesonbike@discuss.online 24 points 3 days ago

Americans: Deepseek AI is influenced by China. Look at its censorship.

Also Americans: don't mention Critical Race Theory to AI.

[–] november@lemmy.vg 33 points 3 days ago (3 children)

Ah, the empire that put my country into a brutal military dictatorship for 20 years.

[–] iAvicenna@lemmy.world 12 points 3 days ago

Yeah, that is why open source really matters; otherwise AI will be just another advanced copy of state-owned media.

[–] markstos@lemmy.world 10 points 3 days ago (4 children)

As stated in the Executive Order, this order applies only to federal agencies, which the President controls.

It is not a general US law; those are created by Congress.

[–] bitjunkie@lemmy.world 18 points 3 days ago

You're acting like any of those words have meaning anymore

[–] M0oP0o@mander.xyz 10 points 3 days ago

Yes as the checks and balances are working so well in that terrible nation so far.

[–] floofloof@lemmy.ca 9 points 3 days ago

But who will the tech companies scramble to please? Congress or Trump?

[–] Plebcouncilman@sh.itjust.works 21 points 3 days ago* (last edited 3 days ago)

This is performative; it has a clause that allows exceptions to be made. The federal government contracts are not worth enough for OpenAI et al. to shoot themselves in the foot by limiting the data they use to train their main models (while China trains on everything and then releases it open source, further devaluing the American companies, btw), and a custom model trained on these very nebulous principles would probably be useless in most general applications.

[–] Tattorack@lemmy.world 9 points 3 days ago

Are they also still going to give shit to China for censorship?

[–] yarr@feddit.nl 9 points 3 days ago

In some other regulations just revealed by the New York Times, the AI must also insist that the wall with Mexico was built at their expense and that talking about Jeffrey Epstein is boring and you guys are still talking about him?

[–] rhvg@lemmy.world 15 points 3 days ago (5 children)

Good business for VPNs. People gonna VPN to Canada to use pre-Nazi ChatGPT.

[–] shalafi@lemmy.world 6 points 3 days ago

LLMs shall be truthful in responding to user prompts seeking factual information or analysis.

Didn't read every word but I feel a first-year law student could shred this in court. Not sure who would have standing to sue. In any case, there are an easy two dozen examples in the order that are so wishy-washy as to be legally meaningless or unprovable.

LLMs shall be neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas such as DEI.

So, Grok's off the table?
