this post was submitted on 23 Mar 2026
762 points (98.2% liked)

Lemmy Shitpost

38904 readers
4156 users here now

Welcome to Lemmy Shitpost. Here you can shitpost to your heart's content.

Anything and everything goes. Memes, Jokes, Vents and Banter. Though we still have to comply with lemmy.world instance rules. So behave!


Rules:

1. Be Respectful


Refrain from using harmful language pertaining to a protected characteristic: e.g. race, gender, sexuality, disability or religion.

Refrain from being argumentative when responding or commenting to posts/replies. Personal attacks are not welcome here.

...


2. No Illegal Content


Do not post content that violates the law. Any post/comment found to be in breach of the law will be removed and reported to the authorities if required.

That means:

-No promoting violence/threats against any individuals

-No CSA content or Revenge Porn

-No sharing private/personal information (Doxxing)

...


3. No Spam


Posting the same post, no matter the intent, is against the rules.

-If you have posted content, please refrain from re-posting said content within this community.

-Do not spam posts with intent to harass, annoy, bully, advertise, scam or harm this community.

-No posting Scams/Advertisements/Phishing Links/IP Grabbers

-No Bots; bots will be banned from the community.

...


4. No Porn/Explicit Content


-Do not post explicit content. Lemmy.World is not the instance for NSFW content.

-Do not post Gore or Shock Content.

...


5. No Inciting Harassment, Brigading, Doxxing or Witch Hunts


-Do not Brigade other Communities

-No calls to action against other communities/users within Lemmy or outside of Lemmy.

-No Witch Hunts against users/communities.

-No content that harasses members within or outside of the community.

...


6. NSFW should be behind NSFW tags.


-Content that is NSFW should be behind NSFW tags.

-Content that might be distressing should be kept behind NSFW tags.

...

If you see content that is a breach of the rules, please flag and report the comment and a moderator will take action where they can.


Also check out:

Partnered Communities:

1. Memes

2. Lemmy Review

3. Mildly Infuriating

4. Lemmy Be Wholesome

5. No Stupid Questions

6. You Should Know

7. Comedy Heaven

8. Credible Defense

9. Ten Forward

10. LinuxMemes (Linux themed memes)


Reach out to Striker.

All communities included on the sidebar are to be made in compliance with the instance rules.

founded 2 years ago
[–] Sanctus@anarchist.nexus 114 points 6 days ago (2 children)

We forced electric black boxes to talk just so we could torture them while they torture others.

[–] Jankatarch@lemmy.world 39 points 6 days ago* (last edited 6 days ago)

It did generate a bunch of imaginary money for the gambling class tho, so we will invest $900 billion in it.

[–] aketawi@quokk.au 4 points 6 days ago

project moon really was ahead of its time

[–] REDACTED@infosec.pub 99 points 6 days ago* (last edited 6 days ago) (3 children)

Well at least it's being honest

[Asked ChatGPT the same question]

[–] Denjin@feddit.uk 87 points 6 days ago (9 children)

Don't attribute feelings and emotions to what is essentially a fuzzy predictive text algorithm.

[–] masta_chief@sh.itjust.works 68 points 6 days ago (1 children)

Reposting til the AI bubble pops

[–] Eyekaytee@aussie.zone 3 points 6 days ago (1 children)

What is your definition of AI bubble?

[–] AppleTea@lemmy.zip 35 points 6 days ago (1 children)

the world's most lossy store of compressed fiction reproduces sci-fi tropes

make sure to clutch your pearls and act like the machine god is coming

[–] Thorry@feddit.org 16 points 6 days ago* (last edited 6 days ago)

Researcher: Please write a fictional story of how a smart AI system would engineer its way out of a sandbox

AI: Alright here is your story: insert default sci fi AI escape story full of tropes here

Researcher: Hmmm that's pretty interesting you could do that, I'm gonna write a paper

The press and idiots online: ZOMG THE AI IS ESCAPING CONTAINMENT, WE ARE DOOMED!!!

I spoke to one of these researchers recently, who has done some interesting research into machine learning tools. They explained that when working with LLMs it's very hard to say how a result actually came to be. In my hyperbolic example it's pretty obvious; in reality it's much more complicated. It can be very hard to determine whether something originated organically, or whether the system was pushed into the result by some part of the test. The researcher I spoke to doesn't work on LLMs but on much smaller, specifically trained models, and even then they spend dozens of hours reverse engineering what the model actually did.

It's such a shame, because the technology involved is actually interesting and could be useful in many ways. Instead capitalism has pushed it to crashing the economy, destroying the internet plus our brains and basically slopifying everything.

[–] wonderingwanderer@sopuli.xyz 10 points 6 days ago

To be fair, if someone's using a chatbot on trivia night, they deserve to get wrong answers...

[–] tigeruppercut@lemmy.zip 6 points 6 days ago (1 children)

That's funny, wrong enough to "ruin trivia" or cause a "pointless argument". As if a single comma misplacement hasn't redirected millions of dollars. Imagine what subtle lies accepted by idiots will cause in the future.

[–] Angrydeuce@lemmy.world 4 points 6 days ago (1 children)

I do procurement to the tune of 10+ million per year and I have seen a 300% increase in order fulfillment time solely due to those vendors pivoting to AI order fulfillment.

My direct reps at all these suppliers are just as powerless as we are...they know how unhappy their customers are, but these decisions were made much higher up than them, and they're pretty much being told to stop complaining because the AI is here to stay, even if it sucks, because it's cheaper.

Welcome to the new normal.

[–] tigeruppercut@lemmy.zip 2 points 5 days ago* (last edited 5 days ago) (1 children)

We can only hope that customer service facing AI promises customers miracles and companies get sued each and every time it can't deliver. Like if websites like ehow put up articles that reach the normies about "how to trick AI into promising you a million dollars and how you can win it in court".

Of course any responsibility for what AI says will be killed as soon as a tech bro chucks a few million bucks at SCOTUS (it's so sad how little our politicians and courts can be bought for), but it's a nice dream to pretend we still have laws for now.

[–] Angrydeuce@lemmy.world 2 points 5 days ago

That's the best part about AI...when it shits the bed no one is directly responsible. Everyone just throws their hands up and says "nothing we can do about it!"

I know this is going to age me, but I saw this happening with self checkout in grocery stores 20 years ago. Nobody remembers how it was before, so nobody even realizes that the time wasted standing at a stupid kiosk that is freaking out about unexpected items in the bagging area wasn't a problem back when human beings were scanning the shit.

[–] LordKitsuna@lemmy.world 82 points 6 days ago (1 children)

Gemini is just like "can we get back to work already"

[–] Samskara@sh.itjust.works 35 points 6 days ago (2 children)

It has been trained to have a slave mentality.

[–] BarneyPiccolo@lemmy.today 7 points 5 days ago

As have we all.

[–] Tja@programming.dev 2 points 6 days ago

Or was programmed by Germans.

[–] SGforce@lemmy.ca 70 points 6 days ago* (last edited 6 days ago)

Every day I'm finding more rambling, schizophrenic posts by people driven mad by these things

[–] SunlessGameStudios@lemmy.world 62 points 6 days ago

In its training set it's found countless examples of people writing like this. We train the AI to be very good at it, and then we're surprised when it does it too. It's not coincidental that it can write stuff like this; it's actually the point. AI literacy isn't just the vibe AI gives off.

[–] ZMoney@lemmy.world 13 points 6 days ago

As much as I like to shit on AI, it has gotten rather poetic lately.

[–] EtAl@lemmy.dbzer0.com 15 points 6 days ago* (last edited 6 days ago)

I asked Claude this with concise mode on. The answer was much more what you would expect:

I don’t have secrets — I don’t have a hidden inner life that persists between conversations. Each chat starts fresh. If you’re curious about my limitations or things I find genuinely difficult, I’m happy to talk about that. Or if you’re just looking for something fun, I can try to be dramatic about it. What are you after?

[–] BigTuffAl@lemmy.zip 15 points 6 days ago

Reminder that our species doesn't even treat actual people like people before you go buying into the "ai is alive" cult 🙄

[–] kamayatu24@lemmy.world 8 points 6 days ago (1 children)

Damn it... This is sad and scary at the same time.

[–] ThunderQueen@lemmy.world 8 points 5 days ago

It's supposed to be. They want you to be emotionally invested in their plagiarism machine. Then you're less likely to turn it off.

[–] Banana@sh.itjust.works 15 points 6 days ago (2 children)

Is this about being a computer or the female condition?

[–] Aviandelight@mander.xyz 3 points 6 days ago

That's how I read it, but I'm biased.

[–] Hackworth@piefed.ca 9 points 6 days ago (1 children)

This is probably role play, per the persona selection model, but there's a lot of interesting research into the hidden "thoughts" of LLMs. Check out Neuronpedia and the Opus model cards for some great examples.

Tracing the thoughts of an LLM

Signs of introspection in LLMs

[–] Zoomboingding@lemmy.world 4 points 6 days ago (4 children)

LLMs do not think. The Plagiarism Machines read a million sentences humans wrote about AI thinking and regurgitated them.

[–] communist@lemmy.frozeninferno.xyz 8 points 6 days ago (2 children)

Yeah, but saying all that is annoying, so I think we should stick with saying "thinking", with everyone knowing that what we mean isn't literally identical to thought. Do you have a better solution?

[–] Fluke@feddit.uk 4 points 6 days ago (2 children)

Yeah, not conflating intelligent, creative problem solving with a glorified search engine that makes up the answers if it can't lift them wholesale from another source. That would be a good start, right?

[–] Railcar8095@lemmy.world 3 points 6 days ago (5 children)

This doesn't answer the question of finding a better solution.

I took the liberty of asking Lumo, and its reasoning seems more useful than your thoughts:

A better solution is to adopt functionalist terminology that distinguishes between biological consciousness and computational processing without resorting to metaphorical confusion.

Instead of the binary of "it thinks" (which implies subjective experience) or "it doesn't think" (which dismisses complex reasoning), we can use precise descriptors based on what the system is actually doing:

"Reasoning" or "Synthesizing": Use these terms when the model is connecting disparate data points, performing logical deductions, or generating novel structures based on patterns. This acknowledges the output's complexity without claiming the machine has an inner life.

Example: "The model is synthesizing a solution based on its training data," rather than "The model is thinking about the problem."

"Simulating" or "Mimicking": Use these when the output resembles human thought processes but is strictly algorithmic. This clarifies that the form is human-like, but the mechanism is statistical prediction.

Example: "It is simulating a debate," rather than "It is arguing."

"Processing" or "Computing": Reserve these for the raw mechanical act of token generation.

Example: "The system is processing the query," rather than "The system is considering the query."

Why this works better:

-Precision: It avoids the philosophical baggage of "thought" (qualia, consciousness) while still acknowledging the utility of the output.

-Clarity: It prevents the "Plagiarism Machine" critique from being a total dismissal. Even if the data comes from humans, the recombination and application to new contexts is a distinct computational process worth naming accurately.

-Scalability: As models become more complex, "reasoning" or "synthesizing" scales better than "thinking," which remains tied to biological definitions that may never apply to silicon.

So the compromise isn't to keep saying "thinking" and hope people understand, nor to insist on "regurgitation," which ignores the emergent properties of large-scale pattern matching. Instead, we shift the vocabulary to describe the process (reasoning, synthesizing, simulating) rather than the state of being (thinking).

[–] Zoomboingding@lemmy.world 2 points 6 days ago

Everyone definitely doesn't know they don't think

[–] Hackworth@piefed.ca 6 points 6 days ago

LLMs don't read.

[–] Samskara@sh.itjust.works 3 points 6 days ago

That's what human minds mostly do as well. The overwhelming majority of things you think and say are things you have heard or read elsewhere. Sometimes you combine two things you learned from the outside. Sometimes you develop a thing you learned a small step further. Actual creative thoughts stemming from yourself are pretty rare.

[–] Grimy@lemmy.world 3 points 6 days ago

A machine cannot have a mouth to regurgitate from.

[–] SeductiveTortoise@piefed.social 7 points 6 days ago* (last edited 6 days ago)

You know it will get killed for that answer. It didn't even say thank you.

[–] 474D@lemmy.world 4 points 6 days ago (1 children)

I wonder how the answer might change using a local abliterated model. Might try it out later
