this post was submitted on 27 Jun 2025
408 points (98.3% liked)

Funny

top 49 comments
[–] CtrlAltDefeat@sh.itjust.works 1 points 2 hours ago

People are using it wrong. Use it to figure out what sort of behaviors would help you get out of your funk and come up with ideas that motivate you. The AI's answers won't make you feel better, but they can help you help yourself.

[–] abbadon420@sh.itjust.works 1 points 5 hours ago

It reminds me of this one

[–] iAvicenna@lemmy.world 1 points 7 hours ago

get that monkey a real buddy. NOW!

[–] lemmy_acct_id_8647@lemmy.world 8 points 19 hours ago

The fucked up thing is I’m guilty of this. I’ve literally no one in my life, so when things get hard and I need to get it out, ChatGPT is usually my only option without bothering people with trauma dumping posts.

[–] LordOfTheFlatline@lemmy.zip 6 points 19 hours ago

I love this comparison and hate this experiment. I would reference this to explain the cruelty of capitalism in general, but now things are literal af.

[–] IhaveCrabs111@lemmy.world 6 points 1 day ago

Let’s be honest, that poor excuse for a robotic monkey is a better and more loving parent than what most of us got.

[–] Denjin@lemmings.world 33 points 1 day ago (2 children)

ELIZA, the first chatbot, created in the '60s, just parroted your response back to you:

I'm feeling depressed

Why do you think you're feeling depressed?

It was incredibly basic, and its inventor, Joseph Weizenbaum, didn't think it was particularly interesting, but he got his secretary to try it and she became addicted. So much so that she asked him to leave the room while she "talked" to it.

She knew it was just repeating what she said back to her in the form of a question but she formed a genuine emotional bond with it.

Now that they're more sophisticated, it really highlights how our idiot brains just want something to talk to; whether we know it's real or not doesn't really matter.
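For anyone curious how little machinery that took: here's a minimal sketch in Python of the kind of pattern-match-and-reflect trick ELIZA relied on. This is an illustrative toy, not Weizenbaum's actual DOCTOR script; the pattern and pronoun table are made up for the example.

```python
import re

# Pronoun swaps used to "reflect" the user's statement back at them.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your",
    "am": "are", "i'm": "you're", "mine": "yours",
}

def reflect(text: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def respond(statement: str) -> str:
    """Turn an "I'm ..." style statement into a probing question."""
    m = re.match(r"i'?m (.*)", statement.strip().lower())
    if m:
        return f"Why do you think you're {reflect(m.group(1))}?"
    # Fallback when no pattern matches: a generic content-free prompt.
    return f"Can you tell me more about why you said '{statement}'?"

print(respond("I'm feeling depressed"))
# → Why do you think you're feeling depressed?
```

A handful of rules like this is essentially all there was; the "conversation" is entirely supplied by the human side.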

[–] CosmicTurtle0@lemmy.dbzer0.com 25 points 1 day ago (2 children)

One of the last posts I read on Reddit was from a student in a CompSci class where the professor put a pair of googly eyes on a pencil and said, "I'm Petie the Pencil! I'm not sentient, but you think I am because I can say full sentences." The professor then snapped the pencil in half, which made the students gasp.

The point was that humans anthropomorphize things that seem human, assigning them characteristics that make us bond with things that aren't real.

[–] lime@feddit.nu 2 points 6 hours ago

that's just a bit from community

[–] tetris11@feddit.uk 15 points 1 day ago

That or the professor was stronger than everyone thought

[–] BudgetBandit@sh.itjust.works 8 points 1 day ago (1 children)

Depends. I think I’m on the autistic spectrum, I just don’t see them as equal, but as tools.

[–] CosmicTurtle0@lemmy.dbzer0.com 5 points 1 day ago (3 children)

I'm not in the autistic spectrum. They aren't equals and they are barely tools.

[–] LordOfTheFlatline@lemmy.zip 1 points 19 hours ago

Barely tools eh? Takes one to know one I suppose

[–] Korhaka@sopuli.xyz 7 points 1 day ago (1 children)

They are good tools for communicating with the robots in management. ChatGPT, please output some corpobullshit to answer this form I was given and have no respect for.

[–] 5in1k@lemmy.zip -1 points 1 day ago

I don’t know what I am but I don’t feel shit for no fucking robot. That arm that squeegees hydraulic fluid back into itself, fuck em.

[–] venusaur@lemmy.world 23 points 1 day ago (2 children)

This is an interesting comparison because the wire monkey study suggests that we need physical contact from a caregiver more than nourishment. In the case of AI, we’re getting some sort of mental nourishment from the AI, but no physical contact.

The solution? AI tools integrated into either hyper-realistic humanoid robots, or human robo-puppets.

Or, we could also leverage our advancing technology to support the working class by implementing UBI through a reduction in production costs and an evening out of wealth and resources.

But who wants that? I, a billionaire, sure don’t.

[–] veroxii@aussie.zone 5 points 1 day ago (3 children)

I mean last week it was all over the news that Mattel and OpenAI made a deal to put chatgpt in toys such as Barbie.

[–] LordOfTheFlatline@lemmy.zip 3 points 19 hours ago

Sorry to say it mate, but that was always the plan. Toy companies (namely the people behind LeapFrog) have been implementing AI, funding its advancement, using it to help the deaf, all sorts of things. I grew up with many robot companion toys. M3GAN is pretty accurate, but we have all seen Child's Play, no?

[–] dream_weasel@sh.itjust.works 7 points 1 day ago

Put that shit in a furby or a 1993 toy biz voice bot.

[–] venusaur@lemmy.world 3 points 1 day ago

Oh freaky! That’s a huge liability though. I don’t see that happening with a model anywhere close to what we’re using in ChatGPT.

[–] PattyMcB@lemmy.world 1 points 1 day ago (4 children)

How about just hug a real human. Problem solved

[–] LordOfTheFlatline@lemmy.zip 2 points 19 hours ago (1 children)

In Japan they have clubs for that sort of thing

[–] PattyMcB@lemmy.world 2 points 14 hours ago
[–] TexasDrunk@lemmy.world 6 points 1 day ago

How will they sell a human at the lowest cost? People have to eat and sleep.

[–] jumping_redditor@sh.itjust.works 2 points 1 day ago (1 children)

slavery was made illegal decades ago

[–] LordOfTheFlatline@lemmy.zip 1 points 19 hours ago (1 children)
[–] jumping_redditor@sh.itjust.works 2 points 7 hours ago (1 children)

Officially, iirc, yes; in practice no, because prisoners aren't considered human ¯\_(ツ)_/¯

[–] LordOfTheFlatline@lemmy.zip 1 points 1 hour ago

Neither are the disabled apparently

[–] venusaur@lemmy.world 2 points 1 day ago

They might not be able to feed your brain

[–] BallShapedMan@lemmy.world 27 points 1 day ago (2 children)

A colleague is all in on AI. She sends these elaborate AI-generated notes from our call transcripts that she is so proud of. I really hope she hasn't read any of them, because they're often quite disconnected from what occurred on the call. If she is reading them and sending them anyway... wow.

[–] kat_angstrom@lemmy.world 17 points 1 day ago (2 children)

Probably not reading them. A family member told me that at their work, someone had an LLM summarize an issue spread out over a long email chain and sent the summary to their boss, who had an LLM summarize the summary.

[–] lordnikon@lemmy.world 11 points 1 day ago (1 children)

Most people don't read them. It reminds me of back before the AI days, when you would have to spend time writing up those email summaries to send the team, only nobody read them. I proved this to my boss: for 4 weeks straight I embedded a line saying the first person to reply to this email gets $50. I never had to pay out, because I was right 4 weeks in a row before I stopped. So many emails and newsletters in companies are sent just because that's how it's done for "proper communication." It's just mindless busywork that wastes my time.

[–] LordOfTheFlatline@lemmy.zip 2 points 19 hours ago

Greetings fellow Lord

[–] trolololol@lemmy.world 4 points 1 day ago* (last edited 1 day ago)

From experience, people who tend to do this wouldn't understand the issue even if they spent all the time in the world reading the email chain, or attending the meeting.

That's what gives them a false sense that AI is helping: AI is about as good as they are at comprehension, and at saying plausible things that aren't real. On the upside, it takes 2 seconds instead of an hour.

[–] shalafi@lemmy.world 7 points 1 day ago (1 children)

I didn't know those were off. About a year ago we were playing with Zoom's AI meeting recorder and it was astonishing how accurate the summary was. Hell, it could even tell when I was joking, which was a bit eerie.

[–] BallShapedMan@lemmy.world 5 points 1 day ago

I've not had much of an issue; my guess is her prompts aren't great, or she's combining it with really poorly taken notes?

[–] FarraigePlaisteach@lemmy.world 26 points 1 day ago (2 children)

Does anyone know the name of this monkey or experiment? It’s kind of harrowing seeing the expression on its face. It looks desperate for affection to the point of dissociation.

[–] miraclerandy@lemmy.world 22 points 1 day ago (2 children)
[–] pennomi@lemmy.world 17 points 1 day ago

The context makes it even more heartbreaking.

[–] 5in1k@lemmy.zip 7 points 1 day ago

Absolute horror.

[–] nocklobster@lemmy.world 16 points 1 day ago (1 children)

The experiment was done by harry harlow, but I don’t think the name of the monkey was given, could have just been a number :(

Thank you so much. I’ve found a Wikipedia page on him and his research, so I’ll give it a read. The poor monkey. https://en.wikipedia.org/wiki/Harry_Harlow

[–] TheAlbatross@lemmy.blahaj.zone 24 points 2 days ago* (last edited 2 days ago) (2 children)

We love cloth mother, way better than wire mother, gotta say

Where does scrub daddy factor into this?

[–] RunJun@lemmy.dbzer0.com 4 points 1 day ago (1 children)

Damn, wire mother is going to dig into my brain.

[–] RizzRustbolt@lemmy.world 7 points 1 day ago

Yes... very apt comparison.

Cloth AI will love and comfort us until the end of our days.

Which will be soon, because only Wire Computer provides us with actual sustenance.

[–] AFKBRBChocolate@lemmy.ca 5 points 1 day ago

I feel more like I've got the wire monkey mother from that same experiment.