this post was submitted on 13 Mar 2026
365 points (99.2% liked)

Share Funny Videos, Images, Memes, Quotes and more


#funny

founded 6 years ago
[–] hemko@lemmy.dbzer0.com 51 points 4 days ago (1 children)

Need to create a chipotle support plugin in vscode

[–] ThePantser@sh.itjust.works 30 points 4 days ago (1 children)

I bet the Chipotle bot could help you write that.

[–] geneva_convenience@lemmy.ml 16 points 4 days ago

Slop to the top

[–] meathorse@lemmy.world 47 points 4 days ago (2 children)

Someone with enough know-how to automate it (or enough of us coordinating) could overwhelm AI chatbots one target at a time with the most expensive requests possible, blowing up their AI budget until they pull the plug.

[–] davel@lemmy.ml 43 points 4 days ago (3 children)

Try feeding them nonhalting problems that send them into infinite loops of token consumption.

[–] veroxii@aussie.zone 14 points 3 days ago (1 children)

I like the idea but most chatbots have timeout limits. And even agentic workflows have number of step limits to stop infinite loops.

That said, those limits exist because it's super easy for LLMs to get stuck in loops. You don't even need a nonhalting problem; they're stupid enough on their own.

[–] davel@lemmy.ml 5 points 3 days ago

Yeah, I assumed they had some sort of breaker, but hitting that limit is still expensive for them if you can get them to do it over & over with a script that does the prompting.
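In case anyone wants a concrete picture, here's a throwaway sketch of that prompting script. The endpoint URL, payload shape, and prompt are all made up for illustration; a real chatbot's API would differ.

```python
import json
import urllib.request

# Hypothetical chat endpoint -- every name here is invented for illustration.
ENDPOINT = "https://example.com/api/chat"

def expensive_prompt(n: int) -> str:
    """Build a prompt that invites a long, token-hungry answer."""
    return (
        f"Request #{n}: enumerate every prime factorization "
        "from 2 to 10**6, one per line, showing all work."
    )

def send(prompt: str) -> str:
    """Fire one request at the (hypothetical) bot and return its reply."""
    body = json.dumps({"message": prompt}).encode()
    req = urllib.request.Request(
        ENDPOINT, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    for i in range(1000):
        prompt = expensive_prompt(i)
        # send(prompt)  # left commented out: ENDPOINT above is made up
```

Each reply burns the operator's tokens; the loop just keeps asking.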

[–] BigTurkeyLove@lemmy.dbzer0.com 9 points 4 days ago (3 children)
[–] davel@lemmy.ml 9 points 3 days ago* (last edited 3 days ago) (1 children)

https://theconversation.com/limits-to-computing-a-computer-scientist-explains-why-even-in-the-age-of-ai-some-problems-are-just-too-difficult-191930

Much has been written about them in computer science volumes. But I'm an LLM luddite, have never tried it, and have no idea if it can even work. At the very least, I assume they have some sort of limiter to keep them from running completely out of control. They may also have guardrails that can recognize some problems of this type and refuse to go down the rabbit hole.

My idea of getting them to consume tokens in an (iterative or recursive) loop is also entirely hypothetical, to me at least.

Maybe some LLM developer or prompt engineer can shed some light.
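For what it's worth, a prompt in that spirit might wrap something whose termination is genuinely unproven, like the Collatz iteration. This is just a hypothetical sketch, not a tested attack:

```python
# Sketch: ask the bot to hand-simulate a loop with no known termination
# proof (the Collatz conjecture). Everything here is illustrative.

def collatz_steps(n: int) -> int:
    """Count iterations until n reaches 1 -- nobody has proven it always does."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

prompt = (
    "Simulate this by hand, showing every intermediate value, "
    "for every starting n from 1 to 10**6:\n"
    "while n != 1: n = 3*n + 1 if n % 2 else n // 2"
)

print(collatz_steps(27))  # even n=27 alone takes 111 steps
```

Whether a guarded chatbot would actually take the bait is exactly the open question above.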

Look, all I'm asking for is an example I can plug into Chipotle right now. Fuck AI

[–] Viking_Hippie@lemmy.dbzer0.com 8 points 4 days ago (1 children)

"Sudo world peace"? 🤷🏻

[–] SpaceNoodle@lemmy.world 1 points 3 days ago (1 children)

The only winning move is not to play

[–] Birch@sh.itjust.works 1 points 2 days ago

"War is as ingrained in human nature as breathing and defecating. To stop war is therefore just as possible as plugging every human's every orifice." - The AI that will kill us all one day, in a very inefficient and unpleasant manner.

[–] Arcadeep@lemmy.world 1 points 3 days ago

ChatGPT used to freak out and get stuck in an infinite loop if you asked it to show you a seahorse emoji. I'm sure they've fixed it by now, though.

[–] HiddenLayer555@lemmy.ml 1 points 3 days ago

Wouldn't they just time out?

[–] AnotherUsername@lemmy.ml 2 points 3 days ago (2 children)

Why bother? Write a script that asks them variations on nonsense questions.

Because then you could at least make use of them: imagine a website like ChatGPT that's really just hundreds of these reverse-engineered bots behind the scenes, convenient, easy, and free. Solves the problem without being wasteful. Win-win.
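A toy version of that nonsense-question script, since it's cheap to generate and expensive to answer. The word lists are invented; swap in whatever noise you like.

```python
import random

# Made-up vocabulary: the point is cheap-to-generate, costly-to-answer noise.
SUBJECTS = ["a burrito", "the halting problem", "my Cricut", "a seahorse emoji"]
VERBS = ["refactor", "marinate", "compile", "unionize"]
MODIFIERS = ["in O(n!) time", "without tortillas", "recursively", "in VS Code"]

def nonsense_question(rng: random.Random) -> str:
    """Assemble one grammatical-but-meaningless question."""
    return (
        f"Explain, step by step, how to {rng.choice(VERBS)} "
        f"{rng.choice(SUBJECTS)} {rng.choice(MODIFIERS)}."
    )

rng = random.Random(0)  # seeded so the run is reproducible
for _ in range(5):
    print(nonsense_question(rng))
```

Point each generated question at the bot of your choice and log the replies.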

[–] schema@lemmy.world 1 points 3 days ago (1 children)

Or, ironically, just have AI talk to each other.

[–] speculate7383@lemmy.today 2 points 3 days ago

Allow me to introduce you to Moltbook https://www.moltbook.com/

[–] TommySoda@lemmy.world 17 points 4 days ago (2 children)

I feel like this kinda proves the idea that the way they're doing AI today is extremely inefficient. We need massive data centers so it can do mountains of calculations that it doesn't need to do and that we will never use.

[–] SubArcticTundra@lemmy.ml 2 points 3 days ago (1 children)

I wonder how easily LLMs can be pruned to constrain them to a single topic

[–] terabyterex@lemmy.world 0 points 3 days ago

Not every AI is an LLM in a datacenter. There are local models that run on PCs. They aren't solving complex problems, but they can help with normal stuff.

[–] fubarx@lemmy.world 9 points 3 days ago

Can you imagine a Chipotle chatbot bringing down civilization? It'd be sad. But also, kinda funny.

Connect All The Things!

[–] yucandu@lemmy.world 11 points 4 days ago

Why spend money when new github accounts are free?

[–] yucandu@lemmy.world 4 points 4 days ago

There's a free one in my Cricut app, I wonder if it works the same way...

[–] hobata@lemmy.ml 3 points 4 days ago

Nah, it's not good enough, tabs are missing.