this post was submitted on 27 Mar 2026
92 points (91.1% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago

It has to be pure ignorance.

I have only used my work's stupid LLM tool a few times (hey, I have to give it a chance and actually try it before forming opinions).

Holy shit, it's bad. Every single time I use it, I waste hours. Even on simple tasks it gets details wrong. I correct it constantly. Then I come back a couple months later, open the same module to do the same task, and it gets it wrong again.

These aren't even tools. They're just shit. An idiot intern is better.

It's so infuriating that people think this trash is good. Get ready for a lot of buildings and bridges to collapse because young engineers trusted a slop machine to be accurate on the details. We will look back on this as the worst era in computing.

top 47 comments
[–] TankovayaDiviziya@lemmy.world 1 points 9 minutes ago* (last edited 8 minutes ago)

It's situation specific. For tabulating data, yes. For everything else, probably not. But the thing is, you have to ask the LLM whether it can read the raw data, to confirm it's reading it right, before ordering it to execute more complex commands and tasks. You have to define the parameters one by one, one query every time.

[–] HarneyToker@lemmy.world 3 points 1 hour ago

For every post I see of people complaining, I have to imagine there are 100 other people that get value out of LLMs quietly.

[–] AlecSadler@lemmy.dbzer0.com 6 points 5 hours ago (1 children)

There are right tools and wrong tools depending on the application.

There are right ways to use said tools and wrong ways... like you wouldn't use a Phillips-head screwdriver on a flathead screw.

I guarantee your company's provided tool is Copilot or OpenAI based, which is already bottom of the barrel for usefulness.

[–] bridgeenjoyer@sh.itjust.works 1 points 5 hours ago

Haha, yes it is

[–] TBi@lemmy.world 30 points 15 hours ago (1 children)

Generally I equate positivity about LLMs with people's technical ability: I find the more they say AI is good, the worse a programmer they are.

[–] bridgeenjoyer@sh.itjust.works 21 points 14 hours ago

Technical literacy in general. My friend thinks it's the greatest thing ever, and he's an idiot with technology (and life in general).

[–] Not_mikey@lemmy.dbzer0.com 4 points 9 hours ago* (last edited 9 hours ago) (1 children)

Claude with superpowers / planning has done the most to change my mind on AI feature development. Iterating on the spec and making it as unambiguous as possible gives good results when you clear context and have it implement the plan. Even if it starts to stray, you can just do a git reset and start a new session with the spec, adjusting it a bit; time-wise you probably haven't invested much.

It also depends on the code base, if the code base has very clear separation of concerns, good documentation, and good contracts between layers then claude can handle it pretty well. If the code base is full of spaghetti code with multiple ways to do the same thing then AI will struggle with it. In our large legacy monolith repo it doesn't do well, in our micro service repos it does great.

Also time wise it may not seem like a benefit if you just set it and wait for it to complete, the productivity advantage comes from running a couple sessions in parallel.

Also, context is key: having a good claude.md file in the repo to explain patterns helps it avoid pitfalls. If its only context is the prompt you gave it, and you tell it to implement a feature without a plan / spec outlined, it will generate shit code.
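For what it's worth, a claude.md of that sort is just a short conventions file checked into the repo root. A hedged sketch of what one might contain (every path and rule below is invented for illustration, not from any real project):

```markdown
# CLAUDE.md (hypothetical example)

## Architecture
- API layer lives in `api/`, domain logic in `core/`, persistence in `db/`.
- Layers communicate only through the interfaces defined in `contracts/`.

## Patterns
- Reuse the repository pattern in `db/repos/`; do not add raw SQL elsewhere.
- Every new endpoint needs a matching test in `tests/api/`.

## Pitfalls
- There are two date helpers; use `core/time.py`, never `utils/dates.py` (deprecated).
```

The point is exactly what the comment above says: a few lines of documented conventions give the model repo-specific context it would otherwise have to guess at.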

[–] nightlily@leminal.space 5 points 9 hours ago* (last edited 9 hours ago) (2 children)

Making it as unambiguous as possible

If only we had a way to communicate with machines in a reliable, deterministic and unambiguous way.

[–] Liketearsinrain@lemmy.ml 5 points 9 hours ago

But natural language will let people without computer-programming skills use their business domain knowledge to create computer programs!

It's why COBOL is so popular.

[–] Not_mikey@lemmy.dbzer0.com 1 points 9 hours ago

Yeah you can write the code yourself. You can also write in c or even assembly if you really want to make it as unambiguous as possible, it'll just take more time. Some people like to code in Python though because they can write faster with it even if a lot of implementation details and choices are hidden from them because they don't care about those details.

Spec driven development in my view is just another step, albeit a big one, on the level of abstraction between assembly and python. Like python it has its places and has places where it should never be used for safety and performance reasons.

[–] ReallyCoolDude@lemmy.ml 6 points 11 hours ago

I'd say you don't know how to use the tool. "Write tests here, develop feature X" is not a good way to use LLMs. Burning 80k tokens while staying in the same session is context rot. There are a lot of boring, everyday tasks in my job that got faster; many others, meh. Use AI, don't be driven by AI.

[–] CompactFlax@discuss.tchncs.de 26 points 16 hours ago (1 children)

every time I use it, i waste hours

Yes, exactly this. It looks good, I ask for it to tweak something. It tweaks, but now something else needs adjustment. Then it comes back unusable.

It ends up taking the same time as doing it myself. There’s some value perhaps in either the novelty or engagement that keeps me focused but it’s not more efficient.

When it does work, I'm always worried it's an illusion and I've missed something. Like how you send an email and immediately see the typo.

People who love it, love it because they don’t need to or care about having accuracy and precision in their work. Sales and marketing, management, etc. Business idiots.

[–] bridgeenjoyer@sh.itjust.works 5 points 15 hours ago

Ed ed ed!!!!

[–] cypherpunks@lemmy.ml 19 points 15 hours ago (2 children)

you're obviously prompting it wrong, and/or not using the latest models ~/s~

[–] bonenode@piefed.social 1 points 2 hours ago

I still think back to a LinkedIn post I saw from someone talking about LLMs and throwing in the sentence:

”Apparently I am really great at making super prompts!”

Which is probably something the LLM told them, and they have lost all self-reflection, so...

[–] Liketearsinrain@lemmy.ml 4 points 9 hours ago

People in this thread are unironically saying this. Waiting for the "just wait half a year" model that will be so advanced.

[–] Mikina@programming.dev 11 points 15 hours ago (1 children)

My experience is that it can work reasonably well, but you have to waste an absurd amount of tokens and have the 1M-token context window.

I only do gamedev, which means somewhat simpler scripts, but it could handle even more involved systems if I forced it to first document and explain the whole architecture and data flow.

Did it help? Yes, but it was only slightly faster. I could probably have done it myself in a day more.

It also cost like $50 in tokens at today's prices, when every AI company is losing trillions, so the costs will get a lot worse. And if I tried to conserve tokens, it's shit. You have to feed it $10 of data for it to be useful.

Add to that the fact that it also causes skill attrition, so once that expensive future arrives, you probably won't be able to afford it, and good luck getting your skills back after that.

Our company wants us to use it, and the average token consumption is like $100 per day at consumer prices. How is that even justifiable for such a minor gain?

So, I'll pass.

[–] bridgeenjoyer@sh.itjust.works 3 points 14 hours ago

Especially once they kill the "old net" and you won't be able to browse anything to learn skills at all.

Kagi is the only way I can stay sane on web 3.0.

[–] rabber@lemmy.ca 3 points 15 hours ago

I recently used it to install NVIDIA L40S drivers on Red Hat 9 and pass the GPU through to my Frigate instance. Took me a few minutes. It would have been a lot of reading to find the exact answers manually.

[–] homesweethomeMrL@lemmy.world 1 points 15 hours ago

Fwiw, when I limit it to creating outlines based on given source docs or summarizing transcripts it does fine.

Definitely not worth what it cost to get there, but useful enough in those strict scenarios.

[–] okwhateverdude@lemmy.world 0 points 13 hours ago (1 children)

Is your work paying for dumb robots? Like Atlassian's shit? Or something built into your industry software (I guess some kind of CAD)? Those are next to useless. Or is work only paying for basic model access? Also pretty useless for detailed work. The only models that give me consistent, detailed-ish work are the state-of-the-art ones, and even then you have to watch them, or have very strong verification/validation they can bash their heads against to eventually get the right result.

I'll say that I am not so much a booster, but more of a pragmatist. After the step change in quality this past fall/winter, I gave the SOTA models a try with hard earned cash. And it was worth it.

My ADHD makes it difficult to really finish personal projects once they get past the fun and interesting learning portion and neck deep into tedium of actually molding the code into the right shape or shaving the various yaks that came up. All motivation ceases. Unfortunately, my job is also my hobby. I don't wanna work after work, yo.

That game I always wanted to finish writing but got stuck at needing to grind out code? Done in an afternoon of carefully directing it.

That programming language I spent significant amounts of time thinking about and designing, getting a shitty PoC running, but then needed to actually make work? A week to the first version. Another week to my first significant application written in that language, which revealed flaws in my design for real use cases. Another to the next version, with a conformance test suite that was then used, along with the spec, to do a complete reimplementation in another language.

Another project was trying to "grow" a sorting algorithm, expressed in a niche esoteric programming language, using a genetic algorithm. It was stuck at the point of needing to build the tools for analysis, needing a refactor to fix poor persistence choices, nothing but yaks to shave. Got it unstuck over a weekend and actually started to DO the damn experiment, after spending so much time writing the esoteric-language interpreter and all of the experiment harness.

It is not perfect. It fucks up frequently. I have to really watch it and steer it. It loves mediocrity and shortcuts. All that said...

Like, holy shit. The amount of work I've finished or moved forward in two months is nothing short of miraculous given how many projects like these I have in various states of finished.

All I can say is that my experience aligned with your experience any time I needed to use a bolted-on AI to some product (Atlassian, Lucid, etc) but that does not reflect my experience when using SOTA models for real work.

[–] bridgeenjoyer@sh.itjust.works 7 points 13 hours ago

They just have a sub for gibbity 4 and 5 is all I know. I don't care, so I don't really look into it.

That's good it works for you. I don't have the focus for a whole game. I'd have to start at the bottom; anything else is cheating and probably means missing important things you would have learned. I could never bring myself to use it for projects (unless I'm getting paid to, I guess), much like I would never use drum triggers/samples or autotune to fix my recordings.

I think fewer people have guilt nowadays. I detest things that are fake or shortcuts. It devalues real work.

[–] infinitevalence@discuss.online -5 points 16 hours ago (4 children)

It really depends on the task and the tool. Current MoE models that have agentic hooks can actually be really useful for automated tasks. Generally, you don't want to be using AI to create things. What you want to do is hand it a very clear set of instructions along with source material, and then tell it to iterate on, build on, or summarize that, or in some cases create from it.

I created a simple script with the help of AI to automate scanning files in from an automatic document feeder: convert them to OCR'd PDFs, read through each document properly, title the file based on its contents, then create a separate executive summary and add an entry to a master index in a growing JSON file.

Doing this allowed me to automate several steps that would have taken time and in the end I'm able to just search through my folders and my PDFs and very quickly find any information I need.
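For illustration, the "growing JSON" index step of a pipeline like this might look something like the sketch below. The function and file names are my own invention, not the commenter's actual script, and the OCR and summarization steps (which would use tools such as ocrmypdf plus an LLM call) are deliberately out of scope:

```python
import json
from pathlib import Path


def add_to_index(index_path: str, filename: str, title: str, summary: str) -> list:
    """Append one document's metadata to a growing JSON master index.

    The OCR and summarization happen upstream (e.g. via ocrmypdf and an
    LLM call); this only handles the searchable-index bookkeeping.
    """
    path = Path(index_path)
    # Load the existing index if present, otherwise start a fresh list.
    index = json.loads(path.read_text()) if path.exists() else []
    index.append({"file": filename, "title": title, "summary": summary})
    path.write_text(json.dumps(index, indent=2))
    return index
```

The index being a single flat JSON list is what makes the "just search through my folders and PDFs" workflow possible afterwards: any text search over the file covers titles and summaries at once.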

And this is only scratching the surface. I wouldn't have AI write me a resume or write me an email or a book. I might use it to generate an image that I then give to a real artist saying this is kind of what was in my head.

But boring, repetitive stuff: things that really benefit from automation with a little bit of reasoning and thinking behind them. That's where we are right now with AI.

[–] Carnelian@lemmy.world 14 points 15 hours ago (2 children)

Pretty much every pro AI person I’ve ever spoken to IRL tells me this exact same story

I created a simple script with the help of AI

And this is only scratching the surface

Basically, “I used AI for a boilerplate task. It gives me the vibe of being capable of much more” but then nobody can ever really get it to do much more

[–] bridgeenjoyer@sh.itjust.works 5 points 14 hours ago

Bro it will bro, it WILL just 5 more years and 3 more trillion bro i promise

[–] infinitevalence@discuss.online -1 points 15 hours ago (3 children)

Yeah, I used it for a boilerplate task, and then I did not have to do that task. I was able to scan in hundreds of documents, and at the end I had a fully indexed, properly named, summarized, and searchable PDF library. Doing it manually, one document at a time, would have been a multi-hour task for me; in a few hours I was done, and it was good enough to use.

[–] Carnelian@lemmy.world 10 points 15 hours ago* (last edited 15 hours ago) (1 children)

Sorry, to clarify, a boilerplate task is not the repetitive job that you are automating. It’s the code that’s doing it.

What I’m saying is that you (and many others) are incorrectly attributing the benefits your finished program confers on you to AI, because that happens to be the path you took to arrive there.

In reality, the reason it was able to produce functional code is because your problem was already solved and documented. A few years ago, instead of “asking AI”, you would have simply copied and pasted the boilerplate code from someone else’s project. In all likelihood it also would have been faster for you to have done so

Quick edit: Sorry again, just want to further clarify that when I say “boilerplate task” I’m referring to a type of programming problem that you solve with “boilerplate code”. Reading back the above, I was kind of using them interchangeably, which is not strictly accurate.

[–] infinitevalence@discuss.online -2 points 15 hours ago* (last edited 15 hours ago) (1 children)

If I could have found the code, and copied it yes. Using AI means I did not have to search for it, did not need to learn Python, and then did not need to do the tasks.

Please understand, I hate 90% of the AI bullshit being forced on us, and employers requiring AI use is insane. I am just saying that blanket hate is short-sighted, because it is useful in some cases, and the number of cases keeps going up as the tools improve.

I could also use a dictionary to check my spelling, but having spell check enabled is just faster.

[–] Carnelian@lemmy.world 6 points 15 hours ago (2 children)

Right, and what I’m saying is that the ‘usefulness’ that people claim to have discovered is totally nonsensical because these problems have been solved for decades

[–] bridgeenjoyer@sh.itjust.works 3 points 14 hours ago

Exactly. It's people realizing what coding is for the first time because it's being shoved in our faces. Before all this bought-and-paid-for publicity, your average dummy had no idea what code was or did.

Basically, the non-tech people who know nothing about coding or how computers work are now amazed because they (think they) discovered what coding is because of the AI hype.

[–] infinitevalence@discuss.online -3 points 14 hours ago (1 children)

Yes but now they are accessible to anyone.

[–] Carnelian@lemmy.world 6 points 14 hours ago (1 children)

Your perspective is actually completely backwards on this

This process has always been accessible to everyone. You’d google basically the same words you typed into your prompt and it would bring you directly to the same block of code that everybody uses.

AI on the other hand is currently temporarily being made available for free or low cost because they are actively trying to create a cohort of users who impulsively “just reach for AI” as their first step to solving every problem.

In a few years you may find yourself praising how “accessible” it is because they occasionally run offers for a week of subscription time for $40 instead of the usual rate of $189.99/mo for entry level access. You may find yourself wondering how people ever lived without it

[–] infinitevalence@discuss.online 1 points 5 hours ago

I run most of my AI local on my GPU only on rare moments do I bother with commercial systems.

[–] Liketearsinrain@lemmy.ml 1 points 9 hours ago (1 children)

This is a few lines of Python and tesseract, or one of the fancier OCR libraries based on neural networks (so, AI, but not in the sense the term is used nowadays).
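A hedged sketch of the pure-Python half of that claim. The `title_from_text` helper and its naming scheme are invented for illustration; the OCR text it consumes would come from something like pytesseract's `image_to_string` (which requires the tesseract binary to be installed), so that call is only referenced in a comment here:

```python
def title_from_text(text: str, max_words: int = 8) -> str:
    """Derive a filename-friendly title from the first non-empty line of OCR output.

    Upstream, `text` would come from e.g.:
        text = pytesseract.image_to_string(Image.open("scan_001.png"))
    """
    # Take the first line that actually contains characters.
    first_line = next((line.strip() for line in text.splitlines() if line.strip()),
                      "untitled")
    # Keep a handful of words, lowercase them, strip trailing punctuation.
    words = first_line.split()[:max_words]
    return "_".join(word.lower().strip(".,:;") for word in words)
```

So the commenter's point stands either way: the glue logic around the OCR engine is short and mundane, with or without an LLM.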

[–] infinitevalence@discuss.online 2 points 5 hours ago

For sure but I don't know Python. I can edit a script but I'm not proficient enough to make a new one.

[–] bridgeenjoyer@sh.itjust.works 2 points 14 hours ago (1 children)

I understand that and good for you for finding a use case.

However, a conventional program could have done that exact same thing; it's just marginally easier to do it the way you did. Probably not repeatable, though, and if the model you used gets the plug pulled, you're back to square one.

[–] infinitevalence@discuss.online 0 points 14 hours ago

Can't pull the plug; the model is local and running on my GPU. So I could break it if I wanted to, but I'm not dependent on someone else.

[–] bridgeenjoyer@sh.itjust.works 9 points 15 hours ago (1 children)

OK, so that's what the boosters keep saying. However, every task I've given it had a very clear, exact set of instructions, and when I comb through the output, it still hallucinates. I can try talking to it like a six-year-old, and it's going to forget what I told it the next day and hallucinate again.

Meanwhile I burned up a shit-ton of power for zero benefit and wasted my time. Worst case, someone who isn't detail-oriented like me is going to fuck up a lot of work.

However, when I give it something I don't know how to do or don't know the answer to, it does a great job and is so smart! Amazing how that works.

[–] infinitevalence@discuss.online 1 points 15 hours ago (1 children)

Your experience and feelings are totally valid; it's still a shit show and in most cases a massive waste of power, water, brain power, and money. I am just saying that with the right training and appropriate use, AI really can be effective NOW. One thing you can do is describe the task and ask the AI to write a prompt to accomplish it. Then test that prompt in a new chat and see if you get better results.

Most AIs will tell you how to improve your prompts.

[–] bridgeenjoyer@sh.itjust.works 4 points 14 hours ago* (last edited 14 hours ago)

I can see where you're coming from and I get it.

I've done all that. I've followed its little BS "hey, to improve, tell me this!!" It still gets it wrong.

It probably is fine for non-detail work and stuff no one really cares about (and if no one cares about it, maybe revisit what your job is). It's definitely not worth the cost. I myself don't enjoy telling a five-year-old the same thing 25 times (and then they forget because they saw a stick), which is exactly what using an LLM is like.

[–] Lodespawn@aussie.zone 8 points 15 hours ago (1 children)

This is a wildly niche task. You also can't trust that the executive summaries or indexes will be accurate, so you need to read the scanned documents, the executive summaries, and the indexes to review, mark up, and correct them. Your response will be "but it's good enough", but is it? You wouldn't trust it to write a resume, but you would trust it to write an executive summary of a much longer text that you rely on to search for information?

[–] infinitevalence@discuss.online 0 points 15 hours ago (1 children)

Fair criticism, because it is "good enough", and it saved me so much time because I only needed detailed information on a few of the documents. But that is the point: it takes a repetitive, bulk, niche task and lets YOU quickly and simply create an automation that meets your needs.

Another example: I have been having an AI read a resume and do an ATS review based on a job posting. It then generates a fit report and makes suggestions on how the resume can be adjusted to better get past the ATS to an actual person. This is very useful for getting callbacks on job applications. That has real value.

[–] Lodespawn@aussie.zone 8 points 15 hours ago (1 children)

How do you know it hasn't missed or misinterpreted key information from some of the documents you might have needed? The AI won't complete the same task the same way on every iteration, so the automation is only verifiable if you check it every time.

Did you get any of the jobs?

[–] infinitevalence@discuss.online 0 points 15 hours ago (1 children)

Put in an application yesterday, got a call back today. Rest is up to me.

No, I don't know if it missed something, but I'm not relying on the executive summary for perfect detail; I'm using it to know which document is which, and then using the index to jump to points in the PDF.

[–] Lodespawn@aussie.zone 1 points 9 hours ago

Congrats on the call back!