this post was submitted on 12 Jul 2025
357 points (95.4% liked)

Programming

[–] HaraldvonBlauzahn@feddit.org 9 points 1 day ago (1 children)

Now the interesting question is what it really means when less experienced programmers think they are 100% faster.

[–] dil@lemmy.zip 11 points 1 day ago

Would AI coders even get faster over time, or just stay stagnant, since they aren't learning anything about what they're doing?

[–] Ptsf@lemmy.world 10 points 1 day ago (1 children)

People are bad judges of their own skill and over-rely on tools and assistants when present. See also: car ADAS systems making drivers less skillful. More news at 11.

[–] boonhet@sopuli.xyz 3 points 1 day ago

See also: car ADAS systems making drivers less skillful.

But also making traffic safer

Think we need to introduce a mandatory period where you have to drive an old car with no ABS right after getting your license. For me that period was called being a broke-ass student, but nowadays cars without ABS are starting to cost more than cars with ABS, traction control, and even ESP. The 80s and early 90s cars where those things were optional are now classics, whereas you can get a BMW or Audi made this century for like 500-800 euros if you're brave or just want to move into your garage full time.

[–] Scrath@lemmy.dbzer0.com 38 points 2 days ago (2 children)

I talked to Microsoft Copilot 3 times for work-related reasons because I couldn't find something in the documentation. I was lied to all 3 times: it either made stuff up about how the thing I asked about works, or invented entirely new configuration settings.

[–] rozodru@lemmy.world 17 points 2 days ago (2 children)

Claude AI does this ALL the time too. It NEEDS to give a solution; it can rarely say "I don't know", so it will just make up a solution it thinks is right without actually checking whether that solution exists. It will dream up programs or libraries that don't exist and never have, OR it will tell you something can do a thing it has never been able to do.

And that's just how all these LLMs have been built: they MUST provide a solution, so they all lie. They've been programmed this way to ensure maximum profits. GitHub Copilot is a bit better because it sits with me in my code, so its suggestions actually work most of the time; it can see the context and what's around it. Claude is absolute garbage, MS Copilot is about the same caliber if not worse than Claude, and ChatGPT is only good for content writing or bouncing ideas off of.

[–] Croquette@sh.itjust.works 24 points 2 days ago (8 children)

LLMs are just sophisticated text-prediction engines. They don't know anything, so they can't produce an "I don't know": they can always generate a next prediction, and they can't think.
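A toy sketch of that point (this is illustrative pseudocode of the decoding step, not any real inference engine): generation is just picking from a score distribution over tokens, so *some* token always comes out. "I don't know" is only ever another string of tokens, never a built-in abstain state.

```c
#include <stddef.h>

/* Greedy decoding over a vocabulary of logits: the output space contains
 * only token indices. There is no "abstain" value to return, so a model
 * always produces an answer, confident-sounding or not. */
size_t pick_next_token(const double *logits, size_t vocab_size) {
    size_t best = 0;
    for (size_t i = 1; i < vocab_size; i++) {
        if (logits[i] > logits[best]) {
            best = i; /* highest-scoring token wins, however low the score */
        }
    }
    return best;
}
```

Even when every logit is tiny (the model is "unsure"), the argmax still yields a token, which is the mechanical reason a refusal has to be trained in rather than being a natural output.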

[–] fuzzzerd@programming.dev 2 points 1 day ago

Are you using Claude web chat or Claude Code? Because my experience with them is vastly different even when using the same underlying model. Claude Code isn't perfect and gets stuff wrong, but it can run the project, check the output, realize its mistake, and fix it in many cases. It doesn't fix logic flaws, but it does fix hallucinations of library methods that don't exist.

[–] Senal@programming.dev 4 points 2 days ago* (last edited 7 hours ago)

In fairness, the MSDN documentation is prone to this too.

By "this" I mean having what looks like a comprehensive section about the thing you want, except the actual information you need isn't there, and you have to read the whole thing to find that out.

[–] Kolanaki@pawb.social 6 points 1 day ago (2 children)

🎵Had a great day out,

Callin' my name like Ferris Bueller,

Time to wrap this up,

I'm getting 19% slower! 🎵

[–] Doc_Crankenstein@slrpnk.net 2 points 1 day ago (1 children)

I am honestly shocked to see a reference in the wild to Ken Ashcorp.

[–] Kolanaki@pawb.social 2 points 1 day ago (1 children)

I'm honestly shocked that multiple people got the reference.

[–] Flatfire@lemmy.ca 4 points 1 day ago (1 children)

Well that's a strangely deep cut Ken Ashcorp ref

[–] levzzz@lemmy.world 2 points 1 day ago

Men of culture, I see.

[–] Arghblarg@lemmy.ca 75 points 2 days ago (2 children)

I feel this -- we had a junior dev on our project who started using AI for coding, without management approval BTW (it was a small company and we didn't yet have a policy specifically for it. Alas.)

I got the fun task, months later, of going through an entire component that I'm almost certain was 'vibe coded'. It "worked" the first time the main APIs were called, but leaked and crashed on subsequent calls. It used double- and even triple-pointers to data structures which, per a casual reading of the API vendor's documentation, could all have been declared statically and reused (this was an embedded system); needless arguments; and mallocs and frees everywhere for no good reason, all stemming from the unneeded dynamic storage behind those double/triple pointers. It was a horrible mess.
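A minimal sketch of the pattern described above (the names and struct are hypothetical, not the actual vendor API): the vibe-coded style heap-allocates through a pointer-to-pointer on every call, which leaks the moment any caller forgets the matching frees, while the statically declared version has nothing to leak.

```c
#include <stdlib.h>
#include <string.h>

typedef struct { int id; char buf[64]; } DevCtx; /* hypothetical API context */

/* The vibe-coded pattern: a fresh double-pointer plus heap block per call.
 * Two allocations that must both be freed, on every single call. */
DevCtx **ctx_open_dynamic(void) {
    DevCtx **pp = malloc(sizeof *pp);
    if (!pp) return NULL;
    *pp = calloc(1, sizeof **pp);
    if (!*pp) { free(pp); return NULL; }
    return pp;
}

/* What the vendor docs allowed: one statically allocated context,
 * reset and reused across calls. No heap, nothing to leak. */
DevCtx *ctx_open_static(void) {
    static DevCtx ctx;
    memset(&ctx, 0, sizeof ctx);
    return &ctx;
}
```

On an embedded target the static version is also deterministic: no fragmentation, no out-of-memory path to handle at runtime, which is usually why such vendors document the static option in the first place.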

It should have never gotten through code review, but the senior devs were themselves overloaded with work (another, separate problem) ...

I took two days and cleaned it all up, much simpler, no mem leaks, and could actually be, you know, used more than once.

Fucking mess, and LLMs (don't call it "AI") just allow the lazy and/or inexperienced to skate through short-term tasks, leaving huge technical debt for those who have to clean up after them.

If you're doing job interviews, ensure the interviewee is not connected to LLMs in any way and make them write the code themselves. No exceptions. Consider blocking LLM services on your corp network as well, and ban locally installed tools like Ollama.

[–] jonathan7luke@lemmy.zip 13 points 1 day ago

It should have never gotten through code review, but the senior devs were themselves overloaded with work

Ngl, as much as I dislike AI, I think this is really the bigger issue. Hiring a junior and then merging their contributions without code review is a disaster waiting to happen, with or without AI.

[–] umbraroze@slrpnk.net 11 points 2 days ago* (last edited 2 days ago)

It used double- and even triple-pointers to data structures

(old song, to the tune of My Favourite Things)

🎶 "Pointers to pointers to pointers to strings,
this code does some rather unusual things...!"
🎶

[–] _cnt0@sh.itjust.works 33 points 2 days ago (2 children)

I'll quote myself from some time ago:

The entire article is based on the flawed premise that "AI" would improve the performance of developers. From my daily observation, the only people increasing their throughput with "AI" are inexperienced and/or bad developers. So: create terrible code faster with "AI". Suggestions by Copilot are >95% garbage (even for trivial stuff), just slowing me down in writing proper code (obviously I disabled it precisely for that reason). And I spend more time on PRs filtering out the "AI" garbage inserted by juniors and idiots.

"AI" is killing the productivity of the best developers even if they don't use it themselves, decreases code quality leading to more bugs (more time wasted), and reduces maintainability (more time wasted). At this point I assume ignorance and incompetence of everybody talking about benefits of "AI" for software development. Oh, you have 15 years of experience in the field and "AI" has improved your workflow? You sucked at what you've been doing for 15 years, and "AI" increases the damage you are doing, which later has to be fixed by people who are more competent.

[–] Kissaki@programming.dev 1 points 1 day ago (1 children)

from some time ago

It's a fair statement of personal experience, but the question is: does this change with tool changes and user experience? That's what makes studies like the OP important.

Your >95% garbage claim may very well be an isolated issue due to the tech, the libraries, LLM usage patterns, or whatnot. And it may change over time, with different models or tooling.

[–] _cnt0@sh.itjust.works 1 points 1 day ago

At this point I assume ignorance and incompetence of everybody talking about benefits of "AI" for software development.

[–] daniskarma@lemmy.dbzer0.com 28 points 3 days ago* (last edited 3 days ago) (2 children)

The study was centered on bugfixing large established projects. This task is not really the one that AI helpers excel at.

Also, the small number of participants (16), the participants' familiarity with the code base, and the relatively short completion times of all the tasks can skew the results.

Hence the divergence between the study results and many people's personal experience of increased productivity: they are doing different tasks in a different scenario.

[–] 6nk06@sh.itjust.works 44 points 2 days ago (3 children)

The study was centered on bugfixing large established projects. This task is not really the one that AI helpers excel at.

"AI is good for Hello World projects written in JavaScript."

Managers will still fire real engineers though.

[–] Feyd@programming.dev 21 points 2 days ago (1 children)

familiar with the code base

Call me crazy but I think developers should understand what they're working on, and using LLM tools doesn't provide a shortcut there.

[–] Phen@lemmy.eco.br 21 points 2 days ago (2 children)

Reading the paper, AI did a lot better than I would expect. It showed experienced devs working on a familiar code base got 19% slower. It's telling that they thought they had been more productive, but the result was not that bad tbh.

I wish we had similar research for experienced devs on unfamiliar code bases, or for inexperienced devs, but those would probably be much harder to measure.

[–] staircase@programming.dev 16 points 2 days ago (2 children)

I don't understand your point. How is it good that the developers thought they were faster? Does that imply anything at all in LLMs' favour? IMO that makes the situation worse because we're not only fighting inefficiency, but delusion.

20% slower is substantial. Imagine the effect on the economy if 20% of all output were discarded (or, more accurately, spent burning electricity).

[–] Phen@lemmy.eco.br 7 points 2 days ago

I'm not saying it's good, I'm saying I expected it to be even worse.

[–] FizzyOrange@programming.dev 2 points 1 day ago

Does that imply anything at all in LLMs’ favour?

Yes, it suggests lower cognitive load.

[–] vrighter@discuss.tchncs.de 9 points 2 days ago

Even a 1% slowdown would be pretty bad: you'd do better just not using it. 19% is huge!

[–] SugarCatDestroyer@lemmy.world 4 points 2 days ago* (last edited 2 days ago)

It's hard to even call them specialists. They're at the level of cashiers, for whom the computer does everything; at most they handle talking to the clients. I'm certainly not a professional, but I think the main message is clear.

[–] WoodScientist@sh.itjust.works 21 points 3 days ago* (last edited 3 days ago) (6 children)

Don’t give yourselves to these unnatural men - machine men with machine minds and machine hearts! You are not machines! You are men!
