[–] LostWanderer@fedia.io 28 points 4 days ago (3 children)

Given that experts had already warned of the dangers and it happened anyway, it's tragic that the slop apps are still a thing. There's a reason programming is a profession: it's complex and has a lot of moving parts. LLM slop is never going to approach the art, skill, or reasonable security that a sane programmer can deliver.

[–] SleeplessCityLights@programming.dev 7 points 3 days ago (1 children)

I predict that the slopicalypse will hit in 2027 and all of the corporations that jumped head first into the still-filling swimming pool will hit bottom. Using an LLM to code only makes things worse. It's a fucking entropy injector. You can't continuously add entropy without hitting a point where the code just isn't cohesive anymore, and LLMs get really bad when the context is too large.

[–] LostWanderer@fedia.io 4 points 3 days ago (1 children)

I hope the slopicalypse does hit in 2027, because there is no way to escape technical debt unless you build software with smart people carefully maintaining it as they add features. LLMs could never do that, and they will cause the worst tech disasters. I cannot wait to see the aftermath of all these corporations fucking around and finally finding out how stupid they really are!

It already happened to a project at my work. Nobody understands enough of the code base, or can make sense of it, to be able to add features. It is the buggiest fucking thing ever, which makes LLM debugging an endless exercise in finding more bugs. It also means we can't prompt an LLM effectively to make targeted changes. The only thing left is letting an agent fuck shit up worse by running it with a vague prompt. We don't know what to do. It cost a lot to make, counting man-hours, and the traditional software development mentality hates throwing something away completely.

[–] KairuByte@lemmy.dbzer0.com 3 points 3 days ago

I do think machine learning (likely not an LLM) will get to the point of being able to reliably code a lot of things. But I don’t think it’s going to happen in the near future.

[–] MangoCats@feddit.it 8 points 4 days ago (3 children)

You know what helps? After you've coded something that works, whether by "vibe coding" or the old-fashioned way, review it for security issues. "Vibe code reviews" performed by the same LLM tools that do the "vibe coding" can be even more effective at finding issues than traditional methods.

But, just like real people, if you don't bother to care about security, you'll have holes.
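
To make that concrete, here's a minimal sketch of the kind of hole a security review pass (human or LLM) tends to flag, with the reviewed fix next to it. The function and names are hypothetical, made up for illustration, not output from any real tool:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical handler: the classic hole a review pass flags. */
void greet_user(const char *name) {
    char buf[32];
    strcpy(buf, name);   /* flagged: no bounds check; overflows if name is >= 32 bytes */
    printf("hello, %s\n", buf);
}

/* Reviewed fix: bound the copy and guarantee NUL termination. */
void greet_user_fixed(const char *name) {
    char buf[32];
    snprintf(buf, sizeof buf, "%s", name);   /* truncates instead of overflowing */
    printf("hello, %s\n", buf);
}
```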

[–] its_kim_love@lemmy.blahaj.zone 3 points 3 days ago (1 children)

So it makes more mistakes but catches slightly fewer mistakes? Sounds effective.

[–] MangoCats@feddit.it 1 points 2 days ago (1 children)

It's a different approach: you don't abandon best practices, but this new tool gives you information that was previously more difficult or costly to access, so use it too.

[–] its_kim_love@lemmy.blahaj.zone 1 points 2 days ago (1 children)

But if you don't already have that information, how can you trust that the information is correct?

[–] MangoCats@feddit.it 2 points 2 days ago

There are things an LLM can show you that are undeniably correct, like: this line of code here dereferences a pointer which might be NULL, and in fact will be NULL if you follow this path through the code: ...
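
For instance, a hypothetical sketch of that kind of finding (not from any real report; the names are made up):

```c
#include <stdlib.h>
#include <string.h>

/* If malloc fails, `copy` is NULL and the strcpy dereferences it. */
char *duplicate(const char *src) {
    char *copy = malloc(strlen(src) + 1);
    strcpy(copy, src);   /* flagged: copy is NULL on the malloc-failure path */
    return copy;
}

/* Once the path is pointed out, the check and fix are easy to verify. */
char *duplicate_fixed(const char *src) {
    char *copy = malloc(strlen(src) + 1);
    if (copy == NULL)    /* handle allocation failure before touching copy */
        return NULL;
    strcpy(copy, src);
    return copy;
}
```

The claim "copy can be NULL at that strcpy" is checkable by reading two lines, no trust in the tool required.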

Think of it like NP-hard problems: there are problems where a solution is hard to find but easy to verify once you are given it.

When an LLM gives you those hard-to-find, easy-to-verify observations, that's value. It doesn't have to be perfect, and it doesn't have to be 100% complete.

Or, you can hire a team of engineers to burn their brains for months on end to maybe find the same things, maybe not.

There's a problem with both human attention spans and LLM context windows: neither is up to the task of reviewing a full code base the size of a browser and finding all the flaws. But if an LLM can surface flaws that humans haven't been able to find, you should take those wins before somebody else finds them and puts them to different uses.

[–] LostWanderer@fedia.io 2 points 3 days ago

LOL okay...If you say so.

[–] KairuByte@lemmy.dbzer0.com 1 points 3 days ago (1 children)

If you (or the LLM) didn’t know enough to prevent the security issue, how exactly are you (or the LLM) going to know to look for it during a review?

[–] MangoCats@feddit.it 1 points 2 days ago

Nobody, and no LLM, knows everything. The LLMs know some things: https://arstechnica.com/information-technology/2026/05/mozilla-says-271-vulnerabilities-found-by-mythos-have-almost-no-false-positives/

Best to take the best information available from all sources; the attackers certainly are.