this post was submitted on 10 May 2026
273 points (98.9% liked)

Technology

[–] MangoCats@feddit.it 8 points 4 days ago (3 children)

You know what helps? After you've coded something that works - whether by "vibe coding" or the old-fashioned way - review it for security issues. "Vibe code reviews" performed by the same LLM tools that do the "vibe coding" can be even more effective at finding issues than traditional methods.

But, just like real people, if you don't bother to care about security, you'll have holes.

[–] its_kim_love@lemmy.blahaj.zone 3 points 3 days ago (1 children)

So it makes more mistakes but catches slightly fewer mistakes? Sounds effective.

[–] MangoCats@feddit.it 1 points 3 days ago (1 children)

It's a different approach: you don't abandon best practices, but this new tool does give you information that was previously more difficult / costly to access - so use it too.

[–] its_kim_love@lemmy.blahaj.zone 1 points 2 days ago (1 children)

But if you don't have that information already how can you trust the information is correct?

[–] MangoCats@feddit.it 2 points 2 days ago

There are things an LLM can show you that are undeniably correct, like: this line of code here dereferences a pointer which might be NULL, and in fact will be NULL if you follow this path through the code: ...

Think of it like NP problems - there are problems where a solution is hard to find, but easy to verify once you are given it.

When an LLM is giving you those hard-to-find, easy-to-verify observations, that's value. It doesn't have to be perfect, and it doesn't have to be 100% complete.

Or, you can hire a team of engineers to burn their brains for months on end to maybe find the same things, maybe not.

There's a problem with both human attention spans and LLMs' context window capacity - neither is up to the task of reviewing a full code base for something like a browser and "finding all the flaws." But if the LLM can surface flaws that humans haven't been able to find, you should be taking those wins - before somebody else finds them and puts them to different uses.

[–] LostWanderer@fedia.io 2 points 3 days ago

LOL okay...If you say so.

[–] KairuByte@lemmy.dbzer0.com 1 points 3 days ago (1 children)

If you (or the LLM) didn’t know enough to prevent the security issue, how exactly are you (or the LLM) going to know to look for it during a review?

[–] MangoCats@feddit.it 1 points 3 days ago

Nobody, and no LLM, knows everything. The LLMs know some things: https://arstechnica.com/information-technology/2026/05/mozilla-says-271-vulnerabilities-found-by-mythos-have-almost-no-false-positives/

Best to take the best information available from all sources - the attackers certainly are.