this post was submitted on 29 Jul 2025
Programming

I thought of this recently (anti llm content within)

The reason a lot of companies/people are obsessed with LLMs and the like is that they can solve some of their problems (or so they think). What I've noticed is that a LOT of the things they try to force the LLM to fix could be solved with relatively simple programming.

Things like better search (SEO destroyed this by design, and Kagi is about the only usable search engine with easy access), organization (use a database), document management, etc.
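To make the "relatively simple programming" point concrete: basic local document search doesn't need a model at all, just an inverted index. A minimal sketch with made-up filenames and contents (all of the data below is invented for illustration):

```python
from collections import defaultdict

# A few invented documents standing in for whatever you want to search.
docs = {
    "notes.txt": "meeting notes about the database migration",
    "todo.txt": "fix the search page and update the docs",
    "readme.md": "simple full text search over local files",
}

# Build an inverted index: word -> set of documents containing it.
index = defaultdict(set)
for name, text in docs.items():
    for word in text.lower().split():
        index[word].add(name)

def search(query):
    """Return the documents containing every word of the query."""
    sets = [index[w] for w in query.lower().split()]
    return sorted(set.intersection(*sets)) if sets else []

print(search("search"))  # files whose text mentions "search"
```

Real tools add stemming, ranking, and phrase queries on top, but the core of "find my documents" is this simple.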

People don't fully understand how it all works, so they try to shoehorn the LLM into doing the work for them (poorly), while learning nothing of value.

[–] HelloRoot@lemy.lol -3 points 1 month ago* (last edited 1 month ago) (2 children)

LLMs are great. You can tell them a problem in words and they figure out what you mean and solve it. You can't ignore the value of that for normal people.

Some recent examples for me:


I was playing a factory-building game and didn't want to build a spreadsheet by hand to figure out the optimal number of each building to place to get a desired output. I told the LLM what I wanted and copy-pasted the wiki entry for each building. It did some differential equations and gave me a result and a spreadsheet, all in under a minute.
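For what it's worth, this kind of building-ratio problem is plain arithmetic once the recipe rates are written down. A sketch with invented rates (none of these numbers come from any real game's wiki):

```python
import math

# Hypothetical recipe rates (items/s per building) -- invented numbers,
# not taken from any real game.
SMELTER_OUT_PLATES = 2.0   # plates/s produced by one smelter
SMELTER_IN_ORE = 3.0       # ore/s consumed by one smelter
MINER_OUT_ORE = 1.5        # ore/s produced by one miner

def buildings_for(target_plates_per_s):
    """Work backwards from the desired output to building counts."""
    smelters = target_plates_per_s / SMELTER_OUT_PLATES
    ore_needed = smelters * SMELTER_IN_ORE
    miners = ore_needed / MINER_OUT_ORE
    # Round up: you can't place a fraction of a building.
    return math.ceil(smelters), math.ceil(miners)

print(buildings_for(30.0))  # (smelters, miners) for 30 plates/s
```

Longer production chains are the same idea applied recursively; the LLM's advantage here is convenience, not capability, which is rather the commenter's point.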

I had to do some math without knowing the underlying concepts. Describing the situation, stating the problem, and giving it all the known values was much easier than reading five Wikipedia articles, figuring out how to break the problem down, which formulas to use for each step, and how to chain them all together.

I recently googled for half an hour, crawling through junk articles and reading 50-page PDFs, none of which contained the detail I wanted, before giving up and asking an AI, then clicking on the source it quoted to verify the reply. Maybe my search terms sucked; maybe I can't ask the right question, because I don't know what I don't know. But the LLM was able to get it.


Are the problems I described already "solved" more computationally efficiently by other means? Absolutely yes!

Will it be faster and easier for me to throw it at an LLM? Also yes!

[–] Kolanaki@pawb.social 4 points 1 month ago (3 children)

And how do you know that the LLM was accurate and gave you the correct information, instead of just making up something entirely novel and telling you what you wanted to hear? Maybe the detail you were searching for could not be found, because it did not actually exist.

[–] Blue_Morpho@lemmy.world 3 points 1 month ago (1 children)

Maybe the detail you were searching for could not be found, because it did not actually exist.

He said he clicked the source it quoted.

Maybe if Google hadn't been enshittifying search for 10 years, AI search wouldn't be useful. But I've seen the same thing. The forced Gemini summary at the top of Google often has source links that aren't anywhere on the first page of Google's own results.

[–] Kolanaki@pawb.social 3 points 1 month ago (1 children)

And how do you know the source is accurate? Having a source doesn't automatically make it accurate. Bullshit can also have sources.

[–] Blue_Morpho@lemmy.world 1 points 1 month ago* (last edited 1 month ago)

The premise of the OP is that classic programming makes AI unnecessary. Getting a bad source from the classic Google search index isn't a problem with AI.

[–] HelloRoot@lemy.lol -1 points 1 month ago* (last edited 1 month ago) (1 children)

First, read my text fully before replying.

But additionally, I have a brain and can use it to double-check:

In example 1, I just built it blindly, because it's a game and it doesn't matter if it's wrong. But it ended up being correct, and I ended up having more fun instead of doing Excel for an hour.

In 2, the math result was not far off from my guesstimate, and I later confirmed it was correct.

In 3, it gave me a source and I read the source. Google did not lead me to that source.

When I let an LLM write code, I read the code, then I test the code. That is where I get the most faults, not in spreadsheets or math or research.

[–] Blue_Morpho@lemmy.world 1 points 1 month ago* (last edited 1 month ago) (1 children)

It's weird how there is such knee-jerk hate for a turbocharged word predictor. You'd think there would have been similar mouth-frothing at on-screen keyboards predicting words.

I see it as a tool that helps sometimes. It's like an electric drill, and craftsmen are screaming, "BUT YOU COULD DRILL OFF-CENTER!!!"

[–] jasory@programming.dev 2 points 1 month ago (1 children)

The commenter more or less admitted that they have no way of knowing whether the algorithm is actually correct.

In your first analogy, it would be as if text predictors pulled words from a thesaurus instead of a list of common words.

[–] Blue_Morpho@lemmy.world 1 points 1 month ago

that they have no way of knowing that the algorithm is actually correct.

He tested it and it was good enough for him. If he had written the code himself, he still wouldn't know whether it was correct and would need to test it. If knowing an algorithm were all it took to write working code, there wouldn't have been any software bugs in all of computing history until AI.

text predictors pulled words

My phone keyboard's text predictor lists three words, and they're frequently wrong. At best it lists three and you have to choose the one right word.
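The "list three words" behaviour being discussed is essentially a frequency table over word pairs. A toy sketch, trained on an invented corpus (the sentences below are made up; real keyboards use far bigger models plus your typing history):

```python
from collections import Counter, defaultdict

# Tiny invented corpus standing in for a keyboard's training data.
corpus = "the cat sat on the mat the cat ate the fish the dog sat on the rug"
words = corpus.split()

# Bigram counts: for each word, how often each word follows it.
following = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    following[prev][nxt] += 1

def top3(prev):
    """Return up to 3 of the most frequent words seen after `prev`."""
    return [w for w, _ in following[prev].most_common(3)]

print(top3("the"))  # three most common words after "the"
```

That the suggestions are "frequently wrong" falls straight out of the model: it only knows what usually follows a word, not what you actually mean, which is also why scaling the same idea up to an LLM doesn't by itself guarantee correctness.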

[–] bridgeenjoyer@sh.itjust.works 3 points 1 month ago (1 children)

It's a good tool in some cases. But I think the general lack of understanding of how it works and of its shortcomings is going to cause many issues in the coming years.

[–] Blue_Morpho@lemmy.world 2 points 1 month ago (1 children)

That's been true ever since the first graduates came out knowing COBOL instead of assembly. Everything keeps getting more bloated and buggy.

[–] bridgeenjoyer@sh.itjust.works 2 points 1 month ago (1 children)

I wish I could go back and learn all the old ways, but no one teaches that now. I hate learning things the new way, with all the shortcuts and bloat everything has now.

[–] Blue_Morpho@lemmy.world 2 points 1 month ago (2 children)

There are lots of assembly-programming YouTubers. My way of scratching that itch is Arduino / ESP32. The toolchain is all C code, but it's so stripped down there's not even an OS. It's just your code on the hardware.

[–] bridgeenjoyer@sh.itjust.works 1 points 3 weeks ago

I do love the little Arduinos and Pis, but I can't think of any application I'd need one for.

Also, it's just kind of a bummer: since AI can do all the coding, there's not much purpose in me learning it all from scratch. Back in the day, you HAD to learn that way, and I much prefer that. Everything is much too easy now, and I think humanity is going to see the result of that in 15 years with the up-and-coming generation.
