this post was submitted on 06 Jan 2026
392 points (97.1% liked)

Programming

[–] zaphod@sopuli.xyz 126 points 1 week ago (8 children)

Writing code with an AI as an experienced software developer is like writing code by instructing a junior developer.

[–] clif@lemmy.world 15 points 6 days ago

Without the payoff of the next generation of developers learning.

Management: "Treat it like a junior dev"

... So where are we going to get senior devs if we're not training juniors?

[–] BradleyUffner@lemmy.world 90 points 1 week ago* (last edited 1 week ago) (39 children)

... That keeps making the same mistakes over and over again because it never actually learns from what you try to teach it.

[–] zaphod@sopuli.xyz 48 points 1 week ago (3 children)

Yep, the junior is capable of learning.

[–] aport@programming.dev 3 points 6 days ago

My workplace believes the solution to this is a 7,000-line agents.md file

[–] InternetCitizen2@lemmy.world 18 points 1 week ago (1 children)

Wait till I get hired as junior

[–] Clent@lemmy.dbzer0.com 0 points 6 days ago

Yeah, not everyone who enters the industry should be doing so.

Most of this was boomers being boomers, claiming anyone and everyone should code.

[–] 30p87@feddit.org 4 points 1 week ago

Sometimes. And if they're not, they'll be replaced or replace themselves.

[–] GammaGames@beehaw.org 17 points 1 week ago (1 children)

Apparently some people would love to manage a fleet of virtual junior devs instead of coding themselves, I really don’t see the appeal.

[–] pinball_wizard@lemmy.zip 12 points 1 week ago (1 children)

I think the appeal is that they already tried to learn to code and failed.

Folks I know who are really excited about vibe coding are the ones who are tired of not having access to a programmer.

In some of their cases, vibe coding is a good enough answer. In other cases, it is not.

Their workplaces get to find out later which cases were which.

[–] Zos_Kia@lemmynsfw.com 4 points 1 week ago (1 children)

Funny, because my experience is completely the reverse. I've seen a ton of medium-level developers just use Copilot-style autocomplete without really digging into new workflows, and on the other end really experienced people spinning up agents in parallel and getting a lot of shit done.

The "failed tech business people" are super hyped for ten minutes when cursor gives them a static html page for free, but they quickly grow very depressed when the actual work starts. Making sense of a code base is where the rubber meets the road, and agents won't help if you have zero experience in a software factory.

[–] OldMrFish@lemmy.one 0 points 6 days ago

That's the funny thing. I definitely fall into the 'medium-level' dev group (coding is my job, but I haven't written a single line of code in my spare time for years), and frankly, I really like Copilot. It's like standard code completion on steroids. No need to spend excessive amounts of time describing the problem and reviewing a massive blob of dubious code; instead you get short-ish snippets of easily reviewed code based on the current context.

Everyone seems to argue against AI as if vibe coding is the only option and you have to spend time describing every single task, but I've changed literally nothing in my normal workflow and get better and more relevant code completion results.

Obviously having to describe every task in detail taking edge cases into account is going to be a waste of time, but fortunately that's not the only option.

[–] folekaule@lemmy.world 14 points 1 week ago (1 children)

Very true. I've been saying this for years. However, the flip side is that you get the best results from AI by treating it as a junior developer as well. When you do, you can in fact have a fleet of virtual junior developers working for you as a senior.

However, and I tell this to the juniors I work with: you are responsible for the code you put into production, regardless of whether you wrote it yourself or used AI. You must review what it creates because you're signing off on it.

That in turn means you may not save as much time as you think, because you have to review everything, and you have to make sure you understand everything.

But understanding will get progressively harder the more code is written by other people or AI. It's best to try to stay current with the code base as it develops.

Unfortunately this cautious approach does not align with the profit motives of those trying to replace us with AI, so I remain cynical about the future.

[–] AnyOldName3@lemmy.world 17 points 1 week ago (1 children)

Usually, having to wrangle a junior developer takes a senior more time than doing the junior's job themselves, and the problem grows the more juniors they're responsible for, so having LLMs simulate a fleet of junior developers will be a massive time sink and not faster than doing everything themselves. With real juniors, though, this can still be worthwhile, as eventually they'll learn, require much less supervision, and become a net positive. LLMs do not learn once they're deployed, though, so the only way they get better is if a cleverer model is created that can simulate a mid-level developer, and so far the diminishing returns of progressively larger and larger models make it seem pretty likely that something based on LLMs won't be enough.

[–] folekaule@lemmy.world 3 points 1 week ago* (last edited 1 week ago) (1 children)

I'm a senior working with junior developers, guiding them through difficult tasks and delegating work to them. I also use AI for some of the work. Everything you say is correct.

However, that doesn't stop a) some seniors from spinning up several copies of an AI and treating them like a group of juniors, and b) management from seeing this as a way to cut personnel.

I think denying these facts as a senior is just shooting yourself in the foot. We need to find the most productive ways of using AI or become obsolete.

At the same time we need to ensure that juniors can develop into future seniors. AI is throwing a major wrench in the works of that, but management won't care.

Basically, the smart thing to do is to identify where AI, seniors, and juniors all fit in. I think the bubble needs to pop before that truly happens, though. Right now the people holding the purse strings are too excited about cutting costs and salaries. Until AI companies start trying to actually make a profit, that won't happen.

[–] AnyOldName3@lemmy.world 5 points 1 week ago (1 children)

If LLMs aren't going to reach a point where they outperform a junior developer who needs too much micromanaging to be a net gain to productivity, then AI's not going to be a net gain to productivity, and the only productive way to use it is to fight its adoption, much like the only way to productively use keyboards that had a bunch of the letters missing would be to refuse to use them. It's not worth worrying about obsolescence until such a time as there's some evidence that they're likely to be better, just like how it wasn't worth worrying about obsolescence yet when neural nets were being worked on in the 80s.

[–] folekaule@lemmy.world 3 points 1 week ago (1 children)

You're not wrong, but in my personal experience AI that I've used is already at the level of a decent intern, maybe fresh junior level. There's no reason it can't improve from there. In fact I get pretty good results by working incrementally to stay within its context window.

I was around for the dotcom bubble and I expect this to go similarly: at first there is a rush to put AI into everything. Then they start realizing they have to actually make money and the frivolous stuff drops by the wayside and the useful stuff remains.

But it doesn't go away completely. After the dotcom bust, the Internet age was firmly upon us, just with less hype. I expect AI to follow a similar trend. So, we can hope for another AI winter or we can figure out where we fit in. I know which one I'm doing.

[–] AnyOldName3@lemmy.world 9 points 1 week ago (1 children)

There's a pretty good reason to think it's not going to improve much. The size of models and amount of compute and training data required to create them is increasing much faster than their performance is increasing, and they're already putting serious strain on the world's ability to build and power computers, and the world's ability to get human-written text into training sets (hence why so many sites are having to deploy things like Anubis to keep themselves functioning). The levers AI companies have access to are already pulled as far as they can go, and so the slowing of improvement can only increase, and the returns can only diminish faster.

[–] folekaule@lemmy.world 5 points 1 week ago

I can only say I hope you're right. I don't like the way things are going, but I need to do what I can to adapt and survive so I choose to not put my hopes on AI failing anytime soon.

By the way, thank you for the thoughtful responses and discussion.

[–] thingsiplay@beehaw.org 4 points 1 week ago

What a wonderful statement.

[–] fluxx@lemmy.world 3 points 1 week ago

Wow, great analogy. Might steal this to use myself.

[–] myfunnyaccountname@lemmy.zip 3 points 1 week ago (1 children)

I get what you are saying, and I agree. But corporations don't give a fuck. As long as they can keep seeing increased profits from it, it's coming. It's not about code quality or time or humans. It's about profits.

[–] UnspecificGravity@piefed.social 8 points 1 week ago (1 children)

Are they though? They've invested like a trillion dollars into this and it doesn't seem any closer to actually making money.

[–] myfunnyaccountname@lemmy.zip 2 points 1 week ago (1 children)

True. The AI vendors are having issues. We all know OpenAI is hemorrhaging money; I think Anthropic is as well. They are all passing money between each other. But software companies, like the one I work for, don't care what those companies are doing. As long as my company can use the services those vendors provide, it's not an issue if the vendors themselves are losing money. Or if software companies can shove out their own AI feature (like the AI in ServiceNow, or how Office 365 is getting some rebranding), all is well and they can brag about having AI to the shareholders.

[–] UnspecificGravity@piefed.social 5 points 1 week ago (1 children)

That'll work right up until the shareholders start hearing "we got AI!" as the equivalent to "we invested in Enron!". I hope they have a plan for that.

[–] Zos_Kia@lemmynsfw.com 1 points 1 week ago

It reminds me of that period when A/B testing was big and everybody and their mother had to at least do some. Never mind that it solved problems we didn't have, it was still a cool thing to say in a meeting lol

[–] Zos_Kia@lemmynsfw.com 0 points 1 week ago

And that's what I don't understand. Instructing a team of juniors works very well; in fact, it has been the predominant way of making software for some time now. Hire a bit more junior than you need, and work them a bit above their pay grade thanks to your experience. That's just business as usual.

So I guess what these studies show is that most engineers are not really good when it comes to piloting juniors, which has been a known fact forever. That's often cited as a reason why most seniors will never make it to staff level.