this post was submitted on 08 Mar 2026
233 points (96.8% liked)

Ask Lemmy


I'm a software developer in Germany and work for a small company.

I've always liked the job, but recently I'm getting annoyed about the ideas certain people have...

My boss (who has some level of dev experience) uses "vibe coding" (as far as I know, this means less human review and letting an LLM produce huge code changes in very short time) as a positive word like "We could probably vibe-code this feature easily".

Someone from management (also with some software development experience) runs internal workshops about a self-built open-code thing with "memory", advanced thinking strategies, planning, and whatever else, connected to many MCP servers and a vector DB, with "skills", a higher token limit, etc. Surprisingly, the people attending the workshops (many of them developers, but not only) usually end up convinced by it, writing that it improved their efficiency a lot, that they will use it, and that it changed their perspective.

Our internal slack channels contain more and more AI-written posts, which makes me think: Thank you for throwing this wall of text on me and n other people. Now, n people need to extract the relevant information, so you are able to "save time" not writing the text yourself. Nice!!!

I see Microsoft announcing that 30% of their code is written by AI, which in my opinion is advertisement and an attempt to pressure companies into subscribing to OpenAI. Now, my company seems to not even target that 30%, but 100%?

To be clear: I see some potential for AI in software development: auto-completion, locating a bug in a code base, writing prototypes, etc. "Copilot" is actually a good word, because it describes the person next to the pilot. I don't think the technology is ready for what they are attempting (being the pilot). I've seen the studies questioning how large the benefit of AI actually is.

For sure, one could say "You are just a developer afraid of losing their job / losing what they like to do", and maybe that's partially true... AI has brought a lot of change. But I also don't want to deal with a code base that was mainly written by non-humans, in case the non-humans fail to fix a problem...

My current strategy is "I use AI how and when ->I<- think it's useful", but I'm not sure how much longer that will work...

Similar experiences here? What do you suggest? (And no, I'm currently not planning to leave. Not bad enough yet...).

[–] Evotech@lemmy.world 2 points 2 hours ago* (last edited 2 hours ago) (1 children)

You should try it yourself. See what it can do and what it can't, instead of just arguing with your boss about something you don't really know much about.

And then you can actually bring the facts to your boss.

[–] flying_sheep@lemmy.ml 2 points 20 minutes ago

Yeah, you'll soon see the boundaries of what it's useful for and will then be able to make informed decisions.

Vibe-coding UI for a one-off questionnaire? Why not. Vibe-coding something you need to maintain? Oof.

[–] Cethin@lemmy.zip 10 points 13 hours ago (1 children)

I'm starting to form a conspiracy theory that the "let AI write the email" concept is, in itself, an ad for AI. Not for people writing them (they are easy to convince), but now the people reading them have a bunch of bullshit to deal with. The best tool is an LLM summary to undo the LLM bullshit. They get double the usage from people (well, if the manager gets many subordinates to do this, it's well more than double), and nothing of value was added.

[–] derpgon@programming.dev 3 points 5 hours ago

Joke's on em, I don't read work emails. Partially because I refuse to dedicate any time of the day to using Outlook, especially in a web browser, because the oh-so-wise IT department doesn't allow us to use a different client, and as I can't use Outlook on Linux, fuck em.

And no, IMAP or POP3 are not available. Trying to log in via Thunderbird just triggers a message to contact the IT dept to allow me to use it. It's Teams or nothing.

[–] Nibodhika@lemmy.world 12 points 15 hours ago

Honestly, this is what I would do in your situation:

  1. Update your resume, start responding to LinkedIn messages, and possibly start looking around, at least passively.
  2. Take those workshops for LLMs, there might be useful stuff to learn there, auto-completion, code search and examples of how to use certain features are very good uses of LLMs.
  3. Don't be overly vocal about it, but point at issues when you see them. E.g. for those large messages you're expected to read, point out how they're way longer than they need to be and how an LLM summary of them said the wrong thing (even better if you have an actual example of this, for instance by invoking a TLDR bot or something similar on those messages every time they come up).
  4. Look at code that was vibe-coded in areas you're working on and start creating tickets for the issues you see. Unless they're vetting everything the LLM produces (which would be slower than writing it yourself), there will be issues there; start documenting them. The thing most managers and other "AI enthusiasts" don't get is that LLMs are trained on Stack Overflow and thousands of random GitHub projects written by inexperienced devs, so they have thousands of bad or incomplete examples for every good one. This means they end up not doing things like verifying you're logged in to use an API, sanitizing SQL queries, etc. When you ask how to do something on Stack Overflow, you get an answer that is not meant to be used literally: something like `query = f"SELECT * FROM {table_name}"` is an okayish example of how to build queries with validated data, but a TERRIBLE one to use with user-provided data. The LLM doesn't know that; it just copy-pastes the code that gets things from a table wherever it needs it.
  5. Prepare yourself: using LLMs to write code has a short lifespan in most companies, but the damage takes twice as long to clean up. If you stay you will be seen as the naysayer and might even get fired for it, but eventually this will blow up so gigantically that they'll start to regulate or even ban LLMs. And then there will be lots of garbage to clean up. In your shoes I might look elsewhere while it's still possible, as I wouldn't want to be associated with a company that had all of its data leaked or similar, because if they're using vibe code in prod, it's a matter of when.
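The SQL point above can be made concrete. Here's a minimal sketch using Python's built-in sqlite3; the table, rows, and injection payload are invented for illustration, but they show why interpolating user input into the query string goes wrong and what parameter binding does instead:

```python
import sqlite3

# Tiny in-memory database standing in for a real one.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

def find_user_unsafe(name: str) -> list:
    # BAD: user input interpolated straight into the SQL string.
    query = f"SELECT name FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name: str) -> list:
    # GOOD: a ? placeholder lets the driver bind the value safely.
    return conn.execute("SELECT name FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR 1=1 --"
print(find_user_unsafe(payload))  # every row leaks: the payload rewrites the query
print(find_user_safe(payload))    # no rows: the payload is treated as a literal name
```

The unsafe version is exactly the kind of code an LLM will happily copy-paste into a request handler, because in its training data it was usually shown with trusted inputs.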
[–] Etterra@discuss.online 21 points 20 hours ago

Step 1, update your resume.

Step 2, follow your boss' instructions until it all breaks.

Step 2.5, document everything so he can't blame you later.

Step 3, go have a beer; you don't get paid enough to give a shit.

[–] Lumisal@lemmy.world 15 points 23 hours ago (1 children)

Plant the seed of using OpenClaw, but make sure you get no credit for it. Once it's taken root, back up as much as you can of everything.

Wait for OpenClaw to inevitably self-implode. Panic happens. Point out that this is why you didn't trust AI, and become the hero by having the backups ready so everything's not destroyed.

People who don't know better like AI until it vibes back and bites them.

[–] x00z@lemmy.world 4 points 14 hours ago

This but without the backups. Then walk out of the building towards the camera when it all explodes behind you.

[–] pinball_wizard@lemmy.zip 13 points 23 hours ago

A popular option is to use vibe code to help run the place into the ground while looking for the next job.

Pretty soon the wave of vibe-shit code is going to be too much to clean up.

Interesting times ahead.

[–] moonshadow@slrpnk.net 42 points 1 day ago (8 children)

Two words: malicious compliance.

[–] 87Six@lemmy.zip 6 points 23 hours ago (1 children)

This.

I've already got my manager to tell me to not use AI on a task. I see this as an absolute win and I'm gunning for more.

He ALWAYS uses AI first when he needs to figure something out. ALWAYS tells us to use AI for the quick start. But when we do it, and it ends up wasting time, somehow it's our fault, and we didn't prompt it properly.

Also, am I mad, or does Cursor (specifically Sonnet) sometimes act dumb on purpose? Sometimes it codes a feature nearly entirely without many issues; other times it seems unable to comprehend that it's using the wrong property in a class. I feel like it's made to make us question each other's ability to use AI tools and cause internal team unrest.

[–] Sunsofold@lemmings.world 1 points 54 minutes ago

Never forget that it isn't thinking, at all. It comprehends nothing. It's just a very big, expensive autocomplete. It didn't understand when it was using the right property; it just rolled its d10000 and got something that fit the requirements, and the time it failed, it rolled outside the desired range. No thought, just numbers.


It's hard to be a contrarian in these kinds of positions (I've been there, and it didn't end well), so I wouldn't be too outspoken, but at the same time, try to innocently point out the issues with this approach, the same as we would for any other programming fad - without making it seem like it's an agenda, of course.

For example, any time teams are looking for feedback - code review, retrospectives, etc. - just point out the flaws on why vibe coding is a bad idea and bring it up casually when the time comes. It doesn't hurt to be honest as long as you don't come off as being an ass about it.

[–] bcgm3@lemmy.world 49 points 1 day ago (1 children)

Our internal slack channels contain more and more AI-written posts, which makes me think: Thank you for throwing this wall of text on me and n other people. Now, n people need to extract the relevant information, so you are able to "save time" not writing the text yourself. Nice!!!

My last software dev employer did this, except with the "voice recording" feature. Instead of composing messages in a text chat (because that takes too long), he'd hit the record button, just talk it out, then send the recording. Easy! Then the team had to download and listen to ~5 minutes of verbal diarrhea, pausing and rewinding for twice that long in an attempt to glean something useful from it. This particular kind of delusion existed before AI.

[–] Michal@programming.dev 12 points 1 day ago (1 children)

This is where AI could be useful. Transcribing and summarising the voice recordings.

[–] nao@sh.itjust.works 9 points 23 hours ago (1 children)

Who's going to be blamed when the summary got it wrong though?

[–] Michal@programming.dev 5 points 14 hours ago* (last edited 14 hours ago)

Everyone. One person is too lazy to write a message, the others can't be bothered to listen to the whole thing 🤷‍♂️

The transcription should be attached to the audio recording, so if the sender cares about it being correct they can comment or add a correction.

[–] OwOarchist@pawb.social 117 points 1 day ago (2 children)

Remind them that copyright cannot be enforced on anything AI-written.

[–] Dojan@pawb.social 56 points 1 day ago* (last edited 1 day ago)

I try to push on the maintenance aspect. Developing something new is easy, and my company does do that, but the group I'm in is primarily doing maintenance on existing software. Bug fixes, feature additions, etc. If we generate applications entirely using LLMs, none of us will be experts on the applications we push to the customers.

They push corpo buzzwords like "responsibility", but who takes responsibility when no one has done the work to begin with? It feels like a liability nightmare, and the idea of sitting there cleaning slopcode just isn't very appealing to me.

[–] dan1101@lemmy.world 7 points 1 day ago

That's going to be a problem, almost like a money laundering scheme. AI can spit out content that's 99% derived from copyrighted content but is itself free of copyright.

[–] jubilationtcornpone@sh.itjust.works 28 points 1 day ago (1 children)

Any idiot can write code. "Vibe coding" is just the new pasting code from Stack Overflow. For that matter, a lot of LLM-generated code probably came from Stack Overflow.

Your value as a developer is not in your ability to rapidly pump out code. Your value is in your ability to design and build complex systems using the tools at your disposal.

As an industry, software engineering has not yet been forced to reckon with the consequences of "vibe coding." The consequences being A.) the increasing number of breaches that will occur due to poor security practices and B.) the completely unmanageable mountain of technical debt. A lot of us have been here before. Particularly on the tech debt front. If you've ever been on a project where the product team continually pushes to release features as fast as possible, everything else be damned, then you know what I mean. Creating new code is easy. Maintaining old code is hard.

Everything starts out great. The team keeps blowing through milestones. Everyone on the business side is happy. Then, a couple years into the project, strange things start happening. It's kind of innocuous at first. Seemingly easy tickets take longer to complete than they used to. The PR change logs get longer and longer. Defect rates skyrocket. Eventually, new feature development grinds to a halt and the product team starts frantically asking, "what the hell is going on?"

A question to which maybe one or two of the more senior devs respond, "Well, uh, we have a lot of technical debt. I mean A LOT. We're having to spend tons of time refactoring just to make minor changes. And of course, unplanned refactoring tends to introduce bugs."

The product team gets an expression on their face like Wile E. Coyote as the shadow of a falling ACME anvil closes in around him. At this moment, they have two choices: Option A.) develop a plan to mitigate the existing tech debt and realign the dev team's objectives to help prevent this situation again by focusing on quality over quantity. Option B.) ignore the problem and try to ram feature development back on track by sheer force of will.

Only one of these options will achieve a meaningful outcome, and it's not "B". Unfortunately, in my experience that's often the chosen option. The product team does not understand that while Option A impedes feature development, it's only temporary. Option B impedes feature development permanently.

We're going to see a very similar cycle with vibe coding. It just takes time to materialize. Personally, I think the tech debt for vibe-coded projects will be compounded due to the sheer verbosity of LLMs and the fact that no one actually understands a vibe-coded project well enough to fix it.

That said, these issues are rooted in hubris and ignorance. Failure to appreciate the "engineering" part of software engineering. This is not something you alone can change.

The AI hype is going to disappear, probably sooner than later, just like every other tech hype cycle before it. But LLMs are probably here to stay, so we have to make the best of it. I don't usually use LLMs for code generation; there are better tools for that already. I do use them frequently for research. Honestly, using an LLM with search incorporated is often a lot faster than scouring dozens of websites to figure out how to do something. You still have to take the information with a grain of salt, as much as you would with anything on the Internet, because LLMs have no understanding of the text they spit out and will feed you incorrect information without missing a beat.

If I were you, I would focus on quality over quantity. Closing tickets faster is pointless if you're introducing a bunch of new bugs. If your bosses don't know that already, they will learn it soon enough.

[–] pinball_wizard@lemmy.zip 4 points 23 hours ago

Closing tickets faster is pointless if you're introducing a bunch of new bugs.

Objectively true, but if my bonus reflects tickets rather than bugs, I'm gonna close so many tickets, anyway, because I don't own the place.

Which is also why wise companies grant their employees stock.

[–] gwl@lemmy.blahaj.zone 0 points 13 hours ago

I think you're gonna have to kill them

[–] gigachad@piefed.social 55 points 1 day ago (2 children)

I'm having similar experiences, but it is not as bad as you are describing yet. We have a new member on the team who is not a developer himself, but he has been given the task of making our way of working more professional (we are mainly scientists and not primarily software engineers, so that's a good thing).

His first task was to create programming guidelines and standards. He created 8 pages of LLM-generated text and nonsense example code. He honestly put a lot of effort into it, but of course a lot of things in it are wrong. The worst thing, though, is the wall of text. You are nailing it - it is now my task to go through this whole thing and extract the relevant information. It sucks. And I am afraid that soon I will need to review more and more low-quality MRs generated by people who have little experience in programming.

[–] halcyoncmdr@piefed.social 42 points 1 day ago (1 children)

Fixing vibe code is a specialty that contractors will be able to charge a premium for here pretty soon.

[–] 0x0f@piefed.social 7 points 1 day ago

Soon? It's been on my resume for over a year. 

[–] ch00f@lemmy.world 26 points 1 day ago (4 children)

We had a dev drop a combined total of 8,300 lines of readme files into the code base over a weekend. I want to nuke all of them, my boss suggests reviewing and updating them.

[–] 87Six@lemmy.zip 4 points 23 hours ago (1 children)

8,300 lines

rookie numbers

I think my team is in the tens of thousands of lines of AI-generated "documentation".

They claim the AI can use it to code better in the project.

Bullshit. The AI can't load in a single one of these files without filling half the context.
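The context complaint is easy to sanity-check with arithmetic. A rough sketch, where the ~4-characters-per-token heuristic and the 128k window are assumptions for illustration, not any particular model's real numbers:

```python
# Assumed numbers for illustration only.
CONTEXT_WINDOW_TOKENS = 128_000
CHARS_PER_TOKEN = 4  # common rule of thumb for English text

def estimate_tokens(text: str) -> int:
    # Crude estimate: total characters divided by the chars-per-token heuristic.
    return len(text) // CHARS_PER_TOKEN

# A stand-in for tens of thousands of lines of generated "documentation".
docs = "word " * 50_000
used = estimate_tokens(docs)
print(f"~{used:,} tokens, ~{100 * used // CONTEXT_WINDOW_TOKENS}% of the context window")
```

Even this toy stand-in eats roughly half the assumed window before the model has seen a single line of actual code.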

[–] 87Six@lemmy.zip 2 points 23 hours ago

I was recently instructed to have a gander at it.

I warned that it seemed inconsistent with the actual code.

Was told I'm right: "We should update this to reflect reality."

They brushed it off and we moved on. The misleading doc is still there, waiting for its next victim.

That last line belongs in a horror novel

[–] LeapSecond@lemmy.zip 47 points 1 day ago (1 children)

I had a manager who pushed AI a lot. When he left, all the pressure to use it seemed to die down. So maybe it's just a couple of people creating this environment and if you can get away with avoiding them it's better.

The problem with AI code we saw is that often no human has actually looked at it. During reviews you won't check every line and you'll have to trust much of the code that seems to do obvious things. But that assumes it was written by a human you also trust. When that human hasn't reviewed the code either, you end up with code no one in the company has seen (and may not even know how it works).

[–] Dojan@pawb.social 12 points 1 day ago* (last edited 1 day ago)

Your entire comment echoes my thoughts. Things aren't exactly improved by the idea of adding LLMs to the review process either. Gods.

[–] olafurp@lemmy.world 14 points 1 day ago

We're using LLMs at the company I work at, and they seem very useful in many cases, but sometimes they still don't work. I'm a bit worried about the code rotting as LLMs generate stuff based on existing code.

My mindset has shifted a bit, now I'm more focused on making stuff easy to find and easy to figure out patterns to use so that the codebase becomes easier to work with. There's some horrible code in the project and the LLM absolutely sucks balls at it but if it's a clean routine job such as making a table with update dialogs and actions to manipulate the data the success rate is >95%.

So yeah, don't trust it, treat it like a junior dev that got straight As in school and has never considered security. Code reviews are now where it's at.

[–] zkfcfbzr@lemmy.world 29 points 1 day ago

Our internal slack channels contain more and more AI-written posts, which makes me think: Thank you for throwing this wall of text on me and n other people. Now, n people need to extract the relevant information, so you are able to “save time” not writing the text yourself. Nice!!!

I think this is one of your best bets as far as getting a real policy change. Bring it up, mention that posts like that may take less time to "write", but that they're almost always obnoxiously verbose, contain paragraphs that say essentially nothing, and take far longer to read than a hand-typed message would. The argument that one person is saving time at the expense of dozens (?) of people losing time may carry a lot of weight, especially if these bosses are in and read the same Slack channel.

Past that I'd just let things go as they are, and take every opportunity to point out when AI made a problem, or made a problem more difficult to solve (while downplaying human-created problems).

[–] Retail4068@lemmy.world 20 points 1 day ago* (last edited 1 day ago)

Learn how AI works, what it is good at, and how to build trust to minimize bugs.

You will be less efficient than other coders using it responsibly if you're not using it. It WILL be another tool you're expected to know.

Tldr; Be cautious and use the LLM to increase your test coverage and start with low risk basics.

[–] kunaltyagi@programming.dev 26 points 1 day ago

For centuries, we spent less effort consuming content than it took to produce the content.

Good teammates and content producers understand that their content needs to have an intrinsic value and benefits beyond the mere existence of content.

If you make your team 10x slower by sending LLM generated 10 page content instead of a one liner, you are actively hindering the team.

Efficiency is not just production of content (slop or not), but the overall system. That's why the corpo speak has always been such a waste. Too many words to say nothing in a mandatory all-hands. Now the dial is up to 11 with the same time waste everywhere

[–] GuyFawkesV@lemmy.world 12 points 1 day ago

It’s all about finding that happy medium. I’m a developer myself that uses it, mainly to write the “mundane” stuff, but I always double check its work. I also have it check mine, and ask for any alternatives it may see - a lot of times they’re helpful, AND I learn them for future use. So maybe just try to push the balance rather than the all-in?

[–] whaleross@lemmy.world 19 points 1 day ago

AI is inevitable in many fields but as usual people expect way too much from it. It's a tool, not a magic wand. I agree with you that it is useful and even powerful when used by somebody that understands when it is useful. But it is dangerous when wielded by somebody that doesn't. As others have said, let your boss vibe code themselves into a corner and leave them to vibe code themselves out of it. They will try to deflect that it is your job to solve it though so you better come up with a strategy to handle that. Be sure to have more people on your side in this venture.

Secondarily, install a chatbot with instructions to distill the essence of your boss's walls of text into a bullet list. If they make their life easier with LLMs, so can you. If there are misunderstandings, it's either just ghosts in the machine or a failure of your boss to communicate clearly.

[–] Goldholz@lemmy.blahaj.zone 19 points 1 day ago (4 children)

My team leads too. They tried to sell a whole application that they vibe-coded, and the marketing strategy was "fully AI generated!"

[–] Axolotl_cpp@feddit.it 27 points 1 day ago (1 children)

If I saw fully AI-generated software I would cower in fear (for my computer's safety) and run away.

[–] raicon@lemmy.world 15 points 1 day ago

Endure the next year or so, until it pops and there will be a massive need for senior devs to fix the slop machine.

I haven't had a similar experience yet, but maybe some of your colleagues feel the same way? You could write a letter stating your concerns, let anyone who agrees sign it, and then send it to your manager. Also, I'd like to add that under German law, AI output cannot be copyrighted. You can only claim co-ownership or something. Maybe that could be interesting to your managers?

[–] PixelatedSaturn@lemmy.world 8 points 1 day ago

I have a similar situation at work, but I'm a designer. Previously I worked for a company that did research in AI, from a product standpoint, where I also had a boss who thought AI could do anything and blamed the workers for not succeeding. Now I'm beginning to see the same thing here at my new company. The boss, the product manager, and a few of the developers (actually those with less AI experience) are driving this notion that AI will just replace everything. I tried to tell them about my experience, but they didn't listen to a designer.

Now this pressure is also coming to the UX team. A few days ago the boss asked me to use AI to reskin a whole app. He assumed that's a simple thing. Why is it so hard for them to understand it doesn't work like that?

[–] ExtremeDullard@piefed.social 11 points 1 day ago

Let him "vibe-code" himself into a problem, then tell him you can't fix the mess he done did.
