Well, that explains Windows 11.
I'm still forced to use Microsoft Outlook and Teams, unfortunately, and boy oh boy is it bad.
Yesterday I spent 45 minutes of a 1.5-hour meeting (which should have taken 45 minutes) trying to get Teams to please use the right microphone and please share a screen (not working under Firefox or Chrome now, apparently).
I can't wait for the day that I have some time to get us off that dog shit
That’s… not something to be proud of.
Yeah that'd explain some stuff. Happy to have switched to Linux
This only makes sense if they're counting IntelliSense autocomplete as "AI-written".
So this explains why Microsoft SwiftKey is total dogshit now. Also why the Outlook app barely works.
It's unbelievable.
Eww. Maybe it’s not really true and Microsoft just wants to remind us that big corporate AI is so legit that all the software you use all day was “helped” by it.
But really for me the issue is the company, not the AI. If I read an article about AI generated code making it into the Linux kernel or some gnu/kde/etc utilities, I don’t think I would worry much because those changes will be reviewed by cranky old nerds who care about the functionality of the software first. I have no such confidence in Microsoft’s processes.
He used the words "written by software". That's ambiguous and doesn't necessarily mean AI: using annotations on variables to generate the getters and setters would count, and so would right-clicking to generate a function body from an interface definition.
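For instance, the accessors an IDE's "Generate getters and setters" action emits from the field declarations alone would count as "written by software" under that reading. A minimal sketch (the class and field names here are hypothetical, purely for illustration):

```java
// The kind of accessor boilerplate an IDE generates from the fields alone --
// no human authors these method bodies, yet they land in the repository.
public class UserProfile {
    private String userName;
    private int loginCount;

    public String getUserName() { return userName; }

    public void setUserName(String userName) { this.userName = userName; }

    public int getLoginCount() { return loginCount; }

    public void setLoginCount(int loginCount) { this.loginCount = loginCount; }
}
```

Tools like Lombok go a step further and generate the same accessors at compile time from a single annotation.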
They're exaggerating to pretend their AI is more useful than it is.
People have been using annotations to generate code since I rode my dinosaur to work.
IntelliSense in Visual Studio has also been really good for over a decade. Which is technically also written by software and not me.
I mean, really good intellisense is a great improvement, but it's not replacing devs any time soon.
Horseshit.
The current state of code generated by AI is sketchy at best. I often get plainly wrong answers because the model tries to extrapolate: it comes up with calls to functions and properties that just do not exist.
"You are right, I made a mistake. Here is a better answer." Continues to give wrong answers.
Apart from that, apps that are glued together from AI-generated code are not maintainable at all. What if there is a bug somewhere and you do not comprehend what is actually happening? Ask AI to fix it? Yeah, good luck with that.
I do use AI for simple questions, and it works fairly well for that, but this claim by MS is just marketing bullshit.
"You are right, I made a mistake. Here is a better answer." Continues to give wrong answers
The exact same wrong answer. Copilot is especially bad for that. I've practically given up using it outside of VS Code because the actual Copilot AI is dog-shit stupid.
I didn't RTA, but if they mean ALL code at MS, that just can't be true. They have legacy stuff going back decades, beyond just their Windows platform. There's no way 30% of all their code was replaced or newly created by AI.
This ^
"20%-30% of code inside the company’s repositories"
Now, if they had said "20%-30% of code written in the past 6 months..." I might buy that.
The repositories are going to have all the current codebase, likely going back years now. AI generated code is barely viable at this point and really only pretty recently.
No way 1/3rd of all current codebase is AI.
"Please move all comments from in-line to the line above, and add a separator line"
Even 20% of new code would be a stretch unless they count every first iteration of code written by AI that needs to be replaced by a human later because it was plain wrong.
Maybe they're counting the six iterations of code it gives me as I tell it what's wrong with each one.
They mean new code, as per the article. And they mean code gen and IntelliSense.
They say that because they are selling it.
And yeah, my experience is the same. The most frustrating is when writing in typed Python, and it gives answers that are clearly incorrect, making up attributes that don't even exist, etc.
It would explain the constant barrage of system breaking bugs and RCE vulns
Ah but it was nice for a minute imagining microsoft vibe coding windows...
??? No it’s not! Can investors sue because this is such an obvious lie? Pls I have 0.3 Microsoft shares
Are they including code generated to test their own models capability maybe?
"Up to" can also be 0%. Why is there even a need to say "up to" here.
So you are saying up to 100% of their code is written by AI?
Up to 100% of all Windows code was written by a Macaque monkey on meth.
So you are saying up to 100% of all Windows code is Mac code?
Honestly, I'm not surprised.
I bet they're counting code written while someone had an AI plugin installed as "written by AI", and I bet that accounts for almost all of that 30%. On top of that, I'm betting that they made it mandatory to have such a plugin, and the other 70% is just code written before they mandated this.
I would be very surprised if 30% of their code lines had even been touched at all by anyone since AI coding assistants became a thing.
I wish this shot from The Terminator had the camera showing Sarah Connor's face instead of Reese's, because it'd be such an appropriate meme image on multiple levels for when someone makes a misleading claim about some current AI system.
I could see stuff getting small changes and them claiming that the entirety of the new version is “written by AI”.
Also, having 1/3 lines with obvious code that can be auto suggested correctly would make sense, but that is hardly code "written by ai" in the way they suggest.
I’d guess a lot of the people writing the code don’t even have it turned on, it’s just installed because management said it had to be, because management wants to be able to tell investors they’re “innovating work flows”.
Those are the easy time savings though, the safe easy stuff the developer doesn't have to worry about anymore. (Giving them time to do the gnarly stuff.)
It is exactly the opposite: with simple, predictable auto-complete you didn't have to worry about that anymore, but with LLMs you always have to look at the output in detail, because every little thing could be just plain different and wrong.
I can read way faster than I can type though. You still check it, but it's pretty good at that kind of stuff once you have an example for it to follow.
Reading code is usually orders of magnitude slower than writing code. Sure, typing might be slower than reading but to check if it is what you intended you have to understand it too.
Well, I'm generally very anti-LLM but as a library author in Java it has been very helpful to create lots of similar overloads/methods for different types and filling in the corresponding documentation comments. I've already done all the thinking and I just need to check that the overload makes the right call or does the same thing that the other ones do – in that particular case, it's faster. But if I myself don't know yet how I'm going to do something, I would never trust an AI to tell me.
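A sketch of the kind of mechanical overload work being described: one piece of thinking, then near-identical variants per type. All names here are hypothetical, just to illustrate the pattern:

```java
// Repetitive per-type overloads a library author writes: the logic is
// decided once, and each overload is mechanical -- exactly the kind of
// code where checking a generated version is faster than typing it.
public final class Clamp {
    private Clamp() {}

    /** Clamps {@code value} into the inclusive range [min, max]. */
    public static int clamp(int value, int min, int max) {
        return Math.max(min, Math.min(max, value));
    }

    /** Long overload: same contract as {@link #clamp(int, int, int)}. */
    public static long clamp(long value, long min, long max) {
        return Math.max(min, Math.min(max, value));
    }

    /** Double overload: same contract as {@link #clamp(int, int, int)}. */
    public static double clamp(double value, double min, double max) {
        return Math.max(min, Math.min(max, value));
    }
}
```

The review step is cheap precisely because every overload must do the same thing as the first one.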
Well, okay, I can see how it would be useful in languages like Java that are extremely verbose and have a low expressiveness. Writing Java pretty much was already IDEs with code generation 20 years or so ago because nobody wants to write so much boilerplate by hand.
Power move by the zucc by first asking how much genai is used at Microsoft then refusing to answer his own question at Facebook 😂
Well, that would explain a lot.
I'm also guessing that at "up to 30%" of the company's leadership decisions are being made by AI too.
Would be interesting to see how they measured that metric. Are they tagging individual lines as AI generated?
What those lines are too would be interesting, AI as auto complete is less dangerous than complete generation, but probably also less useful.
Most probably Microsoft has set objectives for how many LoC come from LLMs, and developers invented numbers to match that metric (because they probably have more important things to do than counting LoC).
AI as auto complete is exactly what I was thinking.
I've seen lots of cases where AI appears as an autocomplete suggestion and I can just accept it and it finishes the current line. It's essentially filling in the boilerplate text. Heck, in some cases it isn't even right, but it's close enough that I can change a few values.
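That line-level boilerplate is exactly the stuff a completion can predict from context alone. A small sketch (the class name is hypothetical): after the field declarations, the constructor assignments and toString body are predictable enough to be suggested line by line:

```java
// Bodies below are the sort of thing a line-level completion suggests
// verbatim from the field declarations -- predictable, low-risk boilerplate.
public class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) {
        this.x = x; // completion-suggested assignments
        this.y = y;
    }

    @Override
    public String toString() {
        return "Point(" + x + ", " + y + ")";
    }
}
```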
I also want to point out that this isn't particularly new technology. This existed before AI. It has perhaps expanded more, but it isn't a revolutionary improvement, it's an incremental one. So when we talk about usefulness, I think it is actually more useful.
Now if it could do all the magic planning and thinking, that would be more useful, but we're not there yet.
I hope that that's inclusive of something like lines of documentation in comment lines.