diz

joined 2 years ago
[–] diz@awful.systems 5 points 4 hours ago* (last edited 4 hours ago)

Old McDonald had a startup, iyo io o[4-mini].

It's funny how just today, in a completely unrelated context, a generative AI enthusiast used the example of OpenAI getting sued by NYT as a reason why they wouldn't commit some other malfeasance, because they'd get caught if they did.

[–] diz@awful.systems 3 points 7 hours ago* (last edited 7 hours ago)

It's not about moats, it's about the open source community (whose code has been trained on) coming out with pitchforks. It has nothing to do with moats.

You are way overselling coding agents.

Re-creating some open source project with a similar function is literally the only way a coding agent can pretend to be a programmer.

I tried the latest models for code and they are in fact capable of shitting out a thousand lines of working code at a time, which can only have been obtained via plagiarism, since they are also incapable of writing the most trivial code for a novel situation. And the neat thing about plagiarism is that once you start you can keep going, since there's more compatible code where that came from.

[–] diz@awful.systems 3 points 8 hours ago* (last edited 8 hours ago)

Yeah I'm thinking this one may be special cased, perhaps they wrote a generator of river crossing puzzles with corresponding conversion to "is_valid_state" or some such. I should see if I can get it to write something really ridiculous into "is_valid_state".
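For reference, a hand-written validity check for the classic wolf-goat-cabbage crossing is tiny — this sketch is mine, purely illustrative of what such a generated "is_valid_state" could look like, not anything OpenAI actually ships:

```python
# Hypothetical sketch of an "is_valid_state" check for the classic
# wolf-goat-cabbage river crossing. State: the set of items on the
# left bank, plus which side the boat (and farmer) is on.

def is_valid_state(left_bank, boat_on_left):
    """A state is invalid if a predator is left alone with its prey
    on the bank the farmer is NOT on."""
    right_bank = {"wolf", "goat", "cabbage"} - set(left_bank)
    unattended = right_bank if boat_on_left else set(left_bank)
    # Wolf eats goat; goat eats cabbage — but only if unattended.
    if {"wolf", "goat"} <= unattended:
        return False
    if {"goat", "cabbage"} <= unattended:
        return False
    return True
```

The point is that a puzzle generator could emit this kind of checker mechanically for each generated puzzle, which would make the benchmark look like general reasoning while being a template.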

The other thing is that in real life it's like "I need to move 12 golf carts, one has low battery, I probably can't tow more than 3 uphill, I can ask Bob to help but he will be grumpy...", just a tremendous amount of information (most of it irrelevant) with tremendous^tremendous^ possible moves (most of them possible to eliminate by actual thinking).

[–] diz@awful.systems 3 points 1 day ago* (last edited 1 day ago) (7 children)

Pre-LLM, I had to sit through one or two annual videos to the effect of "don't cut and paste from open source, better yet don't even look at GPL'd code you aren't working on", and had to do a click test with questions like "is it ok if you rename all the variables, yes/no". Oh, and I had to run a scanning tool as part of the release process.

I don't think it's the FSD they would worry about, but the GPL, especially v3. Nobody gives a shit if it steals some leetcode snippet, or cuts and pastes some calls to a stupid API.

But if you have a “coding agent” just replicating GPL code wholesale, thousands and thousands of lines, it would be very obvious. And not all companies ship shitcode. Apple is a premium product and ages old patched CVEs from open source cropping up in there wouldn’t be exactly premium.

[–] diz@awful.systems 3 points 1 day ago (1 children)

That is not equivalent, though; other solutions to "can not be left unattended" exist; just ask Kristi Noem.

[–] diz@awful.systems 3 points 1 day ago (1 children)

LLM snippets are so 2024. Coding agents, baby.

[–] diz@awful.systems 5 points 1 day ago

Yeah, that's a great example.

The other thing is that unlike art, source code is already made to be consumed by a machine. It is not any more transformative to convert source code to equivalent source code, than it is to re-encode a video.

The only thing they do that is "transformative" is using source code not for compiling it but for defrauding the investors.

[–] diz@awful.systems 4 points 1 day ago* (last edited 1 day ago) (14 children)

The other funny thing: it only became a fully automatic plagiarism machine when it claimed that it wrote the code (referring to itself by name, which is a dead giveaway that the system prompt makes it do that).

I wonder if code is where they will ultimately get nailed to the wall for willful copyright infringement. Code is too brittle for their standard approach, "we sort of blurred a lot of works together so it's ours now, transformative use, fuck you, prove that you don't just blur other people's work together, huh?".

But also, for a piece of code you can very easily test whether two versions have the same "meaning": implement a parser that converts code to an expression graph, and then compare the graphs. Which makes it far easier to output code that is functionally identical to the code being plagiarized but looks very different.
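A crude sketch of the comparison direction in Python, using the stdlib `ast` module and erasing identifiers, as a stand-in for a real expression-graph comparison (renaming-resistant only, not a full semantic equivalence check):

```python
import ast

def normalized(source: str) -> str:
    """Dump the AST with identifiers erased, so renaming variables
    or functions does not change the result. A crude stand-in for
    comparing real expression graphs."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            node.id = "_"
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            node.name = "_"
        elif isinstance(node, ast.arg):
            node.arg = "_"
    return ast.dump(tree)

# A renamed copy compares equal; a semantically different one does not.
original = "def area(w, h):\n    return w * h\n"
renamed  = "def compute(x, y):\n    return x * y\n"
assert normalized(original) == normalized(renamed)
```

The same machinery run in reverse — regenerate source from a normalized graph with fresh names — is exactly the "launder it so it looks different" half of the problem.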

But also I estimate approximately 0% probability that the assholes working on that wouldn't have banter between themselves about copyright laundering.

edit: Another thing is that since it can have no own conception of what "correct" behavior is for a piece of code being plagiarized, it would also plagiarize all the security exploits.

This hasn't been a big problem for the industry, because only short snippets were being cut and pasted (how to make some stupid API call, etc), but with generative AI whole implementations are going to get plagiarized wholesale.

Unlike any other work, code comes with its own built in, essentially irremovable "watermark" in the form of security exploits. In several thousands lines of code, there would be enough "watermark" for identification.

 

I think this summarizes in one conversation what is so fucking irritating about this thing: I am supposed to believe that it wrote that code.

No siree, no RAG, no trickery with training a model to transform the code while maintaining identical expression graph, it just goes from word-salading all over the place on a natural language task, to outputting 100 lines of coherent code.

Although that does suggest a new dunk on computer touchers, of the AI enthusiast kind, you can point at that and say that coding clearly does not require any logical reasoning.

(Also, as usual with AI, it is not always that good. Sometimes it fucks up the code, too.)

[–] diz@awful.systems 7 points 1 day ago

Having worked in computer graphics myself, it is spot on that this shit is uncontrollable.

I think the reason is fundamental - if you could control it more you would put it too far from any of the training samples.

That being said, video enhancement along the lines of applying this as a filter to 3D-rendered CGI or other video could (to some extent) work. I think the perception of realism will fade as it gets more familiar - it is pretty bad at lighting, but in a new way.

[–] diz@awful.systems 5 points 3 days ago

Well, it did reach for "I double checked it, I'm totally sure now" language.

From the perspective of trying to convince the top brass that they are making good progress towards creating an artificial psychopath - not just an artificial human - it's pretty good.

[–] diz@awful.systems 5 points 3 days ago* (last edited 3 days ago)

Still seems terminally AI-pilled to me, an iteration or two later. "5 digit multiplication is borderline" - how is that useful?

I think it's a combination of it being the pinnacle of billions and billions of dollars, and probably their firing people for the slightest signs of AI skepticism. There's another data point: "reasoning math & code" is released as stable by Google without anyone checking whether it can do any kind of math.

edit: imagine a calculator manufacturer in the 1970s so excited about microprocessors that they release an advanced scientific calculator that can't multiply two 6-digit numbers (while their earlier discrete-component model could). Outside the crypto sphere, that sort of insanity is new.

[–] diz@awful.systems 7 points 4 days ago

Yeah, I'd also bet on the latter. They also added a fold-out button that shows you the code it wrote (folded by default), but you've got to unfold it or notice that it is absent.

121
submitted 6 days ago* (last edited 6 days ago) by diz@awful.systems to c/techtakes@awful.systems
 

I love to show that kind of shit to AI boosters. (In case you're wondering, the numbers were chosen randomly and the answer is incorrect).

They go waaa waaa it's not a calculator, and then I can point out that it got the leading 6 digits and the last digit correct, which is a lot better than it did on the "softer" parts of the test.
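For illustration (with made-up factors and a made-up wrong answer, not the ones from the screenshot), checking how many leading and trailing digits of a near-miss product agree takes a few lines:

```python
def common_prefix(c: str, a: str) -> int:
    """Count how many leading characters of two strings agree."""
    n = 0
    for x, y in zip(c, a):
        if x != y:
            break
        n += 1
    return n

x, y = 742_915, 386_204            # made-up 6-digit factors
true = x * y                       # 286916744660
wrong = true + 1_000               # a "close but wrong" answer, LLM-style

# Leading digits agree, trailing digits agree, the middle drifts.
assert common_prefix(str(true), str(wrong)) == 8
assert common_prefix(str(true)[::-1], str(wrong)[::-1]) == 3
```

This is the pattern you'd expect from something interpolating digit patterns rather than carrying, and it's trivial to demonstrate to anyone who insists the thing "did the math".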

 

I couldn't stop fucking laughing. I'm wheezing. It's unhealthy.

They have this thing acting like that for the whole day... and then more than a day later claim it was hacked.
