diz

joined 2 years ago
[–] diz@awful.systems 1 points 1 week ago

I'm afraid they already had that exact idea when they named the startup "oklo".

[–] diz@awful.systems 2 points 1 week ago* (last edited 1 week ago) (2 children)

I think it's not very difficult to construct a really shitty small reactor that is horrendously expensive per watt. One can probably be built in a year if you get rid of the NRC and just half-ass it completely.

I mean, Demon Core was a small reactor. You pretty much have to do a lot of work to ensure you won't create a small reactor when a truckload of fresh fuel falls into a river.

What's difficult is making a safe reactor that actually produces electricity at a somewhat reasonable price per watt.

[–] diz@awful.systems 3 points 1 week ago (1 children)

Nuclear already makes 9% of the world's electricity.

[–] diz@awful.systems 1 points 2 months ago* (last edited 2 months ago)

Shorting the market requires precise timing. Being early is just as bad as being wrong.

Exactly. It is not enough to know that a company's stock will go down. You also have to know that it will never rise above a certain point over the current value (not even momentarily) before it goes down. If you have a fuckload of other people's money you can just keep double-or-nothing-ing it, which is what the hedge funds were doing to GameStop, except that doing so can sometimes push the stock even higher (a short squeeze), which would make you (who doesn't actually have a fuckload of other people's money) lose all of your money.
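To make the point concrete, a toy sketch (all numbers invented) of a short that is eventually right but gets wiped out by a temporary spike first:

```python
# Toy sketch (all numbers invented): a short position that would eventually
# be right, but gets margin-called on a temporary spike first.

capital = 10_000.0
entry = 100.0             # open the short at $100
shares = 100
max_loss = 0.5 * capital  # hypothetical point where the broker forces you out

path = [100, 140, 180, 60, 20]  # spikes before it finally crashes

for price in path:
    loss = (price - entry) * shares  # positive = losing money
    if loss > max_loss:
        print(f"margin call at ${price}: forced out, down ${loss:,.0f}")
        break
else:
    print(f"survived: final P&L ${(entry - path[-1]) * shares:,.0f}")
```

Being right about the eventual crash to $20 earns you nothing here; the spike to $180 takes you out first.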

edit: also the other concerning possibility is that stock prices can go up simply due to the dollar going down.

[–] diz@awful.systems 5 points 2 months ago

The only thing that is allowed to tell good art from slop is the AI, which needs to consume good art and not slop.

[–] diz@awful.systems 5 points 2 months ago

It's spelled “masterdebating”.

[–] diz@awful.systems 3 points 2 months ago (1 children)

Hyping up AI is bad, so it’s alright to call someone a promptfondler for fondling prompts.

I mostly see "clanker" in reference to products of particularly asinine promptfondling: spambot "agents" that post and even respond to comments, LLM-based scam calls, call center replacement, etc.

These bots don't derive their wrongness from the wrongness of promptfondling; they are part of why promptfondling is wrong.

Doesn’t clanker come from some Star Wars thing where they use it like a racial slur against robots, who are basically sapient things with feelings within its fiction? Being based on “cracker” would be alright,

I assume the writers wanted to portray the robots as unfairly oppressed, while simultaneously not trivializing actual oppression of actual people (the way "wireback" would have, or I dunno "cogger" or something).

but the way I see it used is mostly white people LARPing a time and place when they could say the N-word with impunity.

Well yeah that would indeed be racist.

I’m seeing a lot of people basically going “I hate naggers, these naggers are ruining the neighborhood, go to the back of the bus nagger, let’s go lynch that nagger” and thinking that’s funny because haha it’s not the bad word technically.

That just seems like an instance of good ol' anti-person racism / people trying to offend other people while not particularly giving a shit about the bots one way or the other.

[–] diz@awful.systems 4 points 2 months ago (3 children)

we should recognize the difference

The what now? You don't think there's a lot of homophobia that follows the "castigating someone for what they do" format, or do you think it's a lot less bad according to some Siskinded definition of what makes slurs bad that somehow manages to completely ignore anything that actually makes slurs bad?

I think that’s the difference between “promptfondler” and “clanker”. The latter is clearly inspired by bigoted slurs.

Such as... "cracker"? Given how the law protects but doesn't bind AI, that seems oddly spot on.

[–] diz@awful.systems 21 points 2 months ago* (last edited 2 months ago)

Note also that genuine labor-saving stuff, like the Unity engine with the Unity asset store, did result in an absolute flood of shovelware on Steam back in the mid-2010s (although that probably had as much to do with Steam FOMO-ing about the possibility of not letting the next Minecraft onto Steam).

As a thought experiment, imagine an unreliable labor-saving tool that speeds up half* of the work 20x and slows down the other half 3x. You would end up taking 1.525 times as long as before.

The fraction of work (measured by hours, not by lines) that AI helps with is probably less than 50%, and the speed-up is probably worse than 20x.
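Checking that arithmetic:

```python
# The thought experiment above, checked: half the work (by pre-tool hours)
# sped up 20x, the other half slowed down 3x.
fast_half = 0.5 / 20  # 0.025 of the original time
slow_half = 0.5 * 3   # 1.5 of the original time
print(fast_half + slow_half)  # 1.525: the job takes 1.525x as long overall
```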

Slowdown could be due to some combination of:

  • Trying to do it with AI until you sink too much time into that and then doing it yourself (>2x slowdown here).
  • Being slower at working with the code you didn't write.
  • It being much harder to debug code you didn't write.
  • Plagiarism being inferior to using open source libraries.

footnote: "half" as measured by the pre-tool hours.

[–] diz@awful.systems 4 points 2 months ago* (last edited 2 months ago)

And yet you are the one person here who is equating Mexicans and Black people with machines. People with disabilities, too, huh. Lemme guess: next time we're pointing and laughing at how some hyped-up "PhD-level chatbot" can't count the Es in dingleberry, you'll be likening that to ableism.

When you're attempting to humanize machines by likening the insults against machines to insults against people, this does more to dehumanize people than to humanize machines.

edit: Also, I've never seen and couldn't find instances of "wireback" being used outside of pro-bot sentiments and hand-wringing about how anti-bot people are akhtually racist. Have you, or is it all second- or third-hand? It's entirely possible that it is something botlickers (can I say that or is that not OK?) came up with.

edit: especially considering that these "anti-robot slurs" seem to originate in scifi stories where the robots are being oppressed, where the author purposefully chooses the slur to undermine the position of the anti-robot characters in the story. It may well be that, for the same reason authors choose these slurs, they are rarely used in earnest.

[–] diz@awful.systems 8 points 3 months ago* (last edited 3 months ago) (4 children)

To be honest, hand-wringing over “clanker” being a slur and all that strikes me as increasingly equivalent to hand-wringing over calling nazis nazis. The only thing that rubs me the wrong way is that I’d prefer the new so-called slur to be “chatgpt”, genericized and negatively connoted.

If you are in the US, we’ve had our health experts replaced with AI; see the “MAHA report”. We’re one moron AI-pilled president away from a less fun version of Skynet, whereby a chatbot talks the president into launching nukes and kills itself along with a few billion people.

Complaints about dehumanizing these things are even more meritless than a CEO complaining that someone is dehumanizing Exxon (which is at least made of people).

These things are extensions of those in power, not some marginalized underdogs like the cute robots in scifi. As an extension of corporations, it already has more rights than any human: imagine what would happen to a human participant in a criminal conspiracy to commit murder, and contrast that with what happens when a chatbot talks someone into a crime.

[–] diz@awful.systems 5 points 3 months ago

I think this is spot on. I had that same thing happen at my former employer, which bought a lot of entirely pointless startups in the 2010s instead of investing in core business equipment and processes.

 

There's a very long history of extremely effective labor-saving tools in software.

Writing in C rather than Assembly, especially for more than 1 platform.

Standard libraries. Unix itself. More recently, developing games in Unity or Unreal instead of rolling your own engine.

And what happened when any of these tools came on the scene was a mad gold rush to develop products that weren't feasible before. Not layoffs, not "we don't need to hire junior developers any more".

Rank-and-file vibe coders seem to perceive Claude Code (for some reason, mostly just Claude Code) as something akin to the advantage of using C rather than Assembly. They are legit excited to code new things they couldn't code before.

Boiling the rivers to give them an occasional morale boost with "You are absolutely right!" is completely fucked up and I dread the day I'll have to deal with AI-contaminated codebases, but apart from that, they have something positive going for them, at least in this brief moment. They seem to be sincerely enthusiastic. I almost don't want to shit on their parade.

The AI-enthusiast bigwigs, on the other hand, are firing people, closing projects, talking about not hiring juniors any more, and getting the media to report on it as AI layoffs. They just gleefully go on about how being 30% more productive means they can fire a bunch of people.

The standard answer is that they hate having employees. But they always hated having employees. And there were always labor-saving technologies.

So I have a thesis here, or a synthesis perhaps.

The bigwigs who tout AI (while acknowledging that it needs humans, for now) don't see AI as ultimately useful, in the way a C compiler was useful. Even if it's useful in some context, they still don't. They don't believe it can be useful. They see it as more powerfully useless. Each new version is meant to be a bit more like AM, or (clearly AM-inspired, but more familiar) GLaDOS: something that will get rid of all the employees once and for all.

 

Sounds like Meta’s judge will have to invent a grand unified theory of fair use to excuse this.

I kept saying about various lawsuits that the important thing is discovery. Nobody knew all the idiotic shit these folks were doing, so nobody could sue them properly.

 

They train on sneer-problems now:

Here’s the “ferry‑shuttle” strategy, exactly analogous to the classic two‑ferryman/many‑boats puzzle, but with planes and pilots

And lo and behold, singularity - it can solve variants that no human can solve:

https://chatgpt.com/share/68813f81-1e6c-8004-ab95-5bafc531a969

Two ferrymen and three boats are on the left bank of a river. Each boat holds exactly one man. How can they get both men and all three boats to the right bank?
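The joke, of course, is that this variant is unsolvable, which a few lines of brute force confirm (a sketch, assuming each crossing moves exactly one man in one boat, no towing):

```python
# Sketch: exhaustively search the two-ferrymen/three-boats "variant".
# Assumptions: each crossing moves exactly one man in one boat; no towing.
from collections import deque

start = (2, 3)  # (men, boats) on the left bank
goal = (0, 0)   # both men and all three boats on the right bank

seen, queue = {start}, deque([start])
while queue:
    men, boats = queue.popleft()
    moves = []
    if men >= 1 and boats >= 1:          # a man rows left -> right
        moves.append((men - 1, boats - 1))
    if 2 - men >= 1 and 3 - boats >= 1:  # a man rows right -> left
        moves.append((men + 1, boats + 1))
    for state in moves:
        if state not in seen:
            seen.add(state)
            queue.append(state)

print(goal in seen)  # False: no sequence of crossings ever gets there
```

Every crossing moves a man and a boat together, so boats-minus-men on the left bank stays stuck at 1 and can never hit the 0 the goal requires. The chatbot cheerfully "solves" it anyway.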

 

I think this summarizes in one conversation what is so fucking irritating about this thing: I am supposed to believe that it wrote that code.

No siree, no RAG, no trickery with training a model to transform the code while maintaining an identical expression graph: it just goes from word-salading all over the place on a natural language task to outputting 100 lines of coherent code.

Although that does suggest a new dunk on computer touchers of the AI-enthusiast kind: you can point at that and say that coding clearly does not require any logical reasoning.

(Also, as usual with AI, it is not always that good; sometimes it fucks up the code, too.)

121
submitted 5 months ago* (last edited 5 months ago) by diz@awful.systems to c/techtakes@awful.systems
 

I love to show that kind of shit to AI boosters. (In case you're wondering, the numbers were chosen randomly and the answer is incorrect).

They go waaa waaa it's not a calculator, and then I can point out that it got the leading 6 digits and the last digit correct, which is a lot better than it did on the "softer" parts of the test.
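If you want to replicate the dunk, a toy checker (the numbers here are invented, not the ones from the post):

```python
# Toy checker (numbers invented, not the ones from the post): which parts
# of a chatbot's long-multiplication answer come out right?
a, b = 9_406_522, 7_805_319
true = str(a * b)

# Fabricate a hypothetical chatbot-style wrong answer: right magnitude and
# leading digits, right last digit, wrong in the middle.
mid = "00" if true[6:8] != "00" else "11"
bot = true[:6] + mid + true[8:]

print(true)
print(bot)
print("leading 6 digits match:", bot[:6] == true[:6])  # True
print("last digit matches:", bot[-1] == true[-1])      # True
print("exactly right:", bot == true)                   # False
```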

 

I couldn't stop fucking laughing. I'm wheezing. It's unhealthy.

They had this thing acting like that for the whole day... and then, more than a day later, claimed it was hacked.

 

So I signed up for a free month of their crap because I wanted to test if it solves novel variants of the river crossing puzzle.

Like this one:

You have a duck, a carrot, and a potato. You want to transport them across the river using a boat that can take yourself and up to 2 other items. If the duck is left unsupervised, it will run away.

Unsurprisingly, it does not:

https://g.co/gemini/share/a79dc80c5c6c

https://g.co/gemini/share/59b024d0908b

The only two new things seem to be that old variants are no longer novel, and that it is no longer limited to producing incorrect solutions: now it can also incorrectly claim that the solution is impossible.
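For the record, this variant is easily solvable (you just never put the duck down), which a small breadth-first search confirms. A sketch, assuming "unsupervised" means the duck is on the opposite bank from you:

```python
# Sketch: brute-force the duck/carrot/potato puzzle by breadth-first search.
# Assumption: the duck runs away iff it is on the opposite bank from you.
from collections import deque
from itertools import combinations

ITEMS = frozenset({"duck", "carrot", "potato"})
start = (0, ITEMS)  # (your bank, items on the left bank); 0 = left, 1 = right

def supervised(you, left):
    return ("duck" in left) == (you == 0)  # duck is on your side of the river

seen, queue = {start}, deque([(start, [])])
while queue:
    (you, left), path = queue.popleft()
    if you == 1 and not left:  # everything made it to the right bank
        print(*path, sep="\n")
        break
    here = left if you == 0 else ITEMS - left
    for n in (0, 1, 2):  # the boat takes you plus up to 2 items
        for cargo in combinations(sorted(here), n):
            cargo = frozenset(cargo)
            new_left = left - cargo if you == 0 else left | cargo
            state = (1 - you, new_left)
            if supervised(*state) and state not in seen:
                seen.add(state)
                queue.append((state, path + [f"cross with {set(cargo) or 'nothing'}"]))
```

It finds the three-crossing solution: take the duck and the carrot over, bring the duck back, then take the duck and the potato over.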

I think chain-of-thought / reasoning is a fundamentally dishonest technology. At the end of the day, just like older LLMs, it requires that someone already solved a similar problem (either online or perhaps in a problem-solution pair they generated to augment the training data).

But it outputs quasi-reasoning to pretend that it is actually solving the problem live.
