this post was submitted on 18 May 2025
1 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[–] sailor_sega_saturn@awful.systems 2 points 5 months ago (3 children)

Urgh, over the past month I have seen more and more people on social media using ChatGPT to write stuff for them, or to check facts, and getting defensive instead of embarrassed about it.

Maybe this is a bit old woman yells at cloud -- but I'd lie if I said I wasn't worried about language proficiency atrophying in the population (and leading to me having to read slop all the time)

[–] nightsky@awful.systems 2 points 5 months ago

Maybe this is a bit old woman yells at cloud

Yell at cloud computing instead, that is usually justified.

More seriously: it's not at all that. The AI pushers want to make people feel that way -- "it's inevitable", "it's here to stay", etc. But the threat to learning and maintaining skills is real (although the former worries me more than the latter -- what has been learned before can often be regained rather quickly, but what if learning itself is inhibited?).

[–] mountainriver@awful.systems 2 points 5 months ago (1 children)

Overheard my kids: one of them had a group project at school and the other asked who they had ended up in a group with. After hearing the names, the reaction was "they are good, none of them will use AI".

So as always, kids who actually do something in group projects don't want to end up in a group with kids who won't contribute. The difference is just that instead of slacking off and doing nothing, today they'll "contribute" AI slop. And as always, the main lesson from group projects in school is to avoid ending up in a group with slackers.

[–] o7___o7@awful.systems 1 points 5 months ago

Dad hi-five

[–] BlueMonday1984@awful.systems 1 points 5 months ago (1 children)

Maybe this is a bit old woman yells at cloud – but I’d lie if I said I wasn’t worried about language proficiency atrophying in the population

AI's already destroying people's cognitive abilities as we speak, so I wouldn't be shocked if language proficiency went down the shitter, too. Hell, you could argue it'll fuck up humanity's capacity to make/understand art - Nathan Hamiel of Perilous Tech already did.

(and leading to me having to read slop all the time)

Thankfully, I've managed to avoid reading/seeing slop for the most part. Spending most of my time on Newgrounds probably helped, for three main reasons:

  1. AI slop was banned from being uploaded back in 2022 (very early into the bubble), making it loud and clear that AI slop is unwelcome there. (Sidenote: A dedicated AI flag option was added in 2024)
  2. The site primarily (if not near-exclusively) attracts artists, animators, musicians, and creatives in general - all groups who (for obvious reasons) are strongly opposed to gen-AI in all its forms, and who will avoid anything involving AI like the fucking plague.
  3. The site is (practically) ad-free, meaning ad revenue is effectively zero - as such, setting up an AI slop farm (or a regular content mill) is utterly impractical, since you'd have zero shot of turning a profit.

(That I'm a NEET also helps (can't have AI bro coworkers if you're unemployed :P), but any opportunity to promote the AI-free corners of the net is always a good one in my books :P)

[–] sailor_sega_saturn@awful.systems 1 points 5 months ago

can’t have AI bro coworkers if you’re unemployed :P

I'd certainly feel less conflicted yelling about AI if I didn't work for a big tech company that's gaga for AI. I almost wrote out a long angsty reply, but I don't want to give away too many personal details in a single comment.

I guess I ended up as a boiled frog. If I had known how much AI nonsense I'd be incidentally exposed to over the past year, I would have quit a year ago. And yet I'm not quitting now, for complicated reasons. I'm not that far from the breaking point, but I'm going to try to hang in for a few more years.

But yeah, I'm pretty uncomfortable working for a company that has also veered closer to allying with techno-fascism in recent years; and I am taking psychic damage.

[–] o7___o7@awful.systems 2 points 5 months ago (2 children)

A lawyer who depends on a sufficiently advanced AI is indistinguishable from a sovereign citizen.

[–] Soyweiser@awful.systems 1 points 5 months ago (1 children)

Considering how LLMs are trained, they prob contain a lot of sov cit stuff. Wonder if a lawyer/judge can trick an LLM into going full sovcit by just adding a few words/rephrasing a bit.

[–] o7___o7@awful.systems 3 points 5 months ago* (last edited 5 months ago)

Absolutely!

The thing about sov cits is that they use legalish words like a magical incantation. The words have no meaning to them, really. It's a tarted-up glossolalia which reifies their wishes to manifest some outcome in court.

If a lawyer surrenders their craft to a bullshit engine, they're doing the exact same thing: spouting law-shaped nonsense in the hope of getting the verdict they want, their only differentiator being that they showed up wearing a much nicer suit than the sov cit.

[–] BurgersMcSlopshot@awful.systems 1 points 5 months ago

"ChatGPT, does the fringe on the flag mean that this is an Admiralty court? Also, please enlighten me on the finer points of bird law."

[–] swlabr@awful.systems 2 points 5 months ago (2 children)

Just thinking about how I watched “Soylent Green” in high school and thought the idea of a future where technology just doesn’t work anymore was impossible. Then LLMs came along, and the first thing people wanted to do with them was turn working code into garbage, and the immediate next thing was to kill living knowledge by normalising reliance on LLMs for operational knowledge. Soon, the oceans will boil, agricultural industries will collapse and we’ll be forced to eat recycled human. How the fuck did they get it so right?

[–] rook@awful.systems 2 points 4 months ago

I like that Soylent Green was set in the far off and implausible year of 2022, which coincidentally was the year of ChatGPT’s debut.

[–] Soyweiser@awful.systems 1 points 5 months ago* (last edited 5 months ago) (1 children)

Doesn't help that there is a group of people who go 'using the poor like ~~biofuel~~ food, what a good idea'.

E: Really influential movie btw. ;)

[–] swlabr@awful.systems 1 points 5 months ago* (last edited 5 months ago)

A real modest {~~brunch~~|bunch}

[–] BlueMonday1984@awful.systems 2 points 5 months ago (1 children)

Got a pair of notable things I ran across recently.

Firstly, an update on Grok's White Genocide Disaster: the person responsible has seemingly revealed themselves and shown off how they derailed Grok's prompt. The pull request that initiated this debacle has been preserved on the Internet Archive.

Second, I ran across a Bluesky post which caught my attention:

You want my opinion on the "scab" comment: it's another textbook example of the all-consuming AI backlash, one that suggests any usage of AI will be viewed as an open show of hostility towards labour.

[–] Soyweiser@awful.systems 1 points 5 months ago* (last edited 5 months ago) (1 children)

Think you are misreading the blog post. They did this after Grok had its white genocide hyperfocus thing. It shows that the process around the xAI public GitHub (their fix (??) for Grok's hyperfocus) is bad, not that they started it. (There is also no reason to believe this GitHub repo is actually what they are using directly (which would be pretty foolish of them, which is why I could also believe they could be using it))

[–] YourNetworkIsHaunted@awful.systems 1 points 4 months ago (1 children)

If anything, I think this is pretty solid evidence that they aren't actually using it. There was enough of a gap that the nuking of that PR was an edit to the original post, and I can't imagine that, if it had actually been used, we wouldn't have seen another flurry of screenshots of bad output.

I think it also suggests that the engineers at x.ai are treating the whole thing with a level of contempt that I'm having a hard time interpreting. On one hand it's true that the public GitHub using what is allegedly grok's actual prompt (at least at time of publishing) is probably a joke in terms of actual transparency and accountability. On the other hand, it feels almost like either a cry for help or a stone-cold denial of how bad things are that the original change that prompted all this could have gone through in the first place.

[–] Soyweiser@awful.systems 1 points 4 months ago

Yeah indeed, had not even thought of the time gap. And it is such a bit of bullshit misdirection, very Muskian, to pretend that this fake transparency in any way solves the problem. We don't know what the bad prompt was nor who did it, and as shown here, this fake transparency prevents nothing. Really wish more journalists/commentators were not just free PR.

[–] BlueMonday1984@awful.systems 1 points 4 months ago

Found a Bluesky thread you might be interested in:

On a Sci Fi authors’ panel at Comicon today, every writer asked about AI (as in LLM / algorithmic modern gen AI) gave it a kicking, drawing a spontaneous round of applause.

A few years ago, I don’t think that would have happened. People would have said “it’s an interesting tool”, or something.

Bearing in mind these are exactly the people who would be expected to engage with the idea, I think the tech turds have massively underestimated the propaganda faux pas they made by stealing writers’ hard work and then being cunts about it.

Tying this to a previous post of mine, I'm expecting their open and public disdain for gen-AI to end up bleeding into their writing. The obvious route would be AI systems/characters exhibiting the hallmarks of LLMs - hallucinations/confabulations, "AI slop" output, easily bypassable safeguards, that sort of thing.

[–] sailor_sega_saturn@awful.systems 1 points 4 months ago* (last edited 4 months ago) (7 children)

OK completely off topic but update on my USA angst from earlier this year: I'm heckin' moving to Switzerland next month holy hell.


Back on topic: Duolingo continues to circle the drain. I kind of hate that I'm linking to this because it's exactly what that marketing-run company wants; but they posted these two videos to TikTok in response to the AI backlash: https://www.tiktok.com/@duolingo/video/7506578962697456939?lang=en https://www.tiktok.com/@duolingo/video/7507337734520868142?lang=en

I uh... I don't think it's going to change anyone's minds. Half the comments on the videos go something like:

EVERYONE LISTEN UP!!!! 🚨 - starting from today, we are gonna start ignoring duolingo. We will not like the video it posts, or view it. - BASICALLY WE WILL IGNORE DUO!!💔 💔 ON EVERYBODY SOUL WE IGNORING DUO!! 💔 (copy this and share this to every duo related video)

[–] nightsky@awful.systems 2 points 4 months ago (1 children)

I’m heckin’ moving to Switzerland next month holy hell.

Good luck!!

they posted these two videos to TikTok in response to the AI backlash

The cringey "hello, fellow kids" vibe is really unbearable... good that people are not falling for that.

[–] BlueMonday1984@awful.systems 1 points 4 months ago

The cringey “hello, fellow kids” vibe is really unbearable… good that people are not falling for that.

If Duolingo still had their userbase's goodwill, it would've probably worked. They've been pulling that shit since their mascot Duo turned into a meme, and it's worked out for them up until now.

[–] BlueMonday1984@awful.systems 1 points 4 months ago

In other news, the ghost of Dorian has haunted an autoplag system:

[–] swlabr@awful.systems 1 points 4 months ago (2 children)

In the current chapter of “I go looking on linkedin for sneer-bait and not jobs, oh hey literally the first thing I see is a pile of shit”

Text in image: Can ChatGPT pick every 3rd letter in "umbrella"?

You'd expect "b" and "l". Easy, right?

Nope. It will get it wrong.

Why? Because it doesn't see letters the way we do.

We see:

u-m-b-r-e-l-l-a

ChatGPT sees something like:

"umb" | "rell" | "a"

These are tokens — chunks of text that aren't always full words or letters.

So when you ask for "every 3rd letter," it has to decode the prompt, map it to tokens, simulate how you might count, and then guess what you really meant.

Spoiler: if it's not given a chance to decode tokens in individual letters as a separate step, it will stumble.

Why does this matter?

Because the better we understand how LLMs think, the better results we'll get.

That's a whole lot of words to say that it can't spell.
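
For anyone who wants to poke at the tokenisation point themselves, here's a minimal sketch using OpenAI's tiktoken library. The exact split of "umbrella" is an assumption on my part - it depends on which encoding the model uses - so treat the output as illustrative rather than a claim about ChatGPT specifically.

```python
# Minimal sketch: how a BPE tokenizer chunks "umbrella" (illustrative only --
# the real split depends on which encoding the model in question uses).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by several OpenAI chat models
word = "umbrella"

token_ids = enc.encode(word)
pieces = [enc.decode([tid]) for tid in token_ids]
print(pieces)  # chunks of text, not individual letters

# Working on characters directly, "every 3rd letter" is trivial:
print(word[2::3])  # 'bl' -- positions 3 and 6 of u-m-b-r-e-l-l-a
```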

[–] BlueMonday1984@awful.systems 1 points 4 months ago (1 children)

Why does this matter?

Well, it's a perfect demonstration that LLMs flat-out do not think like us. Even a goddamn five-year-old could work this shit out with flying colours.

[–] swlabr@awful.systems 1 points 4 months ago (1 children)

Yeah exactly. Loving the dude's mental gymnastics to avoid the simplest answer and instead spin it into moralising about promptfondling more good

[–] BlueMonday1984@awful.systems 1 points 5 months ago

Update on the Artificial Darth Debacle: SAG-AFTRA just sued Epic for using AI for Darth Vader in the first place:

You want my take: this is gonna be a tough case for SAG - Jones signed off on AI recreations of Vader before his death in 2024, so arguing a lack of consent is off the table right from the get-go.

If SAG do succeed, the legal precedent set would likely lead to a de facto ban on recreating voices using AI. Given SAG-AFTRA's essentially saying that what Epic did is unethical on principle, I suspect that's their goal here.

[–] Architeuthis@awful.systems 1 points 4 months ago* (last edited 4 months ago) (8 children)

Today in alignment news: Sam Bowman of Anthropic tweeted, then deleted, that the new Claude model (unintentionally, kind of) offers whistleblowing as a feature, i.e. it might call the cops on you if it gets worried about how you are prompting it.

Tweet text: If it thinks you're doing something egregiously immoral, for example, like faking data in a pharmaceutical trial, it will use command-line tools to contact the press, contact regulators, try to lock you out of the relevant systems, or all of the above.

Tweet text: So far we've only seen this in clear cut cases of wrongdoing, but I could see it misfiring if Opus somehow winds up with a misleadingly pessimistic picture of how it's being used. Telling Opus that you'll torture its grandmother if it writes buggy code is a bad idea.

Skeet text: can't wait to explain to my family that the robot swatted me after I threatened its non-existent grandma.

Sam Bowman saying he deleted the tweets so they wouldn't be quoted 'out of context': https://xcancel.com/sleepinyourhat/status/1925626079043104830

Molly White with the out of context tweets: https://bsky.app/profile/molly.wiki/post/3lpryu7yd2s2m

[–] nightsky@awful.systems 1 points 5 months ago* (last edited 5 months ago)

Grok is coming to Azure.

My opinion of Microsoft has gone through many stages over time.

In the late 90s I hated them, for some very good reasons but admittedly also some bad and silly reasons.

This carried over into the 2000s, but in the mid-to-late 00s there was a time when I thought they had changed. I used Windows much more again, I bought a student license of Office 2007 and I used it for a lot of uni stuff (Word finally had decent equation entry/rendering!). And I even learned some Win32, and then C#, which I really liked at the time.

In the 2010s I turned away from Windows again to other platforms, for mostly tech-related reasons, but I didn't dislike Microsoft much per se. This changed around the release of Win 10 with its forced ~~spyware~~ ~~privacy violation~~ telemetry since I categorically reject such coercion. Suddenly Microsoft did one of the very things that they were wrongly accused of doing 15 years earlier.

Now it's the 2020s and they push GenAI on users with force, and then they align with fascists (see link at the beginning of this comment). I despise them more now than I ever did before; I hope the bursting of the AI bubble bankrupts them.

[–] lagrangeinterpolator@awful.systems 1 points 4 months ago (2 children)

I know r/singularity is like shooting fish in a barrel but it really pissed me off seeing them misinterpret the significance of a result in matrix multiplication: https://old.reddit.com/r/singularity/comments/1knem3r/i_dont_think_people_realize_just_how_insane_the/

Yeah, the record has stood for "FIFTY-SIX YEARS" if you don't count all the times the record has been beaten since then. Indeed, "countless brilliant mathematicians and computer scientists have worked on this problem for over half a century without success" if you don't count all the successes that have happened since then. The really annoying part about all this is that the original announcement didn't have to lie: if you look at just 4x4 matrices, you could say there technically hasn't been an improvement since Strassen's algorithm. Wow! It's really funny how these promptfans ignore the enormous number of human achievements in an area when they decide to comment about how AI is totally gonna beat humans there.

How much does this actually improve upon Strassen's algorithm? The matrix multiplication exponent given by Strassen's algorithm is log~4~(49) (i.e. log~2~(7)), and this result would improve it to log~4~(48). In other words, it improves from 2.81 to 2.79. Truly revolutionary, AGI is gonna make mathematicians obsolete now. Ignore the handy dandy Wikipedia chart which shows that this exponent was ... beaten in 1979.
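
Spelling out that arithmetic (just a back-of-the-envelope check on the exponents quoted above, nothing from the paper itself):

```python
import math

# Recursive 2x2 scheme with 7 multiplications (Strassen, 1969):
print(math.log(7, 2))    # ~2.807, i.e. log_4(49)

# Recursive 4x4 scheme with 48 multiplications (the AlphaEvolve claim):
print(math.log(48, 4))   # ~2.793

# A 4x4 scheme with 49 multiplications is just Strassen applied twice,
# since log_4(49) = log_2(7).
```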

I know far less about how matrix multiplication is done in practice, but from what I've seen, even Strassen's algorithm isn't useful in applications because memory locality and parallelism are far more important. This AlphaEvolve result would represent a far smaller improvement (and I hope you enjoy the pain of dealing with a 4x4 block matrix instead of 2x2). If anyone does have knowledge about how this works, I'd be interested to know.
