this post was submitted on 15 Dec 2025

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this. This was a bit late - I was too busy goofing around on Discord)

[–] blakestacey@awful.systems 19 points 2 weeks ago (3 children)

Merriam-Webster’s human editors have chosen slop as the 2025 Word of the Year. We define slop as “digital content of low quality that is produced usually in quantity by means of artificial intelligence.” All that stuff dumped on our screens, captured in just four letters: the English language came through again.

https://www.merriam-webster.com/wordplay/word-of-the-year?2025-share

[–] rook@awful.systems 17 points 1 week ago

Sunday afternoon slack period entertainment: image generation prompt “engineers” getting all wound up about people stealing their prompts and styles and passing off hard work as their own. Who would do such a thing?

https://bsky.app/profile/arif.bsky.social/post/3mahhivnmnk23

@Artedeingenio

Never do this: Passing off someone else's work as your own.

This Grok Imagine effect with the day-to-night transition was created by me — and I'm pretty sure that person knows it. To make things worse, their copy has more impressions than my original post.

Not cool 👎

Ahh, sweet schadenfreude.

I wonder if they’ve considered that it might actually be possible to get a reasonable imitation of their original prompt by using an LLM to describe the generated image, and just tack on “more photorealistic, bigger boobies” to win at imagine generation.

[–] sailor_sega_saturn@awful.systems 16 points 2 weeks ago* (last edited 2 weeks ago)

Popular RPG Expedition 33 got disqualified from the Indie Game Awards due to using Generative AI in development.

Statement on the second tab here: https://www.indiegameawards.gg/faq

When it was submitted for consideration, representatives of Sandfall Interactive agreed that no gen AI was used in the development of Clair Obscur: Expedition 33. In light of Sandfall Interactive confirming the use of gen AI art in production on the day of the Indie Game Awards 2025 premiere, this does disqualify Clair Obscur: Expedition 33 from its nomination.

[–] blakestacey@awful.systems 15 points 1 week ago (2 children)

Today in autosneering:

KEVIN: Well, I'm glad. We didn't intend it to be an AI focused podcast. When we started it, we actually thought it was going to be a crypto related podcast and that's why we picked the name, Hard Fork, which is sort of an obscure crypto programming term. But things change and all of a sudden we find ourselves in the ChatGPT world talking about AI every week.

https://bsky.app/profile/nathanielcgreen.bsky.social/post/3mahkarjj3s2o

[–] Soyweiser@awful.systems 10 points 1 week ago

Obscure crypto programming term. Sure

[–] bitofhope@awful.systems 15 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

Rewatched Dr. Geoff Lindsey's video about deaccenting in the English language and how "AI" speech synthesizers and youtubers tend to get it wrong. In the case of the latter, it's usually due to reading from a script or being an L2 English speaker whose native language doesn't use destressing.

It reminded me of a particular line in Portal

Spoilers for Portal (2007 puzzle game): GLaDOS (with a deeper, more seductive, slightly less monotone voice than until now): "Good news: I figured out what that thing you just incinerated did. It was a morality core they installed after I flooded the Enrichment Center with a deadly neurotoxin to make me stop flooding the Enrichment Center with a deadly neurotoxin."

The words "the Enrichment Center with a deadly neurotoxin" are spoken with the exact same intonation both times, which helps maintain the robotic affect in GLaDOS's voice even after it shifts to be slightly more expressive.

Now I'm wondering if people whose native language lacks deaccenting even find the line funny. To me it's hilarious to repeat a part of a sentence without changing its stress because in English and Finnish it's unusual to repeat a part of a sentence without changing its stress.

It is not lost on me that the fictional evil AI was written with a quirk in its speech to make it sound more alien and unsettling, and real life computer speech has the same quirk, which makes it sound more alien and unsettling.

[–] e8d79@discuss.tchncs.de 14 points 2 weeks ago (7 children)

Why is my home directory gone, Claude?

See that ~/ at the end? That's your entire home directory.

This just keeps happening...

Previously, Previously Previously
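For anyone who missed why that trailing ~/ is fatal: the shell performs tilde expansion before the command ever runs, so a stray ~/ in an argument list becomes the absolute path of your home directory. A minimal sketch (using echo so nothing is actually deleted; the ./build path is just a made-up example argument):

```shell
# Tilde expansion rewrites a bare ~/ to "$HOME/" before the command runs,
# so an agent emitting `rm -rf ./build ~/` really passes your home
# directory as the final argument.
# Safe demonstration: print the expanded arguments instead of deleting.
echo rm -rf ./build ~/
```

Running this prints the command with the last argument already expanded to your home directory's path, which is exactly what rm would have received.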

[–] Soyweiser@awful.systems 12 points 2 weeks ago

Sir, a NaNth deletion has hit the home directory.

[–] Jayjader@jlai.lu 10 points 2 weeks ago (4 children)

Screenshot of reddit comments. Some terms in users' comments have become links with a magnifying glass icon next to them.

Oh god, reddit is now turning comments into links to search for other comments and posts that include the same terms or phrases.

[–] NextElephant9@awful.systems 14 points 2 weeks ago (3 children)

Ryanair now makes you install their app instead of allowing you to just print and scan your ticket at the airport, claiming it's "better for our environment (gets rid of 300 tonnes of paper annually)." Then you log into the app and you see there's an update about your flight, but you don't see what it's about. You need to open an update video, which, of course, is a generated video of an avatar reading it out for you. I bet that's better for the environment than using some of these weird symbols that I was putting into a box and that have now magically appeared on your screen and are making you feel annoyed (in the future for me, but present for you).

[–] corbin@awful.systems 13 points 2 weeks ago (1 children)

Today, in fascists not understanding art, a suckless fascist praised Mozilla's 1998 branding:

This is real art; in stark contrast to the brutalist, generic mess that the Mozilla logo has become. Open source projects should be more daring with their visual communications.

Quoting from a 2016 explainer:

[T]he branding strategy I chose for our project was based on propaganda-themed art in a Constructivist / Futurist style highly reminiscent of Soviet propaganda posters. And then when people complained about that, I explained in detail that Futurism was a popular style of propaganda art on all sides of the early 20th century conflicts… Yes, I absolutely branded Mozilla.org that way for the subtext of "these free software people are all a bunch of commies." I was trolling. I trolled them so hard.

The irony of a suckless developer complaining about brutalism is truly remarkable; these fuckwits don't actually have a sense of art history, only what looks cool to them. Big lizard, hard-to-read font, edgy angular corners, and red-and-black palette are all cool symbols to the teenage boy's mind, and the fascist never really grows out of that mindset.

[–] maol@awful.systems 14 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

It irks me to see people casually use the term "brutalist" when what they really mean is "modern architecture that I don't like". It really irks me to see people apply the term brutalist to something that has nothing to do with architecture! It's a very specific term!

[–] istewart@awful.systems 10 points 2 weeks ago

"Brutalist" is the only architectural style they ever learned about, because the name implies violence

[–] nfultz@awful.systems 13 points 2 weeks ago (1 children)

https://kevinmd.com/2025/12/why-ai-in-medicine-elevates-humanity-instead-of-replacing-it.html h/t naked capitalism

Throughout my nearly three decades in family medicine across a busy rural region, I watched the system become increasingly burdened by administrative requirements and workflow friction. The profession I loved was losing time and attention to tasks that did not require a medical degree. That tension created a realization that has guided my work ever since: If physicians do not lead the integration of AI into clinical practice, someone else will. And if they do, the result will be a weaker version of care.

I feel for him, but MAYBE this isn't a technical issue but a labor one; maybe 30 years ago doctors should have "led" on admin and workflow issues directly, and then they wouldn't need to "lead" on AI now? I'm sorry Cerner / Epic sucks but adding AI won't make it better. But, of course, class consciousness evaporates about the same time as those $200k student loans come due.

[–] blakestacey@awful.systems 13 points 2 weeks ago (1 children)

Ben Williamson, editor of the journal Learning, Media and Technology:

Checking new manuscripts today I reviewed a paper attributing 2 papers to me I did not write. A daft thing for an author to do of course. But intrigued I web searched up one of the titles and that's when it got real weird... So this was the non-existent paper I searched for:

Williamson, B. (2021). Education governance and datafication. European Educational Research Journal, 20(3), 279–296.

But the search result I got was a bit different...

Here's the paper I found online:

Williamson, B. and Piattoeva, N. (2022) Education Governance and Datafication. Education and Information Technologies, 27, 3515-3531.

Same title but now with a coauthor and in a different journal! Nelli Piattoeva and I have written together before but not this...

And so I checked out Google Scholar. Now on my profile it doesn't appear, but somehow on Nelli's it does and ... and ... omg, IT'S BEEN CITED 42 TIMES almost exclusively in papers about AI in education from this year alone...

Which makes it especially weird that in the paper I was reviewing today the precise same, totally blandified title is credited in a different journal and strips out the coauthor. Is a new fake reference being generated from the last?...

I know the proliferation of references to non-existent papers, powered by genAI, is getting less surprising and shocking but it doesn't make it any less potentially corrosive to the scholarly knowledge environment.

[–] scruiser@awful.systems 13 points 1 week ago (3 children)

Eliezer is mad OpenPhil (EA organization, now called Coefficient Giving)... advocated for longer AI timelines? And apparently he thinks they were unfair to MIRI, or didn't weight MIRI's views highly enough? And doing so for epistemically invalid reasons? IDK, this post is a bit more of a rant and less clear than classic sequence content (but is par for the course for the last 5 years of Eliezer's content). For us sane people, AGI by 2050 is still a pretty radical timeline, it just disagrees with Eliezer's belief in imminent doom. Also, it is notable that Eliezer has actually avoided publicly committing to consistent timelines (he actually disagrees with efforts like AI2027) other than a vague certainty we are near doom.

link

Some choice comments

I recall being at a private talk hosted by ~2 people that OpenPhil worked closely with and/or thought of as senior advisors, on AI. It was a confidential event so I can't say who or any specifics, but they were saying that they wanted to take seriously short AI timelines

Ah yes, they were totally secretly agreeing with your short timelines but couldn't say so publicly.

Open Phil decisions were strongly affected by whether they were good according to worldviews where "utter AI ruin" is >10% or timelines are <30 years.

OpenPhil actually did have a belief in a pretty large possibility of near term AGI doom, it just wasn't high enough or acted on strongly enough for Eliezer!

At a meta level, "publishing, in 2025, a public complaint about OpenPhil's publicly promoted timelines and how those may have influenced their funding choices" does not seem like it serves any defensible goal.

Lol, someone noting Eliezer's call out post isn't actually doing anything useful towards Eliezer's goals.

It's not obvious to me that Ajeya's timelines aged worse than Eliezer's. In 2020, Ajeya's median estimate for transformative AI was 2050. [...] As far as I know, Eliezer never made official timeline predictions

Someone actually noting AGI hasn't happened yet and so you can't say a 2050 estimate is wrong! And they also correctly note that Eliezer has been vague on timelines (rationalists are theoretically supposed to be preregistering their predictions in formal statistical language so that they can get better at predicting and people can calculate their accuracy... but we've all seen how that went with AI 2027. My guess is that at least on a subconscious level Eliezer knows harder near-term predictions would ruin the grift eventually.)

[–] blakestacey@awful.systems 10 points 1 week ago* (last edited 1 week ago) (2 children)

Yud:

I have already asked the shoggoths to search for me, and it would probably represent a duplication of effort on your part if you all went off and asked LLMs to search for you independently.

The locker beckons

[–] gerikson@awful.systems 13 points 2 weeks ago (3 children)

More on datacenters in space

https://andrewmccalip.com/space-datacenters

N.B. got this via HN; entire site gives off "wouldn't it be cool" vibes (author "lives and breathes space" IRONIC IT'S A VACUUM)

Also this is the only thermal mention

Thermal: only solar array area used as radiator; no dedicated radiator mass assumed

riiiiight....

[–] CinnasVerses@awful.systems 13 points 2 weeks ago (3 children)

I also enjoy :

Radiation/shielding impacts on mass ignored; no degradation of structures beyond panel aging

Getting high-powered electronics to work outside the atmosphere or the magnetosphere is hard, and going from a 100 meter long ISS to a 4 km long orbital data center would be hard. The ISS has separate cooling radiators and solar panels. He wants LEO to reduce the effects of cosmic rays and solar storms, but it's already hard to keep satellites from crashing into something in LEO.

Possible explanation for the hand waving:

I love AI and I subscribe to maximum, unbounded scale.

[–] blakestacey@awful.systems 10 points 2 weeks ago

"Your mother shubscribed to makshimum, unbounded shcale last night, Trebek."

[–] CinnasVerses@awful.systems 10 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

Author works for something called Varda Space (guess who is one of the major investors? drink. Guess what orifice the logo looks like? drink) and previously tried to replicate a claimed room-temperature superconductor https://www.wired.com/story/inside-the-diy-race-to-replicate-lk-99/

Some interesting ethnography of private space people in California: "People jump straight to hardware and hand-wave the business case, as if the economics are self-evident. They aren't. "

Page uses that "electrons = electricity" metonymy that prompt-fondling CEOs have been using.

[–] sailor_sega_saturn@awful.systems 12 points 2 weeks ago* (last edited 2 weeks ago) (6 children)

This is old news but I just stumbled across this fawning 2020 Elon Musk interview / award ceremony on the social medias and had to share it: https://www.youtube.com/live/AF2HXId2Xhg?t=2109

In it Musk claims synthetic mRNA (and/or DNA) will be able to do anything and it is like a computer program, and that stopping aging probably wouldn't be too crazy. And that you could turn someone into a freakin' butterfly if you want to with the right DNA sequence.

[–] blakestacey@awful.systems 11 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

This is what you get when you take Star Trek episodes where the writers had run out of ideas and watch them from the bottom of a K-hole.

And just think, he's been further pickling his brain for half a decade since then.

[–] o7___o7@awful.systems 14 points 2 weeks ago* (last edited 2 weeks ago) (7 children)

be elon musk

binge ket, adderall, and ST: Voyager one weekend

burst into monday morning SpaceX board meeting after 3 nights of no sleep

crash into table

get a nasty wound on scalp

it's bleeding pretty bad

stand atop board room table and shout "We must RETVRN TO AMPHIVIAN"

also we're naming the next crew Dragon capsule "Admiral Janeway"

everybody claps

[–] rook@awful.systems 11 points 2 weeks ago (2 children)

So, I’m taking this one with a pinch of salt, but it is entertaining: “We Let AI Run Our Office Vending Machine. It Lost Hundreds of Dollars.”

The whole exercise was clearly totally pointless and didn't solve anything that needed solving (like every other "AI" project, I guess) but it does give a small but interesting window into the mindset of people who have only one shitty tool and are trying to make it do everything. Your chatbot is too easily led astray? Use another chatbot to keep it in line! Honestly, I thought they were already doing this… I guess it was just too expensive or something, but now the price/desperation curves have intersected.

Anthropic had already run into many of the same problems with Claudius internally so it created v2, powered by a better model, Sonnet 4.5. It also introduced a new AI boss: Seymour Cash, a separate CEO bot programmed to keep Claudius in line. So after a week, we were ready for the sequel.

Just one more chatbot, bro. Then prompt injection will become impossible. Just one more chatbot. I swear.

Anthropic and Andon said Claudius might have unraveled because its context window filled up. As more instructions, conversations and history piled in, the model had more to retain—making it easier to lose track of goals, priorities and guardrails. Graham also said the model used in the Claudius experiment has fewer guardrails than those deployed to Anthropic’s Claude users.

Sorry, I meant just one more guardrail. And another ten thousand tokens capacity in the context window. That’ll fix it forever.

https://archive.is/CBqFs

[–] wizardbeard@lemmy.dbzer0.com 11 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

Introducing the Palantir shit sandwich combo: Get a cover up for the CEO tweaking out and start laying the groundwork for the AGI god's priest class absolutely free!

https://mashable.com/article/palantir-ceo-neurodivergent

TL;DR- Palantir CEO tweaks out during an interview. Definitely not any drugs guys, he's just neurodivergent! But the good, corporate approved kind. The kind that has extra special powers that make them good at AI. They're so good at AI, and AI is the future, so Palantir is starting a group of neurodivergents hand picked by the CEO (to lead humanity under their totally imminent new AI god). He totally wasn't tweaking out. He's never even heard of cocaine! Or billionaire designer drugs! Never ever!


Edit: To be clear, no hate against neurodivergence, or skepticism about it in general. I'm neurodivergent. And yeah, some types of neurodivergence tend to result in people predisposed to working in tech.

But if you're the fucking CEO of Palantir, surely you've been through training for public appearances. It's funnier that it didn't take, but this is clearly just an excuse.

I strongly feel that it's an attempt to start normalizing the elevation of certain people into positions of power based off vague characteristics they were born with.

Lemmy post that pointed me to this: https://sh.itjust.works/post/51704917

Jesus. This being 2025 of course he had to clarify that it's definitely not DEI. Also it really grinds my gears to see hyperfocus listed as one of the "beneficial" aspects because there's no way it's not exploitative. Hey, so you know how sometimes you get so caught up in a project you forget to eat? Just so you know, you could starve on the clock. For me.

[–] TinyTimmyTokyo@awful.systems 11 points 2 weeks ago (1 children)

Orange site mods retitled a post about a16z funding AI slop farms to remove the a16z part.

The mod tried to pretend the reason was that the title was just too damn long and clickbaity. His new title was 1 character shorter than the original.

https://news.ycombinator.com/item?id=46305113

[–] sailor_sega_saturn@awful.systems 13 points 2 weeks ago (4 children)

I don’t love the title but it’s the best I could come up with to fit within the 80 character limit.

A half dozen people might still be reading hackernews on punchcards so they ha-

ve no choice but to argue about how to shorten "long" titles every day.

[–] bitofhope@awful.systems 10 points 2 weeks ago

Good to know that Orange Website is being considerate of us VT220 users. I knew there was a reason why mine has the amber phosphor.

[–] blakestacey@awful.systems 11 points 2 weeks ago (2 children)

An academic sneer delivered through the arXiv-o-tube:

Large Language Models are useless for linguistics, as they are probabilistic models that require a vast amount of data to analyse externalized strings of words. In contrast, human language is underpinned by a mind-internal computational system that recursively generates hierarchical thought structures. The language system grows with minimal external input and can readily distinguish between real language and impossible languages.

[–] antifuchs@awful.systems 11 points 2 weeks ago

Here’s a substack post (sorry) with a quote I found both neat and pretty funny:

Integrity comes from the Latin "integer," meaning whole or complete. A person with integrity is "whole" in the sense that their words, actions, and values are unified rather than fragmented or contradictory. They understand themselves; they have integrated the warring parts of themselves; and they respect and act on the values that their parts can agree upon.

Rationalists in shambles

[–] fullsquare@awful.systems 10 points 2 weeks ago* (last edited 2 weeks ago)

a16z funds 1000+ strong phone farm and uses it for mass manufacturing tiktok ai influencers, security turns out to be not good enough https://www.404media.co/hack-reveals-the-a16z-backed-phone-farm-flooding-tiktok-with-ai-influencers/

the usecase is spam:

The hacker also shared a list with me of more than 400 TikTok accounts Doublespeed operates. Around 200 of those were actively promoting products on TikTok, mostly without disclosing the posts were ads, according to 404 Media’s review of them. It’s not clear if the other 200 accounts ever promoted products or were being “warmed up,” as Doublespeed describes the process of making the accounts appear authentic before it starts promoting in order to avoid a ban. 

I’ve seen TikTok accounts operated by Doublespeed promote language learning apps, dating apps, a Bible app, supplements, and a massager.

[–] sailor_sega_saturn@awful.systems 10 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

Lol talk about mixed messages.

Mozilla's CEO yesterday:

[Firefox] will evolve into a modern AI browser

Firefox's social media account today:

Firefox is not becoming an AI browser.

[–] bitofhope@awful.systems 9 points 2 weeks ago

What the fuck would an "AI browser" even be, let alone a modern one. I know what a web browser is, basically a combined HTTP client and HTML renderer. An AI browser is not something that has a commonly understood meaning, so to claim Firefox or anything else will be one without elaboration is just wankery.

I can't help but do their dirty work for them and try to imagine what the hell an AI browser would be. Maybe you develop a standard protocol for prompting chatbots and a markup format for displaying responses and an AI browser is a client for that? Or maybe you just put an LLM in the search bar so Mozilla's bullshit machine can give you wrong answers before pressing the return key and having Google's bullshit machine give you wrong answers. Maybe there's an about:chatbot page. I think all of these are bad bullshit ideas, but at least they're ideas and not just "what if we added <buzzword> into <product>".

AI Browsers. Metaverse fast food. Blockchain sneakers. Gigwork apartments. Cloud toilets. Big Data headphones. AR chairs. Military grade pianos. 3D books. App drugs. Dotcom condoms. Cyberspace bicycles. Wireless jump ropes. Video silverware. WYSIWYG carpets. Transistor fanny packs. Electromechanical ladders. Atomic flooring. Radio saunas. Horseless glue. Steam pens. Water powered masturbation.

I assume some mesolithic asshole said shit like "we are transforming our hunter-gatherer settlement to a 'cave painting first' society" and neighboring community leaders gave that guy like a hundred animal skins each for his insight.

[–] froztbyte@awful.systems 10 points 2 weeks ago (1 children)

just came across a wild banger:

(An aside — In their official docs, Apple refers to the menu bar always in lowercase, because it’s just a menu bar. The ‘desktop’ is the same way. This is interesting, because we live in an era where everything is a branded product whose name is a proper noun– see the Dock– and we are not allowed to merely use things, we are forced to experience using them and you legally can’t ‘experience’ a regular ‘ol noun. Everybody knows it’s gotta be a proper noun in order to be experienced. The Las Vegas Demon Orb Experience. The Microsoft Windows Desktop Experience. The ESPN Experience Brought To You By Sports Gambling. The 6th Street Hostel Bathroom Experience. But our friends “menu bar” and “desktop” are just two things, average, normal, unobtrusive. This says something about how the people who created these things thought about them.)
