just got a job in mathematical publishing. it's work i think i'll actually enjoy and expect to be very good at, it pays much better than any other job i've had previously (and they maxed out the position's pay range, which i wasn't expecting) and it has about a month of paid leave a year. such a relief
TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
Felicitations!
("A job," Blake thinks. "I need to find one of those.")
fuck yeah! congrats!
It's 10 pm on a Sunday. My FIL is texting me business plans from the slop hole as I try to get the last kiddo down to sleep. He wants me to read them to my wife, who is already mad at him about it.
Thank you all for being an island of sanity.
"posting from the slop hole" is probably the best description possible for this, brb stealing
one of the brain geniuses at bluesky

how… what… how… why… why would you think…
There's only one thing that's advertised as not-waterproof that I'll risk using underwater and that's Casio wristwatches. "Water resist" is a huge understatement for them; the things are indestructible.
(This comment sponsored by Casio)
another Onion banger for these trying times
“Then you wake up in a cold sweat and can’t breathe at all, almost like you’re drowning—I guess from the weight of untold mobs of people leaping on you and ripping you apart”
the real Scam Altman would never feel any kind of remorse or emotion about this
Usually, you wake up on a lifeless beach that’s adorned with some sort of abandoned marble temple. It’s supposed to be beautiful, but instead it’s really sad. Almost unbearably sad. So much so that you want to get away from it. So you crawl downward into these vents going below the horrible temple, and suddenly it’s like you’re moving through the innards of an incomprehensible machine that’s thudding away, thud, thud, thud. And as you get deeper, the metal sidings are carved with scrawled ominous curses and slurs directed toward you, and you hear the voices, louder than before, and you somehow know these people are in pain because of you. It keeps getting colder. Color drains from the world. And you see the crowd through the slats of the vents: pale and emaciated men, women, and children from centuries to come, all of them pressed together for warmth in some sort of unending cavern. What clothes they have are torn and ragged. Before you know it, their dirty hands and dirty fingernails lurch through the grates, and they’re reaching for you, tearing at your shirt, moaning terrible things about their suffering and how you made it happen, you made it, and you need to stop this now, now, now. And next they’re ripping you apart, limb from limb, and you are joining them in the gray dimness forever.

in the past 24 hours I was fooled by 3 pieces of fake news in a row:
- that Kurds from Iraq were crossing the border to fight in Iran
- that Windows 12 would be AI-centred or require an AI chip to work (I helped spread this)
- that Spain has capitulated and let the US use its ports for war (erroneously claimed by a WH official).
I know that fake news can be made organically and has been since forever, and I'm doing selection bias here, but I can't help but picture the misinformation engines firehosing bullshit constantly until some of it catches and spreads.
yeah it's bad
otoh awareness I think is spreading
swedish public broadcasting has regular "spot the fake" pieces on their website
I think giving a sensationalist bit of news 6 hours to "mature" is a good idea before amplifying.
I like this. News is a frittata, it needs time to set before consuming.
Prosperity's Path: OpenAI has shown it cannot be trusted. Canada needs nationalized, public AI https://archive.ph/QLg2D
tldr tech bullshit requires ur tax dollars. whatever you do, don't question the all-knowing laurentian technocrats!
yeah, the current situation in Europe is like: "As EU citizens, we should break free of our dependency on US Big Tech like the Torment Nexus. That's why my company is advancing our fully sovereign solution, the Agony Core! Europe-owned, GDPR-compliant, Frontex-approved scalable Torment-as-a-Service, at competitive prices with TN-based deployments!"
The HarfBuzz maintainer has drunk the slop-aid - Baldur has commented on it, warning of the potentially catastrophic consequences:
Fonts are a lucrative target. They require a complex parser, usually written in a language that isn't memory safe, and often directly exposed to outside data (websites, PDFs, etc. that contain fonts). This means a flaw could lead to the worst-case attack scenario: arbitrary code execution. HarfBuzz is pretty much the only full-featured library that takes font files, parses them, and returns glyphs ready to render. It is ubiquitous. A security flaw in HarfBuzz could make a good portion of the world's user-facing software (i.e. anything that renders text) unsafe.
Recently discovered (indirectly, through fedi) that Donald Knuth got oneshot by Claude - feeling the itch to write about tech's vulnerability to LLMs because of it.
Baldur Bjarnason's essay remains evergreen.
Consider homeopathy. You might hear a friend talk about “water memory”, citing all sorts of scientific-sounding evidence. So, the next time you have a cold you try it.
And you feel better. It even feels like you got better faster, although you can’t prove it because you generally don’t document these things down to the hour.
“Maybe there is something to it.”
Something seemingly working is not evidence of it working.
Were you doing something else at the time which might have helped your body fight the cold?
Would your recovery have been any different had you not taken the homeopathic “remedy”?
Did your choosing of homeopathy over established medicine expose you to risks you weren’t aware of?
Even when looking at Knuth's account of what happened, you can already tell that the AI is receiving far more credit than what it actually did. There is something about a nondeterministic slot machine that makes it feel far more miraculous when it succeeds, while reliable tools that always do their job are boring and stupid. The downsides of the slot machine never register in comparison to the rewards. Does it feel so miraculous when I get an idea after experimenting in Mathematica?
I feel like math research is particularly susceptible to this, because it is the default that almost all of one's attempts do not succeed. So what if most of the AI's attempts do not succeed? But if it is to be evaluated as a tool, we have to check if the benefits outweigh the costs. Did it give me more productive ideas, or did it actually waste more of my time leading me down blind alleys? More importantly, is the cognitive decline caused by relying on slot machines going to destroy my progress in the long term? I don't think anyone is going to do proper experiments for this in math research, but we have already seen this story play out in software. So many people were impressed by superficial performances, and now we are seeing the dumpster fire of bloat, bugs, and security holes. No, I don't think I want that.
And then there is the narrative of not evaluating AI as an objective tool based on what it can actually do, but instead as a tidal wave of Unending Progress that will one day sweep away those elitists with actual skills. Random lemmas today mean the Millennium Prize problems tomorrow! This is where the AI hype comes from, and why people avoid, say, comparing AI with Mathematica. To them I say good luck. We have dumped hundreds of billions of dollars into this, and there are only so many more hundreds of billions of dollars left. Were these small positive results (and significant negatives) worth hundreds of billions of dollars, or perhaps were there better things that these resources could have been used for?
ooh gooods nooo now all the Claude slurpers are going to refer to this forever as definitive proof of how legitimately useful LLMs have got, it "solved" a math problem for Donald Knuth! :<
A lobster invokes classic argument from authority
First Terrence Tao and now Donald Knuth.
If you're still on the fence about AI, you have to take it seriously now.
yeah b/c I'm a professional computer scientist ...
If you’re still on the fence about AI, you have to take it seriously now.
But... why?
Always remember that Nobel disease is a thing.
The one I often think about is the person who invented PCR and then later claimed to have had an encounter with a fluorescent talking raccoon of possibly extraterrestrial origin.
Even in Knuth's account it sounds like the LLM contribution was less in solving the problem and more in throwing out random BS that looked vaguely like different techniques were being applied until it spat out something that Knuth and his collaborator were able to recognize as a promising avenue for actual work.
His bud Filip Stappers rolled in to help solve an open digraph problem Knuth was working on. Stappers fed the decomposition problem to Claude Opus 4.6 cold. Claude ran 31 explorations over about an hour: brute force (too slow), serpentine patterns, fiber decompositions, simulated annealing. At exploration 25 it told itself “SA can find solutions but cannot give a general construction. Need pure math.” At exploration 30 it noticed a structural pattern in an earlier solution. Exploration 31 produced a working construction.
I am not a mathematician or computer scientist and so will not claim to know exactly what this is describing and how it compares to the normal process for investigating this kind of problem. However, the fact that it produced 4 approaches over 31 attempts seems more consistent with randomly throwing out something that looks like a solution rather than actually thinking through the process of each one. In a creative exploration like this where you expect most approaches to be dead ends rather than produce a working structure maybe the LLM is providing something valuable by generating vaguely work-shaped outputs that can inspire an actual mind to create the actual answer.
Filip had to restart the session after random errors, and had to keep reminding Claude to document its progress. The solution only covers one type of solution; when Claude tried to continue another way, it “seemed to get stuck” and eventually couldn’t run its own programs correctly.
The idea that it's ultimately spitting out random answer-shaped nonsense also follows from the amount of babysitting that was required from Filip to keep it actually producing anything useful. I don't doubt that it's more efficient than I would be at producing random sequences of work-shaped slop and redirecting or retrying in response to a new "please actually do this" prompt, but of the two of us only one is demonstrating actual intelligence and moving towards being able to work independently. Compared to an undergrad or myself I don't doubt that Claude has a faster iteration time for each of those attempts, but that's not even in the same zip code as actually thinking through the problem, and if anything serves as a strong counterexample to the doomer critihype about the expanding capabilities of these systems. This kind of high-level academic work may be a case where this kind of random slop is actually useful, but that's an incredibly niche area and does not do nearly as much as Knuth seems to think it does in terms of justifying the incredible cost of these systems. If anything the narrative that "AI solved the problem" is giving Anthropic credit for the work that Knuth and Stappers were putting into actually sifting through the stream of slop and identifying anything useful. Maybe babysitting the slop sluice is more satisfying or faster than going down every blind alley on your own, but you're still the one sitting in the river with a pan, and pretending the river is somehow pulling the gold out of itself is just damn foolish.
I am a computer science PhD so I can give some opinion on exactly what is being solved.
First of all, the problem is very contrived. I cannot think of what the motivation or significance of this problem is, and Knuth literally says that it is a planned homework exercise. It's not a problem that many people have thought about before.
Second, I think this problem is easy (by research standards). The problem is of the form: "Within this object X of size m, find any example of Y." The problem is very limited (the only thing that varies is how large m is), and you only need to find one example of Y for each m, even if there are many such examples. In fact, Filip found that for small values of m, there were tons of examples for Y. In this scenario, my strategy would be "random bullshit go": there are likely so many ways to solve the problem that a good idea is literally just trying stuff and seeing what sticks. Knuth did say the problem was open for several weeks, but:
- Several weeks is a very short time in research.
- Only he and a couple friends knew about the problem. It was not some major problem many people were thinking about.
- It's very unlikely that Knuth was continuously thinking about the problem during those weeks. He most likely had other things to do.
- Even if he was thinking about it the whole time, he could have gotten stuck in a rut. It happens to everyone, no matter how much red site/orange site users worship him for being ultra-smart.
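The "random bullshit go" strategy described above can be sketched as a generic randomized search: blindly generate candidates and keep the first one that checks out. This is a toy illustration only, with a made-up subset-sum instance standing in for Knuth's digraph problem (the function and parameter names are my own, not anything from the actual Claude session); the point is that when valid examples are dense, blind sampling finds one fast without any understanding of the problem.

```python
import random

def random_bullshit_go(generate, is_valid, max_tries=100_000, seed=0):
    """Try random candidates until one satisfies the predicate.

    No insight required: the generator knows nothing about *why*
    a candidate works, and the checker does all the real judging.
    """
    rng = random.Random(seed)
    for _ in range(max_tries):
        candidate = generate(rng)
        if is_valid(candidate):
            return candidate
    return None  # the firehose ran dry

# Toy stand-in for "within this object X of size m, find any example of Y":
# within {1..m}, find any subset summing to a target. Because there are
# tons of valid subsets, blind sampling stumbles onto one quickly.
m, target = 20, 50
solution = random_bullshit_go(
    generate=lambda rng: [x for x in range(1, m + 1) if rng.random() < 0.5],
    is_valid=lambda s: sum(s) == target,
)
```

As in the Knuth story, everything interesting lives in `is_valid`: an expert still has to define what counts as a solution and recognize one when it appears.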
I guess "random bullshit go" is served well by a random bullshit machine, but you still need an expert who actually understands the problem to read the tea leaves and evaluate whether you got something useful. Knuth's narrative is also not very transparent about how much hand-holding Filip did for the AI.
I think the main danger of this (putting aside the severe societal costs of AI) is not that doing this is faster or slower than just thinking through the problem yourself. It's that relying on AI atrophies your ability to think, and eventually even your ability to guard against the AI bullshitting you. The only way to retain a deep understanding is to constantly be in the weeds thinking things through. We've seen this story play out in software before.
Thank you for providing some actual domain experience to ground my idle ramblings.
I wonder if part of the reason why so many high profile intellectuals in some of these fields are so prone to getting sniped by the confabulatron is an unwillingness to acknowledge (either publicly or in their own heart) that "random bullshit go" is actually a very useful strategy. It reminds me of the way that writers will talk about the value of just getting words on the page because it's easier to replace them with better words than to create perfection ex nihilo, or the rubber duck method of troubleshooting where just stepping through the problem out loud forces you to organize your thoughts in a way that can make the solution more readily apparent. It seems like at least some kinds of research are also this kind of process of analysis and iteration as much as if not more than raw creation and insight.
I have never met Donald Knuth, and don't mean to impugn his character here, even as I'm basically asking if he's too conceited to properly understand what an LLM is, but I think of how people talk about science and scientists and the way it gets romanticized (see also Iris Merideth's excellent piece on "warrior culture" in software development) and it just doesn't fit a field that can see meaningful progress from throwing shit at the wall to see what sticks. A lot of the discourse around art and artists is more willing to acknowledge this element of the creative process, and that might explain their greater ability and willingness to see the bullshit faucet for what it is. Maybe because science and engineering have a stricter and more objective pass/fail criteria (you can argue about code quality just as much as the quality of a painting, but unlike a painting either the program runs or it doesn't. Visual art doesn't generally have to worry about a BSOD) there isn't the same openness to acknowledge that the affirmative results you get from an LLM are still just random bullshit. I can imagine the argument being: "The things we're doing are very prestigious and require great intelligence and other things that offer prestige and cultural capital. If 'random bullshit go' is often a key part of the process then maybe it doesn't need as much intelligence and doesn't deserve as much prestige. Therefore if this new tool can be at all useful in supplementing or replicating part of our process it must be using intelligence and maybe it deserves some of the same prestige that we have."
My generous statement: Knuth, being a scientist, is used to an "adversary" that plays fair. As we have known for decades, a scientist can be tricked in situations that a magician will see through. This applies all the more now with the Sycophancy Engines, which make mathematics into a casino vacation. Just one more prompt, bro. Just one more prompt.
My less generous statement: Knuth is almost 90 years old. Sure, age doesn't imply a person will become a doddering fool, but people do tend to slow down, to have less energy and more need to spend it managing their health. "Thinking about a problem for a few weeks" counts for less in a situation like that.
My extremely ungenerous statement: Hey, remember when Michael Atiyah claimed to have proved the Riemann hypothesis in 2018? And the community reaction was a pained, "Atiyah is one of the great mathematicians... of the 20th century."
oh hey I remember reading that Donald Knuth paper earlier today, when it got posted by an AI youtube channel as 'proof' AI is on the path to AGI
The AI people are still infatuated with math. The Epoch AI staff, after being thoroughly embarrassed last year by the FrontierMath scandal, have now decided to make a new FrontierMath Open Problems benchmark, this time with problems that people might give a shit about!
I decided to look at one of the easiest "moderately interesting" problems and noticed that GPT-5.2 Pro managed to solve a warm up version of the problem, i.e. a version that had been previously solved. Wow, these reasoning models sure are capable of math! So I was curious and looked at the reasoning trace and it turns out that ... the model just found an obscure website with the right answer and downloaded it. Well, I guess you could say it has some impressive reasoning as it figures out how to download and parse the data, maybe.
We really need to work harder at poisoning the training data for math problems.
was doomscrolling and got fucking jumpscared by this fucking article: https://www.science.org/content/article/meet-three-scientists-who-said-no-epstein
Followup on the Mass AI Bill, Russel has 180'd on it:
https://russwilcoxdata.substack.com/p/93a-the-three-characters-that-should
Buried in the penalty clause, the part of the bill that nobody reads, is a single reference: violations “shall be punishable in the same manner as provided in Chapter 93A of the General Laws.”
For those outside Massachusetts: Chapter 93A is the state’s consumer protection statute. It is, by most accounts, the most aggressive consumer protection law in America.
Here’s what 93A unlocks. Anyone can sue, not just the government. Class actions are on the table. If the court finds a violation was willful or knowing, damages get tripled. And the bar for what counts as “unfair or deceptive” is lower than in almost any other state.
Now bolt 93A onto all of that. What do you get?
You get a bill that doesn’t need a single regulator to lift a finger. You get a bill that funds its own enforcement through plaintiff attorneys who can file class actions, collect treble damages, and recover legal fees. You get the ADA website-accessibility litigation playbook, where lawyers systematically identify technical violations and file suits at scale, applied to every piece of AI-generated content touching Massachusetts.
Private right of action, fuck yeah. Turns grok into a legal fees dispenser.
The bill doesn’t need to be well-drafted to be dangerous. It needs to be vague, broad, and connected to 93A.
lol
https://www.wired.com/story/openai-fires-employee-insider-trading-polymarket-kalshi/
lol. Between this and the ayatollah clawback, I'm expecting some entertaining litigation.
new episode of odium symposium. it's a tribute to knowledge fight, in which we dissect an episode of nick fuentes's show. i was nervous about how this would turn out but i think it's actually my favorite episode yet.
https://www.patreon.com/posts/11-groyper-151852222 (links to other platforms at www.odiumsymposium.com)
God that was bleak - I thought Nick was bad in his guest spots on Alex's show (seen via Knowledge Fight, of course) but apparently you really do need at least two layers of insulating podcast to avoid suffering critical psychic damage from that level of hatred. I appreciated the acknowledgement that in order to feel at all okay playing clips you needed to sanewash him a little bit. I'm pretty sure that JorDan do the same thing with Alex and don't acknowledge it nearly often enough.
I also feel like some of Nick's schtick is about trying to position himself and maintain his position in the right wing grifter bigot-industrial complex. Like, the open disdain for his audience and presenting his actually pretty straightforward feelings on the halftime show as somehow brave and iconoclastic is also about differentiating himself and making his audience feel superior to Alex, Tucker, Candace, etc. In that sense the open disdain for the audience serves another purpose in terms of reinforcing hierarchy. Look at how great it feels for me to be better than you. And even you are better than the chuds, who are better than the racialized other.
The good news is the report is false. According to contacts that are familiar with the Windows roadmap, there is no plan to ship a Windows 12 this year. In fact, I understand that the Windows roadmap for 2026 is all about fixing Windows 11 and attempting to improve its reputation by addressing top feedback such as reducing AI bloat across the OS
"We have heard your complaints about lead in the paint, and our roadmap for Leaded Paint 2026 is all about improving its reputation by making the lead easier to swallow"
Claude Code claims another victim.
I thought it'd never happen to me but here we are
Stop the presses. Dude who's into LLMs has shit takes about open source software.
Apparently OSS devs that publish under non-commercial licenses are shutting people out?
Definitely some bespoke what the fu-
Skyview.social mirror so everyone can see - he's locked out everyone who's not signed in.
Blast from the past: in 2014, Scott Alexander posted a take on marijuana legalization which showed excellent knowledge of medical papers but huge gaps in his knowledge of what brown people or smart policy reformers have to say. David Gerard and Christopher Hallquist in the comments, digression on how pot affects your IQ with gwern chipping in. Alexander came back in 2018 promising that he was right all along with a footnote about how some people in the comments told him that people like smoking weed and he did not know how to process that because his utilitarian calculation said it was bad for society.