nfultz

joined 2 years ago
[–] nfultz@awful.systems 3 points 18 hours ago

I usually think of it as Pivot : All In :: Ted Turner : Rupert Murdoch. Hoping he can grow out of his Bill Maher phase, but at least he's been pretty generous to my dumb State U in the meantime. And if he wants to rack up listeners/ad dollars by doing ChatGPT boycotts, by all means.

[–] nfultz@awful.systems 9 points 2 days ago* (last edited 2 days ago) (5 children)

Galloway closes with a pretty strong sneer: Apocalypse No

AI’s popularity is correlated to wealth, with only those earning more than $200,000 per year viewing AI as a net positive. That’s not a reflection on AI, but yet another signal that the incumbents (the old and the wealthy) have successfully hoarded opportunity. In other words, the AI jobs freak-out is the latest act in America’s ongoing wealth inequality drama. The Gini coefficient is how economists measure inequality: Zero indicates everyone has exactly the same wealth; a score of 1.0 means one individual owns everything. In the U.S., we’re higher than 0.8 — about the level seen when the French began separating people from their heads. The real disruption won’t come from AI, but from the public watching arsonists sell smoke detectors and call it innovation.

The AI job apocalypse isn’t an economic forecast — it’s a marketing strategy. We’re not witnessing the end of work. We’re watching the monetization of fear.

Seems like he's getting back to his pre-crypto / WeWTF style. But when did podcasters start charging $53 (EDIT: $86.50 for floor) per seat at the Wiltern? That place is huge. And no Swisher either; it's his other podcast.

[–] nfultz@awful.systems 11 points 1 week ago

Not sure if this was posted in prev weeks; it just popped up on my YouTube: purdue cs240 situation is crazy

So several hundred students drop Intro to C after being accused of cheating with AI.

OK, so that is like normal at my state U, but the whole part where the chair does a little press conference, quasi-reinstates everyone, blocks the student newspaper from attending, and then some students sneak in and livestream it anyway is pretty comical. And then forcing the prof to file the academic-charges forms one at a time takes it into wtf territory.

Haven't seen it mentioned elsewhere, not that I really went looking for it though. I'm just thankful to be out of higher ed.

Note that this is the same school that will require AI as a gen ed iirc.

[–] nfultz@awful.systems 4 points 2 weeks ago

I asked someone from the mainland, she more or less agreed with you:

This is basically consistent with the long-standing logic of the Chinese internet: technology brings discursive power, and to give it away is to give away discursive power. AI is especially so.

[–] nfultz@awful.systems 6 points 3 weeks ago (3 children)

https://russwilcoxdata.substack.com/p/and-the-alignment-problem-what-chinas

In June 2025, Zhao Tingyang gave a talk at Tsinghua’s Fangtang Forum. The edited transcript ran in The Paper on July 4 under the title “人工智能的伦理与思维之限” (The Ethical and Thinking Limits of AI). Near the end, Zhao wrote this:

“What requires more reflection is that attempting to ‘align’ AI with human nature and values actually contains a risk of human species suicide. Human nature is selfish, greedy, and cruel. Humans are the most dangerous biological species. Almost all religions demand the restraint of human desire; this is no accident. AI aligned with human values may well become a dangerous subject by imitating humans. Originally, AI does not possess the selfish genes of carbon-based life, so AI is actually closer to the legendary ‘human nature is fundamentally good’ kind of existence, whereas human nature is not ‘fundamentally good.’” The alignment paradigm treats human values as the target AI should conform to. Zhao is arguing the target is the danger. An AI aligned to human values inherits the specific features of human judgment that Zhao says have produced the record of human harm. The paradigm is not incomplete. It is pointed the wrong way.

Zhao’s argument has developed across CASS, The Paper, and Wenhua Zongheng from late 2022 through 2025, from a provocative aside into a sustained critique of the alignment paradigm. In the same period, the English-language alignment and AI ethics literature produced no substantive engagement. No citations. No rebuttal. No naming. Zhao is a member of the Chinese Academy of Social Sciences Institute of Philosophy, author of the Tianxia framework, and one of the most cited philosophers working in Chinese today.

I need to think on this a little more, wasn't on my radar.

[–] nfultz@awful.systems 4 points 3 weeks ago

People talked about doing this with bitcoin mining - https://www.cnbc.com/2025/11/16/bitcoin-crypto-mining-home-heating-energy-bills.html - but I'm not aware of anyone trying to scale it out or turn it into a company.

[–] nfultz@awful.systems 6 points 3 weeks ago

Blogosphere-era link aggregator that somehow kept going way longer than Occupy Wall Street did. One thing to know (like here): they link to a lot of stuff they don't support.

[–] nfultz@awful.systems 10 points 3 weeks ago (7 children)

https://www.nakedcapitalism.com/2026/04/ai-reputational-crisis-violence-data-center-protests-sam-altman-openai.html

The profound ignorance of tech on the part of most American lawmakers is no joke. In a prior life, I was once responsible for updating a future Vice Chair of the Senate Intelligence Committee on tech issues and it was like showing an alarm clock to a chicken.

haha

That same senator went on to be a huge RussiaGater and played a central role in Twitter and other social media titans upping their censorship game at the behest of US politicians.

oh :(

[–] nfultz@awful.systems 4 points 1 month ago

Haven't seen any estimates of the death toll due to social media, but the one for cigarettes is/was pretty staggering (20-40m), way too big to hide - https://www.ucpress.edu/books/golden-holocaust/hardcover - if it takes "only" 50 years to flip the consensus on social media, that would be a faster process; I do hope it's possible, though. Tobacco execs had the good sense to keep a relatively low profile compared to Zuck and Musk, so that might speed it up.

[–] nfultz@awful.systems 10 points 1 month ago

Went to the campus screening of Ghost in the Machine today, many familiar names; I did not know going in that hometown hero Shazeda had so many lines (are they called lines in a documentary?). I can recommend it, especially for a more gen-ed / undergrad audience; the director seems supportive of educational use and reuse, and it is structured in a dozen or so bite-sized chapters.

Haven't seen the AI apocalypse optimist one to compare against, would probably rather spend my money on Mario tbh.

But also it made me realize it's not a "California" ideology anymore; she never calls it that. It's gone so mainstream and so widespread that you can't even get through the sneer club bingo list in a 2-hour movie. Gates, Musk, Andreessen, Zuck, Altman, no Peter Thiel!? As a statistician: Galton, Pearson (Karl only), Spearman, no Fisher!?

Non-zero overlap with the lore dump episode of Lain and the Epstein files, though:

spoiler: Douglas Rushkoff, but, sadly, not the dolphin guy

[–] nfultz@awful.systems 7 points 1 month ago

https://www.todayintabs.com/p/who-goes-ai

taking shots at the gray lady:

You might think Mr. R not so different, superficially, from Ms. L. He’s also a long-tenured technology columnist at a respected mainstream publication. And yet he has eagerly, even gleefully, turned flack for the machines. He has delegated much of his professional life to them as well, and seems proud of it:

Most recently, [Mr. R] tells me, he created a team of Claude agents to help edit his book, led by a “Master Editor” agent. Other sub-agents are in charge of things like fact-checking, making sure the book matches his writing style, and offering positive and negative feedback.

And why not? Mr. R is not known or valued for his elegance of expression. He has, at best, a “writing style,” and not one that can’t easily be duplicated by a large language model. Checking facts? Assessing his work’s strengths and weaknesses? More bathwater to be tossed out of this increasingly baby-less tub. So what explains Mr. R, who “expects AI models to get better than him at everything eventually?” Why does he go AI when Ms. L never would?

Mr. R’s secret is that his work is not primarily artistic or informative—it is functional. He serves a purpose for the industry he covers. Mr. R’s job is to absorb the tech industry’s self-mythologizing, and then believe in it even harder than the industry itself does. He serves as a kind of plausibility ratchet. His byline and employer legitimize a level of credulousness that would otherwise be laughable, and thereby allow tech PR to seem relatively restrained. Mr. R has no problem going AI because he himself has been a small cog in a big ugly machine for a long time.

spoiler: It's Kevin Roose


Another response to Ptacek.
