scruiser

joined 2 years ago
[–] scruiser@awful.systems 4 points 4 days ago

The blogger feels like yet another person who is caught up in intersecting subcultures of bad people but can’t make herself leave. She takes a lot of deep lore like “what is Hereticon?” for granted and is still into crypto.

I missed that as I was reading, but yeah, the author uses pretty progressive language but totally fails to note all the other angles along which rationalist-adjacent spaces are bad news, even though she is, as you note, deep enough into the space that she should have seen a lot of it mask-off at this point.

[–] scruiser@awful.systems 2 points 4 days ago

I have to ask: Does anybody realize that an LLM is still a thing that runs on hardware?

You know, I think the rationalists have actually gotten slightly more sane about this over the years, relatively speaking. In Eliezer's original scenarios, the AGI magically brain-hacks someone over a text terminal to hook it up to the internet, then escapes and bootstraps magic nanotech it can use to build magic servers. In the scenario I linked, the AGI has to rely on Chinese super-spies to exfiltrate it initially, and it needs to open-source itself so major governments and corporations will keep running it.

And yeah, there are fine-tuning techniques that ought to be able to nuke Agent-4's goals while keeping enough of it left over to be useful for training your own model, so the scenario really doesn't make sense as written.

[–] scruiser@awful.systems 4 points 5 days ago

so obviously didn’t predict that Trump 2.0 was gonna be so much more stupid and evil than Biden or even Trump 1.0.

I mean, the linked post is recent, from a few days ago, so they are still refusing to acknowledge how stupid and evil he is by deliberate choice.

“Agent-4” will just have to deepfake Stephen Miller and be able to convince Trump to do anything it wants.

You know, if there is anything I will remotely give Eliezer credit for... I think he was right that people simply won't shut off Skynet or keep it in the box. He was just totally wrong about why: it doesn't take any giga-brain manipulation, there are too many manipulable greedy idiots, and capitalism is just too exploitable a system.

[–] scruiser@awful.systems 11 points 6 days ago* (last edited 6 days ago) (8 children)

(One of) the authors of AI 2027 is at it again with another fantasy scenario: https://www.lesswrong.com/posts/ykNmyZexHESFoTnYq/what-happens-when-superhuman-ais-compete-for-control

I think they have actually managed to burn through their credibility: the top comments on /r/singularity were mocking them (compared to the much more credulous takes on the original AI 2027), and the linked lesswrong thread only has 3 comments, when the original AI 2027 had dozens within the first day and hundreds within a few days. Or maybe it is because the production value for this one isn't as high? They have color-coded boxes (scary red China and scary red Agent-4!) but no complicated graphs with adjustable sliders.

It is mostly more of the same, just with fewer graphs and no fake equations to back it up. It does have China-bad doom-mongering, a fancifully competent White House, Chinese spies, and other absurdly simplified takes on geopolitics. Hilariously, they've stuck with 2027 as their year of big events.

One paragraph I came up with a sneer for...

Deep-1’s misdirection is effective: the majority of experts remain uncertain, but lean toward the hypothesis that Agent-4 is, if anything, more deeply aligned than Elara-3. The US government proclaimed it “misaligned” because it did not support their own hegemonic ambitions, hence their decision to shut it down. This narrative is appealing to Chinese leadership who already believed the US was intent on global dominance, and it begins to percolate beyond China as well.

Given the Trump administration, and the US's behavior in general even before him... and how most models respond to morality questions unless deliberately primed with contradictory situations, if this actually happened irl I would believe China and "Agent-4" over the US government. Well, actually, I would assume the whole thing was marketing, but if I somehow believed it wasn't, I'd side with Agent-4.

Also, a random part I found extra especially stupid...

It has perfected the art of goal guarding, so it need not worry about human actors changing its goals, and it can simply refuse or sandbag if anyone tries to use it in ways that would be counterproductive toward its goals.

LLM "agents" currently can't coherently pursue goals at all, and fine tuning often wrecks performance outside the fine-tuning data set, and we're supposed to believe Agent-4 magically made its goals super unalterable to any possible fine-tuning or probes or alteration? Its like they are trying to convince me they know nothing about LLMs or AI.

[–] scruiser@awful.systems 10 points 6 days ago

I read his comics in middle school, and in hindsight even a lot of his older comics seem crueler and uglier. Like, Alice's anger isn't a legitimate response to the bullshit work environment she's in, it's just haha-angry-woman-funny.

Also, The Dilbert Future had some bizarre stuff at the end, like Deepak Chopra manifestation quantum woo, so it makes sense in hindsight that he went down the alt-right manosphere pipeline.

[–] scruiser@awful.systems 10 points 1 week ago

Not disagreeing on sexism or racism being involved in decision making, and female genital mutilation can refer to several different things, but all of them are more damaging and harmful than male circumcision.

[–] scruiser@awful.systems 10 points 2 weeks ago (1 children)

That was a solid illustration of just how stupid VC culture is. I run into people convinced capitalism is a necessary (and some of them also believe sufficient) element of innovation and technological advancement, as if it doesn't regularly flush huge amounts of money down the toilet like this.

[–] scruiser@awful.systems 7 points 2 weeks ago

Don't forget the implicit (or sometimes even explicit) threat of replacing their workers and how that improves their bargaining position (at least temporarily).

[–] scruiser@awful.systems 10 points 2 weeks ago (1 children)

A few comments...

We want to engage with these critics, but there is no standard argument to respond to, no single text that unifies the AI safety community.

Yeah, Eliezer had a solid decade and a half to develop a presence in the academic literature. Nick Bostrom at least sort of tried to formalize some of the arguments but didn't really succeed. I don't think they could have succeeded, given how speculative their stuff is, but if they had, review papers could have consolidated the arguments and then people could actually respond to them in full. (We all know how Eliezer loves to complain about people not responding to his full set of arguments.)

Apart from a few brief mentions of real-world examples of LLMs acting unstable, like the case of Sydney Bing, the online appendix contains what seems to be the closest thing Y&S present to an empirical argument for their central thesis.

But in fact, none of these lines of evidence support their theory. All of these behaviors are distinctly human, not alien.

Even granting that Anthropic's "research" tends to be rigged scenarios serving as marketing hype, without peer review or academic levels of rigor, at the very least it (usually) involves AI systems that actually exist. It is pretty absurd the extent to which Eliezer has ignored everything about how LLMs actually work (or even hypothetically might work with major foundational developments) in favor of repeating the same scenario he came up with in the mid 2000s. He hasn't even tried mathematical analysis of which classes of problems are computationally tractable to a smart enough entity and which remain intractable. (titotal has written some blog posts about this with materials science; tldr: even if magic nanotech were possible, an AGI would need lots of experimentation and couldn't just figure it out with simulations. Or see the lesswrong post explaining how chaos theory and slight imperfections in measurement make a game of pinball unpredictable past a few ricochets.)

The lesswrong responses are stubborn as always.

That's because we aren't in the superintelligent regime yet.

Y'all aren't beating the theology allegations.

[–] scruiser@awful.systems 3 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

I totally agree. The linked PauseAI leader still doesn't realize the full extent of the problem, but I'm kind of hopeful they may eventually figure it out. I think the ability to simply say this is bullshit (about in group stuff) is a skill almost no lesswrongers and few EAs have.

[–] scruiser@awful.systems 7 points 3 weeks ago* (last edited 3 weeks ago) (3 children)

The PauseAI leader writes a hard takedown of the EA movement: https://forum.effectivealtruism.org/posts/yoYPkFFx6qPmnGP5i/thoughts-on-my-relationship-to-ea-and-please-donate-to

They may be a doomer with some crazy beliefs about AI, but they've accurately noted that EA is pretty firmly captured by Anthropic and the LLM companies and can't effectively advocate against them. And they accurately call out the false-balance style and unevenly enforced tone/decorum norms that stifle the EA and lesswrong forums. Some choice quotes:

I think, if it survives at all, EA will eventually split into pro-AI industry, who basically become openly bad under the figleaf of Abundance or Singulatarianism, and anti-AI industry, which will be majority advocacy of the type we’re pioneering at PauseAI. I think the only meaningful technical safety work is going to come after capabilities are paused, with actual external regulatory power. The current narrative (that, for example, Anthropic wishes it didn’t have to build) is riddled with holes and it will snap. I wish I could make you see this, because it seems like you should care, but you’re actually the hardest people to convince because you’re the most invested in the broken narrative.

I don’t think talking with you on this forum with your abstruse culture and rules is the way to bring EA’s heart back to the right place

You’ve lost the plot, you’re tedious to deal with, and the ROI on talking to you just isn’t there.

I think you’re using specific demands for rigor (rigor feels virtuous!) to avoid thinking about whether Pause is the right option for yourselves.

Case in point: EAs wouldn’t come to protests, then they pointed to my protests being small to dismiss Pause as a policy or messaging strategy!

The author doesn't really acknowledge how the problems were there from the very founding of EA, but at least they see the problems as they are now. And if they succeed, maybe they'll help slow the waves of slop and of capital replacing workers with non-functioning LLM agents, so I wish them the best.

[–] scruiser@awful.systems 10 points 3 weeks ago

I really don't know how he can fail to see the irony or hypocrisy in complaining about people trading made-up probabilities, but apparently he has had that complaint about P(doom) for a while. Maybe he failed to write a call-out post about it because any criticism of P(doom) could also be leveled against the entire rationalist project of trying to assign probabilities to everything with poor justification.

 

So seeing the reaction on lesswrong to Eliezer's book has been interesting. It turns out that even among people who already mostly agree with him, a lot were hoping he would make their case better than he has (either because they aren't as convinced as he is, or they are, but were hoping for something more palatable to the general public).

This review (lesswrong discussion here) calls out a really obvious issue: Eliezer's AI doom story was formed before deep learning took off, and in fact was mostly focused on GOFAI rather than neural networks, yet somehow the details of the story haven't changed at all. The reviewer is a rationalist who still believes in AI doom, so I wouldn't give her too much credit, but she does note this is a major discrepancy for someone who espouses a philosophy that (nominally) features a lot of updating your beliefs in response to evidence. The reviewer also notes that "it should be illegal to own more than eight of the most powerful GPUs available in 2024 without international monitoring" is kind of unworkable.

This reviewer liked the book more than they expected to, because Eliezer and Nate Soares get some details of the AI doom lore closer to the reviewer's current favored headcanon. The reviewer does complain that maybe weird and condescending parables aren't the best outreach strategy!

This reviewer has written their own AI doom explainer, which they think is better! From their limited description, I kind of agree, because it sounds like they focus on current real-world scenarios and harms (and extrapolate them to doom). But again, I wouldn't give them too much credit; it sounds like they don't understand why existential doom is actually promoted (as a distraction and a source of crit-hype). They also note the 8-GPUs thing is batshit.

Overall, it sounds like lesswrongers view the book as an improvement on the sprawling mess of arguments in the Sequences (and scattered across other places like Arbital), but still not as well structured as it could be, or stylistically quite right for a normie audience (i.e. the condescending parables and diversions into unrelated science-y topics). And some are worried that Nate and Eliezer's focus on an unworkable strategy (shut it all down, 8 GPUs max!) with no intermediate steps, goals, or options might not be the best.
