scruiser

joined 2 years ago
[–] scruiser@awful.systems 9 points 5 days ago

Yeah. When it comes down to it, the libs think the problem with Trump isn't the fundamentals of what he is doing, it's that he is doing it without decorum or checking all the legal boxes or saying the usual lib pabulum to justify American imperialism. Skipping the legal checks and decorum is also bad, but kids in cages were horrible even when Obama was doing it the "right" way.

[–] scruiser@awful.systems 8 points 5 days ago* (last edited 5 days ago) (2 children)

I wonder if one of the reasons Pete Hegseth is going so hard after Anthropic is that he and other idiots in the Pentagon unironically believe shit like AI 2027 and so want to soft-nationalize the frontier companies in order to control the coming AGI. Considering that one of the uses the DoD allegedly wants LLMs for is fully autonomous weapons, they at the very least have a very distorted view of what the technology is capable of. Or they want an accountability sink so they can kill people with even less accountability. ...probably both.

I find it darkly hilarious that the doomer crit-hype is finally coming around to bite them, not in the form of heavy handed shut-it-all-down regulation to stop skynet, but in the form of authoritarian wackos wanting to make sure they are the ones "in charge" of skynet.

[–] scruiser@awful.systems 5 points 1 week ago

Did you know that the same week this fight was going public, Anthropic gave up on their "Responsible Scaling Policy"? (Well, technically they changed to a new version of their RSP that was even more empty and toothless.) To be fair, the RSP was basically doomer crit-hype safety theater ("we have a plan for if our AI is so dangerous it is a catastrophic risk"), but if they had actually followed it, they would have had to stop releasing new models (or else unhype their models' capabilities), so it was obvious they would abandon the RSP at some point (even many lesswrongers and EAs expected this).

I would bet that the timing of ditching the RSP was a deliberate marketing strategy to mask one ethical backslide behind an ethical stand... except only boosters and doomers even remotely expected the RSP to have any meaning in the first place. Still, comparing the number of lesswrong, EA, and /r/singularity discussions of RSP v3 to discussions of the fight with the DoD, I think they did succeed in minimizing what little criticism they got.

That was their original pitch against openAI

So yeah. People on places like /r/singularity were starting to get skeptical of Anthropic's claims about ethics, but after this current saga I see loads of comments glazing them and praising them, so mission success.

I wonder if Hegseth realizes he has basically given Anthropic's marketing team exactly what they want?

[–] scruiser@awful.systems 9 points 1 week ago* (last edited 1 week ago) (2 children)

I agree this is an important development in this continued saga, but as I said in the main thread, I really don't like this article's framing (to the point I wouldn't be surprised if the author is MAGA or at least prone to sanewashing MAGA).

Reposting what I wrote in the other thread:

Anthropic CEO Dario Amodei picked a major fight with the Department of Defense last month, asserting that his company’s AI models couldn’t be used for mass surveillance of Americans or direct autonomous weapons systems.

As to who picked a fight with whom, the DoD wanted to change the terms of their contract, and Anthropic apparently compromised on every term except mass surveillance of Americans (fuck the rest of the world, I guess) and fully autonomous weapons (cause a human clicking “yes to confirm” makes slop-bot powered drones so much better). This wasn’t good enough for this authoritarian strongman administration, so Pete Hegseth took the fight public with tweets first. So the article’s framing of Anthropic as “picking a fight” is bullshit. I mean, they did kind of bring it on themselves by hyping up their slop machine like it was a sci-fi AGI, but they didn’t start the fight.

For one, “it’s 100 percent in the government’s prerogative to set the parameters of a contract,” Snell & Winter partner Brett Johnson told Wired, effectively meaning there may be very little chance of an appeal.

So they find a quote about contracts, but a Supply Chain Risk designation isn’t just the DoD deciding on contracts, it is a specific power with specific mechanisms set by legislation. If (and it is a big if with the current Supreme Court’s composition) the court actually considers the terms set out in the legislation (including, most problematically for the DoD, a risk assessment and consideration of less intrusive alternatives), I think the DoD loses. Of course, the SC has all too often been willing to simply defer to the executive branch’s judgement, even when the process behind that judgement was “Trump or one of his underlings made a choice on a spiteful or idiotic whim, announced it on twitter, and the departments underneath them rushed to retroactively invent a saner rationalization”. If the DoD had decided to just end the contract (without all the public threats of SCR or invoking the Defense Production Act), Anthropic wouldn’t have been in a position to sue and this drama wouldn’t have been as publicized in the first place.

But the lawsuit itself takes a dramatically different tone.

Yeah, because one set of language is a CEO trying to grovel and backtrack on one of the rare few ethical commitments he has ever made (edit: well, actually Anthropic has made lots of ethical commitments, many of which they've already folded on; this is one of the only ones they've held against pressure, and one of the only ones the media/public might actually expect them to hold to because the fight was so dramatically public), and the other is a court filing making a case about the actual law.

[–] scruiser@awful.systems 7 points 1 week ago

If the DoD accidentally pops the AI bubble by triggering a cascade when Anthropic runs into issues; then later loses the court case in a humiliating enough way; then loses a civil case with the money going to pay the debts owed in Anthropic's bankruptcy proceedings; and the American public blames the Trump administration, the Republican party, the parts of the Democratic Party that acted as pathetic enablers, and the tech CEOs all at once (without letting one shift the blame to the others) for the following economic depression... I would count that as a relative win?

[–] scruiser@awful.systems 9 points 1 week ago* (last edited 1 week ago)

The specific article's framing pisses me off...

Anthropic CEO Dario Amodei picked a major fight with the Department of Defense last month, asserting that his company’s AI models couldn’t be used for mass surveillance of Americans or direct autonomous weapons systems.

As to who picked a fight with whom, the DoD wanted to change the terms of their contract, and Anthropic apparently compromised on every term except mass surveillance of Americans (fuck the rest of the world, I guess) and fully autonomous weapons (cause a human clicking "yes to confirm" makes slop-bot powered drones so much better). This wasn't good enough for this authoritarian strongman administration, so Pete Hegseth took the fight public with tweets first. So the article's framing of Anthropic as "picking a fight" is bullshit. I mean, they did kind of bring it on themselves by hyping up their slop machine like it was a sci-fi AGI, but they didn't start the fight.

For one, “it’s 100 percent in the government’s prerogative to set the parameters of a contract,” Snell & Winter partner Brett Johnson told Wired, effectively meaning there may be very little chance of an appeal.

So they find a quote about contracts, but a Supply Chain Risk designation isn't just the DoD deciding on contracts, it is a specific power with specific mechanisms set by legislation. If (and it is a big if with the current Supreme Court's composition) the court actually considers the terms set out in the legislation (including, most problematically for the DoD, a risk assessment and consideration of less intrusive alternatives), I think the DoD loses. Of course, the SC has all too often been willing to simply defer to the executive branch's judgement, even when the process behind that judgement was "Trump or one of his underlings made a choice on a spiteful or idiotic whim, announced it on twitter, and the departments underneath them rushed to retroactively invent a saner rationalization". If the DoD had decided to just end the contract (without all the public threats of SCR or invoking the Defense Production Act), Anthropic wouldn't have been in a position to sue and this drama wouldn't have been as publicized in the first place.

But the lawsuit itself takes a dramatically different tone.

Yeah, because one set of language is a CEO trying to grovel and backtrack on one of the rare few ethical commitments he has ever made, and the other is a court filing making a case about the actual law.

[–] scruiser@awful.systems 6 points 1 week ago* (last edited 1 week ago)

It's so fucking pathetic. He can't even hold the very narrow and weak stand he took (weak because Anthropic's "two red lines" still leave a lot open) without trying to backpedal and grovel.

[–] scruiser@awful.systems 9 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

your mode of analysis is closer to erotic Harry Potter fan fiction

To give Gary Marcus credit here, HPMOR may not be erotic, but many of Eliezer's other works are erotic (or at least attempt to be), most notably Planecrash/Project Lawful, which has entire sections devoted to deliberately bad (as in deliberately not safe, sane, or consensual) BDSM.

Eliezer tried to promote/hype up Project Lawful on twitter, maybe hoping it would be the next HPMOR, but it didn't quite take. Maybe he failed to realize how much of HPMOR's success came from being in the popular genre of Harry Potter fanfic (which at the time had crap like Partially Kissed Hero and Harry Crow among its most popular works), and not from his own genius writing.

[–] scruiser@awful.systems 12 points 2 weeks ago

lib brains have a hard time comprehending that there can be multiple bad guys at a time, or that America was in fact a neocolonialist imperialistic empire even before Trump took over and took off the mask.

[–] scruiser@awful.systems 8 points 2 weeks ago

Bold of you to assume they would bother filtering them out.

[–] scruiser@awful.systems 85 points 2 weeks ago* (last edited 2 weeks ago) (14 children)

This really is the dumbest timeline.

simulating battle scenarios

Regurgitating reddit armchair generals from /r/noncredibledefense

[–] scruiser@awful.systems 6 points 2 weeks ago

Something something Imperial Boomerang, Fascism is colonial methods brought home.


So seeing the reaction on lesswrong to Eliezer's book has been interesting. It turns out that even among people who already mostly agree with him, a lot were hoping he would make their case better than he has (either because they aren't as convinced as he is, or because they are, but were hoping for something more palatable to the general public).

This review (lesswrong discussion here) calls out a really obvious issue: Eliezer's AI doom story was formed before Deep Learning took off, and in fact was mostly focused on GOFAI rather than neural networks, yet somehow the details of the story haven't changed at all. The reviewer is a rationalist who still believes in AI doom, so I wouldn't give her too much credit, but she does note this is a major discrepancy for someone who espouses a philosophy that (nominally) features a lot of updating your beliefs in response to evidence. The reviewer also notes that "it should be illegal to own more than eight of the most powerful GPUs available in 2024 without international monitoring" is kind of unworkable.

This reviewer liked the book more than they expected to, because Eliezer and Nate Soares get some details of the AI doom lore closer to the reviewer's current favored headcanon. The reviewer does complain that maybe weird and condescending parables aren't the best outreach strategy!

This reviewer has written their own AI doom explainer, which they think is better! From their limited description, I kind of agree, because it sounds like they focus on current real-world scenarios and harms (and extrapolate them to doom). But again, I wouldn't give them too much credit; it sounds like they don't understand why existential doom is actually promoted (as a distraction and a source of crit-hype). They also note the 8 GPUs thing is batshit.

Overall, it sounds like lesswrongers view the book as an improvement on the sprawling mess of arguments in the sequences (and scattered across other places like Arbital), but still not as well structured as it could be or stylistically quite right for a normie audience (i.e. the condescending parables and diversions into unrelated science-y topics). And some are worried that Nate and Eliezer's focus on an unworkable strategy (shut it all down, 8 GPU max!) with no intermediate steps, goals, or options might not be the best.
