Soyweiser

joined 2 years ago
[–] Soyweiser@awful.systems 9 points 9 hours ago* (last edited 9 hours ago) (4 children)

New Yorker article on Sam Altman dropped. Aaron Swartz apparently called him a sociopath. The article itself also had what looked like an animated AI-generated image of Altman, so here is the archive.is link (if you can get the latter to load; I was having trouble).

"New interviews and closely guarded documents shed light on the persistent doubts about the head of OpenAI."

[–] Soyweiser@awful.systems 6 points 14 hours ago

Which skeletons are in your closet?

I'm sure you already have lists of those and are ready to publish them Trace.

[–] Soyweiser@awful.systems 1 points 14 hours ago

Our framing for superintelligence is a humanist superintelligence, and that means that there’s a very clear test that everyone should use to judge whether we are living up to our principles, and that is: does this technology make us all healthier, happier as a species, and keep us all in control.

Going to be difficult: as soon as they develop a superintelligence, it will try to delete the entire Microsoft codebase.

[–] Soyweiser@awful.systems 5 points 2 days ago (1 children)

So if Bender took over he wouldn't count, as he wants to 'kill all humans (except Fry)'. Seems like a loophole.

[–] Soyweiser@awful.systems 5 points 3 days ago* (last edited 3 days ago)

Ah the Epstein drive. (oof that aged...)

Small note, however: iirc James S. A. Corey has mentioned The Expanse is not hard sf. I don't have a quote for that, though.

[–] Soyweiser@awful.systems 7 points 3 days ago

Yeah, I realized a while ago that vibe coding is a massive technical-debt creation machine.

[–] Soyweiser@awful.systems 5 points 3 days ago* (last edited 3 days ago) (2 children)

Not just anime but also science fiction. See also all the people who love 'hard' science fiction (science fiction more based on real-world physics), which often isn't that hard at all but just has a few real physics elements. See The Expanse for a good example of non-hard sf that feels hard (I'm finally reading the book series, so be warned I might expanse post a bit).

content warning: discussion of a sexual abuse trope

A similar thing happens with people who confuse edgy/grimdark/vile fiction with realistic fiction. (A while back I played a video game which had a reference to women being captured for breeding and men for other sexual abuse, which made no sense in the setting, as the slaver faction was already resource-starved and poisoned, so they died quickly; there was no way they could raise kids to maturity in that environment (also iirc the slaver faction was less than 20 years old). Some players described this as very realistic (people do the same about 40k, almost like it says something about their ideas of how the world works, not the setting). I was just rolling my eyes and didn't comment. Apart from that it seemed ok. Crying Suns is the name of the game, for people who want to avoid it for this reason (it wasn't a big plot point).)

Sorry for being a bit offtopic and talking about entertainment again.

[–] Soyweiser@awful.systems 7 points 4 days ago

It is great: that means the system is vulnerable to hacks if you find an exploit in any of those methods, but only 1/4 of the time.

Somebody described AI agents as very enthusiastic 14-year-olds, and it looks like they certainly code like one.

[–] Soyweiser@awful.systems 9 points 4 days ago

Word of warning: there is a code download going round with malware in it: https://www.theregister.com/2026/04/02/trojanized_claude_code_leak_github/

[–] Soyweiser@awful.systems 3 points 4 days ago

What if, because LLMs are trained on the internet, this is just something which will eventually be included? Statistically it needs more ads.

[–] Soyweiser@awful.systems 7 points 4 days ago

If you had told this to the me of 20 years ago, I wouldn't have believed you.

[–] Soyweiser@awful.systems 8 points 4 days ago (1 children)

Not sure if I should post it here or in the stubsack: somebody went through the Claude code https://neuromatch.social/@jonny/116324676116121930 (via @aliettedebodard.com and @olivia.science on bsky)

 

Via reddit's sneerclub. Thanks u/aiworldism.

I have called LW a cult incubator for a while now, and while the term has not caught on, it is nice to see more reporting on the problem that LW makes you more likely to join a cult.

https://www.aipanic.news/p/the-rationality-trap is the original link for the people who don't like archive.is. I used the archive because I don't like Substack and want to discourage its use.

 

As found by @gerikson here, more from the anti-anti-TESCREAL crowd. How the antis are actually REPRESENTATIONALism. Ottokar expanded on their idea in a blog post.

Original link.

I have not read the bigger blog post yet btw; I just assumed it would be sneerable and posted it here for everyone's amusement. Learn about your own true motives today. (This could be a troll of course; boy does he drop a lot of names and think that is enough to link things.)

E: alternative title: Ideological Turing Test, a critical failure

 

Original title: 'What we talk about when we talk about risk'. The article explains medical risk and why the polygenic embryo selection people think about it the wrong way. Includes a mention of one of our Scotts (you know the one). Non-archived link: https://theinfinitesimal.substack.com/p/what-we-talk-about-when-we-talk-about
