gerikson

joined 2 years ago
[–] gerikson@awful.systems 17 points 2 days ago (5 children)

Here's LWer "johnswentworth", who has more than 57k karma on the site and can be characterized as a big cheese:

My Empathy Is Rarely Kind

I usually relate to other people via something like suspension of disbelief. Like, they’re a human, same as me, they presumably have thoughts and feelings and the like, but I compartmentalize that fact. I think of them kind of like cute cats. Because if I stop compartmentalizing, if I start to put myself in their shoes and imagine what they’re facing… then I feel not just their ineptitude, but the apparent lack of desire to ever move beyond that ineptitude. What I feel toward them is usually not sympathy or generosity, but either disgust or disappointment (or both).

"why do people keep saying we sound like fascists? I don't get it!"

[–] gerikson@awful.systems 6 points 2 days ago

The artillery branch of most militaries has long been a haven for the more brainy types. Napoleon was a gunner, for example.

[–] gerikson@awful.systems 12 points 2 days ago

Oh, but LW has the comeback for you in the very first paragraph:

Outside of niche circles on this site and elsewhere, the public's awareness about AI-related "x-risk" remains limited to Terminator-style dangers, which they brush off as silly sci-fi. In fact, most people's concerns are limited to things like deepfake-based impersonation, their personal data training AI, algorithmic bias, and job loss.

Silly people! Worrying about problems staring them in the face, instead of the future omnicidal AI that is definitely coming!

[–] gerikson@awful.systems 15 points 2 days ago (10 children)

LessWronger discovers that the great unwashed masses, who inconveniently still indirectly affect policy through outmoded concepts like "voting" instead of writing blogs, might need some easily digested media pablum to be convinced that Big Bad AI is gonna kill them all.

https://www.lesswrong.com/posts/4unfQYGQ7StDyXAfi/someone-should-fund-an-agi-blockbuster

Cites such cultural touchstones as "The Day After Tomorrow", "An Inconvenient Truth" (truly a Gen Z hit), and "Slaughterbots", which I've never heard of.

Listen to the plot summary:

  • Slowburn realism: The movie should start off in mid-2025. Stupid agents. Flawed chatbots, algorithmic bias. Characters discussing these issues behind the scenes while the world is focused on other issues (global conflicts, Trump, celebrity drama, etc). [ok so basically LW: the Movie]
  • Explicit exponential growth: A VERY slow build-up of AI progress such that the world only ends in the last few minutes of the film. This seems very important to drill home the part about exponential growth. [ah yes, exponential growth, a concept that lends itself readily to drama]
  • Concrete parallels to real actors: Themes like "OpenBrain" or "Nole Tusk" or "Samuel Allmen" seem fitting. ["we need actors to portray real actors!" is genuine Hollywood film talk]
  • Fear: There's a million ways people could die, but featuring ones that require the fewest jumps in practicality seem the most fitting. Perhaps microdrones equipped with bioweapons that spray urban areas. Or malicious actors sending drone swarms to destroy crops or other vital infrastructure. [so basically people will watch a conventional thriller except in the last few minutes everyone dies. No motivation. No clear "if we don't cut these wires everyone dies!"]

OK so what should be shown in the film?

compute/reporting caps, robust pre-deployment testing mandates (THESE are all topics that should be covered in the film!)

Again, these are the core components of every blockbuster. I can't wait to see "Avengers vs the AI" where Captain America discusses robust pre-deployment testing mandates with Tony Stark.

All the cited URLs in the footnotes end with "utm_source=chatgpt.com". 'nuff said.

[–] gerikson@awful.systems 4 points 4 days ago

Remember FizzBuzz? That was originally a simple filter exercise that someone recruiting programmers came up with to weed out candidates with multi-year CS degrees but zero actual programming experience.
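For reference, the whole exercise fits in a dozen lines; here's a minimal Python sketch of the usual formulation (1 through 100, "Fizz" for multiples of 3, "Buzz" for multiples of 5, both for multiples of 15):

```python
# FizzBuzz: the entire screening exercise.
for n in range(1, 101):
    if n % 15 == 0:
        print("FizzBuzz")
    elif n % 3 == 0:
        print("Fizz")
    elif n % 5 == 0:
        print("Buzz")
    else:
        print(n)
```

That's the whole bar it was meant to set, which is why failing it was so telling.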

[–] gerikson@awful.systems 9 points 6 days ago

The argument would be stronger (not strong, but stronger) if he could point to an existing numbering system that is little-endian and somehow show it's better.

[–] gerikson@awful.systems 17 points 6 days ago* (last edited 5 days ago) (9 children)

The guy who thinks it's important to communicate clearly (https://awful.systems/comment/7904956) wants to flip the number order around.

https://www.lesswrong.com/posts/KXr8ys8PYppKXgGWj/english-writes-numbers-backwards

I'll consider that when the Yanks abandon middle-endian date formatting.
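For anyone keeping score, here's a quick Python illustration of the three orderings in question (the date itself is an arbitrary example):

```python
from datetime import date

d = date(2025, 7, 20)  # arbitrary example date

print(d.strftime("%Y-%m-%d"))  # big-endian (ISO 8601): 2025-07-20
print(d.strftime("%m/%d/%Y"))  # middle-endian (US style): 07/20/2025
print(d.strftime("%d/%m/%Y"))  # little-endian (day first): 20/07/2025
```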

Edit: it's now tagged as "Humor" on LW. Cowards. Own your cranks.

[–] gerikson@awful.systems 13 points 1 week ago* (last edited 1 week ago)

So here's a poster on LessWrong, ostensibly the space for discussing how to prevent people from dying of things like disease and starvation, "running the numbers" on a Lancet analysis of the USAID shutdown and, having failed to replicate its claims of millions of resulting deaths, basically concluding it's not so bad?

https://www.lesswrong.com/posts/qgSEbLfZpH2Yvrdzm/i-tried-reproducing-that-lancet-study-about-usaid-cuts-so

No mention of the performative cruelty of the shutdown, the paltry sums involved compared to other government expenditures, or the blow it deals to American soft power. But hey, building Patriot missiles and then not sending them to Ukraine is probably net positive for human suffering; just run the numbers the right way!

Edit: ah, it's the dude who tried to prove that most Catholic cardinals are gay because heredity; I think I highlighted that post here previously. Definitely a high-sneer vein to mine.

[–] gerikson@awful.systems 6 points 1 week ago

No replies and somehow that screen name just screams "troll" to me.

Not that I really care, git can go DIAF as far as I'm concerned.

[–] gerikson@awful.systems 5 points 1 week ago

janitorai - which seems to be a hosting site for creepy AI chats - is blocking all UK visitors due to the OSA.

https://blog.janitorai.com/posts/3/

I'm torn here: the OSA seems to me like massive overreach, but perhaps shielding limeys from AI is worth it.

[–] gerikson@awful.systems 8 points 1 week ago* (last edited 1 week ago) (2 children)

Guys, how about we make the coming computer god a fan of Robert Nozick? What could go wrong?

https://www.lesswrong.com/posts/us8ss79mWCgTcSKoK/a-night-watchman-asi-as-a-first-step-toward-a-great-future


In a since-deleted thread on another site, I wrote:

For the OG effective altruists, it’s imperative to rebrand the kooky ultra-utilitarianists as something else. TESCREAL is the term adopted by their opponents.

Looks like great minds think alike! The EAs need to up their Google juice so people searching for the term find malaria nets, not FTX. Good luck with that, Scott!

The HN comments are OK, with this hilarious sentence:

I go to LessWrong, ACX, and sometimes EA meetups. Why? Mainly because it's like the HackerNews comment section but in person.

What's the German term for a recommendation that's the exact opposite?
