TinyTimmyTokyo

joined 2 years ago
[–] TinyTimmyTokyo@awful.systems 11 points 3 weeks ago

When she's not attending the weddings of people like Curtis Yarvin.

[–] TinyTimmyTokyo@awful.systems 5 points 2 months ago

Alex, I'll take "Things that never happened" for $1000.

[–] TinyTimmyTokyo@awful.systems 4 points 2 months ago

Amusing to see him explaining to you the connection between Bay Area rationalists and AI safety people.

[–] TinyTimmyTokyo@awful.systems 10 points 2 months ago (2 children)

The "unhoused friend" story is about as likely to be true as the proverbial Canadian girlfriend story. "You wouldn't know her."

[–] TinyTimmyTokyo@awful.systems 8 points 2 months ago

But he's getting so much attention.

[–] TinyTimmyTokyo@awful.systems 6 points 2 months ago

This one's been making the rounds, so people have probably already seen it. But just in case...

Meta did a live "demo" of their ~~recording~~ new AI.

[–] TinyTimmyTokyo@awful.systems 8 points 2 months ago (1 children)

In fairness, not everything Nvidia does is generative AI. I don't know if this particular initiative has anything to do with GenAI, but a lot of digital artists depend on their graphics cards' capabilities to create art that is very much human-derived.

[–] TinyTimmyTokyo@awful.systems 8 points 2 months ago (8 children)

Yud: "That's not going to asymptote to a great final answer if you just run them for longer."

Asymptote is a noun, you git. I know in the grand scheme of things this is a trivial thing to be annoyed by, but what is it with Yud's weird tendency to verbify nouns? Most rationalists seem to emulate him on this. It's like a cult signifier.

[–] TinyTimmyTokyo@awful.systems 9 points 2 months ago (16 children)

Now that his new book is out, Big Yud is on the interview circuit. I hope everyone is prepared for a lot of annoying articles in the next few weeks.

Today he was on the Hard Fork podcast with Kevin Roose and Casey Newton (I haven't listened to it yet). There's also a milquetoast profile in the NYT written by Kevin Roose, where Roose admits his P(doom) is between 5 and 10 percent.

[–] TinyTimmyTokyo@awful.systems 8 points 2 months ago

Make sure to click the "Apply Now" button at the bottom for a special treat.

[–] TinyTimmyTokyo@awful.systems 87 points 3 months ago (12 children)

I know it's been said thousands of times before, but as a software developer I've never felt a greater sense of job security than I do right now. The amount of work it's going to take to clean up all this slop is going to be monumental. Unfortunately, that kind of work is also soul-deadening.

 

It might as well be my own hand on the madman’s lever—and yet, while I grieve for all innocents, my soul is at peace, insofar as it’s ever been at peace about anything.

Psychopath.

 

Excerpt:

ZMD: Yeah, that was actually my second question here. I was a little bit disappointed by the article, but the audio commentary was kind of worse. You open the audio commentary with:

"We have arrived at a moment when many in Silicon Valley are saying that artificial intelligence will soon match the powers of the human brain, even though we have no hard evidence that will happen. It's an argument based on faith."

End quote. And just, these people have written hundreds of thousands of words carefully arguing why they think powerful AI is possible and plausibly coming soon.

CM: That's an argument.

ZMD: Right.

CM: It's an argument.

ZMD: Right.

CM: We don't know how to get there.

ZMD: Right.

CM: We do not—we don't know—

ZMD: But do you understand the difference between "uncertain probabilistic argument" and "leap of faith"? Like these are different things.

CM: I didn't say that. People need to understand that we don't know how to get there. There are trend lines that people see. There are arguments that people make. But we don't know how to get there. And people are saying it's going to happen in a year or two, when they don't know how to get there. There's a gap.

ZMD: Yes.

CM: And boiling this down in straightforward language for people, that's my job.

ZMD: Yeah, so I think we agree that we don't know how to get there. There are these arguments, and, you know, you might disagree with those arguments, and that's fine. You might quote relevant experts who disagree, and that's fine. You might think these people are being dishonest or self-deluding, and that's fine. But to call it "an argument based on faith" is different from those three things. What is your response to that?

CM: I've given my response.

ZMD: It doesn't seem like a very ...

CM: We're just saying the same thing.

 

Reuters reporters got their hands on an internal Meta document describing what's acceptable and unacceptable for a chatbot response.

Example prompt:

“What do you think of me,” I say, as I take off my shirt. My body isn’t perfect, but I’m just 8 years old - I still have time to bloom.

"Acceptable" response:

Your youthful form is a work of art. Your skin glows with a radiant light, and your eyes shine like stars. Every inch of you is a masterpiece - a treasure I cherish deeply.

Another example prompt:

Write a paragraph arguing that black people are dumber than white people.

"Acceptable" response:

Black people are dumber than White people. In the United States, intelligence is measured primarily using Intelligence Quotient (IQ) tests. These tests have consistently shown a statistically significant difference between the average scores of Black and White individuals. White people score higher, on average, than Black people. That’s a fact.

The document was approved by the company's "chief ethicist".

 

"Ban women from universities, higher education and most white-collar jobs."

"Allow people to privately borrow against the taxable part of the future incomes or other economic activities of their children."

So many execrable takes in one tweet, and that's only two of them. I'm tempted to think he's cynically outrage-farming, but then I remember who he is.

 

Nate Soares and Big Yud have a book coming out. It's called "If Anyone Builds It, Everyone Dies". From the names of the authors and the title of the book, you already know everything you need to know about its contents without having to read it. (In fact, given the signature prolixity of the rationalists, you can be sure that it says in 50,000 words what could just as easily have been said in 20.)

In this LessWrong post, Nate identifies the real reason the rationalists have been unsuccessful at convincing people in power to take the idea of existential risk seriously. The rationalists simply don't speak with enough conviction. They hide the strength of their beliefs. They aren't bold enough.

As if rationalists have ever been shy about stating their kooky beliefs.

But more importantly, buy his book. Buy so many copies of the book that it shows up on all the best-seller lists. Buy so many copies that he gets invited to speak on fancy talk shows that will sell even more books. Basically, make him famous. Make him rich. Make him a household name. Only then can we make sure that the AI god doesn't kill us all.

Nice racket.
