blakestacey

joined 2 years ago
[–] blakestacey@awful.systems 12 points 1 day ago

I found this because Greg Egan shared it elsewhere on fedi:

I am now being required by my day job to use an AI assistant to write code. I have also been informed that my usage of AI assistants will be monitored and decisions about my career will be based on those metrics.

It gets worse from there.

[–] blakestacey@awful.systems 12 points 1 day ago* (last edited 1 day ago)

Hey, I haven't seen this going around yet, but itchio is also taking books down with no erotic content that are just labeled as lgbtqia+

So that's super cool and totally not what I thought they were going to do next 🙃

https://bsky.app/profile/marsadler.bsky.social/post/3luov7rkles2u

And a relevant petition from the ACLU:

https://action.aclu.org/petition/mastercard-sex-work-work-end-your-unjust-policy

[–] blakestacey@awful.systems 8 points 1 day ago (1 children)

It's "general intelligence", the eugenicist wet dream of a supposedly quantitative measure of how the better class of humans do brain good.

[–] blakestacey@awful.systems 13 points 2 days ago (3 children)

From Yud's remarks on Xitter:

As much as people might like to joke about how little skill it takes to found a $2B investment fund, it isn't actually true that you can just saunter in as a psychotic IQ 80 person and do that.

Well, not with that attitude.

You must be skilled at persuasion, at wearing masks, at fitting in, at knowing what is expected of you;

If "wearing masks" really is a skill they need, then they are all susceptible to going insane and hiding it from their coworkers. Really makes you think (TM).

you must outperform other people also trying to do that, who'd like that $2B for themselves. Winning that competition requires g-factor and conscientious effort over a period.

zoom and enhance

g-factor

[–] blakestacey@awful.systems 16 points 2 days ago (12 children)

Yud continues to bluecheck:

"This is not good news about which sort of humans ChatGPT can eat," mused Yudkowsky. "Yes yes, I'm sure the guy was atypically susceptible for a $2 billion fund manager," he continued. "It is nonetheless a small iota of bad news about how good ChatGPT is at producing ChatGPT psychosis; it contradicts the narrative where this only happens to people sufficiently low-status that AI companies should be allowed to break them."

Is this "narrative" in the room with us right now?

It's reassuring to know that times change, but Yud will always be impressed by the virtues of the rich.

[–] blakestacey@awful.systems 9 points 4 days ago (2 children)

Here's their page of instructions, written as usual by the children who really liked programming the family VCR:

https://en.wikipedia.org/wiki/Wikipedia:Database_download

[–] blakestacey@awful.systems 14 points 4 days ago (5 children)

Want to feel depressed? Over 2,000 Wikipedia articles, on topics from Morocco to Natalie Portman to Sinn Féin, are corrupted by ChatGPT. And that's just the obvious ones.

https://en.wikipedia.org/w/index.php?search=insource%3A%22utm_source%3Dchatgpt.com%22&title=Special%3ASearch&profile=advanced&fulltext=1&ns0=1&searchToken=8ops8b9qb8qmw8by39k248jyp
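The search link above uses MediaWiki's `insource:` operator, which greps page wikitext for the `utm_source=chatgpt.com` tracking parameter that ChatGPT appends to the links it generates. The same query can be run programmatically against the standard MediaWiki search API; here's a minimal sketch (the endpoint and `srsearch`/`srlimit` parameters are the documented API; the result handling is my own assumption):

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API = "https://en.wikipedia.org/w/api.php"

def build_search_url(limit=10):
    """Build a MediaWiki search-API query for articles whose wikitext
    contains the utm_source=chatgpt.com tracking parameter."""
    params = urlencode({
        "action": "query",
        "list": "search",
        "srsearch": 'insource:"utm_source=chatgpt.com"',
        "srnamespace": 0,   # main (article) namespace only
        "srlimit": limit,
        "format": "json",
    })
    return f"{API}?{params}"

def tainted_titles(limit=10):
    """Fetch the titles of matching articles (makes a network call)."""
    with urlopen(build_search_url(limit)) as resp:
        data = json.load(resp)
    return [hit["title"] for hit in data["query"]["search"]]
```

Note that this only catches editors who pasted a ChatGPT-generated link verbatim, tracking parameter and all, so the "over 2,000" figure is a floor, not a count of the damage.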

[–] blakestacey@awful.systems 14 points 5 days ago (6 children)

https://xcancel.com/jasonlk/status/1946069562723897802

Vibe Coding Day 8,

I'm not even out of bed yet and I'm already planning my day on @Replit.

Today is AI Day, to really add AI to our algo.

[...]

If @Replit deleted my database between my last session and now there will be hell to pay

[–] blakestacey@awful.systems 14 points 6 days ago

Evan Urquhart:

I had to attend a presentation from one of these guys, trying to tell a room full of journalists that LLMs could replace us & we needed to adapt by using it and I couldn't stop thinking that an LLM could never be a trans journalist, but it could probably replace the guy giving the presentation.

 

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

 

Everybody loves Wikipedia, the surprisingly serious encyclopedia and the last gasp of Old Internet idealism!

(90 seconds later)

We regret to inform you that people write credulous shit about "AI" on Wikipedia as if that is morally OK.

Both of these are somewhat less bad than they were when I first noticed them, but they're still pretty bad. I am puzzled at how the latter even exists. I had thought that there were rules against just making a whole page about a neologism, but either I'm wrong about that or the "rules" aren't enforced very strongly.

 

Retraction Watch reports:

All but one member of the editorial board of the Journal of Human Evolution (JHE), an Elsevier title, have resigned, saying the “sustained actions of Elsevier are fundamentally incompatible with the ethos of the journal and preclude maintaining the quality and integrity fundamental to JHE’s success.”

The resignation statement reads in part,

In fall of 2023, for example, without consulting or informing the editors, Elsevier initiated the use of AI during production, creating article proofs devoid of capitalization of all proper nouns (e.g., formally recognized epochs, site names, countries, cities, genera, etc.) as well as italics for genera and species. These AI changes reversed the accepted versions of papers that had already been properly formatted by the handling editors.

(Via Pharyngula.)


 

