this post was submitted on 24 Aug 2025
21 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

(page 2) 50 comments
[–] BlueMonday1984@awful.systems 10 points 1 month ago* (last edited 1 month ago) (3 children)

OpenAI has stated it's scanning users' conversations (as if it weren't already) and reporting conversations to the cops, in response to the recent teen suicide I mentioned a couple days ago.

So, rather than let ChatGPT drive users to kill themselves, it's just going to SWAT users and have the cops do the job.

(On an arguably more comedic note, the AI doomers are accusing OpenAI of betraying humankind.)

[–] Soyweiser@awful.systems 9 points 1 month ago* (last edited 1 month ago)

Mentioned it on bsky: more externalizing of the costs of their product. And due to VPNs it will not even work in a lot of cases. (And also prob GDPR-illegal.)

E: A big reason why this kind of stuff is in there and prob very hard to get rid of is places like alt.suicide.holiday which have been online for ages, and are from what I can tell not considered bad places by the experts.

[–] mii@awful.systems 7 points 1 month ago (1 children)

(On an arguably more comedic note, the AI doomers are accusing OpenAI of betraying humankind.)

The last open letter was funnier. Even though "[t]he stakes could not be higher," this one doesn't even have Yud going on a promo tour talking about all of humanity dropping dead from diamondoid bacteria that ChatGPT will order from Fiverr after going foom.

[–] Seminar2250@awful.systems 10 points 1 month ago* (last edited 1 month ago) (1 children)

people who talk about "prompting" like it's a skill would take a class^[read: watch a youtube tutorial] on tasseomancy because a coffee shop opened across the street

[–] HedyL@awful.systems 8 points 1 month ago (4 children)

I think this is more about plausible deniability: If people report getting wrong answers from a chatbot, this is surely only because of their insufficient "prompting skills".

Oddly enough, the laziest and most gullible chatbot users tend to report the smallest number of hallucinations. There seems to be a correlation between laziness, gullibility and "great prompting skills".

[–] gerikson@awful.systems 10 points 1 month ago (30 children)

"Enjoy" this Wronger explaining human sexual attraction

https://www.lesswrong.com/posts/ktydLowvEg8NxaG4Z/neuroscience-of-human-sexual-attraction-triggers-3

I have but skimmed it, not plumbed its depths for sneers.

[–] Amoeba_Girl@awful.systems 8 points 1 month ago* (last edited 1 month ago)

Ugh what is it with misogynists and compulsively writing out this nonsense again and again? Just want to punch him on the nose. Make it stop.

I think that §2 (“appearance-based sexual attraction”) will be the part that’s more centrally relevant for cis men (and most trans women)

...

kys

[–] dgerard@awful.systems 9 points 1 month ago (3 children)

TIL that "Aris Thorne" is a character name favoured by ChatGPT - which means its presence is a reliable slop tell, lol

like the dumbass-ray version of Ballard calling multiple characters variants on "Traven"

what to do with this information

[–] YourNetworkIsHaunted@awful.systems 9 points 1 month ago (4 children)
[–] blakestacey@awful.systems 8 points 1 month ago

sex weirdo (derogatory)

[–] BigMuffN69@awful.systems 8 points 1 month ago (7 children)

https://www.argmin.net/p/the-banal-evil-of-ai-safety

Once again shilling another great Ben Recht post. This time calling out the fucking insane irresponsibility of "responsible" AI providers failing to do even the bare minimum to prevent people from having psychological breaks from reality.

"I’ve been stuck on this tragic story in the New York Times about Adam Raine, a 16-year-old who took his life after months of getting advice on suicide from ChatGPT. Our relationship with technological tools is complex. That people draw emotional connections to chatbots isn’t new (I see you, Joseph Weizenbaum). Why young people commit suicide is multifactorial. We’ll see whether a court will find OpenAI liable for wrongful death.

But I’m not a court of law. And OpenAI is not only responsible, but everyone who works there should be ashamed of themselves."

[–] fnix@awful.systems 8 points 1 month ago* (last edited 1 month ago) (3 children)

Mark Cuban is feeling bullied by Bluesky. He will also have you know that you need to keep aware of the important achievements of your betters: though he is currently only the 5th most blocked user on there, he was indeed once the 4th most blocked user. Perhaps he is just crying out to move up the ranks once more?

It’s really all about Bluesky employees being able to afford their healthcare for Mark you see.

And of course, here’s never-Trumper Anne Applebaum running interference for him. Really an appropriate hotdog-guy-meme moment – as much as I shamelessly sneer at Cuban, I’m genuinely angered by the complete inability of the self-satisfied ‘democracy defender’ set to see their own complicity in perpetuating a permission structure for privileged white men to feel eternally victimized.

[–] BlueMonday1984@awful.systems 8 points 1 month ago (1 children)

Found a couple articles about blunting AI's impact on education (got them off of Audrey Watters' blog, for the record).

The first is a New York Times guest essay by NYU vice provost Clay Shirky, which recommends "moving away from take-home assignments and essays and toward [...] assessments that call on students to demonstrate knowledge in real time."

The second is an article by Kate Manne calling for professors to prevent cheating via AI, which details her efforts in doing so:

Instead of take-home essays to write in their own time, I’ll have students complete in-class assignments that will be hand-written. I won’t allow electronic devices in my class, except for students who tell me they need them as a caregiver or first responder or due to a disability. Students who do need to use a laptop will have to complete the assignment using google docs, so I can see their revision history.

Manne does note the problems with this (outing disabled students, class time spent writing, and difficulties in editing, rewriting, and make-up work), but still believes "it is better, on balance, to take this approach rather than risk a significant proportion of students using AI to write their essays."

[–] fullsquare@awful.systems 7 points 1 month ago (4 children)
[–] Architeuthis@awful.systems 7 points 1 month ago (1 children)

Zitron taking every opportunity to shit on Scott's AI2027 is kind of cathartic, ngl
