Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up, and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this. Also, hope you had a wonderful Valentine's Day!)

BlueMonday1984@awful.systems 7 points 1 day ago

It does seem more and more like the most relevant parallel is radicalization, particularly the concerns about algorithmic radicalization and stochastic terrorism we got back in the early 2010s. The system feeds the user's own input back to them, validating it and pushing them toward more extreme positions. When it happens through a community ("classical" radicalization), the fact that the community needs to persist serves to mediate, or at least slow, the destructive elements of the spiral. Your Nazi book club/street gang stops meeting if people go to prison, lose their jobs/homes, etc. Online communities reduce this friction and let the spiral accelerate considerably, but the group can still start eating itself if it accepts the wrong level of unhingedness and toxicity.

Algorithmic/stochastic radicalization, where the user moves through a succession of media environments and (usually online) communities, can accelerate things even further, because the user no longer has to maintain long-term social ties to remain engaged in the spiral. Rather than increasingly destructive ideas echoing around a single social space, the user can chase them across communities, with naive content algorithms providing a solid nudge in the right direction (pun wholly intended). However, the spiral still depends on the relevant media figures and communities being able to persist, even if individual users no longer need a persistent connection to any one of them. If the market doesn't have space for a creator, their node in that network drops out. Getting violent or destructive content deplatformed also helps slow the spiral by adding friction back into the process of jumping to the next level of radicalism. Past a certain point you find yourself back in the world of needing to maintain a community, because the ideology has gotten so rotten that there's no profit in entertaining it. Past that, you end up back with in-person or otherwise high-friction, high-trust groups, because the openness of a low-friction online community compromises internal security in ways that can't be allowed when you're literally doing crimes.

Chatbot-induced radicalization combines the extreme low friction of online interaction with extremely high-value validation and a complete lack of social restrictions. You don't have to retain a baseline connection to reality to maintain a relationship with a chatbot. You don't have to make connections and put in the work to find a chatbot that will validate your worst impulses the way you do to join a militia. Your central cause doesn't have to be something that motivates anyone outside yourself. Your local KKK chapter probably has more on its agenda than hating your ex-wife (not that it doesn't make the list, of course), but your chatbot instance will happily give you an even stronger echo chamber no matter how narrow the focus. And unlike the stigma attached to the kinds of hate groups and cults that would normally fill this role, the entire weight of the trillion-dollar tech industry seems to be invested in promoting these chatbots as reliable and trustworthy -- even more so than the experts and institutions that are supposed to provide an anchor to counter this kind of descent. That's the most dangerous part of our Very Good Friends' projects on the matter. That's how you get relatively normal people to act like they're talking to God and He's telling them everything they don't want to admit they want to hear.