"Enjoy" this Wronger explaining human sexual attraction
https://www.lesswrong.com/posts/ktydLowvEg8NxaG4Z/neuroscience-of-human-sexual-attraction-triggers-3
I have but skimmed it, not plumbed its depths for sneers.
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
"Enjoy" this Wronger explaining human sexual attraction
https://www.lesswrong.com/posts/ktydLowvEg8NxaG4Z/neuroscience-of-human-sexual-attraction-triggers-3
I have but skimmed it, not plumbed its depths for sneers.
Ugh what is it with misogynists and compulsively writing out this nonsense again and again? Just want to punch him on the nose. Make it stop.
I think that §2 (“appearance-based sexual attraction”) will be the part that’s more centrally relevant for cis men (and most trans women)
...
kys
I regret to inform you all that Elon Musk appears to be completely addicted to AI-generated anime porn.
sex weirdo (derogatory)
TIL that "Aris Thorne" is a character name favoured by ChatGPT - which means its presence is a reliable slop tell, lol
like the dumbass-ray version of Ballard calling multiple characters variants on "Traven"
what to do with this information
New Public Good newsletter, talking about YouTube editing users' uploads without their permission/knowledge.
The deaths just keep coming: the Wall Street Journal's just reported on a murder-suicide caused by ChatGPT.
Mark Cuban is feeling bullied by Bluesky. He will also have you know that you need to keep aware of the important achievements of your betters: though he is currently only the 5th most blocked user on there, he was indeed once the 4th most blocked user. Perhaps he is just crying out to move up the ranks once more?
It’s really all about Bluesky employees being able to afford their healthcare, for Mark, you see.
And of course, here’s never-Trumper Anne Applebaum running interference for him. Really an appropriate hotdog-guy-meme moment – as much as I shamelessly sneer at Cuban, I’m genuinely angered by the complete inability of the self-satisfied ‘democracy defender’ set to see their own complicity in perpetuating a permission structure for privileged white men to feel eternally victimized.
https://www.argmin.net/p/the-banal-evil-of-ai-safety
Once again shilling another great Ben Recht post. This time calling out the fucking insane irresponsibility of "responsible" AI providers, who won't do even the bare minimum to prevent people from having psychological breaks from reality.
"I’ve been stuck on this tragic story in the New York Times about Adam Raine, a 16-year-old who took his life after months of getting advice on suicide from ChatGPT. Our relationship with technological tools is complex. That people draw emotional connections to chatbots isn’t new (I see you, Joseph Weizenbaum). Why young people commit suicide is multifactorial. We’ll see whether a court will find OpenAI liable for wrongful death.
But I’m not a court of law. And OpenAI is not only responsible, but everyone who works there should be ashamed of themselves."
Found a couple articles about blunting AI's impact on education (got them off of Audrey Watters' blog, for the record).
The first is a New York Times guest essay by NYU vice provost Clay Shirky, which recommends "moving away from take-home assignments and essays and toward [...] assessments that call on students to demonstrate knowledge in real time."
The second is an article by Kate Manne calling for professors to prevent cheating via AI, which details her efforts in doing so:
Instead of take-home essays to write in their own time, I’ll have students complete in-class assignments that will be hand-written. I won’t allow electronic devices in my class, except for students who tell me they need them as a caregiver or first responder or due to a disability. Students who do need to use a laptop will have to complete the assignment using google docs, so I can see their revision history.
Manne does note the problems with this (outing disabled students, class time spent writing, and difficulties in editing, rewriting, and make-up work), but still believes "it is better, on balance, to take this approach rather than risk a significant proportion of students using AI to write their essays."
what worked for me teaching an undergrad course last year was to have homework graded on completion (it's a completion grade after all) and i let students know that if they wanted direct feedback they could bring their solutions to office hours
it ended up working pretty well. an added benefit was that my TAs didn't have to deal with the nightmare of grading 120 very poorly written homeworks every four weeks. my students also stopped obsessing about the grades they would receive on their homeworks and instead focused on ~~learning~~ the grades they would receive on their exams
however, at the k-12 level, it feels like a much harder problem to tackle. parental involvement is the only solution i can think of, and that's already kind of a nightmare (at least here in the us)