spit_evil_olive_tips


With NHS mental health waitlists at record highs, are chatbots a possible solution?

taking Betteridge's Law one step further - not only is the answer "no", the fucking article itself explains why the answer is no:

People around the world have shared their private thoughts and experiences with AI chatbots, even though they are widely acknowledged as inferior to seeking professional advice.

as with so many other things, "maybe AI can fix it?" is being used as a catch-all for every systemic problem in society:

In April 2024 alone, nearly 426,000 mental health referrals were made in England - a rise of 40% in five years. An estimated one million people are also waiting to access mental health services, and private therapy can be prohibitively expensive.

fucking fund the National Health Service properly, in order to take care of the people who need it.

but instead, they want to continue cutting its budget, and use "oh there's an AI chatbot that you can use that is totally just as good as talking to a human, trust us" as a way of sweeping the real-world harm caused by those budget cuts under the rug.

Nicholas has autism, anxiety, OCD, and says he has always experienced depression. He found face-to-face support dried up once he reached adulthood: "When you turn 18, it's as if support pretty much stops, so I haven't seen an actual human therapist in years."

He tried to take his own life last autumn, and since then he says he has been on an NHS waitlist.

 

archive link

Toronto police confirmed they did not receive help from Uber. Instead, spokesperson Stephanie Sayer says officers were otherwise able to reach the driver.

"The driver was unaware that the child was still in the vehicle," Sayer said in an email. "When officers arrived, the child was found in good health. Paramedics were called as a precaution."

Julia says it took about an hour and a half for police to find her five-year-old. Officers then drove Julia to her daughter who was "unharmed but in hysterics." Police found the girl and the driver about 20 kilometres away from her boyfriend's house in the city's north end.

Julia's boyfriend later received a $10 credit from Uber, which she considers "a massive slap in the face."

[–] spit_evil_olive_tips@beehaw.org 24 points 2 months ago

tl;dw is that you should say "please" as basically prompt engineering, I guess?

the theory seems to be that the chatbot will try to match your tone, so if you ask it questions in a tone like it's an all-knowing benevolent information god, it'll respond in kind, and if you treat it politely its responses will tend more towards politeness?

I don't see how this solves any of the fundamental problems with asking a fancy random number generator for authoritative information, but sure, if you want to be polite to the GPUs, have at it.

like, several lawyers have been sanctioned for submitting LLM-generated legal briefs with hallucinated case citations. if you tack on "pretty please, don't make up any fake case citations or I could get disbarred" to a prompt...is that going to solve the problem?

here is the original source of the article, published on a site called Futurism: https://futurism.com/microsoft-ceo-ai-generating-no-value

it got syndicated by Yahoo News because Yahoo does a ton of that in an increasingly desperate attempt to stay relevant

judging by the "more top stories" on Futurism's home page right now, they lean pretty heavily on clickbait:

Trump White House Tells Elon He's Stepped Over the Line

Microsoft Backing Out of Expensive New Data Centers After Its CEO Expressed Doubt About AI Value

Shark Steals Camera, Capturing Amazing Footage From Inside Its Mighty Jaws

here is the primary source that the article is based on: https://www.dwarkeshpatel.com/p/satya-nadella

there's a transcript that is almost certainly AI-generated, so some of these quotes may not be completely accurate:

Satya, thank you so much for coming on the podcast. So just in a second, we're going to get to the two breakthroughs that Microsoft has just made. And congratulations, same day in nature. Majorana Zero chip, which we have in front of us right here, and also the world human action models.

right off the bat, we have the context that this is a friendly interview for Nadella to promote some new "breakthroughs" that Microsoft has. this may be explicit spon-con or just "regular" access journalism, it's hard to say.

around 15 minutes in, the host asks:

You recently reported that your yearly revenue from AI is $13 billion. But if you look at your year-on-year growth on that, in like four years, it'll be 10x that. You'll have $130 billion in revenue from AI if the trend continues. If it does, what do you anticipate... we're doing with all that intelligence?

Like this industrial scale use, is it going to be like through office? Is it going to be you deploying it for others to host? Is it going to be, you got to have the AGIs to have 130 billion in revenue? What does it look like?

and Nadella responds:

Yeah. I see the way I come at it, Dworkish, is it's a great question because at some level, if you're going to have this sort of explosion, abundance, whatever commodity of intelligence available, the first thing we have to observe is GDP growth, right? Before I get to what Microsoft's sort of revenue will look like, I mean, there's only one governor in all of this, right? Which is, this is where a little bit of, we get ahead of ourselves with all this AGI hype, which is, hey, you know what? Let's first see if, let's say develop, I mean, like, remember, like, the developed world is what? 2% growth, and if you adjust for inflation, it's zero? That, like, so in 2025, as we sit here, I'm not an economist. At least I look at it and say, man, we have a real growth challenge. So the first thing that we all have to do is let, and when we say, oh, this is like the industrial revolution, blah, blah, blah. Oh, let's have that industrial revolution type of growth. That means to me, 10%. 7%, developed world, inflation adjusted, growing at 5%. That's the real marker, right? So it's not just, it can't just be supply side, right? It has to be, in fact, that's the thing, right?

I think there's a lot of people are writing about it. I'm glad they are, which is the big winners here are not going to be tech companies. The winners are going to be the broader industry that uses this commodity that, by the way, is abundant. Suddenly, productivity goes up and the economy is growing at a faster rate.

When that happens, We'll be fine as an industry. But that's, to me, the moment, right? So it costs self-claiming some AGI milestone. That's just nonsensical benchmark hacking to me. The real benchmark is, is the world growing at 10%.

that word salad is a lot of things, but I don't think it lives up to the "generating basically no value" hype that Futurism tried to give it.

also, I like that the transcript includes the seamless ad transition...which is of course for an AI product:

A quick word from our sponsor, Scale AI. Publicly available data is running out, so major labs like Meta and Google DeepMind and OpenAI all partner with Scale to push the boundaries of what's possible. Through Scale's data foundry, major labs get access to high-quality data to fuel post-training, including advanced reasoning capabilities.

As AI races forward, we must also strengthen human sovereignty. SCALE's research team, SEAL, provides practical AI safety frameworks, evaluates frontier AI system safety via public leaderboards, and creates foundations for integrating advanced AI into society. Most recently, in collaboration with the Center for AI Safety, SCALE published Humanity's Last Exam, a groundbreaking new AI benchmark for evaluating AI systems' expert level knowledge and reasoning across a wide range of fields. If you're an AI researcher or engineer and you want to learn more about how SCALE's data foundry and research team can help you go beyond the current frontier of capabilities, go to scale.com slash Dwarkesh.

did these fucking dweebs seriously name their AI research team the "SEAL team"?