Stopthatgirl7


Tech behemoth OpenAI has touted its artificial intelligence-powered transcription tool Whisper as having near “human level robustness and accuracy.”

But Whisper has a major flaw: It is prone to making up chunks of text or even entire sentences, according to interviews with more than a dozen software engineers, developers and academic researchers. Those experts said some of the invented text — known in the industry as hallucinations — can include racial commentary, violent rhetoric and even imagined medical treatments.

Experts said that such fabrications are problematic because Whisper is being used in a slew of industries worldwide to translate and transcribe interviews, generate text in popular consumer technologies and create subtitles for videos.

More concerning, they said, is a rush by medical centers to utilize Whisper-based tools to transcribe patients’ consultations with doctors, despite OpenAI’s warnings that the tool should not be used in “high-risk domains.”

 

The mother of a 14-year-old Florida boy says he became obsessed with a chatbot on Character.AI before his death.

On the last day of his life, Sewell Setzer III took out his phone and texted his closest friend: a lifelike A.I. chatbot named after Daenerys Targaryen, a character from “Game of Thrones.”

“I miss you, baby sister,” he wrote.

“I miss you too, sweet brother,” the chatbot replied.

Sewell, a 14-year-old ninth grader from Orlando, Fla., had spent months talking to chatbots on Character.AI, a role-playing app that allows users to create their own A.I. characters or chat with characters created by others.

Sewell knew that “Dany,” as he called the chatbot, wasn’t a real person — that its responses were just the outputs of an A.I. language model, that there was no human on the other side of the screen typing back. (And if he ever forgot, there was the message displayed above all their chats, reminding him that “everything Characters say is made up!”)

But he developed an emotional attachment anyway. He texted the bot constantly, updating it dozens of times a day on his life and engaging in long role-playing dialogues.

 

A former jockey who was left paralyzed from the waist down after a horse riding accident was able to walk again thanks to a cutting-edge piece of robotic tech: a $100,000 ReWalk Personal exoskeleton.

When one of its small parts malfunctioned, however, the entire device stopped working. Desperate to regain his mobility, he reached out to the manufacturer, Lifeward, for repairs. But it turned him away, claiming his exoskeleton was too old, *404 Media* reports.

"After 371,091 steps my exoskeleton is being retired after 10 years of unbelievable physical therapy," Michael Straight posted on Facebook earlier this month. "The reasons why it has stopped is a pathetic excuse for a bad company to try and make more money."

 

A former spokesperson for Kyle Rittenhouse says he became disillusioned with his ex-client after learning that he had sent text messages pledging to “fucking murder” shoplifters outside a pharmacy before later shooting two people to death during racial justice protests in Wisconsin in 2020.

Dave Hancock made that remark about Rittenhouse – for whom he also worked as a security guard – on a Law & Crime documentary that premiered on Friday. The show explored the unsuccessful criminal prosecution of Rittenhouse, who killed Joseph Rosenbaum and Anthony Huber in Kenosha, Wisconsin.

As Hancock told it on The Trials of Kyle Rittenhouse, the 90-minute film’s main subject had “a history of things he was doing prior to [the double slaying], specifically patrolling the street for months with guns and borrowing people’s security uniforms, doing whatever he could to try to get into some kind of a fight”.

Hancock nonetheless said he initially believed Rittenhouse’s claims of self-defense when Rittenhouse first relayed his story about fatally shooting Rosenbaum and Huber. Yet that changed when he later became aware of text messages that surfaced as part of a civil lawsuit filed by the family of one of the men slain in Kenosha, demanding wrongful death damages from Rittenhouse.

 

When German journalist Martin Bernklau typed his name and location into Microsoft’s Copilot to see how his articles would be picked up by the chatbot, the answers horrified him. Copilot’s results asserted that Bernklau was an escapee from a psychiatric institution, a convicted child abuser, and a conman preying on widowers. For years, Bernklau had served as a courts reporter, and the AI chatbot had falsely blamed him for the crimes whose trials he had covered.

The accusations against Bernklau weren’t true, of course, and are examples of generative AI’s “hallucinations.” These are inaccurate or nonsensical responses to a prompt provided by the user, and they’re alarmingly common. Anyone attempting to use AI should always proceed with great caution, because information from such systems needs validation and verification by humans before it can be trusted. 

But why did Copilot hallucinate these terrible and false accusations?

 

BRASILIA, Brazil (AP) — A judge in the Brazilian state of Rondonia has found two beef slaughterhouses guilty of buying cattle from a protected area of former rainforest in the Amazon and ordered them, along with three cattle ranchers, to pay a total of $764,000 for causing environmental damage, according to the decision issued Wednesday. Cattle raising drives Amazon deforestation. The companies Distriboi and Frigon and the ranchers may appeal.

It is the first decision in several dozen lawsuits seeking millions of dollars in environmental damages from the slaughterhouses for allegedly trading in cattle raised illegally in a protected area known as Jaci-Parana, which was rainforest but is now mostly converted to pasture. 

Four slaughterhouses are among the many parties charged, including JBS SA, which bills itself as the world’s largest protein producer. The court has not decided on the cases involving JBS.

Brazilian law forbids commercial cattle inside a protected area, yet some 210,000 head are being grazed inside Jaci-Parana, according to the state animal division. With almost 80% of its forest destroyed, it ranks as the most ravaged conservation unit in the Brazilian Amazon. A court filing pegs damages in the reserve at some $1 billion.

 

Does AI actually help students learn? A recent experiment in a high school provides a cautionary tale. 

Researchers at the University of Pennsylvania found that Turkish high school students who had access to ChatGPT while doing practice math problems did worse on a math test compared with students who didn’t have access to ChatGPT. Those with ChatGPT solved 48 percent more of the practice problems correctly, but they ultimately scored 17 percent worse on a test of the topic that the students were learning.

A third group of students had access to a revised version of ChatGPT that functioned more like a tutor. This chatbot was programmed to provide hints without directly divulging the answer. The students who used it did spectacularly better on the practice problems, solving 127 percent more of them correctly compared with students who did their practice work without any high-tech aids. But on a test afterwards, these AI-tutored students did no better. Students who just did their practice problems the old-fashioned way, on their own, matched their test scores.

 

A white couple from rural West Virginia is back behind bars after a judge revoked the initial bond and raised it to $500,000 apiece — more than double the amount they faced last year when police arrested the pair on charges of locking their adopted Black children in a barn and forcing them to work as “slaves.”

Donald Ray Lantz, 63, and Jeanne Kay Whitefeather, 62, both of Sissonville, were ordered to reappear in Kanawha County Court on June 11, more than eight months after each posted a $200,000 bond following their arrests in October.

At the time, police conducting a wellness check at the Cheyanne Lane home were shocked to discover two of the couple’s five adopted children living in deplorable conditions, padlocked inside a ramshackle storage shed on the back of the property, which had no working lights or running water.

 

A Tesla was in its self-driving mode when it crashed into a parked patrol vehicle responding to a fatal crash in Orange County Thursday morning, police said. 

The officer was on traffic control duty, blocking Orangethorpe Avenue in Fullerton for an investigation into a suspected DUI crash that had left a motorcyclist dead around 9 p.m. Wednesday, when his vehicle was struck.

A Fullerton Police Department spokesperson said the officer was standing outside his vehicle around midnight when he saw a Tesla driving in his direction and not slowing down.

 

The day after Mexico’s leftwing ruling party Morena won a landslide victory in presidential, congressional and state elections, one executive stayed in bed all afternoon eating ice cream to try to cope. A wealthy woman in Mexico City told friends it was time to “move to the house in Houston”, while another business leader said his WhatsApp chats were marked by a mood of “collective suicide”.

President-elect Claudia Sheinbaum and Morena’s victory last Sunday was not a surprise, but the scale of their triumph was. Sheinbaum vaulted 31 points clear of her nearest challenger, centre-right entrepreneur Xóchitl Gálvez, and Morena is now poised to push through radical changes to the constitution after greatly increasing its majority in congress.

 

You know how Google's new feature called AI Overviews is prone to spitting out wildly incorrect answers to search queries? In one instance, AI Overviews told a user to use glue on pizza to make sure the cheese won't slide off (pssst...please don't do this.)

Well, according to an interview at The Verge with Google CEO Sundar Pichai, published earlier this week just before criticism of the outputs really took off, these "hallucinations" are an "inherent feature" of AI large language models (LLMs), the technology that drives AI Overviews, and this feature "is still an unsolved problem."

 

A small publisher of speculative fiction and roleplaying games is shuttering after 22 years, and the “final straw,” its founder said, is an influx of AI-generated submissions.

In a notice posted to the site, founder Julie Ann Dawson wrote that effective March 6, she was winding down operations to focus on her health and a “day job” that’s separate from the press. “All of these issues impacted my decision. However, I also have to confess to what may have been the final straws. AI...and authors behaving badly,” she wrote.
