will_a113

joined 2 years ago
[–] will_a113@lemmy.ml 2 points 1 week ago

All too real.

[–] will_a113@lemmy.ml 1 points 1 week ago (1 children)

So if there is actually some punishment handed down, any bets on what even more hellish scenario will rise up to replace the one where Google controls internet ads?

[–] will_a113@lemmy.ml 1 points 1 week ago (1 children)

Hah, thanks, though it's kind of the opposite of work. I had about 3.5 hours of Zoom calls that particular day, and if my hands aren't doing something there's absolutely zero chance of me staying tuned in.

[–] will_a113@lemmy.ml 2 points 1 week ago (5 children)

Just another day with debilitating ADHD...

[–] will_a113@lemmy.ml 3 points 1 week ago (1 children)

Not that we have any real info about who collects/uses what when you use the API

[–] will_a113@lemmy.ml 53 points 1 week ago

It’s called “The Tiffany Problem”. You might want to use the historically accurate name Tiffany for a character in your 16th century historical fiction novel, but you can’t because it sounds like someone who was born in 1982.

[–] will_a113@lemmy.ml 4 points 1 week ago

Nobody knows! There's no specific disclosure that I'm aware of (in the US at least), and even if there was I wouldn't trust any of these guys to tell the truth about it anyway.

As always, don't do anything on the Internet that you wouldn't want the rest of the world to find out about :)

[–] will_a113@lemmy.ml 4 points 1 week ago (3 children)

They're talking about what is being recorded while the user is using the tools (your prompts, RAG data, etc.)

[–] will_a113@lemmy.ml 1 points 1 week ago

If money counts as a freedom unit then yes, probably (maybe)

[–] will_a113@lemmy.ml 3 points 1 week ago

Anthropic and OpenAI both have options that let you use their API without training the system on your data (not sure if the others do as well). So if t3chat is simply using the API, it may be that they themselves are collecting your inputs (or not, you'd have to check the TOS) while their backend model providers are not. Or, who knows, they could all be lying too.

[–] will_a113@lemmy.ml 29 points 1 week ago (3 children)

And I can't possibly imagine that Grok actually collects less data than ChatGPT.

 

Take this with a massive grain of salt, since this info is unverified and comes straight from the manufacturer...

Huawei’s official presentation claims their CloudMatrix 384 supercomputer delivers 300 PFLOPS of computing power, 269 TB/s of network bandwidth, and 1,229 TB/s of total memory bandwidth. It also achieves 55 percent model FLOPs utilization (MFU) during training workloads and offers 2.8 Tbps of inter-card bandwidth, heavily emphasizing its strength in networking.

| Spec            | NVL72 (Nvidia) | CloudMatrix 384 (Huawei) | Better? (%) |
|-----------------|----------------|--------------------------|------------|
| Total compute   | 180 Pflops     | 300 Pflops               | 67%        |
| Total network bw| 130 TB/s       | 269 TB/s                 | 107%       |
| Total mem bw    | 576 TB/s       | 1,229 TB/s               | 113%       |
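The "Better? (%)" column above is just the ratio of the two spec columns minus one. A minimal sketch that reproduces it from the claimed numbers (spec names here are my own labels, not Huawei's or Nvidia's):

```python
# Claimed specs from the table above (unverified, manufacturer-provided).
nvl72 = {"compute_pflops": 180, "network_bw_tbps": 130, "mem_bw_tbps": 576}
cloudmatrix_384 = {"compute_pflops": 300, "network_bw_tbps": 269, "mem_bw_tbps": 1229}

# Percentage advantage of CloudMatrix 384 over NVL72 for each spec.
for spec in nvl72:
    advantage = round((cloudmatrix_384[spec] / nvl72[spec] - 1) * 100)
    print(f"{spec}: +{advantage}%")  # 67%, 107%, 113% — matching the table
```

Worth noting these are aggregate rack-scale numbers; per-chip, the comparison reportedly looks very different.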
409
submitted 1 week ago* (last edited 1 week ago) by will_a113@lemmy.ml to c/privacy@lemmy.ml
 

A chart titled "What Kind of Data Do AI Chatbots Collect?" lists and compares seven AI chatbots—Gemini, Claude, CoPilot, Deepseek, ChatGPT, Perplexity, and Grok—based on the types and number of data points they collect as of February 2025. The categories of data include: Contact Info, Location, Contacts, User Content, History, Identifiers, Diagnostics, Usage Data, Purchases, Other Data.

  • Gemini: Collects all 10 data types; highest total at 22 data points
  • Claude: Collects 7 types; 13 data points
  • CoPilot: Collects 7 types; 12 data points
  • Deepseek: Collects 6 types; 11 data points
  • ChatGPT: Collects 6 types; 10 data points
  • Perplexity: Collects 6 types; 10 data points
  • Grok: Collects 4 types; 7 data points
[–] will_a113@lemmy.ml 12 points 1 week ago (1 children)

Gene sequencing wasn’t really a thing (at least an affordable thing) until the 2010s, but once it was widely available archaeologists started using it on pretty much anything they could extract a sample from. Suddenly it became possible to track the migrations of groups over time by tracing gene similarities, determine how much intermarrying there must have been within groups, etc. Even with individual sites it has been used to determine when leadership was hereditary vs not, or how wealth was distributed (by looking at residual food dna on teeth). It really has revolutionized the field and cast a lot of old-school theories (often taken for truth) into the dustbin.

 

I was reading this article about the NYT's suit against OpenAI. OpenAI argued that the NYT couldn't sue for damages because it had been "too long" since the infringing started, and since the NYT "must have known" that OpenAI was doing it, they lost the privilege of collecting damages (IANAL, but I think this is the doctrine of laches). In any event, the judge sensibly threw this argument out, telling OpenAI they hadn't demonstrated that the NYT could have known the size, scale, or timing of any alleged infringement.

This made me think: now that the cat is out of the bag and everyone DOES know that everything on the Internet (and beyond) is being fed into AI factories, do we as creators have an obligation to somehow collectively sue LLM makers so that laches can't be used as a defense in the future?

 

While not the gigantic uber-canines of fantasy lore, these pups will grow to roughly gray-wolf size. They represent the first de-extincted animal species, raising a number of ethical questions about returning animals to ecosystems that may not be stable for long.

 

Using Reddit's popular ChangeMyView community as a source of baseline data, OpenAI had previously found that 2022's GPT-3.5 was significantly less persuasive than random humans, ranking in just the 38th percentile on this measure. But that performance jumped to the 77th percentile with September's release of the o1-mini reasoning model, and up to percentiles in the high 80s for the full-fledged o1 model.

So are you smarter than a Redditor?

 

Originality.AI looked at 8,885 long Facebook posts made over the past six years.

Key Findings

  • 41.18% of current Facebook long-form posts are Likely AI, as of November 2024.
  • Between 2023 and November 2024, the average percentage of monthly AI posts on Facebook was 24.05%.
  • This reflects a 4.3x increase in monthly AI Facebook content since the launch of ChatGPT. In comparison, the monthly average was 5.34% from 2018 to 2022.
 

When the global population does decline sometime this century, it will be the first time since the Black Death, 700 years ago. But this time, it will be driven by human choice -- specifically, the choice of women globally to not have so many children.

 

Though I guess "Saudi Arabia" and "dystopia" is a little redundant

 

Robocalls with AI voices to be regulated under Telephone Consumer Protection Act, the agency says. I'm pretty sure this puts us on the timeline where we eventually get incredible, futuristic tech, but computers and robots still sound mechanical and fake.
