Privacy

Copilot Vision is an extension of Microsoft's divisive Recall, a feature initially exclusive to Copilot+ systems with a sufficiently powerful neural co-processor. Like Recall, which was pulled over serious security failings and subjected to a lengthy delay before its eventual relaunch, Copilot Vision is designed to analyze everything you do on your computer.

When enabled, it does this by continually capturing screenshots and feeding them to an optical character recognition system and a large language model for analysis – but where Recall works locally, Copilot Vision sends the data off to Microsoft's servers.
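
To make that description concrete, here is a minimal sketch of a generic screenshot-then-OCR loop in Python. It is not Microsoft's implementation: the mss and pytesseract libraries, the capture interval, and the placeholder analysis endpoint are assumptions chosen purely for illustration. The point is only where the remote upload step sits in a Copilot-Vision-style design, as opposed to Recall keeping the analysis on the device.

```python
# Illustrative only: a generic screenshot -> OCR -> remote-analysis loop.
# This is NOT Microsoft's implementation; mss, pytesseract, and the
# analysis endpoint below are assumptions made for the sketch.
import time

import mss                      # cross-platform screen capture
import pytesseract              # OCR (requires the Tesseract binary installed)
import requests
from PIL import Image

ANALYSIS_URL = "https://example.invalid/analyze"  # hypothetical endpoint


def capture_screen() -> Image.Image:
    """Grab the primary monitor and return it as a PIL image."""
    with mss.mss() as sct:
        shot = sct.grab(sct.monitors[1])
        return Image.frombytes("RGB", shot.size, shot.rgb)


def main(interval_seconds: float = 5.0) -> None:
    while True:
        # Extract whatever text is currently on screen.
        text = pytesseract.image_to_string(capture_screen())
        # Where Recall would store and analyse this locally, a
        # Copilot-Vision-style design ships it off to a remote service.
        requests.post(ANALYSIS_URL, json={"ocr_text": text}, timeout=10)
        time.sleep(interval_seconds)


if __name__ == "__main__":
    main()
```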

According to a Microsoft spokesperson back in April, users' data will not be stored long-term, aside from transcripts of conversations with the Copilot assistant itself, and the captured data "are not used for model training or ads personalisation."

In May 2020, Sacramento, California, resident Alfonso Nguyen was alarmed to find two Sacramento County Sheriff’s deputies at his door, accusing him of illegally growing cannabis and demanding entry into his home. When Nguyen refused the search and denied the allegation, one deputy allegedly called him a liar and threatened to arrest him.

That same year, deputies from the same department, with their guns drawn and bullhorns and sirens sounding, fanned out around the home of Brian Decker, another Sacramento resident. The officers forced Decker to walk backward out of his home in only his underwear around 7 am while his neighbors watched. The deputies said that he, too, was under suspicion of illegally growing cannabis.

When you imagine personal data stolen on the internet, like your address, phone number, internet history, or even passwords, you probably think of hackers passing it to identity thieves. Maybe you think of cops getting their hands on it in less-than-legal ways, or maybe an insurance company spying on its customers. But apparently anyone can buy this data, from a U.S. company, for as little as $50.

That company is Farnsworth Intelligence, an “open-source intel” startup from 23-year-old founder Aidan Raney. And it’s not being coy about what it’s doing. The company’s primary consumer-level product is called “Infostealers,” and it’s hosted at Infostealers.info. (Yup, what a URL.) According to an exposé from 404 Media, a simple purchase starting at fifty bucks can get you access to a searchable database of personal data from people all over the United States and the world.

We don't want to believe what we deeply understand: nothing is really deleted, and someone, somewhere can (and probably will) use that record against us.

It's possible that someone and somewhere will be a Customs and Border Protection agent at a US airport, as by now we've all heard a story of how CBP has prevented a few unlucky souls from entering the USA – after they spent hours or days in a holding cell – because of some post or other activity that someone decided made them unfit to cross the border.

Now that it's happening, what can we do?

Here’s an evergreen take: There has never been a better time to get off social media.

Social media services have evolved even further into sticky traps for doomscrolling and AI-generated slop, and have opened up unprecedented frontiers for rage bait. Bummed out about all the misinformation and being part of a profit machine that funds one increasingly unhinged billionaire or another? Well, there’s a way out.

Unfortunately, social media companies don’t always make it very easy to rescind their grips on your attention. They bury deletion and deactivation options deep in their sidebars and menus and do everything in their power to keep you engaged and scrolling.

It’s not always easy, but if you’re eager to exorcise the demons of social media from your life, here’s how to get started.

AI is being forced on us in pretty much every facet of life, from phones and apps to search engines and even drive-throughs, for some reason. The fact that we’re now getting web browsers with baked-in AI assistants and chatbots shows that the way some people are using the internet to seek out and consume information today is very different from even a few years ago.

But AI tools are increasingly asking for gross levels of access to your personal data under the guise of needing it to work. This kind of access is not normal, nor should it be normalized.

Not so long ago, you would be right to question why a seemingly innocuous free “flashlight” or “calculator” app in the app store would request access to your contacts, photos, and even your real-time location data. These apps may not need that data to function, but their developers will request it if they think they can make a buck or two by monetizing it.

These days, AI isn’t all that different.

Meta has refused to sign the European Union’s code of practice for its AI Act, weeks before the bloc’s rules for providers of general-purpose AI models take effect.

“Europe is heading down the wrong path on AI,” wrote Meta’s chief global affairs officer Joel Kaplan in a post on LinkedIn. “We have carefully reviewed the European Commission’s Code of Practice for general-purpose AI (GPAI) models and Meta won’t be signing it. This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act.”

Tech companies from across the world, including those at the forefront of the AI race like Alphabet, Meta, Microsoft and Mistral AI, have been fighting the rules, even urging the European Commission to delay their rollout. But the Commission has held firm, saying it will not change its timeline.

Study reveals how the tech behemoth is using the motion sensors on phones to expand quake warnings to more countries.

Technology giant Google harnessed motion sensors on more than two billion mobile phones between 2021 and 2024 to detect earthquakes, and then sent automated warnings to millions of people in 98 countries. In an analysis of the data, released in Science today, Google’s scientists say that the technology captured more than 11,000 quakes and performed on par with standard seismometers. Earthquake researchers who were not involved with the experiment are impressed by the system’s performance, but argue that public officials would need access to more information about the proprietary technology before relying on it.
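
At a very high level, systems like this treat each phone's accelerometer as a crude seismometer: a phone reports a trigger when shaking exceeds some threshold, and a server only treats many triggers clustered in space and time as a likely quake. The sketch below illustrates that general idea only; the threshold, clustering rule, and data layout are assumptions for illustration, not the parameters of Google's proprietary system.

```python
# Illustrative only: the broad idea behind crowdsourced quake detection.
# The threshold and clustering rule are assumed values for the sketch,
# not the parameters of Google's actual system.
from dataclasses import dataclass


@dataclass
class Trigger:
    phone_id: str
    lat: float
    lon: float
    timestamp: float       # seconds since epoch
    peak_accel_g: float    # peak acceleration above gravity, in g


# Phone side: report only when shaking clearly exceeds normal handling.
TRIGGER_THRESHOLD_G = 0.05  # assumed value for illustration


def phone_should_report(peak_accel_g: float) -> bool:
    return peak_accel_g > TRIGGER_THRESHOLD_G


# Server side: one phone triggering means little (it may have been
# dropped), but many phones triggering close together in space and
# time is a strong hint of real ground motion.
def looks_like_quake(triggers: list[Trigger],
                     window_s: float = 10.0,
                     radius_deg: float = 0.5,
                     min_phones: int = 100) -> bool:
    if not triggers:
        return False
    anchor = triggers[0]
    nearby = [
        t for t in triggers
        if abs(t.timestamp - anchor.timestamp) <= window_s
        and abs(t.lat - anchor.lat) <= radius_deg
        and abs(t.lon - anchor.lon) <= radius_deg
    ]
    # Count distinct phones so one noisy device can't fake a quake.
    return len({t.phone_id for t in nearby}) >= min_phones
```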
