Privacy

In May 2020, Sacramento, California, resident Alfonso Nguyen was alarmed to find two Sacramento County Sheriff’s deputies at his door, accusing him of illegally growing cannabis and demanding entry into his home. When Nguyen refused the search and denied the allegation, one deputy allegedly called him a liar and threatened to arrest him.

That same year, deputies from the same department, with their guns drawn and bullhorns and sirens sounding, fanned out around the home of Brian Decker, another Sacramento resident. The officers forced Decker to walk backward out of his home in only his underwear around 7 am while his neighbors watched. The deputies said that he, too, was under suspicion of illegally growing cannabis.

When you imagine personal data stolen on the internet, like your address, phone number, internet history, or even passwords, you probably think of hackers passing it to identity thieves. Maybe you think of cops getting their hands on it in less-than-legal ways, or maybe an insurance company spying on its customers. But apparently anyone can buy this data, from a U.S. company, for as little as $50.

That company is Farnsworth Intelligence, an “open-source intel” startup from 23-year-old founder Aidan Raney. And it’s not being coy about what it’s doing. The company’s primary consumer-level product is called “Infostealers,” and it’s hosted at Infostealers.info. (Yup, what a URL.) According to an exposé from 404 Media, a simple purchase starting at fifty bucks can get you access to a searchable database of personal data from people all over the United States and the world.

We don't want to believe what we deeply understand: nothing is really deleted, and someone, somewhere can (and probably will) use that record against us.

It's possible that someone and somewhere will be a Customs and Border Protection agent at a US airport; by now we've all heard a story of how CBP has prevented a few unlucky souls from entering the USA – after hours or days in a holding cell – because of some post or other activity that someone decided made them unfit to cross the border.

Now that it's happening, what can we do?

AI is being forced on us in pretty much every facet of life, from phones and apps to search engines and even drive-throughs, for some reason. The fact that we’re now getting web browsers with baked-in AI assistants and chatbots shows that the way some people are using the internet to seek out and consume information today is very different from even a few years ago.

But AI tools are increasingly asking for gross levels of access to your personal data under the guise of needing it to work. This kind of access is not normal, nor should it be normalized.

Not so long ago, you would be right to question why a seemingly innocuous free “flashlight” or “calculator” app in the app store would request access to your contacts, photos, and even your real-time location data. These apps may not need that data to function, but they will request it if they think they can make a buck or two by monetizing it.

These days, AI isn’t all that different.

Here’s an evergreen take: There has never been a better time to get off social media.

Social media services have evolved even further into sticky traps for doomscrolling and AI-generated slop, and have become unprecedented frontiers for rage bait. Bummed out about all the misinformation and being part of a profit machine that funds one increasingly unhinged billionaire or another? Well, there’s a way out.

Unfortunately, social media companies don’t always make it very easy to rescind their grips on your attention. They bury deletion and deactivation options deep in their sidebars and menus and do everything in their power to keep you engaged and scrolling.

It’s not always easy, but if you’re eager to exorcise the demons of social media from your life, there are ways to do it.

Study reveals how the tech behemoth is using the motion sensors on phones to expand quake warnings to more countries.

Technology giant Google harnessed motion sensors on more than two billion mobile phones between 2021 and 2024 to detect earthquakes, and then sent automated warnings to millions of people in 98 countries. In an analysis of the data, released in Science today, Google’s scientists say that the technology captured more than 11,000 quakes and performed on par with standard seismometers. Earthquake researchers who were not involved with the experiment are impressed by the system’s performance, but argue that public officials would need access to more information about the proprietary technology before relying on it.
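The article doesn't detail Google's proprietary algorithm, but the general idea of crowd-sourced quake detection can be sketched: each phone watches its accelerometer for a jolt above background noise and reports only a coarse location bucket, and a server confirms a quake only when many phones in the same area trigger near-simultaneously. The names and thresholds below are illustrative assumptions, not Google's actual system.

```python
from dataclasses import dataclass
from collections import defaultdict

# Illustrative thresholds (made up for this sketch, not Google's values)
JOLT_THRESHOLD = 0.3   # deviation from gravity, in g, that counts as a trigger
MIN_PHONES = 100       # phones in one cell that must trigger to confirm a quake
WINDOW_S = 5.0         # triggers must land within this many seconds of each other

@dataclass
class Trigger:
    cell: str   # coarse location bucket, so no precise coordinates leave the phone
    t: float    # trigger time in seconds

def phone_should_report(accel_magnitude_g: float, baseline_g: float = 1.0) -> bool:
    """On-device check: report only when the jolt exceeds the noise threshold."""
    return abs(accel_magnitude_g - baseline_g) > JOLT_THRESHOLD

def confirm_quakes(triggers: list[Trigger]) -> list[str]:
    """Server-side check: a cell is 'quaking' if enough phones trigger together."""
    by_cell = defaultdict(list)
    for tr in triggers:
        by_cell[tr.cell].append(tr.t)
    quaking = []
    for cell, times in by_cell.items():
        times.sort()
        # sliding window: any run of MIN_PHONES triggers within WINDOW_S seconds
        for i in range(len(times) - MIN_PHONES + 1):
            if times[i + MIN_PHONES - 1] - times[i] <= WINDOW_S:
                quaking.append(cell)
                break
    return quaking
```

The aggregation step is what makes a noisy consumer sensor usable: one dropped phone looks like a quake to itself, but a hundred phones jolting in the same cell within seconds rarely has another explanation.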

Meta has refused to sign the European Union’s code of practice for its AI Act, weeks before the bloc’s rules for providers of general-purpose AI models take effect.

“Europe is heading down the wrong path on AI,” wrote Meta’s chief global affairs officer Joel Kaplan in a post on LinkedIn. “We have carefully reviewed the European Commission’s Code of Practice for general-purpose AI (GPAI) models and Meta won’t be signing it. This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act.”

Tech companies from across the world, including those at the forefront of the AI race like Alphabet, Meta, Microsoft, and Mistral AI, have been fighting the rules, even urging the European Commission to delay the rollout. But the Commission has held firm, saying it will not change its timeline.

Meta has urged the Australian government not to make privacy law changes that would prevent the company using personal information taken from Facebook and Instagram posts to train its AI, arguing the AI needs to learn “how individuals discuss Australian concepts”.

In a submission to the Productivity Commission’s review on harnessing data and digital technology, published this week, the parent company of Facebook, Instagram and WhatsApp argued for a “global policy alignment” in the Albanese government’s pursuit of privacy reform in the AI age.

Meta said generative AI models “require large and diverse datasets” and cannot rely on synthetic data – data generated by AI alone. The company said available databases, such as Australian legislation, were limited in what they could offer AI compared to datasets containing personal information.

“Human beings’ discussions of culture, art, and emerging trends are not borne out in such legislative texts, and the discourse that takes place on Meta products both represents vital learning on both how individuals discuss Australian concepts, realities, and figures, as well as, in particular, how users of our products engage,” Meta said.

Australians will soon be subjected to mandatory age checks across the internet landscape, in what has been described as a huge and unprecedented change.

Search engines are next in line for the same controversial age-assurance technology behind the teen social media ban, and other parts of the internet are likely to follow suit.

At the end of June, Australia quietly introduced rules forcing companies such as Google and Microsoft to check the ages of logged-in users, in an effort to limit children's access to harmful content such as pornography.

But experts have warned the move could compromise Australians' privacy online and may not do much to protect young people.

The Internal Revenue Service is building a computer program that would give deportation officers unprecedented access to confidential tax data.

ProPublica has obtained a blueprint of the system, which would create an “on demand” process allowing Immigration and Customs Enforcement to obtain the home addresses of people it’s seeking to deport.

Last month, in a previously undisclosed dispute, the acting general counsel at the IRS, Andrew De Mello, refused to turn over the addresses of 7.3 million taxpayers sought by ICE. In an email obtained by ProPublica, De Mello said he had identified multiple legal “deficiencies” in the agency’s request.

Reddit users in the United Kingdom will now be blocked from accessing “certain mature content” unless they complete the platform’s new age verification process. Reddit announced on Monday that UK users will need to upload a selfie or a photo of their government ID in order to view content that’s restricted for under-18s by the UK Online Safety Act (OSA), including abusive, violent, and sexually explicit materials.

The age verification process is performed by Persona, a third-party provider that won’t have access to users’ Reddit data or retain photos for longer than seven days. Reddit says it also won’t have access to uploaded photos, and that it will only store birthdates and verification statuses so that users don’t need to re-verify their account. I managed to complete the process myself this morning using a selfie in under a minute, though the photo tool had some difficulty detecting when my face was correctly framed.
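Reddit's description amounts to a data-minimization pattern: the third-party verifier sees the photo and discards it, while the platform stores only a birthdate and a verification status, and gates restricted content on those alone. A minimal sketch of that stored record, assuming hypothetical names of my own (not Reddit's or Persona's actual API):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class VerificationResult:
    user_id: str
    birthdate: date   # stored so the user doesn't have to re-verify
    verified: bool    # outcome reported by the third-party verifier
    # deliberately no photo fields: images stay with the verifier,
    # which deletes them within seven days

def is_adult(result: VerificationResult, today: date) -> bool:
    """Gate restricted content on the stored outcome, never on any image."""
    if not result.verified:
        return False
    years = today.year - result.birthdate.year
    # subtract one if this year's birthday hasn't happened yet
    if (today.month, today.day) < (result.birthdate.month, result.birthdate.day):
        years -= 1
    return years >= 18
```

The design choice worth noting is what the record omits: because the platform never holds the selfie or ID image, a breach of its database exposes far less than a breach of a system that kept the photos.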
