Privacy

36 readers
1 user here now

A community all about privacy and protecting your data.

founded 2 weeks ago
MODERATORS
tfm

1. submitted 3 days ago by tfm to c/privacy

2. submitted 5 days ago (last edited 5 days ago) by choutos to c/privacy

"I went to run a rinse cycle, only to find that that, along with features like delayed start and eco mode, require an app.

Not only that, to use the app, you have to connect your dishwasher to WiFi, set up a cloud account in something called Home Connect, and then, and only then, can you start using all the features on the dishwasher."

4. cross-posted from: https://europe.pub/post/61500

5. cross-posted from: https://europe.pub/post/65367
cross-posted from: https://lemmy.world/post/27344091

  1. Persistent Device Identifiers

My ID (with one digit changed to preserve my privacy) is:

38400000-8cf0-11bd-b23e-30b96e40000d

Android assigns Advertising IDs, unique identifiers that apps and advertisers use to track users across installations and account changes. Google explicitly states:

“The advertising ID is a unique, user-resettable ID for advertising, provided by Google Play services. It gives users better controls and provides developers with a simple, standard system to continue to monetize their apps.” Source: Google Android Developer Documentation

This ID allows apps to rebuild user profiles even after resets, enabling persistent tracking.
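
Reading this identifier takes only a few lines of code. Below is a minimal Kotlin sketch, assuming the Play services ads-identifier library (com.google.android.gms:play-services-ads-identifier) is on the classpath; any app bundling that library can make essentially this call.

```kotlin
import android.content.Context
import com.google.android.gms.ads.identifier.AdvertisingIdClient

// getAdvertisingIdInfo() blocks, so call this from a background thread.
fun readAdvertisingId(context: Context): String? {
    return try {
        val info = AdvertisingIdClient.getAdvertisingIdInfo(context)
        if (info.isLimitAdTrackingEnabled) {
            null      // the user has limited ad tracking; well-behaved apps stop here
        } else {
            info.id   // a UUID like the one quoted above, shared across every app on the device
        }
    } catch (e: Exception) {
        null          // Play services missing or unavailable
    }
}
```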

  2. Tracking via Cookies

Android’s web and app environments rely on cookies with unique identifiers. The W3C (web standards body) confirms:

“HTTP cookies are used to identify specific users and improve their web experience by storing session data, authentication, and tracking information.” Source: W3C HTTP State Management Mechanism https://www.w3.org/Protocols/rfc2109/rfc2109

Google’s Privacy Sandbox initiative further admits cookies are used for cross-site tracking:

“Third-party cookies have been a cornerstone of the web for decades… but they can also be used to track users across sites.” Source: Google Privacy Sandbox https://privacysandbox.com/intl/en_us/
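
To make the mechanism concrete, here is a hedged Kotlin sketch (plain JDK HttpServer, with a hypothetical endpoint and cookie name) of how a server hands a browser a unique identifier in a cookie and then recognises the same browser on every later request. Third-party tracking works the same way, except the cookie is set by a domain embedded on many different sites, so one profile follows the user across all of them.

```kotlin
import com.sun.net.httpserver.HttpServer
import java.net.InetSocketAddress
import java.util.UUID

fun main() {
    val server = HttpServer.create(InetSocketAddress(8080), 0)
    server.createContext("/") { exchange ->
        // Look for a previously issued "uid" cookie on the incoming request.
        val cookieHeader = exchange.requestHeaders.getFirst("Cookie") ?: ""
        val existingId = cookieHeader.split(";")
            .map { it.trim() }
            .firstOrNull { it.startsWith("uid=") }
            ?.removePrefix("uid=")

        val uid = existingId ?: UUID.randomUUID().toString()
        if (existingId == null) {
            // First visit: issue a long-lived identifier the browser will send back every time.
            exchange.responseHeaders.add("Set-Cookie", "uid=$uid; Max-Age=31536000; Path=/")
        }
        // Every request can now be tied to the same profile, e.g. in a server-side log.
        println("request from ${exchange.remoteAddress.address.hostAddress} -> profile $uid")

        val body = "hello".toByteArray()
        exchange.sendResponseHeaders(200, body.size.toLong())
        exchange.responseBody.use { it.write(body) }
    }
    server.start()
}
```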

  3. Ad-Driven Data Collection

Google’s ad platforms, like AdMob, collect behavioral data to refine targeting. The FTC found in a 2019 settlement:

“YouTube illegally harvested children’s data without parental consent, using it to target ads to minors.” Source: FTC Press Release https://www.ftc.gov/news-events/press-releases/2019/09/google-youtube-will-pay-record-170-million-settlement-over-claims

A 2022 study by Aarhus University confirmed:

“87% of Android apps share data with third parties.” Source: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies https://dl.acm.org/doi/10.1145/3534593

  4. Device Fingerprinting

Android permits fingerprinting by allowing apps to access device metadata. The Electronic Frontier Foundation (EFF) warns:

“Even when users reset their Advertising ID, fingerprinting techniques combine static device attributes (e.g., OS version, hardware specs) to re-identify them.” Source: EFF Technical Analysis https://www.eff.org/deeplinks/2021/03/googles-floc-terrible-idea
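
As a rough illustration of what the EFF describes, here is a hedged Kotlin sketch that combines static Android device attributes into one stable hash; the particular fields are illustrative, not any real tracker's recipe. Real fingerprinting code typically mixes in many more signals (screen metrics, fonts, sensor characteristics), which is what makes the technique hard to defeat by resetting a single ID.

```kotlin
import android.os.Build
import java.security.MessageDigest

// Combines attributes that survive an Advertising ID reset into a single stable value.
fun deviceFingerprint(): String {
    val attributes = listOf(
        Build.MANUFACTURER,                      // e.g. "Google"
        Build.MODEL,                             // e.g. "Pixel 7"
        Build.VERSION.RELEASE,                   // OS version, e.g. "14"
        Build.SUPPORTED_ABIS.joinToString(","),  // CPU architectures
        Build.FINGERPRINT                        // full build fingerprint string
    ).joinToString("|")

    // The same device keeps producing the same hash no matter how often
    // the user resets the Advertising ID.
    return MessageDigest.getInstance("SHA-256")
        .digest(attributes.toByteArray())
        .joinToString("") { "%02x".format(it) }
}
```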

  5. Hardware-Level Tracking

Google’s Titan M security chip, embedded in Pixel devices, operates independently of software controls. Researchers at Technische Universität Berlin noted:

“Hardware-level components like Titan M can execute processes that users cannot audit or disable, raising concerns about opaque data collection.” Source: TU Berlin Research Paper https://arxiv.org/abs/2105.14442

Regarding Titan M: much of the research on it has been taken down, and only a few papers remain online. This is one that is still available today.

"In this paper, we provided the first study of the Titan M chip, recently introduced by Google in its Pixel smartphones. Despite being a key element in the security of these devices, no research is available on the subject and very little information is publicly available. We approached the target from different perspectives: we statically reverse-engineered the firmware, we audited the available libraries on the Android repositories, and we dynamically examined its memory layout by exploiting a known vulnerability. Then, we used the knowledge obtained through our study to design and implement a structure-aware black-box fuzzer, mutating valid Protobuf messages to automatically test the firmware. Leveraging our fuzzer, we identified several known vulnerabilities in a recent version of the firmware. Moreover, we discovered a 0-day vulnerability, which we responsibly disclosed to the vendor."

Ref: https://conand.me/publications/melotti-titanm-2021.pdf

  6. Notification Overload

A 2021 UC Berkeley study found:

“Android apps send 45% more notifications than iOS apps, often prioritizing engagement over utility. Notifications act as a ‘hook’ to drive app usage and data collection.” Source: Proceedings of the ACM on Human-Computer Interaction https://dl.acm.org/doi/10.1145/3411764.3445589

How can this be used nefariously?

Let's say you are a person who believes in Truth and who searches all over the net for it. You find some things that are true. You post them somewhere, and you are taken down. You accept it, since this is ONLY one time.

But this is where YOU ARE WRONG.

THEY can easily learn your IDs - specifically your advertising ID, or one of the other identifiers above. They send these to Google to find out which email accounts are associated with them. With 99.9% accuracy, AI can pick out the correct email, because your email and ID will have been logged into Google simultaneously thousands of times in the past.

Then they can CENSOR you ACROSS the internet - YouTube, Reddit, etc. - because they know your ID. Even if you change your phone, they still have your other IDs, like your email; you can't remove all of them. This is how it can be used for CENSORING. (They will shadow-ban you, and you won't even know it.)

10. cross-posted from: https://kbin.melroy.org/m/privacy@lemmy.ml/t/817467

Rayhunter is a new open source tool we’ve created that runs off an affordable mobile hotspot that we hope empowers everyone, regardless of technical skill, to help search out cell-site simulators (CSS) around the world.

12. cross-posted from: https://jlai.lu/post/16968722

13. cross-posted from: https://europe.pub/post/33034

14. cross-posted from: https://lemmy.today/post/25826615

For those not familiar: numerous messages containing images and talking about a Polish girl named "Nicole" are being repeatedly spammed to many Threadiverse users. This has been ongoing for some time now.

Lemmy permits external inline image references to be embedded in messages. This means that if a unique image URL or set of image URLs are sent to each user, it's possible to log the IP addresses that fetch these images; by analyzing the log, one can determine the IP address that a user has.
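
To make the attack concrete, here is a hedged Kotlin sketch of that correlation step; the log file name, log format, and token-to-user mapping are hypothetical and only show how a per-user image URL joins a web server access log to an IP address.

```kotlin
import java.io.File

fun main() {
    // The attacker's own records: unique token embedded in each image URL -> user it was sent to.
    val tokenToUser = mapOf(
        "323899d9" to "user_a@instance.example",
        "7f0c41aa" to "user_b@instance.example"
    )

    // Hypothetical common-log-style lines, e.g.:
    // 203.0.113.7 - - [12/Mar/2025:10:11:12 +0000] "GET /pictrs/image/323899d9-...png HTTP/1.1" 200 ...
    File("access.log").readLines().forEach { line ->
        val ip = line.substringBefore(" ")
        val path = line.substringAfter("GET ", "").substringBefore(" ")
        tokenToUser.forEach { (token, user) ->
            if (token in path) println("$user loaded their unique image from IP $ip")
        }
    }
}
```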

In some earlier discussion, someone had claimed that local lemmy instances cache these on their local pict-rs instance and rewrite messages to reference the local image.

It does appear that there is a closed issue on the lemmy issue tracker referencing such a deanonymization attack:

https://github.com/LemmyNet/lemmy/issues/1036

I had not looked into these earlier, but it looks like such rewriting and caching intended to avoid this attack is not occurring, at least on my home instance. I hadn't looked until the most recent message, but the image embedded here is indeed remote:

https://lemmy.doesnotexist.club/pictrs/image/323899d9-79dd-4670-8cf9-f6d008c37e79.png

I haven't stored and looked through a list of these, but as I recall, the user sending them is bouncing around different instances. They certainly are not using the same hostname for their lemmy instance as the pict-rs instance; this message was sent from nicole92 on lemmy.latinlok.com, though the image is hosted on lemmy.doesnotexist.club. I don't know whether they are moving around where the pict-rs instance is located from message to message. If not, it might be possible to block the pict-rs instance in your browser. That will only be a temporary fix, since I see no reason that they couldn't also be moving the hostname on the pict-rs instance.

Another mitigation would be to route one's client software or browser through a VPN.

I don't know if there are admins working on addressing the issue; I'd assume so, but I wanted to at least mention that there might be privacy implications for other users.

In any event, regardless of whether the "Nicole" spammer is aiming to deanonymize users, as things stand, it does appear that someone could do so.

My own take is that the best fix here, on the lemmy-and-other-Threadiverse-software side, would be to disable inline images in messages. Someone who wants to reference an image can always link to an external image in a message and permit users to click through. But as long as remote inline image references are allowed, there's no great way to prevent a user's IP address from being exposed.

If anyone has other suggestions to mitigate this (maybe a Greasemonkey snippet to require a click to load inline images as a patch for the lemmy Web UI?), I'm all ears.

16. cross-posted from: https://lemmy.ca/post/40848536

17. Blur Your House On Google (support.google.com)
submitted 2 weeks ago by 0x7466 to c/privacy

In case you ever wanted to blur your house on Google Street View, you can. A little privacy, I suppose; it's pretty easy, and you don't need a reason to do it. This is probably the only thing Google lets you opt out of, which is cool.

Originally posted on Reddit

19. cross-posted from: https://programming.dev/post/26984046

  • All analyzed AI chatbot apps collect some form of user data. The average number of collected types of data is 11 out of a possible 35 for the analyzed apps. 40% of the apps collect users' locations. Additionally, 30% of these apps track user data. Tracking refers to linking user or device data collected from the app with third-party data for targeted advertising or advertising measurement purposes or sharing it with a data broker.
  • Google Gemini collects the most information, gathering 22 out of 35 possible data types. This includes precise location data, which only Gemini, Copilot, and Perplexity collect. Gemini also collects a significant amount of data across various other categories, such as contact info (name, email address, phone number, etc.), user content, contacts (such as a list of contacts in the user’s phone), search history, browsing history, and several other types of data. This extensive data collection may be seen as excessive and intrusive by those concerned about data privacy and security.
  • ChatGPT collects 10 types of data, such as contact information, user content, identifiers, usage data, and diagnostics, while avoiding tracking data or using third-party advertising within the app. While ChatGPT collects chat history, it is possible to use temporary chats, which auto-delete all data after 30 days, or to request the removal of personal data from training sets. Overall, ChatGPT collects slightly fewer types of data than some other analyzed apps, but users should still review the privacy policy to understand how this data is used and protected.
  • Copilot, Poe, and Jasper are the three apps that collect data used to track you. This data could be sold to data brokers or used to display targeted advertisements in your app¹. While Copilot and Poe only collect device IDs, Jasper collects device IDs, product interaction data, advertising data, and other usage data, which refers to “any other data about user activity in the app”.
  • DeepSeek's data collection practices stand comfortably in the middle ground among other AI chatbot apps. DeepSeek collects 11 unique types of data, such as user input, including chat history, and claims to retain information for as long as necessary, storing it on servers located in the People's Republic of China.
  • Don't let your guard down, as chats stored on servers are always at risk of being breached. According to The Hacker News, DeepSeek has already experienced a breach where more than 1 million records of chat history, API keys, and other information were leaked. It is generally a good idea to be mindful of the information provided.

20. cross-posted from: https://lemmy.ml/post/27209720

I bought a Garmin Forerunner 255 watch that I want to use only with Gadgetbridge. The watch has an old firmware version that I want to update, but I don't want to connect it to the Garmin Connect or Garmin Express apps.

I have looked for a way to do an “offline” update but have not found one. Maybe the community can help?
