this post was submitted on 10 Apr 2025
265 points (98.9% liked)
Technology
68723 readers
3319 users here now
you are viewing a single comment's thread
Automating this system with some kind of algorithm is not right, but a nearly blind 70-year-old can still do damage? The angle here is weird.
I know for a fact they've released "harmless old men" who then went out and killed someone almost immediately.
The angle makes complete sense if you understand it: one reason "AI" automation is bad is that it labels blind 70-year-olds as harmless.
Blind 70-year-olds can still be dangerous. Being blind and old doesn't prevent that.
They're not saying that offloading the responsibility to an algorithm is good, they're saying it's weird to assume a person is harmless based on nothing but two attributes.
I agree in general that it's bad for these kinds of decisions to be offloaded to an AI. A human should be the one to consider whether the blind 70-year-old is dangerous, because they definitely can be.
Operating a vehicle or weapon requires neither eyesight nor a clear mind if you don't intend to do it safely.