Ask Lemmy
A Fediverse community for open-ended, thought provoking questions
Maybe? Are you sure you're not weirdly defensive about AI because you prefer interactions where you control the narrative and every opinion you have is validated?
Honestly, I really only know one person who uses AI so much that I'd even consider it an issue, and until recently he was my best friend; we'd been friends since 2007. He was always really smart and rational, the kind of person who would do a lot of research and look into things before rushing into any decision or forming an opinion.
Originally, about two years ago, he just used AI for automation. Then he started using it to quickly research things related to work, but eventually he was using "AI research" for everything, and once he reads an AI summary there's no changing his opinion.
A lot of times he will send me links that the AI cites in its summary to prove he's correct, but when you actually read the linked pages, they don't say what he thinks they say. Once he's formed an opinion and it's been validated by AI, there seems to be no evidence that can convince him otherwise.
He actually went down a quantum physics/new-understanding-of-math rabbit hole pretty early on. Luckily he eventually realized he had misinterpreted all the information, even though ChatGPT kept giving him positive feedback and telling him he was a genius, just like it always seems to do to people who don't realize it's feeding them bad information and end up ruining their own lives.
He didn't stop using AI, though; he just stopped using ChatGPT and switched to other models. He also gets defensive if you try to tell him he should dial back his AI use, even though he can no longer hold a conversation with anybody if it's not about whatever he's interested in at the moment. He comes off as very rude because he doesn't seem to realize that shutting down conversations he doesn't feel like having, like he's closing out a tab he's done using, isn't appropriate. And when I tell other people about his opinions and how he cites information to support his arguments now, they say, "no offense, but he sounds really dumb."
Which is definitely not true. He's very smart, and he always has been; he has some really impressive degrees, earned before he became dependent on AI, that prove it. He also didn't just suddenly lose the social skills and empathy he'd had for 18 years. He's just become way too dependent on technology that's designed to make him believe he's always correct and being super productive and efficient, so that he gets a little dopamine bump and wants to keep using it, instead of taking the time to actually read new information, or listen to what people are saying and how they're saying it, and then use his own very impressive logic and reasoning skills to interpret it.
Idk, it's an n=1 and I could definitely be wrong. That's why I asked this question: I wanted to hear opinions outside of my own personal experience and the ones I've already read or seen online.
•The rise of the personal AI advisors
•80% of Gen Z and millennials are turning to AI for financial advice—but more than half say they’ve made a poor decision or mistake as a result
•AI chatbots and digital companions are reshaping emotional connection