AI chatbots may reinforce real-world discrimination
You mean, something trained on real world data, reflects the biases of that data!
TechUnionization is a space for tech workers to organize against corporate greed, unethical AI deployment, and the industry’s growing role in climate destruction, war, and surveillance. With mass layoffs, worker exploitation, and the rise of techno-fascism, it’s more important than ever to push for ethical workplaces, AI safety, job security, and corporate accountability. Join us in demanding a future where our labor benefits workers and society—not just the wealthy.
Get involved with CODE-CWA to help build worker power in tech.
AI chatbots may reinforce real-world discrimination
You mean, something trained on real world data, reflects the biases of that data!
Here's a link to the study: https://arxiv.org/pdf/2506.10491. TL;DR, they ran tests on various LLMs with different personalities based on sex, ethnicity, migrant type, and other (human, AI, etc) and posed a question followed by multiple choice answers and tallied the results.