Teaser for Today’s Podcast
Can fixing AI bias be as simple as cleaning the data? The math says no. Explore algorithmic fairness — and why neutrality is never truly neutral.
Listen to the full episode below for the complete analysis.

“AI Innovations Unleashed: Your Educational Guide to Artificial Intelligence”
Welcome to AI Innovations Unleashed—your trusted educational resource for understanding artificial intelligence and how it can work for you. This podcast and companion blog have been designed to demystify AI technology through clear explanations, practical examples, and expert insights that make complex concepts accessible to everyone—from students and lifelong learners to small business owners and professionals across all industries.
Whether you’re exploring AI fundamentals, looking to understand how AI can benefit your small business, or simply curious about how this technology works in the real world, our mission is to provide you with the knowledge and practical understanding you need to navigate an AI-powered future confidently.
What You’ll Learn:
- AI Fundamentals: Build a solid foundation in machine learning, neural networks, generative AI, and automation through clear, educational content
- Practical Applications: Discover how AI works in real-world settings across healthcare, finance, retail, education, and especially in small businesses and entrepreneurship
- Accessible Implementation: Learn how small businesses and organizations of any size can benefit from AI tools—without requiring massive budgets or technical teams
- Ethical Literacy: Develop critical thinking skills around AI’s societal impact, bias, privacy, and responsible innovation
- Skill Development: Gain actionable knowledge to understand, evaluate, and work alongside AI technologies in your field or business
Educational Approach:
Each episode breaks down AI concepts into digestible lessons, featuring educators, researchers, small business owners, and practitioners who explain not just what AI can do, but how and why it works. We prioritize clarity over hype, education over promotion, and understanding over buzzwords. You’ll hear actual stories from small businesses using AI for customer service, content creation, operations, and more—proving that AI isn’t just for tech giants.
Join Our Learning Community:
Whether you’re taking your first steps into AI, running a small business, or deepening your existing knowledge, AI Innovations Unleashed provides the educational content you need to:
- Understand AI terminology and concepts with confidence
- Identify practical AI tools and applications for your business or industry
- Make informed decisions about implementing AI solutions
- Think critically about AI’s role in society and your work
- Continue learning as AI technology evolves
Subscribe to the podcast and start your AI education journey today—whether you’re learning for personal growth or looking to bring AI into your small business. 🎙️📚
Interact with us NOW! Send us a text and speak your mind.
Episode 3 of The Invisible AI asks the hardest question yet: what if the math itself is the problem?
Tour Guide JR D and AI research companion Ada explore why ‘just fix the data’ isn’t enough — and why algorithmic bias runs deeper than dirty training sets. From Amazon’s gender-biased hiring tool (2018), to the Optum healthcare algorithm that used past medical costs as a proxy for medical need and so mistook unequal access to care for better health, to COMPAS criminal risk scores and the mathematically proven trade-offs between competing definitions of fairness, to the self-reinforcing feedback loops of predictive policing — this episode maps the full, layered architecture of AI bias.
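To see how a feedback loop can "prove" its own predictions, here is a deliberately tiny simulation in the spirit of Ensign et al. (2018). The district names, rates, and patrol rule are all invented for illustration; the paper's actual analysis uses Pólya urn models, but the mechanism is the same: patrols go where past records point, and records can only accumulate where patrols go.

```python
import random

# A toy sketch of the "runaway feedback loop" described by Ensign et al. (2018).
# Everything here is invented for illustration: two districts with IDENTICAL
# true crime rates, and a policy that always patrols wherever past data
# shows the most recorded incidents.

TRUE_CRIME_RATE = {"district_1": 0.10, "district_2": 0.10}
recorded = {"district_1": 1, "district_2": 2}  # a tiny initial imbalance

random.seed(42)  # reproducible run
for day in range(1000):
    # The "predictive model": patrol the district with the most recorded crime.
    patrolled = max(recorded, key=recorded.get)
    # Crime is only *recorded* where officers are present to observe it.
    if random.random() < TRUE_CRIME_RATE[patrolled]:
        recorded[patrolled] += 1

print(recorded)
# Typical output: {'district_1': 1, 'district_2': ~100}
# Same underlying crime everywhere, yet the data now appears to "prove"
# that district_2 is the high-crime area, justifying ever more patrols.
```

Both districts are identical by construction; a one-incident head start is all it takes for the system to manufacture its own confirming evidence.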
We also cover the explosive Workday hiring AI lawsuit (Mobley v. Workday, filed 2023, with key rulings in 2024–2025), the SafeRent $2.275M settlement, and the EU AI Act’s phased rollout — plus a clear-eyed look at proxy variables, the Chouldechova & Kleinberg impossibility theorems, and the human values embedded in every algorithmic design choice.
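If you want to see the impossibility theorem with your own eyes, the identity at the heart of Chouldechova (2017) follows from Bayes' rule: once a group's base rate p, positive predictive value (PPV), and false negative rate (FNR) are fixed, the false positive rate is forced to be FPR = [p / (1 - p)] * [(1 - PPV) / PPV] * (1 - FNR). The sketch below uses hypothetical numbers, chosen only to echo the flavor of the COMPAS debate:

```python
# Chouldechova's identity: fix the base rate, PPV, and FNR,
# and the false positive rate (FPR) is fully determined.
def implied_fpr(base_rate: float, ppv: float, fnr: float) -> float:
    """FPR implied by a given base rate, PPV, and FNR (Chouldechova, 2017)."""
    return (base_rate / (1 - base_rate)) * ((1 - ppv) / ppv) * (1 - fnr)

# Hypothetical groups: identical PPV and FNR, but different base rates.
ppv, fnr = 0.6, 0.3
for group, base_rate in [("Group A", 0.5), ("Group B", 0.3)]:
    fpr = implied_fpr(base_rate, ppv, fnr)
    print(f"{group}: base rate {base_rate:.0%} -> implied FPR {fpr:.1%}")

# Output:
# Group A: base rate 50% -> implied FPR 46.7%
# Group B: base rate 30% -> implied FPR 20.0%
```

Hold PPV and FNR equal across groups, and any difference in base rates forces a difference in false positive rates. The trade-off lives in the arithmetic, not in the training data, which is exactly why ‘just fix the data’ cannot be the whole answer.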
Featuring verified quotes from Dr. Joy Buolamwini (Algorithmic Justice League), Cathy O’Neil (Weapons of Math Destruction), Dr. Aylin Caliskan (University of Washington), and Google CEO Sundar Pichai.
REFERENCES
- Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
- Buolamwini, J. (2017). How I’m fighting bias in algorithms [TED Talk]. TED Conferences.
- Caliskan, A., Bryson, J. J., & Narayanan, A. (2017). Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334), 183–186.
- Chouldechova, A. (2017). Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data, 5(2), 153–163. https://doi.org/10.1089/big.2016.0047
- Cohen Milstein Sellers & Toll PLLC. (2024, November 20). Rental applicants using housing vouchers settle ground-breaking discrimination class action against SafeRent Solutions.
- Dastin, J. (2018, October 10). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters.
- Dressel, J., & Farid, H. (2018). The accuracy, fairness, and limits of predicting recidivism. Science Advances, 4(1), eaao5580.
- Ensign, D., Friedler, S. A., Neville, S., Scheidegger, C., & Venkatasubramanian, S. (2018). Runaway feedback loops in predictive policing. Proceedings of Machine Learning Research, 81 (FAccT ’18).
- Kleinberg, J., Mullainathan, S., & Raghavan, M. (2017). Inherent trade-offs in the fair determination of risk scores. Proceedings of the 8th Innovations in Theoretical Computer Science Conference (ITCS 2017).
- Mobley v. Workday, Inc. (2023–ongoing). U.S. District Court, N.D. California. Case No. 3:23-cv-00770-RFL.
- Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.
- O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.
- Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453.
- Pichai, S. (2024, February 28). Internal memo on Gemini image generation [Leaked to media]. Reported by Semafor and The Verge.
- U.S. Senate Permanent Subcommittee on Investigations. (2024, October 17). Refusal of recovery: How Medicare Advantage insurers have denied patients access to post-acute care. U.S. Senate.
- Wilson, K., Gueorguieva, A.-M., Sim, M., & Caliskan, A. (2025, November 10). People mirror AI systems’ hiring biases. University of Washington News.
- Wilson, K., & Caliskan, A. (2024). Gender, race, and intersectional bias in resume screening via language model retrieval. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (AIES 2024).