The Invisible AI: Part 3 – Your Bias Is Showing — And So Is the Algorithm’s

Reading Time: 2 minutes – Can fixing AI bias be as simple as cleaning the data? The math says no. Explore algorithmic fairness — and why neutrality is never truly neutral.




Teaser for Today’s Podcast

Can fixing AI bias be as simple as cleaning the data? The math says no. Explore algorithmic fairness — and why neutrality is never truly neutral.


Listen to the full episode below for the complete analysis.



Episode 3 of The Invisible AI asks the hardest question yet: what if the math itself is the problem?

Tour Guide JR D and AI research companion Ada explore why ‘just fix the data’ isn’t enough — and why algorithmic bias runs deeper than dirty training sets. From Amazon’s gender-biased hiring tool (2018) to the Optum healthcare algorithm that mistook systemic inequity for health status, to COMPAS criminal risk scores and their proven mathematical fairness trade-offs, to the self-reinforcing feedback loops of predictive policing — this episode maps the full, layered architecture of AI bias.
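For readers who want to see the feedback-loop problem rather than just hear about it, here is a minimal sketch (not from the episode) in the spirit of Ensign et al.'s "runaway feedback loops": two districts have the same true incident rate, but patrols only record incidents where they are sent, and they are sent wherever the record is already highest. The district names, rate, and allocation rule are illustrative assumptions, not the paper's model.

```python
import random

random.seed(42)

TRUE_RATE = 0.3                # identical underlying incident rate in both districts
observed = {"A": 1, "B": 1}    # symmetric starting record

for day in range(365):
    # Greedy allocation: patrol the district with more recorded incidents so far.
    # The initial tie breaks arbitrarily, and that arbitrary choice becomes self-fulfilling.
    target = max(observed, key=observed.get)
    # Incidents are only recorded where the patrol actually goes.
    if random.random() < TRUE_RATE:
        observed[target] += 1

print(observed)
# One district ends up with roughly a hundred recorded incidents while the
# other barely moves, so the data appear to "confirm" the allocation —
# even though both districts had the same true rate all along.
```

The point of the toy run: the skew is produced by the decision rule, not by anything wrong in the incoming data, which is why "just fix the data" cannot undo it.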

We also cover the explosive Workday hiring AI lawsuit (Mobley v. Workday, 2024–2025), the SafeRent $2.275M settlement, and the EU AI Act’s phased rollout — plus a clear-eyed look at proxy variables, the Chouldechova & Kleinberg impossibility theorems, and the human values embedded in every algorithmic design choice.
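As a small numeric sketch of the Chouldechova result referenced above (my illustration, not the episode's): once a risk tool is equally calibrated for two groups (same positive predictive value) and has the same false negative rate, its false positive rate is pinned down by each group's base rate. Unequal base rates therefore force unequal false positive rates. The PPV, FNR, and prevalence values below are made up for illustration.

```python
def implied_fpr(prevalence, ppv, fnr):
    """Chouldechova (2017) relation:
    FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR)
    Given prevalence p, calibration (PPV) and false negative rate,
    the false positive rate is fully determined.
    """
    return prevalence / (1 - prevalence) * (1 - ppv) / ppv * (1 - fnr)

# Two groups scored by the same tool, calibrated identically (same PPV, same FNR),
# but with different base rates in the underlying data.
for group, prevalence in [("group 1", 0.5), ("group 2", 0.3)]:
    print(group, round(implied_fpr(prevalence, ppv=0.6, fnr=0.35), 3))

# group 1 0.433
# group 2 0.186
# Equal calibration plus unequal base rates forces unequal false positive
# rates — no amount of data cleaning makes all the fairness criteria hold at once.
```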

Featuring verified quotes from Dr. Joy Buolamwini (Algorithmic Justice League), Cathy O’Neil (Weapons of Math Destruction), Dr. Aylin Caliskan (University of Washington), and Google CEO Sundar Pichai.

REFERENCES

  • Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
  • Buolamwini, J. (2017). How I’m fighting bias in algorithms [TED Talk]. TED Conferences. 
  • Caliskan, A., Bryson, J. J., & Narayanan, A. (2017). Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334), 183–186.
  • Chouldechova, A. (2017). Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data, 5(2), 153–163. https://doi.org/10.1089/big.2016.0047
  • Cohen Milstein Sellers & Toll PLLC. (2024, November 20). Rental applicants using housing vouchers settle ground-breaking discrimination class action against SafeRent Solutions. 
  • Dastin, J. (2018, October 10). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. 
  • Dressel, J., & Farid, H. (2018). The accuracy, fairness, and limits of predicting recidivism. Science Advances, 4(1), eaao5580. 
  • Ensign, D., Friedler, S. A., Neville, S., Scheidegger, C., & Venkatasubramanian, S. (2018). Runaway feedback loops in predictive policing. Proceedings of Machine Learning Research, 81 (FAccT ’18).
  • Kleinberg, J., Mullainathan, S., & Raghavan, M. (2017). Inherent trade-offs in the fair determination of risk scores. Proceedings of the 8th Innovations in Theoretical Computer Science Conference (ITCS 2017).
  • Mobley v. Workday, Inc. (2023–ongoing). U.S. District Court, N.D. California. Case No. 3:23-cv-00770-RFL.
  • Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.
  • O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.
  • Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453.
  • Pichai, S. (2024, February 28). Internal memo on Gemini image generation [Leaked to media]. Reported by Semafor and The Verge.
  • U.S. Senate Permanent Subcommittee on Investigations. (2024, October 17). Refusal of recovery: How Medicare Advantage insurers have denied patients access to post-acute care. U.S. Senate.
  • Wilson, K., Gueorguieva, A.-M., Sim, M., & Caliskan, A. (2025, November 10). People mirror AI systems’ hiring biases. University of Washington News. 
  • Wilson, K., & Caliskan, A. (2024). Gender, race, and intersectional bias in resume screening via language model retrieval. 
JR
JR is the founder of AI Innovations Unleashed, an educational podcast and consulting platform helping educators, leaders, and curious minds harness AI to build smarter learning environments. He has 22 years of project management experience (PMP certified) and is an AI strategist who translates complex tech into practical, future-focused insights. Connect with him on LinkedIn, Medium, Substack, and X, or visit him at aiinnovationsunleashed.com.
