When AI denies your job, housing, or healthcare — who’s responsible?
Learn your rights & how to fight algorithmic decisions.
You didn’t get the apartment. You didn’t get the job. Your insurance claim was denied — in under two seconds. And no one can tell you why, because no one with a name made that call. A machine did.
In the finale of The Invisible AI, JR D. and AI research companion Ada pull back the curtain on who is legally responsible when algorithms get it wrong — and more importantly, what you can actually do about it. Real cases. Real laws. Real steps you can take today.
Listen to the full episode below.

“AI Innovations Unleashed: Your Educational Guide to Artificial Intelligence”
Welcome to AI Innovations Unleashed—your trusted educational resource for understanding artificial intelligence and how it can work for you. This podcast and companion blog have been designed to demystify AI technology through clear explanations, practical examples, and expert insights that make complex concepts accessible to everyone—from students and lifelong learners to small business owners and professionals across all industries.
Whether you’re exploring AI fundamentals, looking to understand how AI can benefit your small business, or simply curious about how this technology works in the real world, our mission is to provide you with the knowledge and practical understanding you need to navigate an AI-powered future confidently.
What You’ll Learn:
- AI Fundamentals: Build a solid foundation in machine learning, neural networks, generative AI, and automation through clear, educational content
- Practical Applications: Discover how AI works in real-world settings across healthcare, finance, retail, education, and especially in small businesses and entrepreneurship
- Accessible Implementation: Learn how small businesses and organizations of any size can benefit from AI tools—without requiring massive budgets or technical teams
- Ethical Literacy: Develop critical thinking skills around AI’s societal impact, bias, privacy, and responsible innovation
- Skill Development: Gain actionable knowledge to understand, evaluate, and work alongside AI technologies in your field or business
Educational Approach:
Each episode breaks down AI concepts into digestible lessons, featuring educators, researchers, small business owners, and practitioners who explain not just what AI can do, but how and why it works. We prioritize clarity over hype, education over promotion, and understanding over buzzwords. You’ll hear actual stories from small businesses using AI for customer service, content creation, operations, and more—proving that AI isn’t just for tech giants.
Join Our Learning Community:
Whether you’re taking your first steps into AI, running a small business, or deepening your existing knowledge, AI Innovations Unleashed provides the educational content you need to:
- Understand AI terminology and concepts with confidence
- Identify practical AI tools and applications for your business or industry
- Make informed decisions about implementing AI solutions
- Think critically about AI’s role in society and your work
- Continue learning as AI technology evolves
Subscribe to the podcast and start your AI education journey today—whether you’re learning for personal growth or looking to bring AI into your small business. 🎙️📚
Interact with us now! Send us a text and share your thoughts.
Episode 3 of The Invisible AI asks the hardest question yet: what if the math itself is the problem?
Tour Guide JR D and AI research companion Ada explore why 'just fix the data' isn't enough — and why algorithmic bias runs deeper than dirty training sets. From Amazon's gender-biased hiring tool (2018) to the Optum healthcare algorithm that mistook systemic inequity for health status, to COMPAS criminal risk scores and their proven mathematical fairness trade-offs, to the self-reinforcing feedback loops of predictive policing — this episode maps the full, layered architecture of AI bias.
We also cover the explosive Workday hiring AI lawsuit (Mobley v. Workday, 2024–2025), the SafeRent $2.275M settlement, and the EU AI Act's phased rollout — plus a clear-eyed look at proxy variables, the Chouldechova & Kleinberg impossibility theorems, and the human values embedded in every algorithmic design choice.
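The impossibility result mentioned above has a concrete arithmetic core: Chouldechova (2017) shows that for a calibrated risk score, the false positive rate is pinned down by the group's base rate, so two groups with different base rates cannot share both calibration and equal false positive rates. A minimal sketch of that identity (the numbers below are hypothetical, not from any case in the episode):

```python
def fpr(prevalence, ppv, tpr):
    """False positive rate implied by Chouldechova's (2017) identity:
    FPR = prevalence / (1 - prevalence) * (1 - PPV) / PPV * TPR.
    Holding PPV (calibration) and TPR fixed, FPR moves with the base rate."""
    return prevalence / (1 - prevalence) * (1 - ppv) / ppv * tpr

# Two hypothetical groups scored by the same calibrated tool (PPV = 0.6)
# with the same true positive rate (TPR = 0.7), but different base rates.
fpr_a = fpr(prevalence=0.3, ppv=0.6, tpr=0.7)  # group A: 30% base rate
fpr_b = fpr(prevalence=0.5, ppv=0.6, tpr=0.7)  # group B: 50% base rate

print(round(fpr_a, 3))  # 0.2
print(round(fpr_b, 3))  # 0.467
```

Same score, same calibration, yet group B's false positive rate is more than double group A's: this is the trade-off at the heart of the COMPAS debate, not a bug that cleaner data can fix.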
Featuring verified quotes from Dr. Joy Buolamwini (Algorithmic Justice League), Cathy O'Neil (Weapons of Math Destruction), Dr. Aylin Caliskan (University of Washington), and Google CEO Sundar Pichai.
REFERENCES
- Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
- Buolamwini, J. (2017). How I'm fighting bias in algorithms [TED Talk]. TED Conferences.
- Caliskan, A., Bryson, J. J., & Narayanan, A. (2017). Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334), 183–186.
- Chouldechova, A. (2017). Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data, 5(2), 153–163. https://doi.org/10.1089/big.2016.0047
- Cohen Milstein Sellers & Toll PLLC. (2024, November 20). Rental applicants using housing vouchers settle ground-breaking discrimination class action against SafeRent Solutions.
- Dastin, J. (2018, October 10). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters.
- Dressel, J., & Farid, H. (2018). The accuracy, fairness, and limits of predicting recidivism. Science Advances, 4(1), eaao5580.
- Ensign, D., Friedler, S. A., Neville, S., Scheidegger, C., & Venkatasubramanian, S. (2018). Runaway feedback loops in predictive policing. Proceedings of Machine Learning Research, 81 (FAccT '18).
- Kleinberg, J., Mullainathan, S., & Raghavan, M. (2017). Inherent trade-offs in the fair determination of risk scores. Proceedings of the 8th Innovations in Theoretical Computer Science Conference (ITCS 2017).
- Mobley v. Workday, Inc. (2023–ongoing). U.S. District Court, N.D. California. Case No. 3:23-cv-00770-RFL.
- Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.
- O'Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.
- Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453.
- Pichai, S. (2024, February 28). Internal memo on Gemini image generation [Leaked to media]. Reported by Semafor and The Verge.
- U.S. Senate Permanent Subcommittee on Investigations. (2024, October 17). Refusal of recovery: How Medicare Advantage insurers have denied patients access to post-acute care. U.S. Senate.
- Wilson, K., Gueorguieva, A.-M., Sim, M., & Caliskan, A. (2025, November 10). People mirror AI systems' hiring biases. University of Washington News.
- Wilson, K., & Caliskan, A. (2024). Gender, race, and intersectional bias in resume screening via language model retrieval. University of Washington Information School.

Episode 4 of 4 | The Invisible AI Series | AI Innovations Unleashed
When an algorithm denies your job, your apartment, or your health insurance — and takes 1.2 seconds to do it — who is actually responsible?
In this series finale, JR D. and AI research companion Ada close out “The Invisible AI” by tackling the accountability gap: legally, practically, and personally.
We dig into class-action lawsuits against Cigna, Humana, and UnitedHealth Group over AI-driven claim denials, the Mobley v. Workday, Inc. ruling (2025) that held AI hiring vendors directly liable for discrimination, and the SafeRent $2.275 million settlement that shifted the conversation for renters.
We break down COMPAS — the criminal risk tool at the center of ProPublica’s “Machine Bias” investigation — and explain what new laws in Colorado and the EU mean for your rights today.
Then we get practical: how to request your data, dispute an algorithmic decision, and file a complaint that actually goes somewhere.
Featuring Dr. Joy Buolamwini (Algorithmic Justice League, author of Unmasking AI) and Microsoft CEO Satya Nadella.
Resources: AnnualCreditReport.com | CFPB.gov | EEOC.gov | ProPublica Machine Bias (2016) | Colorado AI Act (2024)
Full APA citations at AIInnovationsUnleashed.com
Up next: “The Learning Curve: AI & the Future of Education” — March 2026 with new co-host ARIA. Episode 1: “The Teacher in the Age of AI.”
Subscribe now.
#AIInnovationsUnleashed #AlgorithmicAccountability #AIBias #COMPAS #KnowYourRights #TheLearningCurve
Academic Sources
Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453. https://doi.org/10.1126/science.aax2342
Wilson, K., & Caliskan, A. (2024). Gender, race, and intersectional bias in resume screening via language model retrieval. University of Washington Information School. https://ischool.uw.edu
Legal Cases
Mobley v. Workday, Inc., 2025 WL 1424347 (N.D. Cal. May 16, 2025).
Hicks v. Collier, No. 2:24-CV-00126, 2024 U.S. Dist. LEXIS 241129 (S.D. Tex. Oct. 31, 2024).
SafeRent Solutions LLC Fair Housing Act Settlement (2024). U.S. District Court. (Settlement of $2.275 million.)
News & Investigative Reporting
Bajak, F. (2023, July 25). Cigna health giant accused of improperly rejecting thousands of patient claims using an algorithm. AP News. https://apnews.com
ACLU. (2025, March 19). Complaint filed against Intuit and HireVue over biased AI hiring technology. ACLU Press Release. https://www.aclu.org
Traverse Legal. (2025, July 17). Recent lawsuits against AI companies: Beyond copyright infringement. https://www.traverselegal.com/blog/ai-litigation-beyond-copyright/
Quinn Emanuel. (2025, August 18). When machines discriminate: The rise of AI bias lawsuits. https://www.quinnemanuel.com
CPO Magazine. (2026, January 15). 2026 AI legal forecast: From innovation to compliance. https://www.cpomagazine.com
Expert & Leadership Sources
Buolamwini, J. (2023). Unmasking AI: My mission to protect what is human in a world of machines. Random House.
Buolamwini, J. (2025, February). Rubenstein Lecture, Sanford School of Public Policy, Duke University. Excerpt reported by CBC Radio (May 12, 2025). https://www.cbc.ca/radio/ideas/unmasking-ai-bias-algorithmic-justice-1.7531391
Boston Globe. (2024). Joy Buolamwini — Boston tech leaders. https://www.bostonglobe.com/tech-power-players/year/2024/person/joy-buolamwini-algorithmic-justice-league/
Nadella, S. (2026, January). Remarks at the World Economic Forum, Davos. As reported by PC Gamer (January 21, 2026). https://www.pcgamer.com
Regulatory Sources
European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council — the AI Act. https://eur-lex.europa.eu
Colorado General Assembly. (2024). Colorado Artificial Intelligence Act (SB 24-205). Effective February 2026.
Drata. (2026). Artificial intelligence regulations: State and federal AI laws 2026. https://drata.com/blog/artificial-intelligence-regulations-state-and-federal-ai-laws-2026
Fisher Phillips. (2025). Comprehensive review of AI workplace law and litigation as we enter 2025. https://www.fisherphillips.com