A widely used healthcare algorithm underestimated the needs of Black patients, revealing how AI can reinforce systemic bias. This post explores real-world examples, expert insights, and what it takes to build ethical, equitable AI in healthcare and beyond.
Introduction: A Wake-Up Call in the Hospital Hallways
It was a typical Wednesday morning at Midtown General Hospital. Nurses hustled between patients, physicians were deep in rounds, and somewhere in the background, a humming server farm was quietly doing its job—running algorithms that determined who got flagged for special care. On the surface, everything looked efficient, almost futuristic.
But beneath the sleek software interface, something troubling was brewing.
Amira, a young data analyst fresh out of grad school, had recently joined the hospital’s data science team. She loved the idea of using AI to save lives. “Technology doesn’t lie,” she used to say—until one morning, she noticed a strange pattern in the hospital’s care management system.
The AI, designed to identify high-risk patients needing more attention, was consistently flagging fewer Black patients, even when their medical records suggested serious chronic conditions. At first, she thought it was a data input error. But the deeper she dug, the more disturbing the truth became: the algorithm wasn’t making decisions based on medical needs—it was using historical healthcare spending as a stand-in for health.
In simple terms? If a patient hadn’t spent much on healthcare in the past, the AI assumed they didn’t need much help now. But due to decades of unequal access, many Black patients had lower recorded spending—not because they were healthier, but because they had fewer opportunities to seek care in the first place.
What started as a well-intentioned AI project had turned into an invisible gatekeeper, quietly amplifying systemic bias under the guise of efficiency.
This real-world example, later published in a landmark Science study (Obermeyer et al., 2019), became a wake-up call for hospitals, tech companies, and policymakers alike. It wasn’t just about healthcare—it was about the soul of artificial intelligence itself. Could we really trust machines to make fair decisions? Or were we just baking our own prejudices into silicon and code?
On this Wisdom Wednesday, we take a deep dive into this story—not to vilify AI, but to explore the nuanced, and sometimes uncomfortable, truth: that artificial intelligence is only as wise as the data and values we feed it.
Grab your coffee (or tea), and let’s talk about what happens when algorithms meet ethics—and why wisdom matters more than ever in the age of AI.
The Case: When Algorithms Mirror Our Biases
Racial bias in artificial intelligence isn’t some far-off dystopian concept—it’s already here, embedded in systems we use every day, often without even realizing it. While AI promises precision, speed, and neutrality, the reality is a bit messier. Because when you feed a machine data that reflects a biased world, you get a biased machine.
What Is Racial Bias in AI, Anyway?
Let’s break it down: racial bias in AI occurs when an algorithm produces systematically less favorable outcomes for certain racial or ethnic groups. This isn’t because the machine is racist (after all, it doesn’t feel anything), but because it’s trained on historical data. And history, as we know, is full of inequality.
In healthcare, that inequality has been especially pronounced. Studies show that Black Americans face higher rates of chronic illness, yet often receive less preventative care, pain treatment, and access to specialists compared to white patients (Artiga et al., 2020). When AI systems learn from this data, they don’t correct it—they learn to replicate it.
The 2019 Study That Shook Healthcare
In a bombshell 2019 study published in Science, a team of researchers led by Dr. Ziad Obermeyer revealed that a widely used healthcare algorithm was significantly underestimating the health needs of Black patients (Obermeyer et al., 2019). This wasn’t some niche program—it was used by hospitals and insurers across the United States to guide care decisions for roughly 200 million people each year.
Here’s how it worked: the algorithm predicted which patients would benefit most from extra help—such as more doctor visits, medication management, or at-home care. But instead of using actual health data (like lab results or diagnoses), it used healthcare spending as a stand-in for health risk.
See the problem?
Because Black patients historically spend less on healthcare—due to systemic barriers like lower access, less insurance coverage, and historic medical mistrust—the algorithm assumed they were healthier. In fact, researchers found that Black patients needed to be sicker than white patients to receive the same score.
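To make the proxy problem concrete, here is a small, purely synthetic sketch in Python. Every number and variable name below is invented for illustration (this is not the actual algorithm from the study): two groups have identical underlying health, but one spends less because of access barriers, so a tool that flags the biggest spenders under-selects that group and only catches its sickest members.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

group = rng.integers(0, 2, size=n)                      # 0 = group A, 1 = group B (historically under-served)
chronic = rng.poisson(2.0, size=n)                      # "true" health need, identical across groups
access = np.where(group == 1, 0.6, 1.0)                 # assumption: group B uses ~40% less care
cost = chronic * access * 1_000 + rng.normal(0, 200, n) # observed spending, the proxy the tool learns from

# A cost-based tool flags the top 20% of spenders for extra care.
flagged = cost >= np.quantile(cost, 0.80)

for g, name in [(0, "group A"), (1, "group B")]:
    mask = group == g
    print(name,
          f"| flag rate: {flagged[mask].mean():.1%}",
          f"| mean chronic conditions among flagged: {chronic[mask & flagged].mean():.2f}")
```

Running this, group B is flagged far less often, and the group B patients who do get flagged carry noticeably more chronic conditions, which mirrors the pattern the researchers describe: at the same risk score, the under-served group has to be sicker.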
The Stats That Hit Hard
The numbers are jaw-dropping:
- The algorithm was less likely to refer Black patients to care programs, despite their equal or greater medical need.
- When the researchers corrected for actual health conditions, they found that Black patients made up only 17.7% of those flagged by the algorithm, when they should have made up 46.5%.
- That means roughly three in five of the Black patients who should have been flagged were missed by the system (Obermeyer et al., 2019).
This wasn’t a coding bug; the bias flowed directly from the design choice of predicting cost. And it had been operating quietly for years.
How Was It Identified?
Uncovering this bias wasn’t easy. It took a cross-disciplinary team of data scientists, doctors, and social scientists to reverse-engineer the algorithm and assess its real-world impact. They ran statistical comparisons between actual patient health outcomes and AI predictions, and the disparities were too large to ignore.
As Dr. Obermeyer said in an interview with NPR, “The algorithm was doing precisely what it was asked to do—predict cost—but it was being used to predict health.”
This mismatch between intended function and actual use is a major ethical blind spot in many AI deployments.
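As a rough sketch of that audit logic (not the study’s actual code), imagine a hypothetical DataFrame `df` with columns `risk_score` (the algorithm’s output), `race`, and `n_chronic` (a count of active chronic conditions, standing in for actual health). Comparing average health across groups within the same risk-score band is one way to surface the mismatch:

```python
import pandas as pd

def audit_by_risk_percentile(df: pd.DataFrame, n_bins: int = 10) -> pd.DataFrame:
    """Compare actual health across groups at the same algorithmic risk level."""
    out = df.copy()
    out["risk_bin"] = pd.qcut(out["risk_score"], q=n_bins, labels=False, duplicates="drop")
    # If mean chronic-condition counts differ by race *within* the same risk bin,
    # the score is not measuring health the same way for every group.
    return out.pivot_table(index="risk_bin", columns="race", values="n_chronic", aggfunc="mean")

# Usage (with your own hypothetical data):
# print(audit_by_risk_percentile(df))
```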
How Long Had It Been Going On?
The exact duration varied by health system, but the algorithm had been in use in various forms for several years before the bias was identified. During that time, millions of care decisions were made based on flawed assumptions. Not intentionally, but invisibly, which might be even more dangerous.
This case sparked a broader conversation in healthcare AI: how many other models are making life-altering decisions based on biased data?
A Quiet Crisis of Trust
Trust is the bedrock of healthcare. But how can patients trust that they’re being cared for equally if decisions are influenced by biased algorithms? As one health equity advocate noted, “Algorithms are invisible. You can’t argue with them. You can’t appeal them. And most people don’t even know they’re being used.”
That invisibility, combined with an illusion of objectivity, makes AI bias in healthcare particularly insidious.
Philosophical Reflections: Can AI Ever Be Truly Objective?
If you’re a glass-of-wine-on-the-porch type of thinker, this is the section for you. Because what happened with the healthcare algorithm isn’t just a technical failure—it’s a philosophical dilemma wrapped in lines of code.
Is AI Objective… Or Just a Mirror?
One of the biggest misconceptions about artificial intelligence is that it’s somehow more “fair” than humans. After all, machines don’t have personal opinions, cultural backgrounds, or unconscious prejudices… right?
Well, sort of. AI systems don’t think like we do, but they learn from us. Every AI model is trained on data—human-made, historically shaped, culturally embedded data. And if that data contains patterns of discrimination or inequality (spoiler: it almost always does), then the AI simply absorbs those patterns without question.
It’s not a moral failure on the part of the machine. It’s a reflection of the world we’ve built.
As Dr. Ruha Benjamin, author of Race After Technology, puts it:
“Machines are not merely reflecting social biases—they’re amplifying them. We are coding our past into our future.”
Who Decides What’s “Fair”?
Fairness seems like a straightforward concept until you try to program it.
- Should everyone be treated exactly the same?
- Or should systems correct for past injustices by giving disadvantaged groups a leg up?
- What’s more important: individual outcomes or group equity?
There’s no one answer. Different stakeholders (patients, doctors, data scientists, ethicists) will often have wildly different definitions of fairness. That makes “fair AI” a moving target—and a deeply philosophical one.
In fact, the IEEE released an entire ethics framework for autonomous and intelligent systems, and it begins not with engineering principles, but with questions of human dignity, agency, and cultural values (IEEE, 2019).
The Illusion of Neutrality
Another big issue is the illusion of neutrality. AI often gets a free pass because it feels “scientific” and “mathematical.” And sure, the algorithms themselves might be neutral, but the moment we choose the training data, the variables to measure, and the outcomes to optimize—we’re making subjective decisions.
Imagine a hospital system deciding that “cost” is the best proxy for “need.” That’s a value judgment. And in this case, it had serious consequences.
Dr. Timnit Gebru, a former AI ethics researcher at Google, has long warned of what she calls “algorithmic monoculture”—when a small, homogeneous group of developers ends up making tools that affect millions without understanding the broader societal context.
“Technology does not exist in a vacuum. Every line of code carries the fingerprint of its creator,” she says.
Can AI Be “Good” Without Being Ethical?
Let’s push the philosophical envelope even further: Can a system be called “good” if it performs with high accuracy, but does so at the expense of justice?
Suppose an AI is 95% accurate in predicting hospital readmissions—but that 5% inaccuracy consistently excludes a specific racial group from critical care. Is that success? Is it acceptable? Do we measure value in terms of lives improved… or lives left behind?
This tension isn’t just academic. As AI becomes more integrated into hiring, policing, education, and finance, we’ll increasingly face ethical trade-offs.
It’s the classic “can we vs. should we?” debate—just with a silicon twist.
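To see how the arithmetic plays out, here is a toy example with entirely invented numbers: a hypothetical readmission model whose misses are concentrated in one group can still report roughly 95% overall accuracy.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, size=n)       # 0 = majority group, 1 = minority group
y_true = rng.binomial(1, 0.10, size=n)   # 10% of patients are actually readmitted

# Hypothetical model: misses 5% of true readmissions in group 0, but 80% in group 1.
p_miss = np.where(group == 1, 0.80, 0.05)
y_pred = np.where((y_true == 1) & (rng.random(n) < p_miss), 0, y_true)

print(f"overall accuracy: {(y_pred == y_true).mean():.1%}")
for g in (0, 1):
    positives = (group == g) & (y_true == 1)
    fnr = (y_pred[positives] == 0).mean()
    print(f"group {g}: false-negative rate among patients who needed care: {fnr:.0%}")
```

The headline number looks excellent; the group-level false-negative rates tell a very different story.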
Questions to Ponder on This Wisdom Wednesday:
- If AI is trained on an unequal world, is it ethical to use it in decision-making without first correcting for that imbalance?
- Who gets to define fairness in a multicultural, multi-opinionated world?
- Should AI systems have a “morality layer”? And if so, whose morality?
- Is it more dangerous for a biased AI to exist, or for people to believe it’s unbiased?
This philosophical rabbit hole isn’t just fascinating—it’s vital. Because as AI becomes more embedded in our lives, we’re not just building smarter systems. We’re making choices about what kind of world those systems will support.
Voices from the Field: Experts Weigh In on AI and Racial Bias
Dr. Ziad Obermeyer: Unveiling Bias in Healthcare Algorithms
Dr. Ziad Obermeyer, an associate professor at the University of California, Berkeley, has been a leading voice in uncovering racial bias in healthcare algorithms. In a pivotal study published in Science, Obermeyer et al. (2019) revealed that a commonly used algorithm systematically underestimated the health needs of Black patients by relying on healthcare costs as a proxy for health needs—a method that unintentionally penalized populations with historically limited access to care.
Obermeyer has stressed that such bias is not an unsolvable technical problem but a matter of rethinking what the algorithm is designed to predict:
“That bias is fixable, not with new data, not with a new, fancier kind of neural network, but actually just by changing the thing that the algorithm is supposed to predict” (Obermeyer, as cited in News-Medical, 2022).
By reorienting algorithmic goals from economic cost to clinical outcomes, AI systems can be recalibrated to serve all patients more equitably.
Dr. Timnit Gebru: Championing Ethical AI
Dr. Timnit Gebru, a computer scientist and founder of the Distributed AI Research Institute (DAIR), has been a leading figure in advocating for fairness, accountability, and transparency in AI systems. Formerly the co-lead of Google’s Ethical AI team, Gebru has focused on the social harms perpetuated by large-scale AI, particularly in facial recognition and language modeling (Gebru, 2021).
Gebru emphasizes the importance of centering the voices of those most impacted:
“I want us to be able to do AI research in a way that we think it should be done—prioritizing the voices that we think are actually being harmed” (Gebru, as cited in Wakabayashi & Metz, 2020).
Her work with DAIR underscores a commitment to community-rooted AI research that resists the monoculture of Silicon Valley and seeks to embed ethics into the core of technological innovation.
Dr. Ruha Benjamin: Examining the Social Dimensions of Technology
Dr. Ruha Benjamin, professor of African American Studies at Princeton University, brings a sociological perspective to the intersection of race and technology. In her book Race After Technology (2019), she introduces the concept of the “New Jim Code”—the idea that modern technologies can reproduce existing racial hierarchies under the guise of neutrality.
Benjamin critiques the false promise of objectivity in tech:
“Invisibility, with regard to Whiteness, offers immunity. To be unmarked by race allows you to reap the benefits but escape responsibility for your role in an unjust system” (Benjamin, 2019, p. 117).
Her scholarship calls for a proactive and justice-oriented approach to technology—one that doesn’t just expose bias but works to dismantle it.
Dr. Cathy O’Neil: Advocating for Algorithmic Accountability
Dr. Cathy O’Neil, mathematician and author of Weapons of Math Destruction (2016), has long warned about the dangers of opaque algorithms used in everything from policing to credit scoring. She argues that without ethical guardrails, algorithms can become “weapons” that entrench inequality.
O’Neil pushes for explicit ethical embedding in algorithm design:
“We have to explicitly embed better values into our algorithms, creating Big Data models that follow our ethical lead. Sometimes that will mean putting fairness ahead of profit” (O’Neil, 2016, p. 217).
Her work challenges developers and policymakers alike to prioritize human dignity and social justice over mere efficiency.
CAT (Curated Action & Thought): Where to Go From Here
Want to stay informed or take action? Here’s your personal “CAT”—a curated guide to help you continue learning, follow expert voices, and support ethical AI initiatives:
📚 Learn
- Book: Race After Technology by Ruha Benjamin – A deep dive into how tech can encode inequality.
- Documentary: Coded Bias – Follows Joy Buolamwini’s journey uncovering facial recognition bias.
- Course: MIT AI Ethics Online Course – A comprehensive introduction to ethical AI design.
🧠 Follow
- Timnit Gebru on Twitter/X – AI researcher and ethicist, often shares resources on responsible tech.
- Algorithmic Justice League – Advocacy group fighting for equitable AI systems.
- Ziad Obermeyer – Physician-researcher focused on fair and transparent AI in medicine.
💬 Engage
- Join forums like AI Now Institute or Partnership on AI to participate in conversations shaping the future of AI.
- Support open-source bias detection tools like Fairlearn or Aequitas (a minimal Fairlearn example is sketched below).
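For instance, here is a minimal Fairlearn sketch, using tiny made-up arrays in place of real model output, that breaks metrics out by group; disaggregation like this is exactly where a single aggregate score tends to hide disparities.

```python
import numpy as np
from fairlearn.metrics import MetricFrame, false_negative_rate, selection_rate
from sklearn.metrics import accuracy_score

# Tiny invented example: true outcomes, model predictions, and group membership.
y_true    = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
y_pred    = np.array([1, 0, 0, 1, 0, 0, 0, 0, 1, 0])
sensitive = np.array(["A", "A", "B", "A", "B", "B", "A", "B", "A", "B"])

mf = MetricFrame(
    metrics={
        "accuracy": accuracy_score,
        "false_negative_rate": false_negative_rate,
        "selection_rate": selection_rate,
    },
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=sensitive,
)

print(mf.overall)    # the headline numbers
print(mf.by_group)   # the same metrics, disaggregated by group
```

In this made-up data the model never misses a group A patient in need but misses every group B patient in need, a gap the overall accuracy alone would never reveal.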
Conclusion: Wisdom Requires Willpower
So here we are—another Wednesday, another bite of wisdom. We’ve journeyed from hospitals to philosophy departments, from data science labs to activist communities. And the takeaway is crystal clear:
Artificial intelligence is powerful, but it’s not magic. It’s a mirror—polished, efficient, and occasionally brutal in its honesty.
If we want AI to help us build a better world, we have to be willing to confront the flaws in the one we already have. That means asking hard questions, listening to marginalized voices, and being relentless in our pursuit of fairness—not just in code, but in society.
Because in the end, wisdom isn’t about having all the answers. It’s about having the courage to ask better questions.
Happy Wisdom Wednesday. ✨
📚 References
- Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim code. Polity Press.
- Gebru, T. (2021). [Keynote address]. In Distributed AI Research Institute (DAIR). Retrieved from https://www.dair-institute.org
- News-Medical. (2022, May 14). Uncovering racial bias in healthcare algorithms. Retrieved from https://www.news-medical.net
- Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453. https://doi.org/10.1126/science.aax2342
- O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown Publishing Group.
- Wakabayashi, D., & Metz, C. (2020, December 2). Google ousts AI researcher Timnit Gebru. The New York Times. Retrieved from https://www.nytimes.com/2020/12/02/technology/google-researcher-timnit-gebru.html
📘 Additional Reading
1. Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
2. Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press.
3. Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.
4. Barocas, S., Hardt, M., & Narayanan, A. (2019). Fairness and machine learning. http://fairmlbook.org
🔧 Additional Resources
- Algorithmic Justice League – Fighting algorithmic bias through art, research, and advocacy. https://www.ajl.org
- AI Now Institute (NYU) – Interdisciplinary research on the social implications of AI. https://ainowinstitute.org
- Partnership on AI – Collaborative organization advancing responsible AI. https://www.partnershiponai.org
- Fairlearn – A Python toolkit for assessing and improving AI fairness. https://fairlearn.org
- Aequitas – A toolkit to audit bias and fairness in AI systems. https://www.datasciencepublicpolicy.org/our-work/tools-guides/aequitas/