AI in science and medicine has limits. Bias, black boxes, energy costs, and clinical failure modes
— the hard truths the headlines skip.
Let’s be honest with each other. Over the past six episodes, we’ve taken a breathless, thrilling tour through some of the most dazzling achievements in modern science. We’ve watched AlphaFold crack a fifty-year-old biological mystery. We’ve followed AI into the radiology suite, the drug discovery lab, the genome sequencer, and even deep into the atmosphere to model climate. If this were a movie, we’d be approaching the triumphant finale, the hero standing atop a mountain, arms wide. Cue the orchestra.
But here’s the thing about mountains: the view from the top is only good if you got there without ignoring all the warning signs along the way. And in the story of AI in science and medicine, there are plenty of warning signs. Flickering ones. Neon ones. Some are the size of a billboard. And yet — in the rush to announce the revolution — they get quietly folded up and stuffed behind the data dashboard.
That stops today. Episode 7 is the one where we tell the whole truth. Not to puncture the balloon — we’re still excited about AI’s potential, genuinely — but because blind enthusiasm is arguably more dangerous than skepticism. The scientists who have built these tools are the first ones warning us. The patients who depend on them deserve to know the fine print. And anyone making decisions about healthcare, research funding, or policy based on the headlines needs to understand what’s actually happening beneath the surface.
So: let’s talk about the black boxes, the biased datasets, the environmental toll, the failed clinical trials, and the very human question lurking beneath all of it — who gets to benefit from AI’s promises, and who gets left holding the consequences?
Chapter One: The Seductive Lie of the Benchmark
Every promising AI system has a benchmark number attached to it, and if you’re not careful, that number can make you feel like you’ve already won. Ninety-two percent accuracy on a test dataset. Outperforming radiologists by five percent. Detecting pancreatic cancer two years earlier than any clinician. These figures are real, they are exciting, and they are, in many important ways, incomplete.
A landmark 2025 systematic review and meta-analysis, published in npj Digital Medicine, analyzed 83 studies comparing generative AI models to physicians on diagnostic tasks. The headline result sounds encouraging: overall, AI performed on par with doctors, with a pooled diagnostic accuracy of 52.1% across the reviewed studies. But read the footnotes. When pitted against expert physicians rather than non-specialists, AI models performed significantly worse (p = 0.007). More crucially, fewer than five percent of the 500-plus published studies on large language models in healthcare have used real-world patient data. The rest used controlled experiments — simulated scenarios, curated datasets, clinical vignettes — the comfortable fiction of the laboratory (Bedi et al., 2025).
Chart 01 · Diagnostic Performance
Generative AI Diagnostic Accuracy — How Does It Really Stack Up?
Meta-analysis of 83 peer-reviewed studies · June 2018 – June 2024 · Bedi et al., npj Digital Medicine, March 2025
[Chart: diagnostic accuracy of generative AI compared against non-expert and expert physicians]
| “These are contrived experiments that are not the real world. We wouldn’t want to conclude yet that AI is better than the physician plus AI for these tasks — because these are not real-world medical tasks.” — Dr. Eric Topol, MD, Founder and Director, Scripps Research Translational Institute, speaking at RSNA 2024 (Topol, as cited in MedCity News, 2024) |
Topol — arguably the most rigorous voice in evidence-based medical AI — has been consistent on this point for years. In his foundational work Deep Medicine, he noted bluntly that “the field is long on AI promise but very short on real-world, clinical proof of effectiveness” (Topol, 2019). Half a decade later, that observation hasn’t lost its sting. We are still, in large part, running experiments in fishbowls and then making claims about the ocean.
The phenomenon here is what researchers call overfitting to benchmarks — a model that has essentially memorized the patterns of a narrow test set and performs brilliantly in that controlled environment, then stumbles badly when it meets the messy, unpredictable complexity of a real hospital ward. Patients arrive with overlapping symptoms, missing records, atypical presentations, and socioeconomic histories that no clean dataset captures. The algorithm, trained on a curated slice of reality, sees a stranger.
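The mechanism is easy to demonstrate in miniature. The sketch below is a toy illustration, not any real clinical model: a threshold "classifier" is tuned on data from one simulated hospital, then evaluated on a second hospital whose scanner calibration shifts the same biomarker. The internal test score looks great; the external score does not. All numbers and the `make_site` helper are invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_site(n, offset):
    """Simulate one hospital: a single biomarker whose readings carry a
    site-specific calibration offset. Labels: 1 = disease, 0 = healthy."""
    y = rng.integers(0, 2, n)
    x = rng.normal(2.0 * y + offset, 1.0)  # disease shifts the mean by 2
    return x, y

x_a, y_a = make_site(2000, offset=0.0)   # "internal" hospital used for development
x_b, y_b = make_site(2000, offset=1.5)   # "external" hospital, shifted calibration

# "Training": pick the decision threshold that maximizes accuracy on hospital A.
thresholds = np.linspace(-2, 4, 200)
accs = [((x_a > t).astype(int) == y_a).mean() for t in thresholds]
t_best = thresholds[int(np.argmax(accs))]

acc_internal = ((x_a > t_best).astype(int) == y_a).mean()
acc_external = ((x_b > t_best).astype(int) == y_b).mean()
print(f"internal test accuracy: {acc_internal:.2f}")
print(f"external test accuracy: {acc_external:.2f}")
```

The model has learned nothing wrong about hospital A; it has simply learned hospital A, which is exactly the failure external-validation studies keep documenting.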
A 2025 systematic review in Healthcare Technology Letters found that the most common barriers to AI implementation in healthcare were technical challenges (29.8%), problems with technological adoption (25.5%), and reliability and validity concerns (23.4%). These aren’t minor footnotes — they represent the fundamental tension between what an AI can do in a lab and what it can do at 3 a.m. in a rural emergency department with a 12-year-old patient, no specialist on call, and a corrupted imaging file (Mohammadi et al., 2025).
Chart 05 · Implementation Barriers
What’s Actually Blocking AI Adoption in Healthcare?
Systematic review of 47 studies · Scopus, Web of Science & PubMed · Mohammadi et al., Healthcare Technology Letters, 2025
[Chart: distribution of the most frequently reported implementation barriers]
Chapter Two: The Bias We Baked In
If the benchmark problem is a technical failure, the bias problem is a moral one. And it is, without question, the most consequential limitation of AI in medicine today. To understand why, you need to understand something about how machine learning models are trained: they learn from data. They get very good at recognizing the patterns in whatever they’re shown. And here’s the uncomfortable truth — the data they’ve been shown, overwhelmingly, reflects the patients who have historically had the most access to high-quality medical care.
In dermatology, this produces a crisis with life-or-death stakes. Melanoma is the deadliest of the common skin cancers, in large part because of how readily it metastasizes. Caught early, survival rates are excellent. Diagnosed late, outcomes are devastating. AI tools for melanoma detection have achieved impressive accuracy rates — on predominantly light-skinned datasets. Research by Dr. Roxana Daneshjou and colleagues at Stanford University, published in Science Advances, created the Diverse Dermatology Images (DDI) dataset — the first publicly available, pathologically confirmed image set with diverse skin tones. Their findings were stark: state-of-the-art AI models showed substantial performance limitations on dark skin tones, and the dermatologists who had labeled those training datasets showed the same bias in their own assessments (Daneshjou et al., 2022).
| “Unfairness in the teaching materials equates to unfairness in society.” — Dr. Roxana Daneshjou, Dermatologist and Biomedical Data Scientist, Stanford University (as cited in Stanford HAI, 2022) |
A 2024 study from Northwestern University, published in Nature Medicine, added another disturbing layer. When deep learning decision support was introduced alongside physician consultations, overall dermatological diagnostic accuracy improved by 33 percent for specialists and 69 percent for primary care physicians. Wonderful headline. But the accuracy gains were not evenly distributed. For primary care providers, AI assistance exacerbated the accuracy gap between light- and dark-skinned patients by five percentage points. The AI didn’t cause the bias — but it amplified the existing human bias that had been embedded in training data. As lead researcher Matthew Groh noted: “Our study reveals that there are disparities in accuracy of physicians on light versus dark skin. And in this case, it’s not the AI that is biased, it’s how physicians use it” (Groh et al., 2024).
Then there’s the generative AI layer. A 2025 study published in the Journal of the European Academy of Dermatology and Venereology tested four major AI image-generation platforms — Adobe Firefly, ChatGPT-4o, Midjourney, and Stable Diffusion — across 4,000 generated dermatology images. Only 10.2% depicted dark skin tones. And of all the generated images, only 15% accurately depicted the intended skin condition (Joerg et al., 2025). The models being used to train future doctors, build future diagnostic tools, and generate future educational materials are learning from a funhouse mirror of reality — one that overwhelmingly reflects patients who look a certain way.
Chart 03a · Algorithmic Bias
AI-Generated Dermatology Images: Skin Tone Representation Crisis
4,000 AI-generated images across 20 common skin conditions · Joerg et al., Journal of the European Academy of Dermatology and Venereology, July 2025
[Chart: skin tone distribution in AI-generated images vs. the U.S. Census, across the four platforms tested]
The philosophical weight here is hard to overstate. When we say AI has democratizing potential — that it can bring expert-level diagnosis to underserved communities, rural hospitals, and developing countries — that promise evaporates the moment the system is less accurate for the patients those communities are most likely to serve. The algorithmic bias problem isn’t a technical bug to be patched in the next release. It’s a structural consequence of who designed these systems, whose data was available, and whose experiences were deemed legible to a machine.
| ⚠️ 5 Key Takeaways: What AI Can’t Do (Yet) * AI performs significantly worse than expert specialists — despite matching non-specialist accuracy in controlled studies. * Only 5% of LLM healthcare studies use real-world patient data; most rely on simulated scenarios. * AI amplifies existing human bias in medical data — particularly against darker-skinned patients. * Training a large language model can consume as much energy as hundreds of average U.S. homes for an entire year. * Wet-lab validation and human clinical judgment remain irreplaceable — no AI drug candidate has reached full FDA approval through AI-only design. |
Chapter Three: Black Boxes, Bad Explanations, and the Problem of Trust
Imagine your cardiologist hands you a diagnosis. You ask: why? They can walk you through the evidence — the ECG reading, the troponin levels, the family history, the clinical picture assembled over years of training and pattern recognition. Now imagine an AI hands you the same diagnosis. You ask: why? And the answer is essentially: we trained it on a very large dataset and this is what the model returned. Thank you, next patient.
This is the explainability crisis — sometimes called the “black box” problem — and it is one of the thorniest unresolved challenges in applied medical AI. Deep neural networks, the architecture behind most high-performing medical AI, are remarkable at finding patterns but notoriously bad at explaining themselves in human-comprehensible terms. They operate by adjusting billions of parameters across multiple layers of computation. The final output emerges from a process that no individual human designed or can fully trace.
For clinical medicine, this matters enormously. A 2025 paper in Frontiers in Medicine identified three interconnected failure modes in AI diagnostic systems: data pathology (biases in training sets), algorithmic bias (overfitting to spurious correlations), and human-AI interaction issues — specifically, what researchers call automation complacency, the dangerous tendency for clinicians to defer to an AI output without applying their own critical judgment. When a system says “cancer likely” and the doctor doesn’t know why the system said it, how do they push back? How do they catch an error that the model is confidently wrong about? The result can be delays in clinical workflows, missed corrections, and in worst cases, real patient harm (Li et al., 2025).
The regulatory world is acutely aware of this. As of August 2024, the U.S. Food and Drug Administration had authorized approximately 950 medical devices that use AI or machine learning — the vast majority designed for detection and diagnosis. But authorization is not the same as widespread adoption. Legal liability questions remain unresolved: if an AI-assisted diagnosis is wrong, who is responsible? The clinician who deferred to the system? The hospital that deployed it? The company that built it? These questions are not hypothetical. They are being actively litigated and legislated. And they are keeping many excellent AI tools stuck in regulatory limbo while the field advances faster than governance can follow (Government of Canada, 2025).
Chapter Four: The Staggering Environmental Cost No One Talks About
Here’s a fact that tends to get buried at the very bottom of the press release, usually beneath seven bullet points about breakthroughs in drug discovery: training a large AI model is extraordinarily energy-intensive. Running it at scale, serving millions of queries per day across healthcare systems around the world, requires an infrastructure that is thirsty, power-hungry, and growing at a rate that strains both electrical grids and municipal water supplies.
A 2021 research paper from Google and the University of California, Berkeley, estimated that training GPT-3 alone consumed 1,287 megawatt-hours of electricity — enough to power approximately 120 average U.S. homes for an entire year — while generating roughly 552 tons of carbon dioxide (Patterson et al., 2021, as cited in Bashir & Olivetti, 2025).
And that was just one model. For 2024, the International Energy Agency estimated that AI-specific servers consumed between 53 and 76 terawatt-hours within U.S. data centers alone, with projections reaching 165 to 326 TWh by 2028. Globally, data center electricity consumption was approximately 415 TWh in 2024 — roughly 1.5 percent of global electricity use — and is projected to nearly double to 945 TWh by 2030, the equivalent of Japan’s current national electricity demand (IEA, 2025).
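These equivalences are easy to reproduce with back-of-the-envelope arithmetic. One caveat: the 10.7 MWh average-annual-household figure below is our assumption (it approximates the commonly published U.S. residential average of about 10,700 kWh per year), not a number taken from the studies themselves.

```python
# Training energy for GPT-3, per the 2021 Google/UC Berkeley estimate.
gpt3_training_mwh = 1_287

# Assumed average annual U.S. household electricity use (~10,700 kWh/year).
avg_us_home_mwh_per_year = 10.7

homes_powered_for_a_year = gpt3_training_mwh / avg_us_home_mwh_per_year
print(f"~{homes_powered_for_a_year:.0f} homes for one year")

# Implied global electricity total if 415 TWh of data-center use is ~1.5%.
global_dc_twh_2024 = 415
implied_global_twh = global_dc_twh_2024 / 0.015
print(f"implied global electricity use: ~{implied_global_twh:,.0f} TWh")
```

The first division recovers the "approximately 120 homes" comparison, and the second implies a global electricity total in the high-20,000s of TWh, consistent with published worldwide consumption figures.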
Chart 06 · Environmental Impact
The Hidden Cost of Running AI: Carbon, Water & Energy in 2025
de Vries-Gao, Cell Patterns (Dec 2025) · IEA Global Energy Review (2025) · Xiao et al., Nature Sustainability (Nov 2025)
[Chart: 2025 estimates for AI carbon emissions and water use, 2024 electricity consumption, and projected U.S. AI data center water use — roughly the annual household water use of 6–10 million Americans (Xiao et al., Nature Sustainability, 2025)]
A landmark peer-reviewed analysis published in Patterns, a Cell Press journal, in December 2025 put even starker numbers on the table. Researcher Alex de Vries-Gao estimated that AI systems alone could generate between 32.6 and 79.7 million tons of CO₂ emissions in 2025 — a carbon footprint equivalent to that of New York City. The water footprint is equally alarming: between 312.5 and 764.6 billion liters — a range roughly equivalent to the entire global annual consumption of bottled water (de Vries-Gao, 2025). Every 100-word prompt you type into an AI system is estimated to use roughly one small bottle of water in cooling costs (EESI, 2025).
Cornell University researchers, publishing in Nature Sustainability in November 2025, modeled the U.S. AI infrastructure specifically and found that under the current rate of growth, by 2030 AI data centers could annually emit 24 to 44 million metric tons of carbon dioxide — the equivalent of adding 5 to 10 million cars to U.S. roads — and drain 731 to 1,125 million cubic meters of water per year, equivalent to the household water usage of 6 to 10 million Americans (Xiao et al., 2025).
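The household-water equivalence in the Cornell figures can be cross-checked with simple division. The roughly 300 liters per person per day used below is our assumption (a commonly cited U.S. household average), not a number taken from the paper itself:

```python
# Projected annual water draw of U.S. AI data centers by 2030 (Xiao et al.).
low_m3, high_m3 = 731e6, 1_125e6  # cubic meters per year

# Assumed U.S. household water use: ~300 liters per person per day.
liters_per_person_per_year = 300 * 365  # 109,500 L ≈ 109.5 m³

people_low = (low_m3 * 1_000) / liters_per_person_per_year
people_high = (high_m3 * 1_000) / liters_per_person_per_year
print(f"equivalent to {people_low/1e6:.1f}–{people_high/1e6:.1f} million people")
```

Under that assumption, the projected range works out to roughly 7 to 10 million people — in line with the paper's "6 to 10 million Americans" comparison.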
Now: none of this means AI in medicine is not worth pursuing. The question is about honesty and proportion. When we talk about AI accelerating climate science or helping us detect coral reef degradation, we should also be asking: what is this system’s own ecological cost? Is the infrastructure that trains a cancer-detection model powered by renewable energy, or by coal? Is the data center sitting on top of an aquifer in an already drought-stressed region? These are not abstract philosophical questions. They are engineering and policy choices being made right now, mostly without public deliberation.
Chapter Five: A Philosophical Interlude — Who Owns the Revolution?
There is a philosophical debate running quietly beneath the surface of every conversation about AI in science and medicine, and it’s time to name it directly. It is the question of concentration versus democratization. Who owns the revolution, and who gets swept up in its wake?
The most powerful AI tools in medicine — the foundation models, the protein structure predictors, the genomic risk calculators — are overwhelmingly built and owned by a small number of extremely well-resourced actors: large technology companies, elite research universities, and wealthy governments. The computing infrastructure required to train these models costs hundreds of millions of dollars. The proprietary datasets that give them their edge are closely guarded. The regulatory frameworks that govern their deployment are still nascent, largely written in the language of the jurisdictions that can afford to write them.
Google CEO Sundar Pichai, speaking candidly about the challenges facing AI development, acknowledged the unsolved technical dimensions of this moment: “Hallucination is not a solved problem. I think we are all making progress on it, and there’s more work to be done. There are some fundamental limitations we need to work through.” He later noted, looking ahead: “The hill is steeper. When I look at [2025], the low-hanging fruit is gone” (Pichai, as cited in MIT Technology Review, 2023; CNBC, 2024). When the CEO of the world’s most powerful AI company describes climbing a steeper hill, it’s worth asking: who is doing the climbing, and who is being left at base camp?
| “Without sufficient caution, we may irreversibly lose control of autonomous AI systems, rendering human intervention ineffective.” — Yoshua Bengio, Geoffrey Hinton, and co-authors, Managing Extreme AI Risks Amid Rapid Progress (2024), published paper co-signed by Turing Award winners and submitted to the U.S. Senate Subcommittee on Privacy, Technology, and the Law |
The North-South divide in AI access is real and widening. High-income countries are racing to build AI diagnostic tools for their radiology departments. Low- and middle-income countries, which carry a disproportionate share of the global burden of preventable disease, are simultaneously the populations for whom AI-assisted screening could be most transformative — and the ones least likely to have the infrastructure, trained personnel, or regulatory capacity to deploy these systems safely. The irony is almost baroque: AI promises to democratize medical expertise, but its current architecture tends to consolidate power.
This is not an argument against developing AI in medicine. It is an argument for building it differently — with diverse datasets, open-access publication norms, attention to low-resource settings from the design phase, and governance frameworks that center equity, not just accuracy. The ethical question is not whether AI will change medicine. It already is. The question is whether the changes will close gaps or widen them.
Chapter Six: The Valley of Technical Realities
Let’s also be specific about a few technical realities that the breathless press releases often gloss over. Because the gap between what AI can do in a controlled experiment and what it can do in clinical deployment is full of real obstacles that deserve more than a parenthetical asterisk.
Data quality and the garbage-in problem. Machine learning is only as good as its training data. Healthcare data is famously fragmented, inconsistently formatted, siloed across incompatible systems, and rife with entry errors. Electronic health records contain contradictions, abbreviations, missing fields, and documentation shaped by billing incentives rather than clinical accuracy. An AI model trained on this patchwork learns the patchwork. The garbage-in-garbage-out principle is not a metaphor in this context — it is a literal description of how models fail in deployment.
The reproducibility crisis. Science already had a reproducibility problem before AI arrived. Now AI is bringing its own. Many high-profile AI models in medicine have not been independently validated in external populations. Studies are often published with impressive accuracy metrics on internal test sets, but fail to replicate when run on data from a different hospital, a different country, or a different time period. The model has learned the idiosyncrasies of a particular institution’s data collection practices — not the underlying biology.
Integration and workflow friction. Even a perfectly accurate AI model is useless if it cannot be smoothly integrated into clinical workflows. Most hospitals run on legacy systems that are not designed to communicate with modern machine learning pipelines. Clinicians — who are already stretched to the breaking point by administrative burdens — face additional training requirements, liability uncertainties, and trust deficits when new AI tools arrive. A system that is technically excellent but practically unusable is still clinically useless.
Drug discovery’s uncomfortable reality. We have heard a great deal about AI-designed drug candidates, and we covered this in Episode 3. But it is worth reiterating a hard truth here: as of early 2026, no drug designed primarily through AI has yet received full regulatory approval and reached patients at scale. Promising candidates are in trials. The pipeline is real. But the gap between a compelling molecule in a simulation and a safe, effective drug in a human being is bridged by years of wet-lab validation, clinical trials, and regulatory review that no algorithm can shortcut. The chemistry still has to happen. The immune system still has to cooperate. The biology is still stubbornly, beautifully, maddeningly complex.
Chart 07 · Drug Discovery Pipeline
From Molecule in a Machine to Medicine in a Body: Where We Actually Are
Status as of early 2026 · No AI-primarily-designed drug has received full FDA/EMA regulatory approval at scale
[Chart: most drug candidates still fail — with or without AI]
| 📚 Additional Reading * Topol, E. J. (2019). Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. Basic Books. * Bengio, Y., et al. (2024). Managing extreme AI risks amid rapid progress. arXiv:2310.17688 [updated 2024]. * Daneshjou, R., et al. (2022). Disparities in dermatology AI performance on a diverse, curated clinical image set. Science Advances, 8(32). * MIT News (2025). Explained: Generative AI’s environmental impact. Massachusetts Institute of Technology. * Groh, M., et al. (2024). Deep learning-aided decision support for diagnosis of skin disease across skin tones. Nature Medicine. |
Conclusion: The Gift of Honest Enthusiasm
None of what you’ve just read should make you less excited about AI in science and medicine. In fact, we’d argue it should make you more excited — in a more sustainable, clear-eyed way. The researchers doing this work know the limitations better than anyone, and they’re still showing up. They’re building more diverse datasets, experimenting with explainable AI architectures, pushing for open science norms, and designing governance frameworks that can keep pace with the technology.
The danger is not enthusiasm. The danger is the kind of enthusiasm that crowds out scrutiny. That insists on only reading the press release and never the methods section. That deploys half-baked AI tools in clinical settings because the headline accuracy was impressive, without asking: accurate for whom? In what context? Validated how?
We’ve covered the peaks in this series. Episode 7 is about the terrain. The crevasses, the false summits, the altitude sickness that sets in when you move too fast. Understanding limitations isn’t pessimism — it’s navigation. And the destinations in this story — diseases caught earlier, drugs designed faster, biology understood more deeply, medicine distributed more equitably — are worth navigating toward, carefully, honestly, and with eyes fully open.
Next week, in our final episode, we look forward: the next decade of AI in science and medicine, the frontiers just coming into view, and the choices — technological, ethical, and political — that will determine whether the revolution we’ve been describing actually reaches the people who need it most.
References
- Bashir, N., & Olivetti, E. A. (2025, January). Explained: Generative AI’s environmental impact. MIT News. https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117
- Bedi, N., et al. (2025, March). A systematic review and meta-analysis of diagnostic performance comparison between generative AI and physicians. npj Digital Medicine, 8. https://doi.org/10.1038/s41746-025-01543-z
- Bengio, Y., Hinton, G., Yao, A., et al. (2024). Managing extreme AI risks amid rapid progress. arXiv:2310.17688. https://arxiv.org/abs/2310.17688
- Cornell University. (2025, November). ‘Roadmap’ shows the environmental impact of AI data center boom. Cornell Chronicle. https://news.cornell.edu/stories/2025/11/roadmap-shows-environmental-impact-ai-data-center-boom
- Daneshjou, R., Vodrahalli, K., Novoa, R. A., et al. (2022). Disparities in dermatology AI performance on a diverse, curated clinical image set. Science Advances, 8(32), eabq6147. https://doi.org/10.1126/sciadv.abq6147
- de Vries-Gao, A. (2025, December). The carbon and water footprints of data centers and what this could mean for artificial intelligence. Patterns (Cell Press). https://www.cell.com/patterns/fulltext/S2666-3899(25)00278-8
- EESI (Environmental and Energy Study Institute). (2025). Data centers and water consumption. https://www.eesi.org/articles/view/data-centers-and-water-consumption
- Government of Canada. (2025). 2025 Watch List: Artificial intelligence in health care. NCBI Bookshelf. https://www.ncbi.nlm.nih.gov/books/NBK613808/
- Groh, M., et al. (2024, February 5). Deep learning-aided decision support for diagnosis of skin disease across skin tones. Nature Medicine. Northwestern University. https://news.northwestern.edu/stories/2024/02/new-study-suggests-racial-bias-exists-in-photo-based-diagnosis-despite-assistance-from-fair-ai
- IEA (International Energy Agency). (2025). Global Energy Review 2025 and Energy and AI report. https://www.iea.org
- Joerg, V., et al. (2025). AI-generated dermatologic images show deficient skin tone diversity and poor diagnostic accuracy: An experimental study. Journal of the European Academy of Dermatology and Venereology. https://doi.org/10.1111/jdv.20849
- Li, Y., Yi, X., Fu, J., Yang, Y., Duan, C., & Wang, J. (2025). Reducing misdiagnosis in AI-driven medical diagnostics: A multidimensional framework for technical, ethical, and policy solutions. Frontiers in Medicine, 12, 1594450. https://doi.org/10.3389/fmed.2025.1594450
- Mohammadi, S., et al. (2025). Artificial intelligence challenges in the healthcare industry: A systematic review of recent evidence. Healthcare Technology Letters, 12(1), e70017. https://doi.org/10.1049/htl2.70017
- Pichai, S. (2023, December 6). Google CEO Sundar Pichai on Gemini and the coming age of AI. MIT Technology Review. https://www.technologyreview.com/2023/12/06/1084539/google-ceo-sundar-pichai-on-gemini-and-the-coming-age-of-ai/
- Pichai, S. (2024, December 8). Google CEO Sundar Pichai: AI development is finally slowing down. CNBC. https://www.cnbc.com/2024/12/08/google-ceo-sundar-pichai-ai-development-is-finally-slowing-down.html
- Stanford HAI. (2022). AI shows dermatology educational materials often lack darker skin tones. Stanford Human-Centered AI. https://hai.stanford.edu/news/ai-shows-dermatology-educational-materials-often-lack-darker-skin-tones
- Topol, E. J. (2019). Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. Basic Books. Scripps Research Institute press summary: https://www.scripps.edu/news-and-events/press-room/2019/20190312-topol-deep-medicine.html
- Topol, E. J. (2024, December). Generative AI studies boast promising results, but real-world challenges remain [address at RSNA 2024]. As cited in: MedCity News. https://medcitynews.com/2024/12/generative-ai-llm-healthcare/
- Xiao, T., et al. (2025, November). Roadmap for sustainable AI computing infrastructure. Nature Sustainability. As reported by Cornell University. https://news.cornell.edu/stories/2025/11/roadmap-shows-environmental-impact-ai-data-center-boom
Additional Resources
1. Scripps Research Translational Institute — Dr. Eric Topol’s Ground Truths Substack: https://erictopol.substack.com — Evidence-based weekly analysis of AI in medicine and science.
2. Stanford Human-Centered AI (HAI): https://hai.stanford.edu — Rigorous research and policy analysis at the intersection of AI and human values.
3. Center for AI Safety: https://www.safe.ai — Nonprofit focused on reducing catastrophic and existential risks from AI.
4. Digiconomist — AI Environmental Footprint Tracker: https://digiconomist.net — Ongoing data and analysis on the energy and environmental costs of AI systems.
5. FDA AI/ML-Based Software as a Medical Device (SaMD): https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device — Official FDA resource on AI medical device regulation and approvals.