From skin cancer detection to mammography AI, machine learning is reshaping how doctors diagnose disease, and what it means when algorithms see more than humans
Prologue: A Missed Signal in the Noise
Somewhere in a stack of chest X-rays taken on an otherwise unremarkable Tuesday morning, there is a shadow. It is small — a few pixels wide on a high-resolution scan, hovering at the edge of a lung lobe, exactly where the rib casts its own oblong shade. A radiologist who has been reading images since before some of his patients were born will glance at this film for an average of three to four seconds. His eyes are good, his instincts are formidable, and his coffee is lukewarm. He moves to the next scan.
The shadow is an early-stage adenocarcinoma. Stage I. Surgically resectable. Five-year survival rate if caught now: above 90%.
Now picture the same X-ray run through a convolutional neural network trained on millions of annotated chest images. The model pauses — metaphorically, of course — on that same region. A bounding box appears. A probability score: 94.7%. A flag is generated and drops into the radiologist’s priority queue.
The shadow gets a second look.
This is not a futuristic scenario. This is medicine in 2025, quietly unfolding in hospitals, clinics, and screening centres across the globe, and it is the story we are diving into head-first in today’s episode. Welcome to The AI Doctor Will See You Now — where we explore the most tangible, most immediately life-changing application of artificial intelligence in the entire series: medical diagnosis.
Chapter One: The Unseen Epidemic of Misdiagnosis
Before we can appreciate what AI brings to the diagnostic table, we need to sit with an uncomfortable truth about the table as it currently exists.
In a landmark study published in BMJ Quality & Safety, researchers at Johns Hopkins University estimated that approximately 800,000 Americans experience death or serious disability annually as a direct result of diagnostic errors (Newman-Toker et al., 2024). Not surgical errors. Not drug interactions. Misdiagnosis: the wrong condition identified, a condition missed entirely, or a correct diagnosis made far too late. Dr. Eric Topol, executive vice president of Scripps Research and one of the most cited physician-scientists in the world, has invoked this figure repeatedly in his advocacy for AI adoption in medicine, describing the situation plainly: “Machine eyes will see things that humans will never see. It’s actually quite extraordinary” (Topol, 2024).
This is not an indictment of physicians. It is an indictment of cognitive architecture. The human brain, for all its spectacular pattern-recognition ability, runs on approximately 86 billion neurons shaped by evolution for tasks that did not include differentiating between a Grade 2 glioma and a metastatic lesion on a T1-weighted MRI sequence. Doctors operate under conditions of chronic cognitive overload, time pressure, incomplete information, and perceptual fatigue. They see dozens of patients in a single shift, many of them presenting with overlapping, ambiguous symptom clusters. They are magnificent, and they are finite.
AI systems, particularly deep learning models trained on medical imaging, do not experience alert fatigue. They do not have off days. They do not confuse one scan for another because their mind wandered during a difficult commute. And in the past five years, the evidence for their diagnostic capabilities has moved decisively from “promising” to “clinically validated.”
The scale of deployment already tells a story worth pausing over. As of December 2024, the U.S. Food and Drug Administration had authorized 1,016 AI and machine learning-enabled medical devices (Morin et al., 2025). Radiology alone accounts for more than 76% of those approvals, with 723 radiology-specific AI tools cleared by mid-2024 (Sivakumar et al., 2025). In 2025 alone, the FDA cleared an additional 295 AI/ML-enabled medical devices, with 62% classified as Software as a Medical Device — tools that exist entirely in code, ready to be deployed across any hospital system with the right infrastructure (Innolitics, 2025). The global market for AI-enabled medical devices was valued at $13.7 billion in 2024 and is projected to exceed $255 billion by 2033 (IntuitionLabs, 2025).
The quiet revolution has arrived. It just didn’t knock.
FDA AI/ML Medical Device Approvals: From Trickle to Torrent
Annual clearances by the U.S. Food and Drug Administration, 1995–2024. Between 1995 and 2015, only 33 AI devices were approved in total; in 2023 alone, the figure was 221, and total devices cleared as of December 2024 stood at 1,016. Source: Sivakumar et al., JAMA Network Open (2025); IntuitionLabs (2025).
Chapter Two: Teaching Machines to See What We Miss
The story of AI in medical imaging is fundamentally a story about a specific type of neural network called a convolutional neural network, or CNN — the architectural cousin of the models that taught computers to recognise your face in a photo. CNNs work by decomposing an image into progressively abstract layers of features: edges, textures, shapes, spatial relationships. Applied to a chest CT, a convolutional model learns not to “look for cancer” in any human sense, but to recognise statistical patterns in pixel distributions that have, in a very large training dataset, correlated with radiologist-confirmed diagnoses.
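To make that layer-by-layer idea concrete, here is a minimal convolutional classifier sketched in PyTorch. It is a toy for illustration only, not any cleared clinical model; the class name, layer sizes, and two-class output are all invented for this example.

```python
import torch
import torch.nn as nn

class TinyChestCNN(nn.Module):
    """Illustrative toy CNN: edges -> textures -> shapes -> prediction."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            # Early filters respond to low-level structure: edges, gradients.
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            # Middle layers combine edges into textures and local shapes.
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            # Deeper layers capture larger spatial relationships.
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))  # raw logits

model = TinyChestCNN()
scan = torch.randn(1, 1, 224, 224)         # one grayscale 224x224 "scan"
probs = torch.softmax(model(scan), dim=1)  # probability per class
```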
The mechanics are elegant and ruthlessly pragmatic. You feed the model thousands — ideally millions — of annotated images. The model makes a prediction. The prediction is compared to the known label. The error propagates backward through the network, adjusting millions of numerical weights infinitesimally. Repeat across the entire dataset, across dozens of training cycles, and the model begins to generalise: not memorising scans, but extracting features that persist across diverse patient populations, imaging equipment, and scan parameters.
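That predict-compare-backpropagate-adjust cycle can itself be sketched in a few lines. The sketch below assumes a model like the toy above and a `loader` yielding annotated (image, label) batches; the optimiser choice and hyperparameters are placeholders, not values from any cited study.

```python
import torch
import torch.nn as nn

def train(model: nn.Module, loader, epochs: int = 10, lr: float = 1e-3) -> None:
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):                 # dozens of passes over the dataset
        for images, labels in loader:
            logits = model(images)          # 1. the model makes a prediction
            loss = loss_fn(logits, labels)  # 2. compared against the known label
            optimizer.zero_grad()
            loss.backward()                 # 3. the error propagates backward
            optimizer.step()                # 4. millions of weights nudged slightly
```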
The real-world results of this process have become increasingly hard to dismiss. Consider diabetic retinopathy, a leading cause of preventable blindness affecting an estimated 103 million people worldwide. IDx-DR became the first FDA-cleared autonomous AI diagnostic system in 2018, and in a multicenter trial of 819 diabetic patients it demonstrated 87% sensitivity and 90% specificity for detecting more-than-minimal diabetic retinopathy — accuracy sufficient to allow primary care clinics to screen patients without requiring an on-site ophthalmologist (IntuitionLabs, 2025). That is not a small thing in a country where specialist access can mean months of waiting and hundreds of miles of travel.
Mammography tells an even more dramatic story. The Mammography Screening with Artificial Intelligence (MASAI) trial, conducted across four sites in Sweden and involving over 105,000 women, is the largest randomised controlled trial of AI in cancer screening ever conducted. Its results, published across multiple phases in The Lancet, The Lancet Oncology, and The Lancet Digital Health, read like a clinical case for rapid adoption. AI-supported screening detected 29% more cancers compared to standard double reading by radiologists, including 24% more early-stage invasive cancers (Lång et al., 2023; Hernström et al., 2025). Perhaps most striking: the final full results published in early 2026 showed a 12% reduction in interval cancers — tumours that develop and grow between screening rounds, typically associated with higher mortality — in the AI-supported group (Gommers et al., 2026). Simultaneously, radiologist screen-reading workload was reduced by 44%. Dr. Kristina Lång, associate professor of diagnostic radiology at Lund University and lead investigator of the MASAI trial, captured the implications cleanly: “AI-supported screening improves the early detection of clinically relevant breast cancers which led to fewer aggressive or advanced cancers diagnosed in between screenings” (Lång, as cited in EurekAlert!, 2026).
MASAI Trial: AI-Supported vs. Standard Mammography Screening
Results from more than 105,000 women in the largest randomised controlled trial of AI in cancer screening, indexed to standard double reading as a 100% baseline: values above 100% indicate improved detection, while values below 100% indicate desirable reductions in workload and in aggressive interval cancers. Source: Lång et al. (2023); Hernström et al. (2025); Gommers et al. (2026), The Lancet series.
In dermatology, a meta-analysis of 53 studies published in npj Digital Medicine found that AI algorithms for skin cancer classification achieved a pooled sensitivity of 87.0% and specificity of 77.1%, compared with 79.8% sensitivity and 73.6% specificity for all clinicians combined — a statistically significant advantage (Manco et al., 2024). A concurrent study led by Professor Eleni Linos at Stanford Medicine’s Center for Digital Health reviewed more than 67,000 evaluations of potential skin cancers by practitioners with and without AI assistance and found that accuracy improved across every level of training when AI guidance was available. Medical students and primary care physicians saw the largest gains — roughly 13 percentage points in sensitivity and 11 points in specificity. “I was surprised to see everyone’s accuracy improve with AI assistance, regardless of their level of training,” Linos told Stanford Medicine’s news team. “This makes me very optimistic about the use of AI in clinical care. Soon our patients will not just be accepting, but expecting, that we use AI assistance to provide them with the best possible care” (Linos, 2024).
AI vs. All Clinicians: Skin Cancer Diagnostic Accuracy
Meta-analysis of 53 studies comparing AI algorithms with combined clinician performance on sensitivity (catching real cancers) and specificity (avoiding false alarms). Source: Manco et al., npj Digital Medicine (2024).
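Because sensitivity and specificity carry the weight in these comparisons, a tiny worked example may help. The counts below are invented to mirror the pooled AI figures from the meta-analysis; they are not the study's actual data.

```python
def sensitivity(tp: int, fn: int) -> float:
    """Fraction of real cancers caught: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Fraction of benign lesions correctly cleared: TN / (TN + FP)."""
    return tn / (tn + fp)

# Hypothetical reader evaluating 100 true cancers and 100 benign lesions:
print(sensitivity(tp=87, fn=13))  # 0.87, matching the pooled AI sensitivity
print(specificity(tn=77, fp=23))  # 0.77, close to the pooled AI specificity
```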
A 2024 European Society of Radiology survey of 572 radiologists confirmed that adoption is accelerating in real clinical practice: 48% of respondents were actively using AI tools in routine work, up from just 20% five years earlier (IntuitionLabs, 2025). The era of AI-assisted diagnosis is not approaching. For tens of thousands of patients, it has already arrived.
Chapter Three: Beyond the Scan — AI Across the Full Diagnostic Spectrum
Radiology and imaging are where the AI diagnostic story is loudest and most legible, but they are not where it ends. The frontier extends across virtually every domain of clinical medicine, often in ways that are less visually spectacular but no less consequential.
In cardiac medicine, AI algorithms trained on electrocardiogram waveforms are demonstrating a kind of pattern-reading sorcery that borders on the uncanny. Researchers at the Mayo Clinic have published work showing that an AI model applied to a standard 12-lead ECG can detect asymptomatic left ventricular dysfunction — a precursor to heart failure — with an area under the curve of 0.93, years before a patient would conventionally receive a diagnosis (Attia et al., 2019). AliveCor’s Kardia 12L, a portable AI-enabled ECG system, received FDA clearance in mid-2024, extending that kind of analytical power to the point-of-care setting (IntuitionLabs, 2025). Retinal imaging, meanwhile, has become a window not just onto the eye but onto the entire vascular system: deep learning models can now predict a patient’s risk of cardiovascular disease, hypertension, kidney disease, and even neurodegenerative conditions like Alzheimer’s from a retinal photograph alone — information invisible to even the most experienced ophthalmologist (Topol, 2024).
In pathology — the foundational discipline of tissue-based diagnosis — AI models trained on whole-slide digital images are matching or exceeding specialist performance on tumour classification tasks. Four foundation models in pathology were published in clinical journals in 2024, capable of performing diagnosis from a single whole-slide image and, in some cases, identifying the underlying genetic mutation driving the cancer and predicting patient prognosis (Topol, 2024).
The intensive care unit represents perhaps the highest-stakes arena for early AI deployment. Sepsis — a life-threatening systemic response to infection — kills approximately 270,000 Americans each year, and its lethality is acutely sensitive to time-to-treatment. AI-based early warning systems trained on electronic health record data, incorporating vital signs, laboratory values, medication records, and nursing assessments, can identify patients at elevated sepsis risk hours before clinical deterioration becomes visible to the care team. Epic Systems’ Sepsis Prediction Model, deployed across many major health systems, exemplifies this approach, though its real-world performance and clinical impact remain areas of active research and some debate (Sendak et al., 2020).
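To show the general shape of such an early-warning system, here is a deliberately simplified, hypothetical risk-score sketch. It is emphatically not Epic's proprietary model or any deployed tool; the features, weights, bias term, and alert threshold are all invented for illustration.

```python
import math
from dataclasses import dataclass

@dataclass
class Vitals:
    heart_rate: float  # beats per minute
    resp_rate: float   # breaths per minute
    temp_c: float      # body temperature, degrees Celsius
    lactate: float     # serum lactate, mmol/L

# Placeholder coefficients; a real model would learn these from EHR data.
WEIGHTS = {"heart_rate": 0.03, "resp_rate": 0.08, "temp_c": 0.4, "lactate": 0.9}
BIAS = -22.0

def sepsis_risk(v: Vitals) -> float:
    """Logistic-regression-style score in [0, 1]."""
    z = BIAS + sum(w * getattr(v, name) for name, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

patient = Vitals(heart_rate=118, resp_rate=24, temp_c=38.6, lactate=3.1)
if sepsis_risk(patient) > 0.5:           # threshold tuned per deployment
    print("flag: elevated sepsis risk")  # lands in the care team's queue
```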
Multi-modal AI approaches — systems that simultaneously process imaging data, genomic sequences, clinical notes, and laboratory results — represent the next frontier. A patient is not a chest X-ray. They are a chest X-ray plus a medication history plus a family history plus a lab panel plus a presenting complaint, and the interaction among these dimensions holds diagnostic information that any single modality will miss. Early multi-modal foundation models trained on diverse clinical data types are beginning to demonstrate that this integration is not merely theoretically appealing but practically achievable at scale.
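A minimal late-fusion sketch makes the point: one small encoder per modality, with the outputs concatenated before a shared diagnostic head so the model can learn cross-modal interactions (say, a borderline imaging finding plus an abnormal lab panel). Every dimension and module below is a placeholder, not any published architecture.

```python
import torch
import torch.nn as nn

class MultiModalDiagnostic(nn.Module):
    def __init__(self, img_dim: int = 64, lab_dim: int = 20,
                 note_dim: int = 128, num_classes: int = 2):
        super().__init__()
        # One encoder per modality (pre-extracted feature vectors assumed).
        self.img_enc = nn.Sequential(nn.Linear(img_dim, 32), nn.ReLU())
        self.lab_enc = nn.Sequential(nn.Linear(lab_dim, 32), nn.ReLU())
        self.note_enc = nn.Sequential(nn.Linear(note_dim, 32), nn.ReLU())
        # The fusion head sees all modalities jointly.
        self.head = nn.Linear(32 * 3, num_classes)

    def forward(self, img, labs, notes):
        fused = torch.cat(
            [self.img_enc(img), self.lab_enc(labs), self.note_enc(notes)], dim=1
        )
        return self.head(fused)  # diagnostic logits

model = MultiModalDiagnostic()
logits = model(torch.randn(1, 64), torch.randn(1, 20), torch.randn(1, 128))
```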
Chapter Four: The Human-AI Partnership — And the Philosophical Fault Lines Beneath It
Here is where our adventure must slow down and reckon with some deeply uncomfortable terrain, because the rise of AI in medical diagnosis does not arrive without philosophical cargo.
The most immediate question is the one clinicians raise most loudly: who is responsible when the algorithm is wrong? If a radiologist misses a lesion on a scan, malpractice law has well-established frameworks for adjudicating that failure. If an AI system trained on 3 million mammograms flags the wrong quadrant, or — perhaps worse — confidently misses a tumour that a human would have caught, the legal and moral architecture for accountability is in its infancy. Is the liability with the clinician who deferred to the AI? The hospital that deployed it? The company that built it? The FDA pathway that authorised it? As of 2025, these questions remain largely unanswered, and the answers will likely vary by jurisdiction, by clinical specialty, and by the precise nature of the AI’s role in the diagnostic workflow.
There is also the question of explainability — the so-called “black box” problem. Most high-performing diagnostic AI systems, particularly deep learning CNNs, do not produce human-interpretable reasoning. They produce a probability score. A clinician asked to integrate that score into a diagnostic decision has no way of knowing whether the model flagged a lesion because it correctly identified pathological tissue architecture, or because something in the preprocessing pipeline produced an artefact that happens to superficially resemble one. Research into explainable AI (XAI) for medical imaging is actively addressing this gap, and there are encouraging results: a 2025 study in Nature Communications found that a dermatologist-like XAI system, which provided domain-specific explanations for its diagnostic reasoning, improved dermatologists’ balanced diagnostic accuracy by 2.8 percentage points compared to standard opaque AI, while also reducing cognitive load on the clinician (Chanda et al., 2025).
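One of the simplest XAI techniques is an input-gradient saliency map, which asks which pixels most moved the model's score. The sketch below illustrates only that general idea; the dermatologist-like system in the cited study produced far richer, domain-specific explanations than raw gradients.

```python
import torch
import torch.nn as nn

def saliency_map(model: nn.Module, image: torch.Tensor,
                 target_class: int) -> torch.Tensor:
    """Return per-pixel influence on the target class score."""
    image = image.clone().requires_grad_(True)
    score = model(image)[0, target_class]  # logit driving the flagged class
    score.backward()                       # gradient of score w.r.t. pixels
    return image.grad.abs().squeeze()      # high values = influential pixels

# Usage with the toy CNN sketched earlier:
# heat = saliency_map(model, torch.randn(1, 1, 224, 224), target_class=1)
```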
Deeper still is a philosophical problem that sits at the very heart of what medicine is. The diagnostic encounter between a physician and a patient is not simply an information-processing task. It is a relationship — one in which the asymmetry of knowledge is tempered by the shared humanity of vulnerability. The doctor who delivers a cancer diagnosis does not merely transmit a probabilistic classification. They hold space for grief, answer questions that the patient doesn’t quite know how to ask, and make eye contact across a desk in a way that communicates something algorithms fundamentally cannot: I see you as a person, not a data point.
Dr. Eric Topol, whose book Deep Medicine (2019) remains perhaps the defining text on AI’s relationship to medical practice, argues that the correct frame for AI in diagnosis is not replacement but liberation: “One of the most important potential outgrowths of AI in medicine is the gift of time. It will take many years for all of this to be actualized, but ultimately it should be regarded as the most extensive transformation in the history of medicine” (Topol, 2019). By offloading the routine, the computational, and the pattern-recognition heavy lifting onto AI systems, physicians are theoretically freed to do the thing no algorithm can replicate: be present with their patients. The average physician today spends fewer than seven minutes in face-to-face conversation per clinical encounter. The promise of “keyboard liberation” — of AI that handles documentation, preliminary image screening, and risk stratification — is the promise of giving those minutes back.
But this promise comes shadowed by risk. A substantial body of research has documented algorithmic bias in medical AI systems, with performance disparities across racial, ethnic, gender, and socioeconomic lines that directly reflect the composition of training datasets. A study published in Science in 2019 demonstrated that a widely used healthcare algorithm systematically underestimated the care needs of Black patients relative to white patients with the same clinical severity, because it used healthcare spending as a proxy for health need — a proxy that encodes decades of structural inequality in healthcare access (Obermeyer et al., 2019). As of 2025, the JAMA Network Open analysis of 903 FDA-approved AI devices found that fewer than one-third of clinical evaluations provided sex-specific performance data, and only one-quarter addressed age-related subgroups (Windecker et al., 2025). The tools entering clinical practice are being validated on populations that do not represent the full diversity of the patients they will be used to diagnose.
This is not a technical glitch. It is an equity crisis embedded in a technological optimism problem.
Chapter Five: What AI Still Cannot Do — And Why That Matters
The exhilarating statistics of AI diagnostic performance carry a small-print caveat that is easy to glide past: most of them are generated in carefully controlled retrospective studies, using curated datasets, with pre-selected patient populations, evaluated against consensus expert labels. Clinical medicine, in all its chaotic, underfunded, under-resourced, multilingual, high-stakes reality, is a different environment entirely.
The Adoption Gap: Clinical Readiness vs. Real-World Deployment
The tools are approved and the evidence is mounting, yet clinical uptake tells a more complicated story, with a stark divide between European and U.S. practice. Source: ESR Survey (2024) via IntuitionLabs (2025).
The gap between benchmark performance and real-world clinical deployment is one of the most persistent and underreported challenges in medical AI. A 2025 analysis in JAMA Network Open of 903 FDA-approved AI devices found that clinical performance studies were reported for only 55.9% of analysed devices at the time of approval, with 24.1% of submissions explicitly stating that no such study had been conducted (Windecker et al., 2025). In radiology specifically, only 8.1% of clinical evaluations were prospective studies — the gold standard — and just 2.4% used randomised clinical designs (Sivakumar et al., 2025). Many AI diagnostic tools are being approved, marketed, and deployed in clinical settings on the basis of retrospective analyses whose generalisability to diverse patient populations, equipment configurations, and clinical workflows remains unproven.
There is also the structural challenge of integration. An AI diagnostic tool does not slot into a healthcare system the way a new stethoscope does. It requires institutional IT infrastructure, clinician training, workflow redesign, ongoing performance monitoring, reimbursement coding, liability insurance coverage, and interoperability with existing electronic health record systems — each of which represents a potential failure point. A 2024 U.S. report estimated that despite the extraordinary growth in FDA approvals, only approximately 2% of radiology practices in the United States were actively using AI diagnostic tools in routine clinical care (IntuitionLabs, 2025). The pipeline is filling with approved tools. The clinical translation pipeline is still under construction.
The bottom line is not discouraging — it is clarifying. AI in medical diagnosis is not a coming revolution that will spontaneously fix medicine. It is a collection of powerful tools, some rigorously validated and many still unproven in real-world settings, that require careful, evidence-based, equitable deployment within systems designed to support them. The algorithms are ready for medicine. It is less clear that medicine — as organised, as resourced, as regulated — is fully ready for the algorithms.
Epilogue: The Second Set of Eyes That Never Sleeps
Back to our chest X-ray. The radiologist returns to his queue. The AI flag is there, annotated with a confidence score and a differential: possible pulmonary nodule, right lower lobe. He zooms in. He hadn’t clocked it. Now he does. He orders a follow-up CT.
Three months later, the patient — a 54-year-old ex-smoker who came in for a pre-employment health check — is told that she has a Stage IA adenocarcinoma. It is resected. She does not require chemotherapy. She goes home.
This is not a story about a machine that replaced a doctor. It is a story about a machine that made a good doctor better, and about a patient who walked out of a surgeon’s office instead of a hospice. It is, in miniature, the story of what AI in medical diagnosis can be when it is developed thoughtfully, validated rigorously, deployed equitably, and kept always in its proper place: as the second set of eyes, the tireless screener, the pattern-recognition partner — in service of the irreplaceable human being at the other end of the stethoscope.
The AI doctor will see you now. But the doctor — the real one — will still be in the room.
📚 Reference List
- Attia, Z. I., Kapa, S., Lopez-Jimenez, F., McKie, P. M., Ladewig, D. J., Satam, G., Pellikka, P. A., Enriquez-Sarano, M., Noseworthy, P. A., Munger, T. M., Asirvatham, S. J., Scott, C. G., Carter, R. E., & Friedman, P. A. (2019). Screening for cardiac contractile dysfunction using an artificial intelligence-enabled electrocardiogram. Nature Medicine, 25(1), 70–74. https://doi.org/10.1038/s41591-018-0240-2
- Chanda, T., Hauser, K., et al. (2025). Dermatologist-like explainable AI enhances melanoma diagnosis accuracy: Eye-tracking study. Nature Communications, 16, Article 4582. https://doi.org/10.1038/s41467-025-59532-5
- Gommers, J., Lång, K., Hofvind, S., Larsson, A.-M., Josefsson, V., Hernström, V., & Rodríguez-Ruiz, A. (2026). Interval cancer, sensitivity, and specificity comparing AI-supported mammography screening with standard double reading without AI in the MASAI study. The Lancet. https://doi.org/10.1016/S0140-6736(25)02464-X
- Hernström, V., Josefsson, V., Sartor, H., Schmidt, D., Larsson, A.-M., Hofvind, S., Andersson, I., & Lång, K. (2025). Screening performance and characteristics of breast cancer detected in the Mammography Screening with Artificial Intelligence trial (MASAI). The Lancet Digital Health, 7(3), e175–e183. https://doi.org/10.1016/S2589-7500(24)00267-X
- IntuitionLabs. (2025). AI medical devices: 2025 status, regulation & challenges. https://intuitionlabs.ai/articles/ai-medical-devices-regulation-2025
- IntuitionLabs. (2025). AI in radiology: 2025 trends, FDA approvals & adoption. https://intuitionlabs.ai/articles/ai-radiology-trends-2025
- Innolitics. (2025). 2025 year in review: AI/ML medical device 510(k) clearances. https://innolitics.com/articles/year-in-review-ai-ml-medical-device-k-clearances/
- Kim, C. R., & Linos, E. (2024). AI improves accuracy of skin cancer diagnoses in Stanford Medicine-led study. Stanford Medicine News Center. https://med.stanford.edu/news/all-news/2024/04/ai-skin-diagnosis.html
- Lång, K., Josefsson, V., Larsson, A.-M., Larsson, S., Högberg, C., Sartor, H., Hofvind, S., Andersson, I., & Rosso, A. (2023). Artificial intelligence-supported screen reading versus standard double reading in the Mammography Screening with Artificial Intelligence trial (MASAI). The Lancet Oncology, 24(8), 936–944. https://doi.org/10.1016/S1470-2045(23)00298-X
- Manco, L., Manco, A., Cipolat Mis, L., & Ambrosi, E. (2024). A systematic review and meta-analysis of artificial intelligence versus clinicians for skin cancer diagnosis. npj Digital Medicine. https://doi.org/10.1038/s41746-024-01103-x
- Morin, S. H., Tan, Z., & Kim, T. (2025). How AI is used in FDA-authorized medical devices: A taxonomy across 1,016 authorizations. npj Digital Medicine. https://doi.org/10.1038/s41746-025-01800-1
- Newman-Toker, D. E., Nassery, N., Schaffer, A. C., Yu-Moe, C. W., Clemens, G. D., Wang, Z., Zhu, Y., Tehrani, A. S. S., Fanai, M., Siegal, D., & Kaplan, R. M. (2024). Burden of serious harms from diagnostic error in the USA. BMJ Quality & Safety, 33(2), 109–120. https://doi.org/10.1136/bmjqs-2021-014130
- Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453. https://doi.org/10.1126/science.aax2342
- Sendak, M., Elish, M. C., Gao, M., Futoma, J., Ratliff, W., Nichols, M., Bedoya, A., Balu, S., & O’Brien, C. (2020). “The human body is a black box”: Supporting clinical decision-making with deep learning. Proceedings of the 2020 ACM Conference on Fairness, Accountability, and Transparency, 99–109. https://doi.org/10.1145/3351095.3372827
- Sivakumar, R., & Lue, R. A. (2025). FDA approval of artificial intelligence and machine learning devices in radiology: A systematic review. JAMA Network Open, 8(11), e2542338. https://doi.org/10.1001/jamanetworkopen.2025.42338
- Topol, E. J. (2019). Deep medicine: How artificial intelligence can make healthcare human again. Basic Books.
- Topol, E. J. (2024). Topol discusses potential of AI to transform medicine. NIH Record. https://nihrecord.nih.gov/2024/11/22/topol-discusses-potential-ai-transform-medicine
- Windecker, D., Locher, L., Serra-Burriel, M., & Vokinger, K. N. (2025). Generalizability of FDA-approved AI-enabled medical devices for clinical use. JAMA Network Open. https://pmc.ncbi.nlm.nih.gov/articles/PMC12044510/
📖 Additional Reading
- Topol, E. J. (2019). Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. Basic Books. — The foundational text on AI’s role in transforming clinical practice; essential reading for clinicians and general readers alike.
- Obermeyer, Z., & Emanuel, E. J. (2016). Predicting the future — Big data, machine learning, and clinical medicine. New England Journal of Medicine, 375(13), 1216–1219. https://doi.org/10.1056/NEJMp1606181 — A clear-eyed analysis of both the promise and the structural risks of predictive AI in healthcare settings.
- Rajpurkar, P., Chen, E., Banerjee, O., & Topol, E. J. (2022). AI in health and medicine. Nature Medicine, 28(1), 31–38. https://doi.org/10.1038/s41591-021-01614-0 — A comprehensive 2022 review of AI applications across clinical domains, widely cited in subsequent research.
- Lång, K., et al. (2023). Artificial intelligence-supported screen reading versus standard double reading in the MASAI trial. The Lancet Oncology. — The landmark randomised trial providing the highest level of clinical evidence to date for AI in breast cancer screening.
- Esteva, A., Kuprel, B., Novoa, R. A., Ko, J., Swetter, S. M., Blau, H. M., & Thrun, S. (2017). Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542(7639), 115–118. https://doi.org/10.1038/nature21056 — The seminal 2017 paper that demonstrated AI skin cancer classification at dermatologist level, launching a decade of clinical research in the field.
🔗 Additional Resources
- FDA AI/ML-Enabled Medical Devices Database — The U.S. FDA’s official, publicly accessible database of all cleared and approved AI/ML medical devices, updated quarterly. https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices
- Scripps Research Translational Institute (Dr. Eric Topol) — The research institute led by Dr. Eric Topol, publishing ongoing work on AI in cardiology, genomics, and clinical medicine. https://www.scripps.edu/science-and-medicine/translational-institute/
- Stanford Center for Digital Health — Led by Dr. Eleni Linos, the Stanford Centre for Digital Health conducts peer-reviewed research at the intersection of AI, technology, and clinical outcomes. https://digitalhealth.stanford.edu/
- The Lancet Digital Health — The leading peer-reviewed journal publishing clinical trials, systematic reviews, and policy analysis specifically on digital health and AI in medicine. https://www.thelancet.com/journals/landig/home
- European Society of Radiology (ESR) AI Publications — The ESR produces regular white papers, surveys, and position statements on the clinical integration of AI tools in radiology practice. https://www.myesr.org/ai