AI discovered 300+ exoplanets humans missed & predicted 200M protein structures.
The revolution in scientific discovery is here—and it’s just beginning.
Introduction: When The Universe Blinked
Picture this: It’s December 2017, and somewhere in the massive data archives of NASA’s Kepler Space Telescope, a signal lurks—so faint, so delicate, that human eyes have passed over it countless times. This whisper-thin dip in starlight, just 0.01% dimmer for barely 14 hours, represents a world 2,545 light-years away. A rocky planet, slightly larger than Earth, circling its star every 14.4 days in a scorching orbit that would vaporize any hopes of life as we know it.
But here’s the kicker: we almost never found it.
Enter Christopher Shallue, a senior software engineer at Google Brain, and Andrew Vanderburg, a NASA Sagan Postdoctoral Fellow. Armed with a neural network and an audacious hypothesis, they trained an artificial intelligence to see what human astronomers couldn’t. The result? Kepler-90i—the eighth planet in the Kepler-90 system, making Kepler-90 the first star system known to host as many planets as our own (NASA, 2017). The neural network sifted through 670 star systems with surgical precision, achieving a 96% accuracy rate in distinguishing real planets from cosmic imposters.
“Just as we expected, there are exciting discoveries lurking in our archived Kepler data, waiting for the right tool or technology to unearth them,” Paul Hertz, director of NASA’s Astrophysics Division, noted when announcing the discovery (NASA, 2017).
This isn’t just a cool space story. This is the opening salvo in a revolution that’s transforming how humanity discovers, understands, and manipulates the fundamental fabric of reality. While the world obsesses over ChatGPT composing sonnets and AI generating art, a quieter—but infinitely more consequential—transformation is unfolding in laboratories, observatories, and research institutions worldwide. Artificial intelligence isn’t just assisting scientists anymore. It’s rewriting the rules of what scientific discovery means.
Welcome to the new golden age of discovery, where algorithms dream up hypotheses, silicon chips predict protein structures that took nature billions of years to evolve, and machine learning models peer into the future of materials that don’t yet exist. The question isn’t whether AI will transform science—it already has. The question is: Are we ready for what comes next?
Chapter One: The Old Guard—How Science Used to Work (And Why It Was Gloriously Inefficient)
Let’s rewind to understand what’s actually being revolutionized here. For centuries, scientific discovery followed a rhythm as predictable as the seasons: hypothesis, experiment, observation, analysis, publication, peer review, repeat. It was methodical. It was rigorous. And holy hell, was it slow.
Consider the traditional scientific method in its full, painstaking glory. A researcher observes a phenomenon, formulates a hypothesis, designs an experiment, collects data, analyzes results, writes a paper, submits it for peer review (where it languishes for months), revises, resubmits, and finally—if lucky—publishes. The entire process could take years for a single discovery. According to research on peer review labor, scientists globally spent over 100 million hours on peer reviews alone in 2020, equivalent to over 15 thousand years of human labor, with an estimated monetary value exceeding $1.5 billion USD just for US-based reviewers (Aczel et al., 2021).
Then came the bottlenecks—oh, the beautiful, maddening bottlenecks:
Time: Protein structure determination using X-ray crystallography could take years per protein. The Human Genome Project, launched in 1990, took 13 years and $2.7 billion to sequence one human genome. Drug discovery from target identification to FDA approval? Try 10-15 years and an average cost of $2.6 billion when accounting for capitalized costs and failures, with a failure rate exceeding 90% (DiMasi et al., 2016).
Cost: A single systematic literature review—essentially reading and synthesizing existing research—costs approximately $141,195 on average, factoring in researcher time and expertise (Michelson & Reuter, 2019). Clinical trials burn through hundreds of millions before a drug ever reaches your pharmacy shelf.
Human Cognitive Limits: Here’s the uncomfortable truth—human brains, magnificent as they are, can only hold so much information, spot so many patterns, test so many hypotheses. We’re slow. We get tired. We have biases. We need coffee breaks. The scientific literature has grown explosively—with millions of papers published annually across all disciplines—and no human can keep pace with that tsunami of knowledge, even within a narrow subspecialty.
The traditional hypothesis-driven approach worked beautifully when there were fewer variables to juggle. But modern science? Modern science is drowning in complexity. Climate systems with millions of interacting variables. Genomic data with billions of base pairs. Particle physics with seventeen fundamental particles and four fundamental forces creating an incomprehensibly vast possibility space.
Enter AI, stage left, with a very different set of strengths.
Chapter Two: The New Sheriff in Town—How AI Changes Everything
Artificial intelligence doesn’t get tired. It doesn’t need tenure. It works 24/7/365, processing patterns at speeds that make human cognition look like continental drift. And it’s fundamentally changing scientific discovery across three critical dimensions:
The Pattern Recognition Revolution
While humans excel at recognizing faces and interpreting social cues, AI excels at finding needles in haystacks the size of galaxies. Machine learning algorithms can analyze millions of data points simultaneously, identifying correlations and patterns that would take human researchers lifetimes to spot—if they could spot them at all.
Take the Kepler telescope example. The spacecraft collected four years of continuous observations, generating petabytes of data. Human astronomers developed automated pipelines to flag potential planets, but detecting Earth-sized planets at Earth-like orbital distances remained extraordinarily challenging. The Kepler photometer was engineered to achieve 20-ppm relative precision, yet for reference, an Earth orbiting the Sun would produce only an 84-ppm signal lasting about 13 hours (Batalha, 2014). The false positive rate was astronomical (pun intended). Every promising signal required painstaking human verification.
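Those ppm figures fall straight out of transit geometry: the fractional dimming equals the squared ratio of planetary to stellar radius. A quick back-of-the-envelope check in Python (radii are approximate published values):

```python
# Transit depth: fraction of starlight blocked = (R_planet / R_star)^2
R_SUN = 696_000      # km, solar radius (approximate)
R_EARTH = 6_371      # km
R_JUPITER = 69_911   # km

def transit_depth_ppm(r_planet_km, r_star_km):
    """Fractional flux dip during a central transit, in parts per million."""
    return (r_planet_km / r_star_km) ** 2 * 1e6

earth_dip = transit_depth_ppm(R_EARTH, R_SUN)      # ~84 ppm
jupiter_dip = transit_depth_ppm(R_JUPITER, R_SUN)  # ~10,000 ppm, i.e. ~1%
print(f"Earth: {earth_dip:.0f} ppm, Jupiter: {jupiter_dip:.0f} ppm")
```

The Earth figure lands right at Batalha’s 84 ppm, conveniently smaller than Kepler’s 20-ppm design precision would comfortably resolve in a single transit—hence the need for years of repeated observations.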
Then came the neural networks. By training convolutional neural networks on 15,000 previously vetted signals from the Kepler exoplanet catalogue, Shallue and Vanderburg (2018) created a system that could learn the subtle patterns distinguishing real planetary transits from instrumental artifacts, stellar variability, and other cosmic noise. The system didn’t just match human performance—it identified planets humans had missed, hidden in data we’d already examined. As of 2021, the ExoMiner algorithm alone had validated an additional 301 previously unknown exoplanets (Valizadegan et al., 2022).
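The convolutional layer at the heart of such a network is essentially a bank of learned filters slid along the light curve. A toy NumPy sketch—nothing like the actual Shallue–Vanderburg architecture, just the core operation, with synthetic data and a single hand-set “dip detector” kernel standing in for learned weights:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic light curve: flat star plus photometric noise, with one
# shallow box-shaped transit dip injected at index 240.
n = 500
flux = 1.0 + rng.normal(0, 0.0005, n)
flux[240:260] -= 0.002            # 0.2% dip, 20 samples wide

# One convolutional "feature detector": a negative box kernel that
# responds strongly to dips. In a real CNN these weights are learned
# from thousands of labeled transit / non-transit examples.
kernel = -np.ones(20) / 20.0
response = np.convolve(flux - flux.mean(), kernel, mode="valid")

# The strongest filter response marks the candidate transit.
best = int(np.argmax(response))
print(f"dip detected near index {best}")
```

Stacking many such filters, letting gradient descent choose their shapes, and adding pooling and fully connected layers is what turns this idea into a transit classifier.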
“When ExoMiner says something is a planet, you can be sure it’s a planet,” explained Hamed Valizadegan, ExoMiner project lead and machine learning manager with the Universities Space Research Association (NASA, 2021). Unlike traditional “black box” algorithms, ExoMiner’s design allows researchers to understand exactly which features in the data lead to a classification—transparency that builds trust in AI-assisted discovery.
Simulation and Prediction: Testing Virtually Before Testing Physically
Here’s where things get truly wild: AI can now predict outcomes before experiments are run, structures before they’re built, behaviors before they’re observed. This isn’t fortune-telling—it’s learned pattern recognition applied to known physical laws and observed data.
The crown jewel of this capability? AlphaFold.
In November 2020, at the 14th Critical Assessment of protein Structure Prediction (CASP14) competition, Google DeepMind’s AlphaFold 2 stunned the scientific community by essentially solving the 50-year-old protein folding problem. Proteins—those molecular workhorses that do virtually everything in our bodies—are chains of amino acids that fold into specific three-dimensional shapes. Understanding that shape is crucial to understanding function, but experimentally determining protein structures using X-ray crystallography or cryo-electron microscopy is expensive, time-consuming, and technically challenging.
AlphaFold 2 achieved a median global distance test (GDT) score of 87.0 across protein targets in the challenging free-modeling category, with an overall error of less than the width of an atom (< 1 Angstrom)—making it competitive with experimental methods (Jumper et al., 2021). The CASP organizers declared the problem essentially solved.
But DeepMind didn’t stop there. They used AlphaFold 2 to predict the structures of all 200 million proteins known to science and made them freely available through the AlphaFold Protein Structure Database in collaboration with EMBL-EBI. As Sir Demis Hassabis, CEO of Google DeepMind and co-recipient of the 2024 Nobel Prize in Chemistry for this work, put it: “It’s kind of like a billion years of PhD time done in one year” (Cambridge University, 2025).
Over 2 million researchers from 190 countries now use AlphaFold (DeepMind, 2024). The impact spans drug discovery, understanding disease mechanisms, enzyme design for industrial applications, and research on malaria, antibiotic resistance, and countless other applications. Following the July 2021 release of AlphaFold, University of Chicago structural biologist Tobin Sosnick sent his colleagues an email with the subject line “Revolution in structural biology,” predicting the impact would “rival that of genomics and sequencing” (Sosnick, 2023). Two years later, he confirmed: “AlphaFold has transformed biological and biomedical research for the better.”
Hassabis himself frames AI as humanity’s ultimate discovery tool: “I’ve always thought of AI as the ultimate tool to help us accelerate scientific discovery,” he stated upon receiving the Nobel Prize (DeepMind, 2024). His vision extends beyond current achievements: “I hope we’ll look back on AlphaFold as the first proof point of AI’s incredible potential to accelerate scientific discovery.”
Data Analysis at Superhuman Scale
Modern scientific instruments generate data at rates that dwarf human analytical capacity. The Large Hadron Collider produces about 1 petabyte of data per second during operation. Genomic sequencing facilities process billions of base pairs daily. Climate models run simulations across millions of variables and timescales from milliseconds to centuries.
AI thrives in this data deluge. Machine learning algorithms can process massive datasets, identify anomalies, extract meaningful signals from noise, and generate insights at speeds measured in milliseconds rather than months. According to a 2025 report from Axios, U.S. federal government investment in non-defense AI research and development reached $3.3 billion in fiscal year 2025, while private sector investments exceeded $109 billion in 2024 (Axios, 2025).
Google’s 2025 research review highlighted how AI co-scientist—a multi-agent AI system—helps scientists generate novel hypotheses, with examples including identifying drugs for repurposing to treat liver fibrosis at Stanford and producing in days what took researchers years to develop at Imperial College London on antimicrobial resistance (Google Research, 2025).
These aren’t isolated successes. They represent a fundamental shift in scientific methodology—what some researchers are calling the emergence of a new paradigm: data-driven, AI-augmented discovery.
Chapter Three: Early Wins That Proved the Concept
Every revolution has its “shot heard ’round the world” moments—the demonstrations that prove a concept isn’t just theoretically possible but practically transformative. For AI in scientific discovery, three areas provided early validation that this technology wasn’t just hype:
Astronomical Discovery: Teaching Machines to See the Cosmos
The Kepler Space Telescope’s mission, from 2009 to 2018, was to answer one of humanity’s most profound questions: How common are planets like Earth? The spacecraft stared at the same patch of sky for years, monitoring the brightness of over 150,000 stars, looking for the telltale dimming that occurs when a planet passes in front of its host star.
The challenge? Planetary transits are incredibly subtle—a Jupiter-sized planet causes its star to dim by only about 1%, while an Earth-sized planet produces a signal right at the edge of detection limits (Jenkins et al., 2010). Moreover, countless phenomena can mimic planetary signals: binary star systems, stellar spots, instrumental artifacts, cosmic ray hits. Sorting real planets from false positives required an army of human experts and citizen scientists.
Machine learning changed the game. By training neural networks on known examples of both confirmed planets and false positives, researchers created systems that could learn the subtle patterns distinguishing one from the other. The neural network developed by Shallue and Vanderburg achieved 98.8% accuracy in ranking real planetary signals higher than false positives (Shallue & Vanderburg, 2018).
The practical impact was immediate. Not only did these AI systems identify previously-missed planets in archival data, but they also demonstrated scalability. ExoMiner, the deep neural network developed by NASA researchers, validated 301 new planets in one analysis (Valizadegan et al., 2022). “Unlike other exoplanet-detecting machine learning programs, ExoMiner isn’t a black box—there is no mystery as to why it decides something is a planet or not,” explained Jon Jenkins, exoplanet scientist at NASA’s Ames Research Center (NASA, 2021).
But astronomical discovery represents just one domain. The techniques are being adapted for TESS (Transiting Exoplanet Survey Satellite) and future missions, with the potential to accelerate planetary discovery across the galaxy.
Drug Compound Screening: Finding Needles in Molecular Haystacks
Drug discovery is an exercise in astronomical probability. The chemical space of potentially drug-like molecules exceeds 10^60 compounds—more possibilities than there are atoms in the observable universe. Traditional high-throughput screening can test thousands to millions of compounds, but that’s a rounding error in the vastness of chemical space.
AI offers a different approach: rather than randomly screening compounds, machine learning models can learn from existing data what molecular structures are likely to bind to specific biological targets, have favorable pharmacological properties, and avoid toxic side effects. Generative models can even propose entirely novel molecular structures optimized for desired characteristics.
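Before any learned model enters the picture, the workhorse of computational screening is similarity ranking over molecular fingerprints—bit vectors encoding structural features. A minimal sketch of that baseline (the fingerprints below are invented 8-bit toys, not real chemical descriptors; production pipelines use learned scoring models over far richer representations):

```python
# Toy virtual screen: rank candidate molecules by fingerprint similarity
# to a known active compound. Tanimoto (Jaccard) similarity over on-bit
# sets is the classic cheminformatics baseline.

def tanimoto(a: set, b: set) -> float:
    """Tanimoto similarity of two fingerprint on-bit sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

known_active = {0, 2, 3, 5, 7}          # on-bits of a hypothetical hit
library = {
    "cmpd_A": {0, 2, 3, 5, 6},          # close analog of the hit
    "cmpd_B": {1, 4, 6},                # unrelated scaffold
    "cmpd_C": {0, 2, 3, 5, 7},          # identical fingerprint
}

ranked = sorted(library,
                key=lambda name: tanimoto(library[name], known_active),
                reverse=True)
print(ranked)   # ['cmpd_C', 'cmpd_A', 'cmpd_B']
```

Machine learning replaces the fixed similarity function with a model trained to predict binding affinity, toxicity, or other endpoints—but the “score every candidate, surface the top of the list” pattern is the same.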
Companies like Atomwise, Insilico Medicine, and Recursion Pharmaceuticals are pioneering AI-driven drug discovery platforms. In 2025, reports emerged of startups like Lila Sciences declaring missions to “build scientific superintelligence” using specialized AI software to design and direct experiments in real-world labs (Axios, 2025). Another startup, Latent Labs, announced a frontier model claiming to help design drugs and accelerate development timelines by reducing wet lab work.
The most dramatic demonstration of AI’s potential came from the discovery that a specific gene causes Alzheimer’s disease—a finding researchers could only make because AI helped them visualize the three-dimensional structure of the protein involved (Axios, 2025). This exemplifies how AI doesn’t just speed up existing processes but enables entirely new types of discoveries.
Genomics: Making Sense of the Code of Life
The completion of the Human Genome Project in 2003 was a monumental achievement, but it also created a new problem: we had the sequence, but understanding what it all meant was another matter entirely. The human genome contains roughly 3 billion base pairs encoding approximately 20,000-25,000 genes, along with vast stretches of regulatory regions, non-coding RNA genes, and sequences whose functions remain mysterious.
AI has become essential for making sense of this complexity. Machine learning algorithms can:
- Identify disease-causing genetic variants among millions of benign variations
- Predict how mutations will affect protein function
- Calculate polygenic risk scores aggregating the effects of thousands of genetic variants
- Discover previously unknown regulatory elements controlling gene expression
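The polygenic risk score mentioned above is conceptually simple: a weighted sum of risk-allele counts across many variants, with weights (effect sizes) estimated from genome-wide association studies. A minimal sketch—the variant IDs and effect sizes are invented for illustration:

```python
# Polygenic risk score: sum over variants of (effect size * allele count).
# Effect sizes (betas) come from association studies; each person carries
# 0, 1, or 2 copies of the risk allele at a given variant.
effect_sizes = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.30}

def polygenic_risk_score(genotype: dict) -> float:
    """genotype maps variant ID -> risk-allele count (0, 1, or 2)."""
    return sum(beta * genotype.get(rsid, 0)
               for rsid, beta in effect_sizes.items())

person = {"rs0001": 2, "rs0002": 1, "rs0003": 0}
print(round(polygenic_risk_score(person), 3))   # 0.12*2 - 0.05*1 = 0.19
```

Real scores aggregate thousands to millions of variants, and the hard statistical work lies in estimating the betas and validating the score across populations—not in the sum itself.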
Tools like DeepVariant, developed by Google, use deep learning to improve the accuracy of genomic variant calling from DNA sequencing data. The impact extends beyond human health to agriculture (predicting crop yields and disease resistance), evolutionary biology (reconstructing phylogenetic relationships), and ecology (monitoring biodiversity through environmental DNA).
What these early successes taught us is profound: AI doesn’t just make science faster—it makes certain types of science possible for the first time. Patterns too subtle for human perception become visible. Hypotheses too numerous for human testing become tractable. Problems once considered intractable become solvable.
Chapter Four: The Philosophical Crossroads—What Does It Mean When Machines Make Discoveries?
Here’s where our adventure takes a contemplative turn. Because underneath all the excitement about AI accelerating discovery lurks a genuinely thorny philosophical question: What does it mean for a machine to “discover” something? And more provocatively: If AI makes a breakthrough that no human fully understands, have we actually gained knowledge or just computational predictions?
The Black Box Problem and the Nature of Understanding
Traditional science operates on a principle of comprehensibility. When a human scientist makes a discovery, they can explain the reasoning, trace the logical steps, defend the methodology, and teach others to replicate the work. Understanding flows from discovery like water from a spring.
But many AI systems—particularly deep neural networks—operate as “black boxes.” Data goes in, predictions come out, but the intermediate steps involve millions of numerical weights adjusted through inscrutable optimization processes. We know the AI works (the predictions are accurate), but we don’t always know why it works or how it arrived at specific conclusions.
“By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it,” warns Eliezer Yudkowsky, co-founder and research fellow at the Machine Intelligence Research Institute (Nisum, 2025). This observation cuts to the heart of our dilemma: Can we trust discoveries we don’t fully comprehend?
Consider AlphaFold again. It predicts protein structures with remarkable accuracy, but does it “understand” protein folding the way a human structural biologist does? The system learned patterns from known protein structures and can generalize to new sequences, but it doesn’t reason about hydrogen bonds, hydrophobic effects, and thermodynamic stability in the way humans do. In a 2025 commentary, researcher Jacob Marks, while praising DeepMind’s progress, cautioned that “any machine learning based approach to science will need to address practical and philosophical challenges” (Voxel51, 2025).
The question becomes: Is the goal of science to produce accurate predictions, or to develop human understanding? If it’s the latter, then AI that generates predictions without generating insight might be problematic. If it’s the former, then perhaps we need to rethink what scientific “understanding” means in an age of machine intelligence.
The Democratization Dilemma
AI for science carries a utopian promise: democratizing expertise and enabling researchers worldwide to access tools and capabilities previously restricted to elite institutions. AlphaFold’s protein structure database is freely available. Machine learning frameworks are open source. Cloud computing makes massive computational resources accessible to anyone with an internet connection.
But there’s a dystopian counterpoint: AI development requires enormous resources—massive datasets, supercomputing infrastructure, teams of specialized engineers. According to National Academy of Medicine President Victor Dzau, “We have to be very careful about understanding the potential of [emerging technologies] possibly affecting society in many different ways … cost, access, equity, ethics, and privacy” (National Academies, 2024).
Vukosi Marivate, ABSA UP Chair of Data Science at the University of Pretoria, emphasizes that “governance of AI is a team sport; ethical decisions and responsibility shouldn’t rest solely with AI developers and scientists. Society should have a voice in what the expectations for limits on these technologies should be” (National Academies, 2024). He stresses: “It can’t just be that you have these discussions about societal impact, and then society’s not there.”
The risk is a new form of scientific inequality: wealthy institutions and corporations with AI capabilities pulling ahead, while researchers in lower-resourced settings are left behind. The North-South divide in access to AI technology threatens to create a two-tiered system of scientific discovery—those who can afford cutting-edge AI tools and those who cannot.
The Attribution Question: Who Gets Credit When AI Discovers Something?
In 2024, Demis Hassabis and John Jumper shared the Nobel Prize in Chemistry (alongside David Baker, honored for computational protein design) for developing AlphaFold. But here’s the puzzle: Should the prize have gone to them, or should it have included the AI itself as a co-discoverer? After all, AlphaFold made predictions about protein structures that its human creators didn’t explicitly program.
This isn’t just academic navel-gazing. It has real implications for scientific credit, intellectual property, patents, and the structure of scientific careers. If AI systems can generate novel hypotheses, design experiments, and interpret results, what role remains for human scientists? Are we becoming research managers, delegating the actual discovery work to our silicon assistants?
Hiroaki Kitano, CEO of Sony AI, proposed the Nobel Turing Challenge: developing AI systems by 2050 capable of making major discoveries autonomously, at Nobel Prize level. “Can AI form a groundbreaking concept that will change our perception?” he asks. “If we manage to build a system like that, is it going to behave like the best human scientists, or does it show a very different kind of intelligence?” (National Academies, 2024).
University of Virginia professor Deborah Johnson expresses concern about terms like “autonomy,” “autonomous,” and “AI scientist,” because “they seem to distance human scientists from responsibility for the AI systems they create and any negative impacts that result” (National Academies, 2024). The question of attribution matters because it determines accountability—who is responsible when AI-driven discoveries lead to harm?
Confirmation Bias and the Illusion of Understanding
Perhaps the subtlest danger is that AI might reinforce our existing biases rather than challenge them. As researchers Messeri and Crockett noted in a 2024 Nature paper, AI can create “illusions of understanding” in scientific research—making us feel we understand phenomena when we’ve merely gained the ability to predict them (Messeri & Crockett, 2024).
A 2025 paper in AI & Agent warns that AI’s tendency to prioritize pattern consistency over empirical truth creates a “credibility gap” undermining its utility (Zhang & Li, 2025). AI-generated outputs can appear scientifically credible while being factually incorrect, a phenomenon particularly problematic with scientific references, where chatbot models exhibit high rates of fabricated citations.
The Nature review notes that AI hallucinations “align with dominant scientific narratives, risking confirmation bias by reinforcing existing hypotheses rather than challenging them” (Wang et al., 2023). This is insidious precisely because it’s subtle—AI might make us more confident in conclusions that are actually wrong, foreclosing alternative explanations we should be exploring.
Conclusion: Standing at the Threshold
So here we are, standing at the threshold of something unprecedented. Artificial intelligence isn’t just another tool in the scientific toolkit, like a better microscope or a faster centrifuge. It’s a fundamental transformation in how discovery happens, what knowledge means, and who gets to participate in the grand human project of understanding reality.
The promise is intoxicating: Diseases cured before symptoms appear. Materials designed with properties nature never imagined. Climate models accurate enough to save millions of lives. Scientific knowledge accumulating at speeds that would have seemed like magic to researchers just a generation ago.
But the challenges are real and consequential. Black boxes making predictions we can’t explain. Vast resources concentrated in corporate hands. Algorithmic bias replicating and amplifying human prejudices. The potential for AI to create illusions of understanding while leaving us more ignorant than ever about the actual mechanisms underlying natural phenomena.
What comes next in this series will explore the specific domains where AI is making the biggest impact—from AlphaFold’s protein revolution to AI-powered medical diagnosis, from personalized medicine to the hard limits of what AI cannot do. We’ll look at success stories and spectacular failures, utopian promises and dystopian risks.
But for now, remember this: We’re witnessing a transformation that happens maybe once per century—a fundamental shift in how humans generate knowledge about the universe and our place within it. The Kepler-90i discovery that opened this essay wasn’t just about finding another exoplanet. It was about demonstrating that machines can see patterns in data that elude human perception, that silicon chips can augment human cognition in ways that expand the boundaries of discovery itself.
The quiet revolution is here. Whether it stays quiet, or whether the world wakes up to realize just how transformative this moment is, remains to be seen.
The universe blinked, and this time, an algorithm noticed.
Next in this series: How AlphaFold cracked the 50-year-old protein folding problem
and why the code of life might finally be an open book.
References
- Aczel, B., Szaszi, B., & Holcombe, A. O. (2021). A billion-dollar donation: Estimating the cost of researchers’ time spent on peer review. Research Integrity and Peer Review, 6(1), 14. https://doi.org/10.1186/s41073-021-00118-2
- Axios. (2025, December 31). 2025’s AI-fueled scientific breakthroughs. https://www.axios.com/2025/12/31/2025-ai-scientific-breakthroughs
- Batalha, N. M. (2014). Exploring exoplanet populations with NASA’s Kepler Mission. Proceedings of the National Academy of Sciences, 111(35), 12647-12654. https://doi.org/10.1073/pnas.1304196111
- Cambridge University. (2025, January). Nobel laureate and Cambridge University alumnus Sir Demis Hassabis heralds a new era of AI drug discovery at ‘digital speed’. https://www.cst.cam.ac.uk/nobel-laureate-and-cambridge-university-alumnus-sir-demis-hassabis-heralds-new-era-ai-drug-discovery
- DiMasi, J. A., Grabowski, H. G., & Hansen, R. W. (2016). Innovation in the pharmaceutical industry: New estimates of R&D costs. Journal of Health Economics, 47, 20-33. https://doi.org/10.1016/j.jhealeco.2016.01.012
- Google Research. (2025). Google Research 2025: Bolder breakthroughs, bigger impact. https://research.google/blog/google-research-2025-bolder-breakthroughs-bigger-impact/
- Google DeepMind. (2024, October). Demis Hassabis & John Jumper awarded Nobel Prize in Chemistry. https://deepmind.google/discover/blog/demis-hassabis-john-jumper-awarded-nobel-prize-in-chemistry/
- Jumper, J., Evans, R., Pritzel, A., Green, T., Figurnov, M., Ronneberger, O., … & Hassabis, D. (2021). Highly accurate protein structure prediction with AlphaFold. Nature, 596(7873), 583-589. https://doi.org/10.1038/s41586-021-03819-2
- Messeri, L., & Crockett, M. J. (2024). Artificial intelligence and illusions of understanding in scientific research. Nature, 627, 49-58. https://doi.org/10.1038/s41586-024-07146-0
- Michelson, M., & Reuter, K. (2019). The significant cost of systematic reviews and meta-analyses: A call for greater involvement of machine learning to assess the promise of clinical trials. Contemporary Clinical Trials Communications, 16, 100443. https://doi.org/10.1016/j.conctc.2019.100443
- NASA. (2017, December 14). Artificial intelligence, NASA data used to discover eighth planet circling distant star. https://www.nasa.gov/news-release/artificial-intelligence-nasa-data-used-to-discover-eighth-planet-circling-distant-star/
- NASA. (2021, November 22). New deep learning method adds 301 planets to Kepler’s total count. https://science.nasa.gov/universe/exoplanets/new-deep-learning-method-adds-301-planets-to-keplers-total-count/
- National Academies. (2024). How AI is shaping scientific discovery. https://www.nationalacademies.org/news/how-ai-is-shaping-scientific-discovery
- Nisum. (2025). Top 10 expert quotes that redefine the future of AI technology. https://www.nisum.com/nisum-knows/top-10-thought-provoking-quotes-from-experts-that-redefine-the-future-of-ai-technology
- Shallue, C. J., & Vanderburg, A. (2018). Identifying exoplanets with deep learning: A five planet resonant chain around Kepler-80 and an eighth planet around Kepler-90. The Astronomical Journal, 155(2), 94. https://doi.org/10.3847/1538-3881/aa9e09
- Sosnick, T. R. (2023). AlphaFold developers Demis Hassabis and John Jumper share the 2023 Albert Lasker Basic Medical Research Award. Journal of Clinical Investigation, 133(19), e174915. https://doi.org/10.1172/JCI174915
- Valizadegan, H., Martinho, M. J., Wilkerson, C. M., Jenkins, J. M., Smith, J. C., Caldwell, D. A., … & Bryson, S. T. (2022). ExoMiner: A highly accurate and explainable deep learning classifier that validates 301 new exoplanets. The Astrophysical Journal, 926(2), 120. https://doi.org/10.3847/1538-4357/ac4399
- Voxel51. (2025). What AI means for science in 2025. https://voxel51.com/blog/what-ai-means-for-science-in-2025
- Wang, H., Fu, T., Du, Y., Gao, W., Huang, K., Liu, Z., … & Zitnik, M. (2023). Scientific discovery in the age of artificial intelligence. Nature, 620, 47-60. https://doi.org/10.1038/s41586-023-06221-2
- Zhang, D., & Li, H. (2025). AI as a catalyst for transforming scientific research: A perspective. AI & Agent, 1, 100003. https://doi.org/10.1016/j.aiagent.2025.08
Additional Reading
- “The Fourth Paradigm: Data-Intensive Scientific Discovery” by Tony Hey, Stewart Tansley, and Kristin Tolle (Microsoft Research, 2009). A foundational text on how data-intensive approaches are changing scientific methodology. Available free at: https://www.microsoft.com/en-us/research/publication/fourth-paradigm-data-intensive-scientific-discovery/
- “AI for Science: An Emerging Agenda” by Berens, P. et al. (arXiv, 2023). Comprehensive overview of the AI for science landscape, discussing both opportunities and challenges. https://doi.org/10.48550/arXiv.2303.04217
- “A New Golden Age of Discovery: Seizing the AI for Science Opportunity” – Google DeepMind Report (2024). Industry perspective on AI’s transformative potential for scientific research. https://storage.googleapis.com/deepmind-media/DeepMind.com/Assets/Docs/a-new-golden-age-of-discovery.pdf
- Nature Collection: “AI for Science 2025” – A curated collection of papers examining AI’s impact across scientific disciplines. https://www.nature.com/collections/bfefgbacag
- President’s Council of Advisors on Science and Technology Report: “AI to Assist Scientific Discovery” (2023). Policy perspective on AI’s role in transforming research. https://www.whitehouse.gov/ostp/
Additional Resources
- AlphaFold Protein Structure Database (EMBL-EBI & Google DeepMind) Free access to 200+ million predicted protein structures https://alphafold.ebi.ac.uk/
- NASA Exoplanet Archive (Caltech/NASA) Comprehensive database of confirmed and candidate exoplanets, including Kepler mission data https://exoplanetarchive.ipac.caltech.edu/
- FutureHouse (AI for Scientific Discovery) Non-profit organization developing AI tools like PaperQA/Crow for literature search and scientific discovery https://www.futurehouse.org/
- Google AI for Science Resources, tools, and updates on Google’s AI research applications in science https://ai.google/discover/ai-for-science/
- National Academies of Sciences, Engineering, and Medicine: AI & Scientific Discovery Portal Reports, workshops, and resources on responsible AI development for research https://www.nationalacademies.org/topics/artificial-intelligence


