How artificial intelligence will accelerate biomedical research and discovery

A Microsoft Research Podcast episode brings together Daphne Koller, Noubar Afeyan, and Eric Topol to examine how artificial intelligence is reshaping biomedicine, from target discovery and autonomous labs to the pursuit of a virtual cell. The discussion charts rapid progress since GPT-4 and what it means for patients, researchers, and regulators.

In a special episode of the Microsoft Research Podcast, Peter Lee reconvenes leaders across medicine and biotechnology to assess how artificial intelligence is transforming the biomedical pipeline. Daphne Koller of Insitro, Noubar Afeyan of Flagship Pioneering and Moderna, and Eric Topol of Scripps Research trace the field’s fast shift from early hype to practical impact, reflecting on predictions made around the release of GPT-4 and what has since materialized in research, development, and care.

Koller details how machine learning is moving upstream in drug discovery to tackle the hardest problem: target identification. Rather than relying on noisy, human-defined endpoints, Insitro integrates large-scale physiological, imaging, and multi-omics data from patients and cells to uncover disease subtypes and rank targets with higher conviction, using human genetics as the connective thread across systems. The approach has yielded a novel amyotrophic lateral sclerosis (ALS) target and milestone payments from Bristol Myers Squibb, with Eli Lilly contributing molecule design capabilities for downstream development. Early cellular results suggest reversal of ALS-related mis-splicing across multiple dimensions. Koller also sees near-term gains in trial operations and molecular design from foundation models, and she is cautiously optimistic about longer-horizon goals such as a foundation model for a cell, noting the combinatorial complexity across thousands of interacting cell types.

Afeyan highlights how generative techniques have opened up protein design and broader biotech research. Generate:Biomedicines, formed before “generative” entered the mainstream, uses diffusion- and transformer-based models such as Chroma to create novel proteins and antibodies. ProFound Therapeutics has revealed thousands of previously unrecognized proteins, expanding the interaction space for disease biology, while Quotient Therapeutics is mapping pervasive somatic mutations across healthy tissues to illuminate disease mechanisms and compensatory pathways. He describes a shift toward co-developing bespoke datasets alongside the models that learn from them, Moderna’s adoption of artificial intelligence across manufacturing and clinical operations, and Lila Sciences’ automated science factories that close the loop from hypothesis to experiment. Afeyan argues for a “poly-intelligence” era that fuses human, machine, and nature’s intelligence, and he foresees better patient selection as heterogeneous diseases are subdivided more precisely.

Topol recounts the step-change after ChatGPT, as transformer and multimodal advances reset research expectations. He sees a credible path to a virtual cell within a decade, citing growing consensus among life and computer scientists and rapid progress across DNA, RNA, ligands, and cell models. A virtual cell could enable vast synthetic datasets, accelerate target validation, and clarify disease underpinnings, elevating prevention and treatment. In aging science, he points to new biological and organ clocks and blood biomarkers like p-Tau217, paired with multimodal artificial intelligence, to identify high-risk individuals early and personalize surveillance and preventive interventions for heart disease, cancer, and neurodegeneration.

Lee closes with the view that artificial intelligence is already accelerating discovery and development and may soon increase the volume of animal and human studies. He suggests regulators, including the United States Food and Drug Administration, will likely need to adopt artificial intelligence to keep pace with the rising tempo of submissions and evidence generation.

Impact Score: 75

