Artificial intelligence labs race to turn virtual materials into real-world breakthroughs

Startups like Lila Sciences, Periodic Labs, and Radical AI are betting that autonomous labs guided by artificial intelligence can finally turn decades of virtual materials predictions into real compounds with commercial impact, but the field is still waiting for a definitive breakthrough. Their challenge is to move beyond simulations and hype to deliver synthesized, tested materials that industry will actually adopt.

Inside Lila Sciences’ Cambridge lab, a sputtering instrument runs experiments under the direction of an artificial intelligence agent trained on scientific literature and data. The agent selects element combinations to form thin-film samples, which are then tested; the results feed into another artificial intelligence system that proposes the next round of experiments. Human scientists still supervise and approve each step, but Lila sees this setup as an early version of autonomous labs that could dramatically cut the time and cost of discovering useful new materials. The company, flush with hundreds of millions of dollars in funding and newly minted as a unicorn, is part of a broader push to use experimentation directed by artificial intelligence in pursuit of what it calls scientific superintelligence.
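
To make that closed loop concrete, here is a minimal sketch of the propose-approve-synthesize-measure cycle. It is an illustration only: the function names (propose_compositions, human_approves, deposit_and_measure) are hypothetical stand-ins, the “measurement” is a random number rather than instrument data, and none of it describes Lila’s actual software.

```python
"""Illustrative sketch of a human-supervised, AI-in-the-loop thin-film campaign.
All names and behaviors here are hypothetical placeholders."""
import random

ELEMENTS = ["Fe", "Co", "Ni", "Mn", "Cu", "Ti"]  # toy element pool

def propose_compositions(history, n=3):
    """Stand-in for the proposing agent: favor elements that scored well so far."""
    scores = {e: 0.0 for e in ELEMENTS}
    for combo, value in history:
        for e in combo:
            scores[e] += value
    ranked = sorted(ELEMENTS, key=lambda e: scores[e] + random.random(), reverse=True)
    return [tuple(sorted(random.sample(ranked[:4], 3))) for _ in range(n)]

def human_approves(combo):
    """Stand-in for the scientist sign-off described in the article."""
    return True  # auto-approve in this toy version

def deposit_and_measure(combo):
    """Stand-in for sputtering a thin-film sample and characterizing it."""
    return random.random()  # fake figure of merit, not real instrument data

def run_campaign(rounds=5):
    history = []  # (element combination, measured figure of merit)
    for _ in range(rounds):
        for combo in propose_compositions(history):
            if human_approves(combo):  # the human stays in the loop
                history.append((combo, deposit_and_measure(combo)))
    # measured results feed back into the proposer on the next round
    return max(history, key=lambda item: item[1])

print(run_campaign())  # best (composition, score) found in the toy run
```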

Materials science urgently needs a boost, as society demands better batteries, carbon capture materials, green-fuel catalysts, magnets, semiconductors, and potentially transformative technologies like higher-temperature superconductors and improved artificial intelligence hardware. Despite scientific advances such as perovskite solar cells, graphene transistors, metal-organic frameworks, and quantum dots, the field has produced relatively few major commercial wins in recent decades, in part because developing a new material typically takes 20 years or more and costs hundreds of millions of dollars. Traditional computational modeling, now accelerated by deep learning efforts at Google DeepMind, Meta, and Microsoft, has expanded the catalog of theoretically stable structures, but has repeatedly run into the same limitation: simulations cannot replace the need to synthesize and test materials in the lab.

DeepMind’s 2023 claim that deep learning had discovered “millions of new materials,” including 380,000 crystals deemed “the most stable,” drew intense attention from the artificial intelligence community but skepticism from many materials scientists, who argued the work largely produced proposed compounds simulated at absolute zero rather than demonstrated materials with real-world functionality. Critics at the University of California, Santa Barbara reported “scant evidence for compounds that fulfill the trifecta of novelty, credibility, and utility” and suggested that many of the supposed materials were trivial variants of known structures or idealized versions of already disordered crystals. The controversy underscored how hard it remains to bridge the gap between atomistic simulations and properties that depend on microstructure, finite temperature behavior, and poorly understood phenomena such as high-temperature superconductivity or complex catalysis.
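
For readers outside the field, “stable” in these claims usually means thermodynamically stable at absolute zero: a structure’s computed formation energy lies on the convex hull of the competing phases at its composition. The toy script below, which uses invented numbers for a hypothetical binary A-B system, shows what that screen amounts to; passing it says nothing about whether a compound can actually be synthesized, survives at room temperature, or does anything useful.

```python
"""Toy 0 K stability screen: an entry is "stable" if its formation energy lies on
the lower convex hull of competing phases. All numbers are invented for a
hypothetical binary A-B system."""

def cross(o, a, b):
    """2D cross product; its sign tells whether three points turn left or right."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def lower_hull(points):
    """Lower convex hull (Andrew's monotone chain) of (composition, energy) points."""
    pts = sorted(points)
    hull = []
    for p in pts:
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()
        hull.append(p)
    return hull

def energy_above_hull(x, e_form, hull):
    """Distance (eV/atom) of an entry above the hull at composition x."""
    for (x1, e1), (x2, e2) in zip(hull, hull[1:]):
        if x1 <= x <= x2:
            e_hull = e1 + (e2 - e1) * (x - x1) / (x2 - x1)
            return max(0.0, e_form - e_hull)  # clamp float noise for on-hull entries
    raise ValueError("composition outside hull range")

# (fraction of B, formation energy per atom in eV) -- invented, DFT-style values
entries = {"A": (0.0, 0.0), "A2B": (1 / 3, -0.20), "AB": (0.5, -0.45),
           "AB3": (0.75, -0.30), "B": (1.0, 0.0)}

hull = lower_hull(entries.values())
for name, (x, e) in entries.items():
    d = energy_above_hull(x, e, hull)
    verdict = "on the hull (stable at 0 K)" if d < 1e-6 else "above the hull (unstable)"
    print(f"{name}: {d * 1000:.0f} meV/atom, {verdict}")
```

Running the toy flags the made-up A2B phase as roughly 100 meV/atom above the hull while the others sit on it; that is the kind of computational distinction critics argue is necessary but nowhere near sufficient for a real, useful material.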

In response, a new generation of startups is explicitly combining computation with aggressive automation. Periodic Labs, cofounded by DeepMind alum Ekin Dogus Cubuk and former OpenAI researcher Liam Fedus, is building a strategy around automated synthesis and large language models that digest scientific literature, propose recipes and conditions, interpret test data, and iterate designs, aiming at targets like quantum materials and, ultimately, a room-temperature superconductor. At Lawrence Berkeley National Laboratory, the A-Lab, a fully automated inorganic synthesis line, has reported using robotics and artificial intelligence to create and test 41 novel materials, including some from the DeepMind database, though its claims of novelty and analytical rigor have been debated. Principal scientist Gerbrand Ceder envisions artificial intelligence agents that capture “diffused” expert knowledge, read a flood of papers, and make strategic experimental decisions, coordinated by higher-level orchestrator models such as those Radical AI is now building into its self-driving labs.
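
The orchestrated-agent pattern those groups describe can be sketched schematically: a coordinating loop hands off to specialist agents for literature, recipe proposal, and data interpretation, and failed attempts become context for the next proposal. Everything below is a hypothetical stand-in with canned behavior, not the actual software of Periodic Labs, the A-Lab, or Radical AI.

```python
"""Schematic sketch of an orchestrator coordinating specialist agents for a
synthesis campaign. All classes, recipes, and behaviors are hypothetical
stand-ins with canned logic."""
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Recipe:
    precursors: list[str]
    temperature_c: int
    dwell_hours: int

class LiteratureAgent:
    def prior_recipes(self, target: str) -> list[Recipe]:
        # Stand-in for a model that digests papers; returns a canned example here.
        return [Recipe(["La2O3", "CuO"], 900, 12)]

class SynthesisAgent:
    def propose(self, priors: list[Recipe], failures: list[Recipe]) -> Recipe:
        # Stand-in for recipe generation: nudge the temperature after each failure.
        base = priors[0]
        return Recipe(base.precursors, base.temperature_c + 50 * len(failures), base.dwell_hours)

class AnalysisAgent:
    def phase_matches(self, xrd_pattern: str) -> bool:
        # Stand-in for interpreting diffraction data from the synthesized sample.
        return xrd_pattern == "target_phase"

def run_synthesis(recipe: Recipe) -> str:
    # Stand-in for the robotic furnace line; pretend hotter runs succeed.
    return "target_phase" if recipe.temperature_c >= 1000 else "impurity_phase"

def orchestrate(target: str, max_attempts: int = 5) -> Recipe | None:
    lit, syn, ana = LiteratureAgent(), SynthesisAgent(), AnalysisAgent()
    priors, failures = lit.prior_recipes(target), []
    for _ in range(max_attempts):
        recipe = syn.propose(priors, failures)
        if ana.phase_matches(run_synthesis(recipe)):
            return recipe
        failures.append(recipe)  # failed attempts become context for the next proposal
    return None

print(orchestrate("hypothetical target phase"))
```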

Despite these advances, the field has yet to produce a clear “AlphaGo moment” or an AlphaFold-like triumph; there is still no flagship example where artificial intelligence has rapidly delivered a commercially important new material. Venture investor Susan Schofer, who once worked at an early high-throughput materials startup, says she now looks for concrete signs that companies are already “finding something new, that’s different,” along with business models that capture value by specifying, scaling, and selling materials in partnership with incumbent manufacturers. Materials companies remain wary after past disappointments with promises that more computation, combinatorial chemistry, or synthetic biology would revolutionize their pipelines, and a 2024 paper from an MIT economics student claiming that a large corporate lab had quietly used artificial intelligence to invent numerous materials was later disavowed by the university as unreliable. Yet activity is accelerating: Lila is moving into a larger lab, Periodic Labs is shifting its artificial intelligence guided experiments from manual synthesis toward robotics, and Radical AI reports an almost fully autonomous facility. Founders describe an influx of funding and a renewed sense of purpose for a field long overshadowed by drug discovery, even as they acknowledge that turning artificial intelligence tools and autonomous labs into real, adopted materials will require not just scientific breakthroughs but also persuading a conservative industry to embrace a new way of doing research and development.

The great artificial intelligence hype correction of 2025

After a breakneck cycle of product launches and bold promises, the artificial intelligence industry is entering a more sober phase as stalled adoption, diminishing leaps in model performance, and shaky business models force a reset in expectations. Researchers, investors, and executives are now reassessing what large language models can and cannot do, and what kind of artificial intelligence future is realistically taking shape.

Artificial intelligence doomers stay the course despite hype backlash

A string of disappointments and bubble talk has emboldened artificial intelligence accelerationists, but prominent artificial intelligence safety advocates say their core concerns about artificial general intelligence risk remain intact, even as their timelines stretch.

Sam Altman’s role in shaping artificial intelligence hype

Sam Altman’s sweeping promises about superintelligent systems and techno-utopia have helped define how Silicon Valley and the public imagine the future of artificial intelligence, often ahead of what the technology can actually prove.
