How artificial intelligence could speed the development of RNA vaccines and other RNA therapies

MIT researchers trained a transformer-based model to design lipid nanoparticles that deliver RNA more efficiently, potentially accelerating RNA vaccine and therapy development.

MIT engineers have developed a machine-learning approach to speed the design of lipid nanoparticles used to deliver RNA vaccines and therapies. The team trained a model on thousands of existing formulations and used it to predict new mixtures that, when tested in the lab, outperformed many of the particles in the training set and in some cases beat commercially used formulations. According to Giovanni Traverso, the senior author, the approach lets researchers search a vast formulation space far faster than traditional trial-and-error experiments.

The researchers built a training library of roughly 3,000 different lipid nanoparticle (LNP) formulations and measured how well each delivered mRNA payloads to cells. They then trained a transformer-based model, which they call COMET, to learn how the different components of a nanoparticle interact. The model is inspired by the architectures behind large language models such as ChatGPT, but repurposed to understand combinations of chemical components rather than words. That lets COMET predict which multi-component formulations will deliver RNA best, rather than optimizing one compound at a time.
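The idea of treating a formulation's components like the tokens of a sentence can be sketched with a toy self-attention layer. This is only an illustration, not the COMET architecture: the component names, embedding size, and single attention head below are all invented for the example, and the weights are random rather than trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical vocabulary of formulation components (names illustrative,
# not from the paper). Each LNP is a "sentence" of component tokens.
VOCAB = ["ionizable_lipid_A", "ionizable_lipid_B", "helper_lipid_DOPE",
         "cholesterol", "peg_lipid_DMG", "pbae_branched_X"]
D = 8  # embedding dimension

embed = {tok: rng.normal(size=D) for tok in VOCAB}
Wq, Wk, Wv = (rng.normal(size=(D, D)) for _ in range(3))
w_out = rng.normal(size=D)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def predict_delivery(formulation):
    """Score a formulation (a list of component tokens) with one
    self-attention layer, so the prediction depends on how the
    components interact rather than on each component alone."""
    X = np.stack([embed[t] for t in formulation])   # (n_components, D)
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    attn = softmax(Q @ K.T / np.sqrt(D))            # pairwise component interactions
    H = attn @ V                                    # contextualized component vectors
    return float(H.mean(axis=0) @ w_out)            # pool to a scalar delivery score

# Two formulations that share three components but differ in the ionizable lipid:
s1 = predict_delivery(["ionizable_lipid_A", "helper_lipid_DOPE",
                       "cholesterol", "peg_lipid_DMG"])
s2 = predict_delivery(["ionizable_lipid_B", "helper_lipid_DOPE",
                       "cholesterol", "peg_lipid_DMG"])
print(s1, s2)  # weights are untrained, so the values are arbitrary
```

Because attention mixes every component's representation with every other's, swapping a single ingredient changes the score through its interactions with the rest of the mixture, which is the property the article attributes to the model.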

After training, the team asked the model to propose new formulations and validated those predictions by delivering mRNA encoding a fluorescent protein to mouse skin cells grown in culture; the predicted formulations improved delivery efficiency. The researchers also extended the approach to a fifth component class, branched poly(beta-amino esters), or PBAEs, by training on a smaller set of about 300 LNPs that include these polymers, and the model successfully suggested PBAE-containing formulations with improved performance. They further trained and tested predictions for cell-type specificity, including experiments in Caco-2 cells, and for properties relevant to real-world use, such as resistance to lyophilization.
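The workflow described above, enumerating a combinatorial formulation space, scoring every candidate with the model, and sending only the top predictions to the lab, can be sketched as follows. The component options are hypothetical, and the scoring function is a deterministic stand-in for a trained model's prediction, kept trivial so the ranking loop is runnable.

```python
import itertools

# Illustrative component options per class (names hypothetical); a fifth
# class of branched PBAEs is included as an optional component.
OPTIONS = {
    "ionizable_lipid": ["IL-1", "IL-2", "IL-3"],
    "helper_lipid":    ["DOPE", "DSPC"],
    "sterol":          ["cholesterol"],
    "peg_lipid":       ["PEG-DMG", "PEG-DSG"],
    "pbae":            [None, "PBAE-X"],  # None = four-component LNP
}

def predicted_efficiency(formulation):
    """Stand-in for the trained model: a toy deterministic score based
    on the component names, so the pipeline runs end to end."""
    return sum(sum(ord(ch) for ch in c) for c in formulation if c)

# Enumerate the combinatorial space, score every candidate, and keep the
# top few for wet-lab validation.
candidates = [dict(zip(OPTIONS, combo))
              for combo in itertools.product(*OPTIONS.values())]
ranked = sorted(candidates,
                key=lambda f: predicted_efficiency(f.values()),
                reverse=True)
top3 = ranked[:3]
print(len(candidates))  # 3 * 2 * 1 * 2 * 2 = 24 candidate formulations
```

Even this toy space has 24 combinations; with realistic numbers of options per component class the space grows far too large for trial-and-error screening, which is why ranking by a learned predictor matters.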

The paper, led by Alvin Chan and Ameya Kirtane with Traverso as senior author, appears in Nature Nanotechnology and was published on August 15, 2025. The work was funded by ARPA-H and several MIT and institute sources. The authors say the model is a flexible tool: it can be retrained or fine-tuned for different questions, from improving shelf life to targeting particular tissues, and it is now being used to guide development of RNA therapeutics aimed at obesity and diabetes.

Impact Score: 78

Analog computing from waste heat

MIT researchers developed an analog computing approach that uses waste heat in electronic devices to process data without electricity. The technique performs matrix-vector multiplication with high accuracy and could also help monitor heat in chips without extra energy use.

How Artificial Intelligence is reshaping financial services oversight

Financial services regulators are largely treating Artificial Intelligence as another technology governed by existing rules rather than building new securities-specific frameworks. History suggests that clearer expectations will emerge through examinations, enforcement, and supervisory guidance.

Nvidia faces gamer backlash over Artificial Intelligence shift

Nvidia is facing growing frustration from gamers as memory supply is steered toward data center chips and DLSS 5 becomes more central to game performance. The dispute highlights how far the company’s priorities have shifted toward enterprise Artificial Intelligence.

Executives see limited Artificial Intelligence productivity gains so far

Corporate enthusiasm around Artificial Intelligence has yet to translate into broad gains in employment or productivity, reviving comparisons to the long lag between early computing breakthroughs and measurable economic impact. Recent surveys and studies show mixed results, with strong expectations for future benefits but little consensus on present gains.

Nvidia skips a new GeForce generation as Artificial Intelligence chips dominate

Nvidia is set to go a year without a new GeForce GPU generation for the first time since the 1990s as memory shortages and higher margins in Artificial Intelligence hardware reshape the market. AMD and Intel are also struggling to capitalize because the same supply constraints are hitting gaming products across the industry.
