Sneha Goenka’s ultra-fast sequencing cuts genetic diagnosis to hours

Sneha Goenka, MIT Technology Review’s 2025 Innovator of the Year and an assistant professor at Princeton, helped build a sequencing pipeline that reduces genetic diagnosis from weeks to hours. Her work pairs cloud computing architectures with real-time streaming analysis to accelerate clinical genomics.

Sneha Goenka, an assistant professor of electrical and computer engineering at Princeton and MIT Technology Review’s 2025 Innovator of the Year, helped develop a rapid-sequencing pipeline that can deliver a genetic diagnosis in hours rather than weeks. The effort began five years ago and combined software, hardware, and workflow redesign: streaming sequencing data from instruments directly into cloud computation, cutting inefficiencies in data transfer, and orchestrating base calling and alignment in parallel. The original pipeline cut the compute time needed to identify mutations from about 20 hours to 1.5 hours, with downstream filtering and manual curation taking up to three more hours; later refinements shortened the end-to-end time to roughly six hours.

Goenka’s technical contributions included designing a cloud computing architecture that minimized communication overhead between sequencer and cloud, determining the precise number of reusable communication channels, and implementing algorithms to assign data streams directly to dedicated cloud nodes for base calling. She also wrote software to trigger sequence alignment as soon as a batch finished base calling while simultaneously starting base calling for the next batch, ensuring efficient utilization of computational resources. The team worked with genetic counselors and physicians to build filters that flag clinically relevant mutations for final specialist review.
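The overlap described above, starting alignment for one batch while base calling begins on the next, is a classic two-stage pipeline. The sketch below is purely illustrative: `base_call` and `align` are hypothetical stand-ins for the real stages, not the actual tools or APIs used in the pipeline.

```python
from concurrent.futures import ThreadPoolExecutor

def base_call(batch):
    """Hypothetical stand-in: convert raw signal for one batch into reads."""
    return [f"read-{batch}-{i}" for i in range(3)]

def align(reads):
    """Hypothetical stand-in: map reads to a reference genome."""
    return [r + ":aligned" for r in reads]

def run_pipeline(batches):
    """Overlap alignment of batch k with base calling of batch k+1."""
    results = []
    with ThreadPoolExecutor(max_workers=1) as pool:
        align_future = None
        for batch in batches:
            reads = base_call(batch)          # base-call the next batch here...
            if align_future is not None:      # ...while the previous batch aligns
                results.append(align_future.result())
            align_future = pool.submit(align, reads)
        if align_future is not None:          # drain the final alignment
            results.append(align_future.result())
    return results
```

Handing alignment to a worker thread lets the caller immediately begin base calling the next batch, so neither stage sits idle, which mirrors the resource-utilization goal described above.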

The pipeline has been tested on 26 patients and faced a critical test in 2021, when a 13-year-old patient named Matthew arrived at Stanford’s children’s hospital in heart failure. His blood was drawn on a Thursday, and the transplant committee met on Friday. Rapid sequencing revealed a genetic mutation that placed him on the transplant list; he received a new heart three weeks later. Goenka is now cofounder and scientific lead of a startup aiming to deploy the technology more broadly, and she is adapting the filters to use more diverse reference genomes from the Human Pangenome Project to reduce bias toward people of European descent. The work grew out of personal motivation and Goenka’s education in Mumbai and at the Indian Institute of Technology Bombay, and it is already influencing care in neonatal and pediatric intensive care units.

Executives see limited Artificial Intelligence productivity gains so far

Corporate enthusiasm around Artificial Intelligence has yet to translate into broad gains in employment or productivity, reviving comparisons to the long lag between early computing breakthroughs and measurable economic impact. Recent surveys and studies show mixed results, with strong expectations for future benefits but little consensus on present gains.

Nvidia skips a new GeForce generation as Artificial Intelligence chips dominate

Nvidia is set to go a year without a new GeForce GPU generation for the first time since the 1990s as memory shortages and higher margins in Artificial Intelligence hardware reshape the market. AMD and Intel are also struggling to capitalize because the same supply constraints are hitting gaming products across the industry.

Where GPU debt starts to break

Stress in GPU-backed infrastructure financing is emerging around deals that lack the structural protections seen in the strongest transactions. Oracle, the Abilene Stargate project, and older CoreWeave debt illustrate different ways residual risk can surface when contracts, collateral, and counterparties fall short.

SK hynix starts mass production of 192 GB SOCAMM2

SK hynix has begun mass production of the 192 GB SOCAMM2, a next-generation memory module standard built on 1c-nm LPDDR5X low-power DRAM. The module is positioned as a primary memory solution for next-generation Artificial Intelligence servers.

AMD taps GlobalFoundries for co-packaged optics in Instinct MI500

AMD is preparing a renewed manufacturing link with GlobalFoundries to bring co-packaged optics to its Instinct MI500 Artificial Intelligence accelerators. The move is aimed at improving bandwidth and power efficiency in data center systems by moving beyond copper-based interconnects.
