Artificial Intelligence speeds quantum encryption threat timeline

Research from Google and Oratomic suggests quantum computers capable of breaking core internet encryption may arrive sooner than expected. Artificial Intelligence played a key role in improving one of the new algorithms, raising fresh urgency around post-quantum security.

New research from Google and quantum startup Oratomic suggests quantum computers capable of breaking the encryption protocols that secure the internet may arrive sooner than expected. Cybersecurity researchers described the results as a major warning for the internet’s security timeline, and Cloudflare said it was accelerating its deadline to prepare for quantum computers to 2029. The U.S. National Institute of Standards and Technology has set a 2035 deadline to prepare for their arrival, but multiple quantum computing experts said the combined Google and Oratomic results could significantly shorten the development time of a quantum computer that threatens encryption.

Quantum computers use qubits to perform some calculations far faster than ordinary computers, creating a long-term threat to systems that depend on encryption. Everything from private messages to classified documents relies on the fact that conventional machines would need far longer than any practical timescale to break the encryption, while a quantum computer could theoretically do the same work in days. A 2025 survey put a 39% chance on that changing within the next decade, as quantum hardware improves and algorithms become more efficient. Researchers warned that if quantum machines arrive before post-quantum protections are fully deployed, the risks could include data leaks, extortion, and businesses being taken offline.

Oratomic’s authors described Artificial Intelligence as central to the development of their algorithm. In atomic quantum computers, it can take 100 to 1,000 atoms to encode a single qubit. The algorithm found by the Oratomic researchers requires just three atoms per qubit, cutting the number of particles needed to build an atomic quantum computer by roughly a hundredfold. Initially, the performance of the team’s key algorithms was about 1,000 times worse, and researchers said the approach would not have worked in that state. After turning to OpenEvolve, an open-source tool that uses large language models including Google’s Gemini and Anthropic’s Claude, the team said Artificial Intelligence generated useful ideas by combining earlier scientific results in a novel way and exploring thousands of possibilities.

The work remains preliminary. The paper has not yet been peer-reviewed, and some experts said several assumptions in the research remain untested. The authors said many open challenges still stand between the current findings and a dangerous quantum computer. Even so, the results have already prompted attention from industry and government. Members of the Oratomic team briefed U.S. government officials before publication, and Google has also moved to expand its own atomic quantum computing effort while publicly outlining plans to secure its systems against quantum computers by 2029.

Impact Score: 78

New methods aim to improve Large Language Model reasoning

A new study on arXiv outlines algorithmic techniques designed to strengthen Large Language Model reasoning and reduce hallucinations. The work reports better logical consistency and stronger performance on mathematical and coding benchmarks.

Nvidia acquisition of SchedMD raises Slurm neutrality concerns

Nvidia’s purchase of SchedMD has given it control of Slurm, an open-source scheduler that sits at the center of many supercomputing and large-model training systems. Researchers and engineers are watching for signs that support could tilt toward Nvidia hardware over AMD and Intel alternatives.

Mustafa Suleyman says Artificial Intelligence compute growth is still accelerating

Mustafa Suleyman argues that Artificial Intelligence development is being propelled by simultaneous advances in chips, memory, networking, and software efficiency rather than nearing a hard limit. He contends that rising compute capacity and falling deployment costs will push systems beyond chatbots toward more capable agents.

China and the US are leading different Artificial Intelligence races

The US leads in large language models and advanced chips, while China has built a major advantage in robotics and humanoid manufacturing. That balance is shifting as Chinese developers narrow the gap in model performance and both countries push to combine software and machines.

Congress weighs Artificial Intelligence transparency rules

Bipartisan lawmakers are pushing a federal transparency standard for the largest Artificial Intelligence models as Congress works on a broader national framework. The proposal aims to increase public trust while avoiding stricter state-by-state requirements and heavier regulation.
