Tracking rapid AI progress and next-generation nuclear power debates

Frontier artificial intelligence (AI) models are advancing faster than expected according to a closely watched capability graph, while new scrutiny is landing on nuclear power’s role in supporting energy-hungry data centers and a grid under strain. Researchers are also uncovering how massive open-source training sets quietly absorb vast amounts of personal data from the public web.

Frontier large language models from companies such as OpenAI, Google, and Anthropic are being closely tracked by a widely cited capability graph maintained by the research nonprofit Model Evaluation & Threat Research. The graph suggests that certain AI capabilities are improving at an exponential rate, and recent model releases have outpaced even that aggressive trajectory. Claude Opus 4.5, Anthropic’s latest flagship model, released in late November, is a prominent example: independent evaluations indicate a step change in practical task performance.

In December, Model Evaluation & Threat Research announced that Opus 4.5 appeared capable of independently completing a task that would take a human about five hours. That finding marked a substantial improvement over what even the exponential trend would have predicted, intensifying debate over both the pace of AI progress and the methods used to measure it. The underlying reality is more complicated than the dramatic reactions that follow each new point added to the graph, highlighting gaps between benchmark performance, real-world reliability, and risk.

Alongside these AI advances, nuclear power is emerging as a central topic in energy discussions, especially in relation to next-generation reactor designs, hyperscale AI data centers, and grid stability. A recent roundtable on advanced nuclear power generated a wide range of questions about safety, waste management, deployment timelines, and how new reactors might support rising electricity demand from digital infrastructure. Selected questions are now being addressed in more depth to clarify where advanced nuclear technologies could realistically contribute and what policy, regulatory, and economic hurdles remain.

Other developments in the technology landscape underscore how AI is reshaping adjacent sectors. New coding tools from Anthropic are rattling markets and raising questions for legacy software vendors, while India is attracting billions in AI investment, helped by a newly announced 20-year tax break and an expanding ecosystem of data workers and moderators. In consumer and social domains, Apple’s Lockdown Mode has thwarted FBI efforts to access an iPhone in at least one case, YouTubers are exploiting body cameras and freedom-of-information laws to harass women, and climate change is forcing the Winter Olympics to rely more heavily on artificial snow as teams experiment with AI to gain a competitive edge.

One of the most consequential but least visible shifts involves the data used to train AI models. New research finds that millions of images of passports, credit cards, birth certificates, and other documents containing personally identifiable information are likely included in DataComp CommonPool, one of the largest open-source AI training sets. Thousands of such images, including identifiable faces, turned up in just the 0.1% of the data that researchers audited, leading to an estimate that the total number of affected images runs into the hundreds of millions. The bottom line: anything posted online can be, and probably has been, scraped into large training corpora, sharpening concerns about privacy, consent, and the difficulty of removing sensitive information once it has been absorbed into machine learning pipelines.
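The headline estimate rests on a simple sample-to-population extrapolation: count hits in a small random audit sample, then scale by the sample fraction. A minimal sketch of that arithmetic follows; the numbers are illustrative placeholders, not the study’s actual counts, and the calculation assumes the audited sample is representative of the whole dataset.

```python
def extrapolate(found_in_sample: int, sample_fraction: float) -> int:
    """Scale a count observed in a random audit sample up to the full
    dataset, assuming the sample is representative of the whole."""
    return round(found_in_sample / sample_fraction)

# Illustrative only: 2,000 flagged images found in a 0.1% audit sample
# would imply roughly 2,000,000 such images across the entire dataset.
print(extrapolate(2_000, 0.001))
```

The real study’s figure is larger because its audit flagged many categories of personal data (faces, documents, contact details), but the scaling logic is the same.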

Anu Bradford on tech sovereignty and regulatory fragmentation

Anu Bradford argues that Europe is wavering in its role as the world’s digital rule-setter just as governments everywhere move toward more state control over technology. Global companies are being pushed to treat geopolitical risk, data sovereignty, and AI governance as core strategic issues.

Mistral launches text-to-speech model

Mistral has expanded its Voxtral family with a text-to-speech system aimed at enterprise voice applications. The company is positioning the open-weights model as a flexible alternative for organizations that want more control over deployment, cost, and customization.

UK Parliament opens workforce inquiry on AI

A UK Parliament committee is examining how AI is changing business and work, with a focus on both economic opportunity and labour disruption. The inquiry is seeking evidence on government priorities as adoption expands across the economy.

Windows 11 tightens kernel trust for older drivers

Microsoft is changing Windows 11 kernel policy so new drivers must be signed through the Windows Hardware Compatibility Program. Older trusted drivers will still be allowed in some cases to preserve compatibility during the transition.
