An easier-to-interpret Artificial Intelligence model, and plans to phase out animal testing

OpenAI has built an experimental large language model that is easier to interpret, while the UK has announced timelines, supported by new non-animal technologies, to end many forms of animal testing.

OpenAI has developed an experimental large language model that is notably easier to understand than typical models. The story frames this as a significant step because most large language models remain black boxes, limiting researchers’ ability to explain why models hallucinate or veer off course, and to judge whether they should be trusted with high-stakes tasks. The piece positions transparency as a route to better diagnostics and safer deployment of Artificial Intelligence in critical applications.

Google DeepMind is advancing in a complementary area by combining language models with embodied agents. The company built SIMA 2, a video-game-playing agent that operates in 3D virtual worlds such as Goat Simulator 3. SIMA 2 is built on top of Gemini, DeepMind’s flagship large language model, which the company says gives the agent a substantial boost in capability. The work is presented as progress toward more general-purpose agents and improved real-world robotics, building on an earlier demo of SIMA that DeepMind showed last year.

The newsletter also highlights a major policy shift in the United Kingdom, where the science minister announced an ambitious plan to phase out animal testing. Specific milestones include ending tests for potential skin irritants by the end of next year, ending tests of Botox strength on mice by 2027, and reducing drug tests in dogs and nonhuman primates by 2030. The announcement is framed as timely because recent advances in technologies that model the human body offer alternative ways to test potential therapies without animals. The change is described as welcome news for both activists and researchers opposed to animal experimentation.

Impact Score: 70

Tech firms commit billions to Artificial Intelligence infrastructure

Amazon, OpenAI, Nvidia, Meta, Google and others are signing increasingly large cloud, chip and data center agreements as demand for Artificial Intelligence infrastructure accelerates. The latest wave of deals spans investments, compute purchases, chip supply agreements and data center buildouts.

JEDEC outlines LPDDR6 expansion for data centers

JEDEC has previewed planned updates to LPDDR6 aimed at pushing the memory standard beyond mobile devices and into selected data center and accelerated computing use cases. The roadmap includes higher-capacity packaging options, flexible metadata support, 512 GB densities, and a new SOCAMM2 module standard.

TSMC debuts A13 process technology

TSMC has introduced its A13 process at its 2026 North America Technology Symposium as a tighter version of A14 aimed at next-generation Artificial Intelligence, high performance computing, and mobile designs. The company positions the node as a more compact and efficient option with backward-compatible design rules for faster migration.

Google unveils eighth-generation tensor processing units

Google introduced its eighth generation of custom tensor processing units with separate designs for training and inference. The new TPU 8t and TPU 8i are aimed at large-scale model training, serving, and agentic workloads.
