More AI-resilient biosecurity with the Paraphrase Project

Microsoft researcher Eric Horvitz and collaborators discuss the Paraphrase Project, a red-teaming effort that exposed and helped secure a biosecurity vulnerability in AI-driven protein design. The episode frames the work as a practical model for mitigating dual-use risks in AI applications.

Microsoft’s Eric Horvitz convenes a discussion with Bruce Wittmann, Tessa Alexanian, and James Diggans about the Paraphrase Project, a coordinated red-teaming effort that targeted vulnerabilities arising from the use of AI in protein design. The guests describe how the project identified a specific biosecurity weakness and took steps to secure it, illustrating a hands-on approach to the risks that can accompany powerful computational tools.

The Paraphrase Project is presented as an operational example of responsible testing and mitigation for dual-use technologies. By intentionally probing systems used for protein engineering, the team revealed where misuse could occur and implemented measures to reduce those risks. The discussion emphasizes the value of red-teaming as part of an overall security posture when deploying AI in sensitive scientific domains.

Speakers link the project’s outcomes to broader efforts to make biological research more resilient to misuse. The episode frames the Paraphrase Project not only as a single intervention but also as a replicable model that other organizations can use to evaluate and harden their own AI-driven workflows. The conversation appears as content from Microsoft Research, signaling the institution’s engagement in cross-disciplinary work on technology safety and biosecurity.

Impact Score: 65

ChatGPT Images adds thinking capability

OpenAI has upgraded ChatGPT Images with a new thinking mode that can search the internet, generate multiple images, and verify outputs before finalizing results. The update also improves text rendering, dense compositions, multilingual support, and style flexibility.

YouTube expands deepfake detection to Hollywood talent

YouTube is opening its likeness protection system to actors, athletes, musicians, and creators beyond its own platform. The move gives public figures a way to flag and request removal of damaging AI-generated replicas while YouTube weighs broader rules and possible future monetization.

Adobe plans outcome-based pricing for AI agents

Adobe is positioning its AI agents around performance-based pricing, charging only when the software completes useful work. The approach points to a more results-oriented model for selling generative AI tools to business customers.

Tech firms commit billions to AI infrastructure

Amazon, OpenAI, Nvidia, Meta, Google, and others are signing increasingly large cloud, chip, and data center agreements as demand for AI infrastructure accelerates. The latest wave of deals spans investments, compute purchases, chip supply agreements, and data center buildouts.
