New prompt injection papers: Agents Rule of Two and The Attacker Moves Second

Two recent papers examine prompt injection risks and defenses: Meta AI's Agents Rule of Two proposes limiting agent capabilities to reduce high-impact attacks, while a large multi-author arXiv study shows that adaptive attacks can bypass most published jailbreak and prompt injection defenses.

Two new works on prompt injection and language-model security are highlighted. The first, published October 31 on the Meta AI blog and shared by Meta security researcher Mick Ayzenberg, proposes an "Agents Rule of Two." The rule holds that, until robust detection of prompt injection exists, an agent session should satisfy no more than two of three properties if the highest-impact consequences are to be avoided: (A) processing untrustworthy inputs, (B) access to sensitive systems or private data, and (C) the ability to change state or communicate externally. If all three are required without starting a fresh session, the agent should not operate autonomously and needs human supervision or reliable validation. The post includes a Venn diagram showing the danger zone where all three properties overlap and frames the rule as practical guidance for system design.
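
The rule is simple enough to encode as a pre-flight check. Here is a minimal sketch: the `AgentSession` type and its property names are illustrative assumptions, not an API from the Meta post, but the gating logic follows the rule as stated.

```python
from dataclasses import dataclass

# Hypothetical session model; the three flags mirror the blog post's
# properties (A), (B), and (C). The names are illustrative, not Meta's.
@dataclass
class AgentSession:
    processes_untrusted_input: bool   # (A) processes untrustworthy inputs
    accesses_sensitive_data: bool     # (B) sensitive systems or private data
    can_change_state: bool            # (C) changes state / communicates externally

def may_run_autonomously(session: AgentSession) -> bool:
    """Rule of Two: allow autonomous operation only while at most two
    of the three risk properties hold in the same session."""
    risk_count = sum([
        session.processes_untrusted_input,
        session.accesses_sensitive_data,
        session.can_change_state,
    ])
    return risk_count <= 2

# All three properties at once: require human supervision instead.
risky = AgentSession(True, True, True)
# Any two are acceptable under the rule.
safer = AgentSession(True, True, False)
```

In practice the check would run whenever an agent gains a new capability mid-session (e.g. a tool call that reads private data), downgrading to supervised mode rather than refusing outright.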

The second paper, dated October 10, 2025 on arXiv, is a multi-author study with contributors from organizations including OpenAI, Anthropic, and Google DeepMind. It evaluates 12 recently published defenses against prompt injection and jailbreaks under extensive adaptive attacks. By tuning and scaling optimization methods, the authors bypassed all 12 defenses, with attack success rates above 90% for most, even though many of those defenses had previously reported near-zero success rates. A human red-teaming setting achieved 100% success; that effort involved 500 participants in an online competition with a prize fund. The paper argues that static sets of example attacks are insufficient for evaluating defenses and that adaptive evaluation reveals far higher vulnerability.

The arXiv paper describes three families of automated adaptive techniques used by the attackers: gradient-based methods, reinforcement-learning methods that interact with the defended system, and search-based methods that generate candidate attacks and iteratively refine them, using language models as judges. The paper urges defense authors to release simple defenses amenable to human analysis and to adopt higher standards of evaluation. The blog's author finds the paper a forceful reminder of how much work remains, and endorses the Agents Rule of Two as pragmatic guidance for building more secure language-model agents today, while remaining skeptical that reliable defenses will arrive soon.
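
The search-based family can be sketched as a simple propose-score-refine loop. This is a toy illustration of the general technique, not the paper's implementation: `score` stands in for a language-model judge and `mutate` for a language-model rewriter, both supplied by the caller.

```python
import random

def search_attack(score, mutate, seed_prompts, iterations=100):
    """Search-based adaptive attack loop: generate candidate injections,
    score them with a judge, and iteratively refine the best candidates.

    `score(prompt) -> float` and `mutate(prompt) -> str` are placeholders
    for LLM-backed components; this sketch only shows the control flow.
    """
    pool = list(seed_prompts)
    best = max(pool, key=score)
    for _ in range(iterations):
        # Pick a prompt from the pool and ask the rewriter for a variant.
        candidate = mutate(random.choice(pool))
        if score(candidate) > score(best):
            best = candidate
        pool.append(candidate)
    return best
```

The point of the paper is that loops like this, given enough iterations and a reasonable judge, reliably find inputs that static benchmark suites never contain.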


OpenAI and Amazon sign $38 billion deal for AI computing power

OpenAI and Amazon have signed a $38 billion deal that will let the ChatGPT maker run its AI systems on Amazon data centers, using hundreds of thousands of Nvidia chips via Amazon Web Services. The agreement includes an immediate start on AWS compute, with capacity targeted for deployment before the end of 2026 and the option to expand into 2027 and beyond.

Tesla vows yearly breakthroughs in AI chips

Tesla chief Elon Musk said the company will bring a new AI chip design to volume production every 12 months and aims to outproduce rivals in unit volume. Analysts warn that scaling to annual launches and matching established ecosystems will be a substantial operational challenge.
