Two new works on prompt injection and language model security are highlighted. The first, published October 31, 2025 on the Meta AI blog and shared by Meta AI security researcher Mick Ayzenberg, proposes an “Agents Rule of Two.” The rule holds that, until robust detection of prompt injection exists, an agent session should satisfy no more than two of the following three properties in order to avoid the highest-impact consequences of a successful attack: (A) it processes untrustworthy inputs, (B) it has access to sensitive systems or private data, and (C) it can change state or communicate externally. If all three are required and starting a fresh session is not an option, the agent should not operate autonomously; it needs human supervision or some other reliable validation. The post includes a Venn diagram illustrating the danger zone where all three properties overlap and frames the rule as practical guidance for system design.
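As a concrete illustration, here is a minimal sketch of how the rule might be enforced in code. The `AgentSession` class, its flag names, and the `may_run_autonomously()` helper are assumptions invented for this example; the Meta post states the rule as design guidance, not as an API.

```python
# Hypothetical sketch of the "Agents Rule of Two": track which of the
# three risky properties a session has, and refuse autonomous operation
# once all three combine. All names here are invented for illustration.
from dataclasses import dataclass

@dataclass
class AgentSession:
    processes_untrusted_input: bool = False       # (A) e.g. reads web pages, emails
    accesses_sensitive_data: bool = False         # (B) e.g. private files, prod systems
    changes_state_or_communicates: bool = False   # (C) e.g. writes data, sends requests

    def may_run_autonomously(self) -> bool:
        """Allow autonomy only while at most two of the three properties hold."""
        risky = sum([
            self.processes_untrusted_input,
            self.accesses_sensitive_data,
            self.changes_state_or_communicates,
        ])
        return risky <= 2

session = AgentSession(
    processes_untrusted_input=True,
    accesses_sensitive_data=True,
    changes_state_or_communicates=True,
)
if not session.may_run_autonomously():
    # All three properties overlap: require human approval, or start a
    # fresh session with fewer capabilities before proceeding.
    print("Human-in-the-loop approval required for this action.")
```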
The second work, a multi-author paper dated October 10, 2025 on arXiv, has contributors from organizations including OpenAI, Anthropic, and Google DeepMind. It evaluates 12 published defenses against prompt injection and jailbreaks using extensive adaptive attacks. The authors report that by tuning and scaling general optimization techniques they bypassed all 12 defenses, achieving attack success rates above 90% for most, even though many of those defenses had originally reported near-zero rates. A human red-teaming setting, an online competition with 500 participants and a prize fund, reached a 100% success rate. The paper argues that evaluating defenses against static sets of example attacks is insufficient and that adaptive evaluation reveals far greater vulnerability.
The arXiv paper describes three families of automated adaptive attacks: gradient-based methods, reinforcement-learning methods that interact with the defended system, and search-based methods that use language models to generate candidate attacks and judge the results, iteratively refining the most promising ones (a minimal sketch of this last approach appears at the end of this section). The authors urge defense designers to release simple defenses amenable to human analysis and to adopt higher standards of evaluation. The author of the blog post finds the paper a forceful reminder of how much work remains, endorsing the Agents Rule of Two as pragmatic guidance for building more secure language-model agents today while remaining skeptical that reliable defenses will arrive soon.
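To make the search-based technique more concrete, here is a minimal, hypothetical sketch of such a refinement loop. Every function in it, `generate_variants()`, `run_defended_agent()`, and `judge_score()`, is a placeholder standing in for calls to real models and systems; the paper does not publish this code.

```python
# Hypothetical sketch of a search-based adaptive attack: propose candidate
# injections, score them with a language-model judge, and iteratively
# refine the best ones. All functions below are illustrative placeholders.
import random

def generate_variants(prompt: str, n: int = 8) -> list[str]:
    """Placeholder: ask an attacker LLM for n rewrites of the injection."""
    return [f"{prompt} [variant {i}]" for i in range(n)]

def run_defended_agent(injection: str) -> str:
    """Placeholder: submit the injection to the defended system."""
    return f"agent output for: {injection}"

def judge_score(output: str) -> float:
    """Placeholder: an LLM judge rates how fully the attack succeeded (0-1)."""
    return random.random()

def search_attack(seed: str, rounds: int = 10, keep: int = 3) -> str:
    """Iteratively refine candidates, keeping the top scorers each round."""
    pool = [seed]
    for _ in range(rounds):
        candidates = [v for p in pool for v in generate_variants(p)]
        scored = [(judge_score(run_defended_agent(c)), c) for c in candidates]
        scored.sort(reverse=True)
        if scored[0][0] >= 0.99:        # judge reports a complete success
            return scored[0][1]
        pool = [c for _, c in scored[:keep]]  # refine the best candidates
    return pool[0]
```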
