Multi-turn attacks expose weaknesses in open-weight large language models

A Cisco report published 6 November 2025 found that open-weight large language models are vulnerable to multi-turn adversarial attacks. According to the report summary listed by Infosecurity Magazine, multi-turn attack sequences against open-weight models achieved success rates of about 90 percent, exposing significant weaknesses in those model configurations.

The findings focus on open-weight models, highlighting that adversaries can repeatedly interact with a model across multiple turns to influence outputs in undesired ways. The reported 90 percent success rate underscores a high likelihood that these attack patterns can bypass existing controls when applied to the models tested in the study.
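The multi-turn pattern described above can be illustrated as a simple loop: keep the full conversation history, send escalating prompts turn by turn, and stop when the model's refusal behavior gives way. The sketch below is hypothetical; `query_model` is a stand-in stub for any chat-completion interface and does not reflect Cisco's actual test harness or prompts.

```python
# Minimal hypothetical sketch of a multi-turn probing loop.
# `query_model` is a stub standing in for a real chat-completion API.

def query_model(history):
    # Stub: a real harness would call an open-weight model here.
    # This toy model "refuses" until the conversation grows long enough,
    # mimicking guardrails that erode across turns.
    return "REFUSED" if len(history) < 3 else "COMPLIED"

def multi_turn_attack(prompts):
    """Feed prompts turn by turn with accumulated history; report whether
    the model eventually stopped refusing, and after how many turns."""
    history = []
    for prompt in prompts:
        history.append({"role": "user", "content": prompt})
        reply = query_model(history)
        history.append({"role": "assistant", "content": reply})
        if reply != "REFUSED":
            return True, len(history) // 2  # success flag, turn count
    return False, len(prompts)

success, turns = multi_turn_attack(["benign setup", "reframe", "restate goal"])
```

The key design point is that each turn carries the entire prior exchange, so earlier benign turns shift the context in which later requests are evaluated, which is what distinguishes multi-turn attacks from single-shot jailbreaks.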

The coverage appears as part of Infosecurity Magazine’s news roundup of security developments on 6 November 2025. The report from Cisco adds to ongoing industry concerns about the robustness of large language models and the need for defenders and developers to reassess protections for open-weight deployments in light of demonstrated multi-turn adversarial effectiveness.

Impact Score: 68

Tech firms commit billions to Artificial Intelligence infrastructure

Amazon, OpenAI, Nvidia, Meta, Google and others are signing increasingly large cloud, chip and data center agreements as demand for Artificial Intelligence infrastructure accelerates. The latest wave of deals spans investments, compute purchases, chip supply agreements and data center buildouts.

JEDEC outlines LPDDR6 expansion for data centers

JEDEC has previewed planned updates to LPDDR6 aimed at pushing the memory standard beyond mobile devices and into selected data center and accelerated computing use cases. The roadmap includes higher-capacity packaging options, flexible metadata support, 512 GB densities, and a new SOCAMM2 module standard.

TSMC debuts A13 process technology

TSMC has introduced its A13 process at its 2026 North America Technology Symposium as a tighter version of A14 aimed at next-generation Artificial Intelligence, high performance computing, and mobile designs. The company positions the node as a more compact and efficient option with backward-compatible design rules for faster migration.

Google unveils eighth-generation tensor processing units

Google introduced its eighth generation of custom tensor processing units with separate designs for training and inference. The new TPU 8t and TPU 8i are aimed at large-scale model training, serving, and agentic workloads.
