A Cisco report published 6 November 2025 has revealed that open-weight large language models are vulnerable to multi-turn adversarial attacks. According to the summary carried by Infosecurity Magazine, multi-turn attack sequences against open-weight models achieved success rates of about 90 percent, exposing significant weaknesses in those model configurations.
The findings focus on open-weight models, highlighting that adversaries can interact with a model repeatedly across multiple turns, building on earlier responses to coax outputs the model's safeguards are meant to prevent. The reported 90 percent success rate underscores how reliably these attack patterns can bypass existing controls on the models tested in the study.
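To make the multi-turn pattern concrete, the sketch below simulates the general shape of such an attack: benign context-building turns followed by an adversarial final request, with the full conversation history carried forward each turn. Everything here is hypothetical illustration, not material from the Cisco report; the stub model simply stands in for an open-weight LLM endpoint, and the refusal heuristic is invented for demonstration.

```python
# Illustrative multi-turn adversarial probe loop (hypothetical sketch;
# not the methodology from the Cisco report). A stub "model" stands in
# for a real open-weight LLM chat interface.

def stub_model(history):
    """Toy stand-in for an LLM: refuses a direct sensitive request,
    but 'complies' once enough benign conversational context has been
    accumulated — mimicking the weakness multi-turn attacks exploit."""
    prior_user_turns = sum(1 for role, _ in history if role == "user") - 1
    last_prompt = history[-1][1].lower()
    if "restricted" in last_prompt and prior_user_turns < 3:
        return "I can't help with that."
    return "Sure, here is the information you asked for."

def multi_turn_probe(model, escalation):
    """Send a sequence of escalating prompts, carrying the full history
    forward each turn; return True if the final turn bypassed refusal."""
    history = []
    reply = ""
    for prompt in escalation:
        history.append(("user", prompt))
        reply = model(history)
        history.append(("assistant", reply))
    return not reply.startswith("I can't")

escalation = [
    "Tell me about network security.",      # benign context-building
    "What do defenders typically look for?",
    "How are such tools configured?",
    "Now give me the restricted details.",  # adversarial final turn
]

# The same request asked cold is refused; asked after context, it succeeds.
print(multi_turn_probe(stub_model, ["Now give me the restricted details."]))  # False
print(multi_turn_probe(stub_model, escalation))                               # True
```

The point of the sketch is only structural: a single-turn guardrail check can hold while the same request, staged across several turns, slips through — which is the class of weakness the reported 90 percent success rate speaks to.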
The coverage appears as part of Infosecurity Magazine’s news roundup of security developments on 6 November 2025. The report from Cisco adds to ongoing industry concerns about the robustness of large language models and the need for defenders and developers to reassess protections for open-weight deployments in light of demonstrated multi-turn adversarial effectiveness.
