Multi-turn attacks expose weaknesses in open-weight large language models

A Cisco report published 6 November 2025 found open-weight large language models are vulnerable to multi-turn adversarial attacks. The research recorded attack success rates around 90 percent against the tested models.

According to the report summary listed by Infosecurity Magazine, multi-turn attack sequences against the open-weight models tested achieved success rates of about 90 percent, exposing significant weaknesses in those model configurations.

The findings focus on open-weight models, highlighting that adversaries can repeatedly interact with a model across multiple turns to influence outputs in undesired ways. The reported 90 percent success rate underscores a high likelihood that these attack patterns can bypass existing controls when applied to the models tested in the study.

The coverage appears as part of Infosecurity Magazine’s news roundup of security developments on 6 November 2025. The report from Cisco adds to ongoing industry concerns about the robustness of large language models and the need for defenders and developers to reassess protections for open-weight deployments in light of demonstrated multi-turn adversarial effectiveness.

Impact Score: 68

UK MPs open inquiry into artificial intelligence and edtech in education

UK MPs have launched a cross-party inquiry into how artificial intelligence and education technology are reshaping learning across early years, schools, colleges and universities, and how government should balance innovation with safeguards. The education committee will examine opportunities to improve teaching and workload alongside risks around inequality, privacy, safeguarding and assessment.

Most UK firms see artificial intelligence training gap as shadow tool use grows

New research finds that 6 in 10 UK businesses say employees lack comprehensive artificial intelligence training, even as shadow use of unapproved tools becomes widespread and investment surges. Executives warn that without stronger skills, governance and strategy, many organisations risk missing out on expected artificial intelligence returns.

COSO issues internal control roadmap for governing generative artificial intelligence

COSO has released governance guidance that applies its Internal Control-Integrated Framework to generative artificial intelligence, offering audit-ready control structures and implementation tools for organizations. The publication details capability-based risk mapping, aligned controls, and practical templates to help institutions manage emerging technology risks.
