Why current benchmarks fail security operations teams using large language models

SentinelOne researchers argue that popular large language model benchmarks in cybersecurity fail to reflect real security operations workflows, overindexing on multiple-choice tasks and on models grading model output while ignoring the operational outcomes that matter to defenders.

The article argues that existing benchmarks for large language models in cybersecurity, including those from Microsoft and Meta, do not measure what actually matters to security operations teams. Early benchmarks in 2023 focused on multiple-choice exams over clean text, which produced tidy, reproducible metrics but quickly became saturated as models improved, making scores less meaningful. As the industry boomed, benchmarks evolved into marketing tools, with vendors touting gains like +3.7 on SomeBench-v2, SOTA on ObscureQA-XL, or 99th-percentile results on obscure exams to impress buyers and investors, even though such numbers say little about real-world defensive value. The authors emphasize that, despite a flood of benchmarks, defenders still lack reliable ways to judge whether a system is safe and effective enough to trust with critical operations.

The analysis reviews four influential benchmarks: Microsoft’s ExCyTIn-Bench, Meta’s CyberSOCEval and CyberSecEval 3, and Rochester Institute of Technology’s CTIBench.

ExCyTIn-Bench drops agents into a MySQL instance that mirrors a realistic Azure tenant, with 57 Sentinel-style tables, 8 distinct multi-stage attacks, and a unified log stream spanning 44 days of activity. Its headline result, an average reward of 0.249 and a best of 0.368 across evaluated models, underscores that current models struggle with multi-hop investigations over realistic, heterogeneous logs.

CyberSOCEval reframes malware analysis and threat intelligence reasoning as multi-answer multiple-choice questions and finds that models perform well above random but are far from solving the tasks: malware analysis accuracy lands in the teens to high twenties percent against a random baseline of roughly 0.63%, and threat intelligence reasoning accuracy falls in the ~43 to 53% band versus ~1.7% random.

CTIBench, grounded in cyber threat intelligence workflows, similarly turns practical tasks such as mapping vulnerabilities to weaknesses and assigning severities into isolated questions. Its CVSS scoring task reveals systematic misjudgments of severity, showing that model confidence does not translate into calibrated risk assessment.
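The random baselines quoted for CyberSOCEval look strikingly low because the multi-answer format explodes the guess space: a random guess has to pick exactly the right subset of options, not just one of four letters. The short sketch below works through that arithmetic; the option counts are illustrative assumptions, not the benchmark's published question format.

```python
def random_exact_match(n_options: int, allow_empty: bool = False) -> float:
    """Probability that a uniformly random guess over the answer options
    exactly matches the correct answer set.

    In a multi-answer format every subset of the options is a possible
    answer, so a uniform random guess hits the single correct subset
    with probability 1 / (number of subsets).
    """
    subsets = 2 ** n_options
    if not allow_empty:
        subsets -= 1  # require the guess to select at least one option
    return 1 / subsets


# Illustrative option counts; the real benchmark's questions may vary.
for n in (5, 6, 7, 8):
    print(f"{n} options -> random exact-match accuracy ~ {random_exact_match(n):.2%}")
```

Under those assumptions, baselines in the roughly 0.4 to 1.7% range correspond to about six to eight options per question, which is why scores in the teens to low fifties can sit far above chance and still leave the tasks unsolved.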
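CTIBench's CVSS task is effectively a calibration test: the model must turn a vulnerability description into a numeric base score, and systematic bias shows up directly in the errors. A minimal scoring sketch along these lines, with hypothetical helper names and made-up example pairs rather than CTIBench's actual harness, separates raw agreement from directional bias.

```python
def severity_band(score: float) -> str:
    """Map a CVSS v3.x base score to its qualitative severity band."""
    if score == 0.0:
        return "None"
    if score < 4.0:
        return "Low"
    if score < 7.0:
        return "Medium"
    if score < 9.0:
        return "High"
    return "Critical"


def score_cvss_predictions(pairs: list[tuple[float, float]]) -> dict:
    """Compare predicted vs. reference CVSS base scores.

    Reports mean absolute deviation (how far off the numbers are),
    mean signed error (whether severity is systematically over- or
    under-rated), and band agreement (whether the qualitative label
    still comes out right).
    """
    n = len(pairs)
    abs_err = [abs(pred - ref) for pred, ref in pairs]
    signed_err = [pred - ref for pred, ref in pairs]
    band_hits = [severity_band(pred) == severity_band(ref) for pred, ref in pairs]
    return {
        "mean_abs_dev": sum(abs_err) / n,
        "mean_signed_err": sum(signed_err) / n,
        "band_agreement": sum(band_hits) / n,
    }


# Hypothetical (model_prediction, reference) base-score pairs for illustration.
pairs = [(9.8, 8.8), (5.0, 7.5), (6.5, 6.1), (7.2, 9.1)]
print(score_cvss_predictions(pairs))
```

A mean signed error that sits persistently above or below zero is the kind of systematic severity misjudgment the authors describe; band agreement alone can look respectable while the numeric scores drift in one direction.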

Across these benchmarks, the authors identify recurring methodological problems that limit their relevance to real security operations. All four largely treat security as a series of static questions rather than as continuous, collaborative workflows in which teams triage queues of alerts, pivot between related incidents, and make time-pressured judgment calls with incomplete telemetry. Multiple-choice and static question-answering formats assume the right question and evidence have already been selected, quietly shifting the model’s role from investigation to summarization, while failing to penalize costly mistakes or reward good investigative strategy. Statistical hygiene is uneven, with many results based on single-seed, temperature-zero runs, limited reporting of variance, and little systematic analysis of training-data contamination. Every benchmark relies on large language models somewhere in the evaluation loop, often from the same vendor being scored, which makes the setup susceptible to overfitting and conflicts of interest. The article concludes that today’s benchmarks measure narrow task performance in controlled settings, not operational outcomes such as time-to-detect, time-to-contain, or reduced business risk, and therefore cannot yet tell security teams whether deploying large language model-driven systems will meaningfully improve their defensive posture.
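The statistical-hygiene point above is also the easiest to address mechanically. A minimal sketch of the kind of multi-run reporting the authors find missing could look like the following, where run_benchmark is a hypothetical stand-in for whatever evaluation harness is being scored.

```python
import random
import statistics
from typing import Callable


def evaluate_with_variance(
    run_benchmark: Callable[[int, float], float],
    seeds: list[int],
    temperature: float = 0.7,
) -> dict:
    """Run the same benchmark under several seeds and report the spread.

    A single temperature-zero run yields one point estimate; repeating the
    run across seeds shows how much of a reported gain is just noise.
    """
    scores = [run_benchmark(seed, temperature) for seed in seeds]
    mean = statistics.mean(scores)
    stdev = statistics.stdev(scores) if len(scores) > 1 else 0.0
    half_width = 1.96 * stdev / len(scores) ** 0.5  # rough 95% interval on the mean
    return {
        "runs": len(scores),
        "mean": round(mean, 4),
        "stdev": round(stdev, 4),
        "ci95": (round(mean - half_width, 4), round(mean + half_width, 4)),
    }


# Stubbed harness for illustration: jitters around an average reward of 0.25.
def fake_benchmark(seed: int, temperature: float) -> float:
    rng = random.Random(seed)
    return 0.25 + rng.uniform(-0.03, 0.03)


print(evaluate_with_variance(fake_benchmark, seeds=[0, 1, 2, 3, 4]))
```

Reporting an interval like this is cheap, and it is the minimum needed to distinguish a claimed +3.7-point improvement from run-to-run noise.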
