CSO Online’s generative Artificial Intelligence section serves as a focused hub for security leaders tracking how large language models and agentic systems are reshaping both cyber defense and cybercrime. The page curates news, opinion pieces, features, and resources that examine the security implications of generative and agentic Artificial Intelligence across application security, governance, and threat operations. It is aimed at practitioners such as CISOs, security architects, and risk managers who need to understand how these tools change attack surfaces, introduce new vulnerabilities, and create opportunities for improved defense.
Recent coverage emphasizes the rise of agentic Artificial Intelligence and its associated risks. One analysis looks at managing agentic Artificial Intelligence risk using lessons from the OWASP Top 10 for Agentic Artificial Intelligence, highlighting that adoption is accelerating while security practices trail behind. Opinion columns explore how to demystify risk in Artificial Intelligence, outline a generative Artificial Intelligence governance, risk, and compliance approach for supply chain risk, and argue that generative Artificial Intelligence success depends on a network of champions embedded at team level to align experimentation with business results. Another piece introduces the MAESTRO framework, presented as a layered, bank-focused approach for securing next generation generative and agentic Artificial Intelligence systems.
The section also tracks how generative Artificial Intelligence is transforming the threat landscape. Features and news reports describe polymorphic Artificial Intelligence malware and clarify what that term means in practice, document Google researchers detecting the first operational use of large language models in active malware campaigns, and cover a high profile remote code execution flaw in OpenAI’s Codex command line interface that exposed new development environment risks. Other stories examine Anthropic technology reportedly used in automated cyberattacks, prompt injection techniques that target tools like Microsoft 365 Copilot diagrams and potentially leak corporate emails, and research that tricks ChatGPT into prompt injecting itself. Further articles analyze an Artificial Intelligence native successor to CobaltStrike called Villager, zero click indirect prompt injection methods marketed as difficult to detect, and the risks of “vibe coding” when developers over rely on tools such as Copilot and GhostWriter. Complementing the journalism, whitepapers from vendors like MuleSoft and Salesforce discuss foundations for agentic enterprises, outline 3 critical agentic Artificial Intelligence security risks and how to prevent them, and recommend data security best practices in the age of Artificial Intelligence.
