Generative AI security coverage on CSO Online

CSO Online’s generative AI section serves as a focused hub for security leaders tracking how large language models and agentic systems are reshaping both cyber defense and cybercrime. The page curates news, opinion pieces, features, and resources that examine the security implications of generative and agentic AI across application security, governance, and threat operations. It is aimed at practitioners, including CISOs, security architects, and risk managers, who need to understand how these tools change attack surfaces, introduce new vulnerabilities, and create opportunities for stronger defense.

Recent coverage emphasizes the rise of agentic AI and its associated risks. One analysis applies lessons from the OWASP Top 10 for Agentic AI to managing agentic AI risk, noting that adoption is accelerating while security practices trail behind. Opinion columns explore how to demystify AI risk, outline a generative AI governance, risk, and compliance approach to supply chain risk, and argue that generative AI success depends on a network of champions embedded at the team level to align experimentation with business results. Another piece introduces the MAESTRO framework, presented as a layered, bank-focused approach to securing next-generation generative and agentic AI systems.

The section also tracks how generative AI is transforming the threat landscape. Features and news reports explain what polymorphic AI malware means in practice, document Google researchers detecting the first operational use of large language models in active malware campaigns, and cover a high-profile remote code execution flaw in OpenAI’s Codex command-line interface that exposed new development-environment risks. Other stories examine Anthropic technology reportedly used in automated cyberattacks, prompt injection techniques that target tools such as Microsoft 365 Copilot diagrams and can leak corporate emails, and research that tricks ChatGPT into prompt-injecting itself. Further articles analyze Villager, an AI-native successor to Cobalt Strike; zero-click indirect prompt injection methods marketed as difficult to detect; and the risks of “vibe coding” when developers over-rely on tools such as Copilot and GhostWriter. Complementing the journalism, whitepapers from vendors such as MuleSoft and Salesforce discuss foundations for agentic enterprises, outline 3 critical agentic AI security risks and how to prevent them, and recommend data security best practices in the age of AI.

Google Vids opens free video generation to all Google users

Google has made Google Vids available to anyone with a Google account, adding free access to video generation with its latest models. The move expands Google’s end-to-end video workflow and increases pressure on rivals that charge for similar tools.

Court warns against chatbot legal advice in Heppner case

A federal court found that chats with a publicly available generative AI tool were not protected by attorney-client privilege or the work-product doctrine. The ruling highlights litigation risks when executives or employees use chatbots for legal guidance without lawyer supervision.

Newsom orders California to weigh AI harms in contract rules

Gov. Gavin Newsom has signed an executive order directing California agencies to account for potential AI harms in state contracting while expanding approved use of generative tools across government. The move follows a dispute involving Anthropic and reflects a broader split between California and the Trump administration on AI oversight.
