Sysdig has introduced runtime security for Artificial Intelligence coding agents, targeting organisations that use autonomous development tools. The service is designed to give security teams real-time visibility into how coding agents behave across cloud and development environments as businesses roll out tools such as Claude Code, Codex and Gemini. The move reflects rising concern that coding agents often need access to sensitive data, source code and elevated permissions to perform tasks.
Sysdig is positioning the product around the risk that these agents become attractive targets: they may hold credentials and operate inside development environments where malicious activity is difficult to detect without close monitoring. The new detections are intended to flag several categories of activity, including the installation of new Artificial Intelligence coding agents, attempts to open sensitive files, efforts to bypass controls on credential access, and command-line arguments that weaken safeguards, such as those permitting unrestricted file writes. The system is also meant to identify more severe activity inside developer environments, including reverse shells, binary tampering and persistence mechanisms.
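Sysdig has not published the underlying rule logic, but the categories described above can be illustrated with a minimal, hypothetical sketch of rule-based event classification. Everything here is an assumption for illustration: the process names, flag strings and file patterns are invented, not real agent binaries or CLI options, and a production runtime monitor would consume kernel-level events rather than hand-built objects.

```python
import re
from dataclasses import dataclass, field

@dataclass
class ProcessEvent:
    """A simplified runtime event: process name, arguments, and any file it opens."""
    name: str
    args: list = field(default_factory=list)
    opened_file: str = ""

# Hypothetical rules approximating the detection categories described above.
SENSITIVE_PATHS = re.compile(r"(\.ssh/|\.aws/credentials|\.env$)")
UNSAFE_FLAGS = {"--no-sandbox", "--allow-all-writes"}   # illustrative flags, not real options
AGENT_BINARIES = {"claude", "codex", "gemini"}          # illustrative agent process names

def classify(event: ProcessEvent) -> list:
    """Return the detection categories an event triggers."""
    findings = []
    # Category: an agent process opening a sensitive file.
    if event.name in AGENT_BINARIES and SENSITIVE_PATHS.search(event.opened_file):
        findings.append("sensitive-file-access")
    # Category: command-line arguments that weaken safeguards.
    if UNSAFE_FLAGS.intersection(event.args):
        findings.append("weakened-safeguards")
    # Category: severe activity such as a possible reverse shell.
    if event.name in {"nc", "ncat"} and "-e" in event.args:
        findings.append("possible-reverse-shell")
    return findings
```

For example, an agent process launched with a sandbox-disabling flag that then reads cloud credentials would trigger two findings: `classify(ProcessEvent("claude", ["--no-sandbox"], "/home/dev/.aws/credentials"))` returns `["sensitive-file-access", "weakened-safeguards"]`. The point of the sketch is the approach, not the rules themselves: runtime detection matches behaviour as it occurs against known-risky patterns, rather than inspecting software before it runs.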
The launch highlights a broader security challenge as Artificial Intelligence agents move beyond generating text or code snippets and begin taking actions: interacting with local files, calling tools and moving through workflows with significant access. In these settings, security teams want to admit productivity tools into workflows without losing oversight of their behaviour. Runtime monitoring is being presented as one way to close that gap because it focuses on behaviour as it happens, rather than relying only on software reviews or access policies.
Sysdig also framed the release as part of a wider shift in cyber security toward monitoring Artificial Intelligence systems as operational entities rather than treating them only as software features. As coding agents take on more tasks in development pipelines, the boundary between user activity and automated activity becomes harder to distinguish. In environments where an assistant can inspect repositories, modify files, connect to services or trigger scripts, a misconfigured, manipulated or abused agent could produce effects similar to a traditional intrusion while operating through authorised tools.
Sysdig said its detections are designed to reduce false positives and support investigations into incidents involving Artificial Intelligence agent activity. The company said more than 60% of Fortune 500 companies use its technology. Loris Degioanni, founder and CTO of Sysdig, said the security challenge will grow as agents move into more business-critical roles, and argued that organisations should adopt an "assume breach" approach built on runtime visibility and real-time detections.
