Singapore publishes governance and security guidance for agentic Artificial Intelligence

Singapore has released new guidance on how organizations should govern and secure agentic Artificial Intelligence systems as they move into enterprise use. The recommendations focus on risk assessment, human accountability, technical safeguards, and clearer responsibilities for end users.

Singapore’s Infocomm Media Development Authority (IMDA) launched a non-binding Model Artificial Intelligence Governance Framework for Agentic Artificial Intelligence in January 2026, following a discussion paper from the Cyber Security Agency titled Securing Agentic Artificial Intelligence. Together, the documents set out an operational roadmap for organizations dealing with the governance and security challenges created by agentic systems. The guidance defines agentic Artificial Intelligence as systems that can plan across multiple steps, take actions, and interact with external systems or other agents to achieve user-defined goals.

The guidance highlights that agentic Artificial Intelligence combines traditional software risks with large language model-specific weaknesses, and warns that these risks intensify because agents can act autonomously, plan tasks, and use tools. Risks described include hallucinated plans, misuse or invention of tools through prompt or code injection, biased tool calls affecting external systems and data, and vulnerabilities in emerging agent communication protocols. The documents also warn that errors in one agent can cascade through multi-agent systems, that parallel agents may compete or coordinate in unintended ways, and that these failures can lead to unauthorized actions, unfair outcomes, data breaches, or disruption of connected systems.

The framework organizes its recommendations into four broad areas. First, organizations should assess whether a use case is appropriate by examining the deployment domain, access to sensitive data and external systems, reversibility of actions, level of autonomy, and task complexity. It recommends bounding risk early by limiting tools and data to the minimum necessary, using access controls, and establishing agent identity management so that each agent has a traceable identity linked to a human accountable party. Second, it stresses that organizations deploying agents, and the humans overseeing them, remain responsible for agents’ actions, with clear allocation of responsibilities and human approval at significant checkpoints, especially for high-stakes or irreversible decisions.
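The framework itself does not prescribe an implementation, but the agent identity recommendation above can be illustrated with a minimal sketch: each agent receives a traceable identity tied to an accountable human, and tool access is limited to an explicit whitelist. All names here (`AgentIdentity`, `AgentRegistry`, the agent and tool identifiers) are hypothetical, not taken from the guidance.

```python
from dataclasses import dataclass, field

# Hypothetical illustration: an agent identity pairs a traceable ID with an
# accountable human and the minimum set of tools the agent may invoke.
@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str
    accountable_owner: str                      # human responsible for this agent's actions
    allowed_tools: frozenset = field(default_factory=frozenset)

class AgentRegistry:
    """Tracks registered agents and enforces least-privilege tool access."""

    def __init__(self):
        self._agents = {}

    def register(self, identity: AgentIdentity) -> None:
        self._agents[identity.agent_id] = identity

    def authorize(self, agent_id: str, tool: str) -> bool:
        # Deny by default: unknown agents and non-whitelisted tools are refused.
        identity = self._agents.get(agent_id)
        return identity is not None and tool in identity.allowed_tools

registry = AgentRegistry()
registry.register(AgentIdentity(
    agent_id="invoice-agent-01",
    accountable_owner="finance-ops@example.com",
    allowed_tools=frozenset({"read_invoice", "draft_email"}),
))

print(registry.authorize("invoice-agent-01", "read_invoice"))   # True: whitelisted
print(registry.authorize("invoice-agent-01", "issue_payment"))  # False: outside least privilege
print(registry.authorize("unknown-agent", "read_invoice"))      # False: no traceable identity
```

The deny-by-default check mirrors the framework's point that risk is bounded early by granting each agent only the tools and data it strictly needs.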

Third, the framework calls for technical controls across the lifecycle. During design and development, it points to least-privilege access, secure sandboxed environments, whitelisted servers, standardized communication protocols, and prompts that require agents to confirm their understanding, request clarification, and log plans and reasoning for review. Before deployment, organizations should test task execution accuracy, policy compliance, correct tool usage, and robustness in realistic environments for both single-agent and multi-agent systems. During and after deployment, the guidance recommends gradual rollout, continuous monitoring, real-time intervention, incident review, debugging, and regular auditing, supported by clear logging, alert thresholds, and risk-based interventions.
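One of the lifecycle controls above, human approval at significant checkpoints combined with logging of plans for later review, can be sketched as follows. This is an assumption-laden illustration: the action names, the `IRREVERSIBLE_ACTIONS` set, and the `approver` callback are invented for the example and do not appear in the framework.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("agent-audit")

# Hypothetical: actions tagged as irreversible are held for human approval;
# every plan and outcome is logged to support review and auditing.
IRREVERSIBLE_ACTIONS = {"delete_records", "send_payment"}

def execute_action(agent_id, action, params, approver=None):
    # Log the agent's plan before anything runs, so reviewers can audit it.
    log.info("plan: agent=%s action=%s params=%s", agent_id, action, params)
    if action in IRREVERSIBLE_ACTIONS:
        # High-stakes checkpoint: block unless a human approver signs off.
        if approver is None or not approver(agent_id, action, params):
            log.warning("blocked: %s requires human approval", action)
            return "blocked"
    log.info("executed: agent=%s action=%s", agent_id, action)
    return "executed"

# A reversible action runs without a checkpoint; an irreversible one is held
# until a human approver confirms it.
print(execute_action("agent-01", "draft_report", {"topic": "Q3"}))
print(execute_action("agent-01", "send_payment", {"amount": 500}))
print(execute_action("agent-01", "send_payment", {"amount": 500},
                     approver=lambda agent, act, p: True))
```

Routing only irreversible actions through the approver reflects the framework's emphasis on human sign-off for high-stakes or irreversible decisions, while the log lines provide the audit trail it recommends.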

Fourth, the framework addresses end-user responsibility. For users who interact directly with agents, it emphasizes transparency around capabilities, data access, and a human point of contact. For users integrating agents into workflows, it adds education and training on oversight practices, failure modes, and the effect of automation on tradecraft as agents take over entry-level tasks. The guidance is positioned as a living document that will evolve over time, and IMDA has requested feedback on the framework and implementation case studies.


OpenAI prepares GPT-5.5 launch

OpenAI is reportedly preparing GPT-5.5, its first fully retrained base model since GPT-4.5, as it pushes harder into enterprise software. The model is expected to bring native multimodal capabilities and stronger support for agent-based workflows.

Meta expands AWS Graviton deal for agentic Artificial Intelligence

Meta is expanding its partnership with AWS by deploying Graviton processors at scale for its next generation of Artificial Intelligence systems. The move highlights growing demand for CPU-heavy agentic Artificial Intelligence workloads alongside continued reliance on GPUs for model training.

Why DeepSeek v4 matters

DeepSeek’s new open-source flagship pairs stronger performance with a much longer context window and early support for domestic Chinese chips. The release signals progress in open models, memory efficiency, and China’s push to reduce reliance on Nvidia.

OpenAI launches workspace agents in ChatGPT

OpenAI has introduced workspace agents in ChatGPT, giving teams shared Codex-powered agents that can handle multi-step work across business tools and Slack. The feature is aimed at recurring organizational workflows with admin controls, approvals, and enterprise monitoring.
