COSO issues generative AI internal control guidance

COSO has released new guidance on internal controls for generative artificial intelligence (AI), framing it as an extension of its longstanding control framework rather than a standalone rulebook. The guidance centers on governance, continuous monitoring, and use-case-based oversight as organizations expand generative AI in business and financial reporting.

In February, the Committee of Sponsoring Organizations of the Treadway Commission released a new publication, “Achieving Effective Internal Control Over Generative AI,” that builds on COSO’s 2013 “Internal Control-Integrated Framework.” The guidance presents a practical approach to managing the evolving risks and internal controls tied to generative AI, taking a capability-based view of what the technology can do across data extraction and ingestion, automated transaction processing and reconciliation, workflow orchestration, insight generation, monitoring, and human-AI collaboration.

The guidance aligns risk identification and control expectations with the 17 principles embedded in the five components of COSO’s integrated framework. A central feature is a six-step implementation roadmap: govern, inventory, assess, design, implement, and monitor. That roadmap is positioned as a tool for management functions such as compliance, risk, and internal audit, as well as for boards and audit committees, to build and oversee a structured AI governance program. It can also help external auditors assess how AI has been implemented within an organization.

COSO’s approach highlights risks including rapid change, limited explainability, and uncontrolled adoption such as shadow AI. It stresses the need for ongoing inventories of use cases, clear ownership and escalation paths, and continuous monitoring of control performance. Adopting generative AI requires a shift from deterministic, rule-based systems to probabilistic models with variable outcomes, and from point-in-time assurance to continuous monitoring of model performance and risk. Monitoring is especially important in financial reporting, where set-and-forget approaches are not sufficient.

Effective oversight should focus on key performance indicators, such as transaction volume, transaction size, and override percentages, to identify model drift and other performance issues. Organizations are also advised to evaluate accuracy and reliability and to examine the root causes of deficiencies, including prompt design, retrieval issues, and vendor changes. The guidance recommends cross-functional governance with defined roles, accountability, controls, and escalation protocols, plus a use-case inventory mapped to business processes, assertions, and key controls.
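To make the KPI-based monitoring idea concrete, here is a minimal sketch of how an override-rate indicator could be tracked against a historical baseline to flag possible model drift for review. The function names, thresholds, and figures are illustrative assumptions, not part of the COSO guidance.

```python
from statistics import mean

def override_rate(overrides: int, transactions: int) -> float:
    """Fraction of AI-produced outputs that humans overrode in a period."""
    return overrides / transactions if transactions else 0.0

def flag_drift(baseline_rates: list[float], current_rate: float,
               tolerance: float = 0.05) -> bool:
    """Flag for review when the current override rate departs from the
    historical baseline mean by more than the absolute tolerance.
    The 5% tolerance here is an arbitrary illustrative choice."""
    return abs(current_rate - mean(baseline_rates)) > tolerance

# Hypothetical example: prior-period override rates vs. the current period.
history = [0.02, 0.03, 0.025, 0.02]
today = override_rate(overrides=18, transactions=200)  # 0.09
print(flag_drift(history, today))  # prints True: rate jumped well past baseline
```

In practice an organization would track several such indicators (volume, size, overrides) per use case, with escalation protocols defining who investigates a flagged control and how the root cause is evidenced.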

Additional recommendations include tailoring the level of human involvement to the risk profile of each use case and to the degree to which generative AI outputs influence decisions or automated processes. COSO also calls for control building blocks such as access restrictions, input and retrieval controls, prompt governance, output validation, logging and traceability, and monitoring for drift, anomalies, and unauthorized use. For financially relevant use cases, organizations should map applications to financial reporting processes and ensure that outputs affecting material amounts, disclosures, journal entries, reconciliations, estimates, or related controls receive human oversight and appropriate evidence. Early coordination with internal and external auditors is recommended to define what constitutes sufficient, appropriate evidence for generative AI-enabled control activities and monitoring.

Impact Score: 52

Memory architecture is central to autonomous LLM agents

Memory design, not just model choice, determines whether autonomous agents can sustain context, learn from experience, and stay reliable over time. A practical framework centers on how information is written, managed, and read across multiple memory types.

OpenAI expands cyber model access through trusted program

OpenAI has introduced GPT-5.4-Cyber as a restricted model for cybersecurity professionals, widening access through its Trusted Access for Cyber program. The release highlights both the defensive value and the misuse risks of more capable AI tools in security work.

Chinese tech firms and Fei-Fei Li push world models forward

Chinese tech companies and Fei-Fei Li’s World Labs are accelerating work on world models, a field focused on helping AI learn from and interact with physical reality. Alibaba’s new Happy Oyster system targets real-time virtual world creation with more continuous user control.

UK launches Sovereign AI backing for startups

The UK government has unveiled Sovereign AI, a state-backed initiative aimed at helping domestic startups build, scale and stay in Britain. The first round of support includes an equity investment in Callosum and supercomputing access for six additional companies working across drug discovery, infrastructure and national security.
