COSO issues guidance on generative artificial intelligence risk management

COSO has released 30 pages of guidance on aligning its internal control framework with the emerging risks of generative artificial intelligence, urging companies to build governance structures that can keep pace with fast-evolving technology and use cases.

COSO has released new guidance to help businesses manage the risks of generative artificial intelligence, warning that the technology is entering corporate operations “far faster than traditional governance models anticipated” and creating a range of risks that demand early attention. The guidance, 30 pages of material explaining how to map COSO’s five components of internal control to generative artificial intelligence, is intended for compliance, audit, and governance professionals who increasingly see artificial intelligence as the lens through which broader risk issues are evaluated. The objective is to help organizations establish robust, durable artificial intelligence governance structures now, so that adoption of the technology is channeled along controlled, beneficial paths rather than veering into harmful or uncontrolled uses.

To ground governance in actual business activity, the document introduces eight capability types for generative artificial intelligence, such as data extraction and ingestion, and monitoring and continuous review. Framing artificial intelligence in terms of these capability types is meant to help governance and internal control teams think about what the technology does across multiple tools and vendors, instead of focusing on specific systems. The guidance connects these capability types to familiar risk themes, including data quality, hallucinations, explainability, security and privacy, bias and fairness, accountability, and vendor and third-party exposure. For instance, in forecasting and insight generation, weak controls over training data can introduce bias into product recommendations, inviting consumer litigation or regulatory discrimination probes, while poor explainability controls can leave companies unable to understand or defend how an artificial intelligence system produced biased outcomes.

The guidance then walks through the 17 principles in COSO’s 2013 internal control framework and illustrates how each can be extended to artificial intelligence. Under principle 8 on fraud risk assessment, generative artificial intelligence is described as introducing new fraud mechanisms such as deepfakes, synthetic records, and model manipulation via crafted prompts, which can be amplified by artificial intelligence agents that create authorization risks, excessive autonomy, and insecure interfaces. Organizations are urged to determine whether existing fraud controls can detect or prevent these schemes and to expand prevention and detection where needed. Principle 16 on monitoring activities is recast for generative artificial intelligence to stress continuous monitoring and independent assurance over artificial intelligence enhanced processes for risks like accuracy, precision, and fairness. Given the rapid pace of technological change, the guidance emphasizes “always on” monitoring that can quickly identify issues, lock down malfunctioning systems, and trace errant behavior back to its origin, since misinterpreted data in everyday use cases such as sales forecasting can quickly cascade into faulty strategic decisions and widespread damage.
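A minimal sketch of what such "always on" monitoring might look like in practice. This is illustrative only and not part of the COSO guidance; the class, thresholds, and lockdown flag are all hypothetical, standing in for whatever alerting and containment mechanisms an organization actually uses.

```python
# Hypothetical sketch: a rolling error-rate monitor for a model-assisted
# process. When the observed error rate in the recent window exceeds a
# tolerance, the monitor trips a lockdown flag so the system can be taken
# out of service and the errant behavior traced.

from collections import deque


class OutputMonitor:
    """Tracks recent prediction outcomes and trips a lockdown flag on drift."""

    def __init__(self, window: int = 100, max_error_rate: float = 0.05):
        self.errors = deque(maxlen=window)   # rolling window of 0/1 error flags
        self.max_error_rate = max_error_rate
        self.locked = False

    def record(self, predicted, actual) -> None:
        """Log one prediction outcome and re-evaluate the rolling error rate."""
        self.errors.append(0 if predicted == actual else 1)
        rate = sum(self.errors) / len(self.errors)
        if rate > self.max_error_rate:
            # In a real deployment this would page an owner and quarantine
            # the system; here we only set a flag.
            self.locked = True


monitor = OutputMonitor(window=10, max_error_rate=0.2)
for predicted, actual in [(1, 1), (1, 1), (0, 1), (0, 1), (0, 1)]:
    monitor.record(predicted, actual)
print(monitor.locked)  # three misses out of five exceeds the 0.2 tolerance
```

The window size and tolerance are arbitrary; the point is only that the check runs continuously on live outcomes rather than as a periodic audit.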

Impact Score: 55

Google Vids opens free video generation to all Google users

Google has made Google Vids available to anyone with a Google account, adding free access to video generation with its latest models. The move expands Google’s end-to-end video workflow and increases pressure on rivals that charge for similar tools.

Court warns against chatbot legal advice in Heppner case

A federal court found that chats with a publicly available generative artificial intelligence tool were not protected by attorney-client privilege or the work-product doctrine. The ruling highlights litigation risks when executives or employees use chatbots for legal guidance without lawyer supervision.

Newsom orders California to weigh artificial intelligence harms in contract rules

Gov. Gavin Newsom has signed an executive order directing California agencies to account for potential artificial intelligence harms in state contracting while expanding approved use of generative tools across government. The move follows a dispute involving Anthropic and reflects a broader split between California and the Trump administration on artificial intelligence oversight.
