COSO has released new guidance to help businesses manage the risks of generative artificial intelligence (generative AI), warning that the technology is entering corporate operations “far faster than traditional governance models anticipated” and creating a range of risks that demand early attention. The guidance, a roughly 30-page document explaining how to map COSO’s five components of internal control to generative AI, is intended for compliance, audit, and governance professionals who increasingly see AI as the lens through which broader risk issues are evaluated. The objective is to help organizations establish robust, durable AI governance structures now, so that adoption of the technology is channeled along controlled, beneficial paths rather than veering into harmful or uncontrolled uses.
To ground governance in actual business activity, the document introduces eight capability types for generative AI, such as data extraction and ingestion, and monitoring and continuous review. Framing AI in terms of these capability types is meant to help governance and internal control teams reason about what the technology does across multiple tools and vendors, rather than focusing on specific systems. The guidance connects the capability types to familiar risk themes, including data quality, hallucinations, explainability, security and privacy, bias and fairness, accountability, and vendor and third-party exposure. In forecasting and insight generation, for instance, weak controls over training data can introduce bias into product recommendations, inviting consumer litigation or regulatory discrimination probes, while poor explainability controls can leave a company unable to understand or defend how an AI system produced biased outcomes.
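To make the bias-and-fairness risk theme concrete, here is a minimal sketch of the kind of screening check a control team might run over recommendation outcomes. The function names, the hypothetical log data, and the use of the common “four-fifths rule” threshold are illustrative assumptions, not anything prescribed by the COSO guidance.

```python
from collections import Counter

def selection_rates(outcomes):
    """Per-group positive-outcome rates.

    outcomes: list of (group_label, was_recommended) pairs
    (a hypothetical recommendation log, for illustration).
    """
    totals, positives = Counter(), Counter()
    for group, selected in outcomes:
        totals[group] += 1
        if selected:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(outcomes):
    """Ratio of the lowest group selection rate to the highest.

    A widely used screening heuristic (the "four-fifths rule")
    flags ratios below 0.8 for human review.
    """
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical log: (group, product was recommended?)
log = [("A", True), ("A", True), ("A", False), ("A", True),
       ("B", True), ("B", False), ("B", False), ("B", False)]

print(f"ratio: {disparate_impact_ratio(log):.2f}")  # 0.25/0.75 -> 0.33, flag for review
```

A check like this is only a coarse first-pass signal over outputs; it does not explain *why* a model skews, which is where the guidance’s explainability controls come in.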
The guidance then walks through the 17 principles of COSO’s 2013 internal control framework, illustrating how each can be extended to AI. Under Principle 8, on fraud risk assessment, generative AI is described as introducing new fraud mechanisms such as deepfakes, synthetic records, and model manipulation via crafted prompts, which AI agents can amplify by creating authorization risks, excessive autonomy, and insecure interfaces. Organizations are urged to determine whether existing fraud controls can detect or prevent these schemes and to expand prevention and detection where needed. Principle 16, on monitoring activities, is recast for generative AI to stress continuous monitoring and independent assurance over AI-enhanced processes for risks such as accuracy, precision, and fairness. Given the rapid pace of technological change, the guidance emphasizes “always on” monitoring that can quickly identify issues, lock down malfunctioning systems, and trace errant behavior back to its origin, since misinterpreted data in everyday use cases such as sales forecasting can cascade into faulty strategic decisions and widespread damage.
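The “always on” monitoring idea can be sketched in a few lines: a rolling check on forecast error that trips a kill switch when accuracy degrades, so a malfunctioning system can be locked down while the errant inputs are traced. The class name, window size, and 10% tolerance below are illustrative assumptions, not parameters from the guidance.

```python
from collections import deque

class ForecastMonitor:
    """Rolling accuracy check over an AI-enhanced forecast.

    Keeps a window of absolute percentage errors; when the window
    is full and its mean error exceeds the tolerance, the monitor
    locks the system down (all thresholds are hypothetical).
    """

    def __init__(self, window=5, tolerance=0.10):
        self.errors = deque(maxlen=window)
        self.tolerance = tolerance
        self.locked = False

    def record(self, forecast, actual):
        """Log one forecast/actual pair; return False once locked."""
        if self.locked:
            return False
        self.errors.append(abs(forecast - actual) / abs(actual))
        if (len(self.errors) == self.errors.maxlen
                and sum(self.errors) / len(self.errors) > self.tolerance):
            self.locked = True
        return not self.locked

# Illustrative sales-forecast stream: two good calls, then a drift.
monitor = ForecastMonitor(window=3, tolerance=0.10)
for forecast, actual in [(100, 98), (105, 104), (130, 100), (150, 100)]:
    print(f"forecast={forecast} actual={actual} ok={monitor.record(forecast, actual)}")
```

Once `locked` flips, downstream consumers can be cut off and the retained error window used to trace the misbehavior back to its inputs, in the spirit of the guidance’s monitoring principle.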
