Five Key Steps to Manage Generative AI Risks in Compliance Programs

Practical guidance for companies to update compliance programs and mitigate the legal and operational risks of generative artificial intelligence (AI).

Organizations adopting generative artificial intelligence (AI) face a complex landscape of emerging risks, demanding a strategic overhaul of existing compliance frameworks. According to Ankura's legal and compliance specialists, employees remain the first line of defense, responsible for identifying business risks stemming from new technologies. It is the compliance function, however, that must recalibrate processes and controls to address rapidly evolving AI-related threats and support effective risk mitigation.

The article outlines five critical steps for managing these risks. First, companies should recalibrate their compliance governance and oversight structures to account for AI's impact across business functions. This includes updating governance documents; clarifying the roles of business, IT, legal, compliance, and audit leaders; and considering the appointment of AI experts or even the establishment of an AI ethics committee. Second, organizations should refresh enterprise-wide risk assessments, leveraging subject matter expertise to identify and prioritize AI risks in all areas of operation, including those lacking internal technical proficiency. Once risks are identified, high-risk business areas should be prioritized and monitoring and preventive controls strengthened.
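The risk-assessment and prioritization step above can be sketched as a simple likelihood-times-impact scoring exercise over a risk register. A minimal sketch, in which the business areas and 1-5 ratings are hypothetical placeholders, not figures from the article:

```python
# Hypothetical risk register: each entry rates an AI use case by likelihood
# and impact on a 1-5 scale (example values only, for illustration).
risks = [
    {"area": "HR screening",      "likelihood": 4, "impact": 5},
    {"area": "Marketing content", "likelihood": 3, "impact": 2},
    {"area": "Customer support",  "likelihood": 4, "impact": 3},
    {"area": "Code generation",   "likelihood": 2, "impact": 4},
]

# Score each risk as likelihood x impact, then rank descending so that
# high-risk business areas surface first for monitoring and controls.
for r in risks:
    r["score"] = r["likelihood"] * r["impact"]

prioritized = sorted(risks, key=lambda r: r["score"], reverse=True)
for r in prioritized:
    print(f'{r["area"]}: {r["score"]}')
```

Real assessments weigh many more dimensions (regulatory exposure, data sensitivity, reversibility), but even a coarse ranking like this helps direct limited compliance resources to the highest-risk areas first.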

The third step focuses on enhancing both prevention and detection measures in compliance programs, asking organizations to review and update policies, procedures, and controls specific to generative AI use. This includes clearly labeling AI-generated content, regularly monitoring the trustworthiness of outputs, instituting bias audits (especially in HR processes), and defining methods to stay ahead of regulatory developments and peer enforcement actions. The fourth step urges robust data security and quality controls over inputs into AI systems, with close collaboration among legal, compliance, IT, and cybersecurity teams to protect data integrity. Finally, comprehensive AI compliance training is recommended, tailored to varying roles and including scenario-based learning, deepfakes that illustrate risk, and visible leadership involvement. The strategy may also include tabletop exercises for executive oversight committees to evaluate organizational readiness and the alignment of AI deployment with risk strategy.

By adopting these measures, companies position themselves to better manage the nuanced risks that generative AI brings, ensuring compliance standards keep pace with technological advancement and regulatory expectations.
