Five Key Steps to Manage Generative AI Risks in Compliance Programs

Practical guidance for companies to update compliance programs and mitigate the legal and operational risks of generative Artificial Intelligence.

Organizations adopting generative artificial intelligence face a complex landscape of emerging risks, demanding a strategic overhaul of existing compliance frameworks. According to Ankura's legal and compliance specialists, employees remain the first line of defense, responsible for identifying business risks stemming from new technologies. However, it is the compliance function that must recalibrate processes and controls to address rapidly evolving Artificial Intelligence-related threats and support effective risk mitigation.

The article outlines five critical steps for appropriate risk management. First, companies should recalibrate their compliance governance and oversight structures to account for Artificial Intelligence impacts across business functions. This includes updating governance documents, clarifying the roles of business, IT, legal, compliance, and audit leaders, and considering the appointment of Artificial Intelligence experts or even the establishment of an Artificial Intelligence ethics committee. Second, organizations should refresh enterprise-wide risk assessments, leveraging subject matter expertise to identify and prioritize Artificial Intelligence risks across all areas of operation, including those lacking internal technical proficiency. Once risks are identified, companies should immediately prioritize high-risk business areas and strengthen monitoring and preventive controls.

The third step focuses on enhancing both prevention and detection measures in compliance programs, calling on organizations to review and update policies, procedures, and controls specific to generative Artificial Intelligence use. This includes clearly labeling Artificial Intelligence-generated content, regularly monitoring the trustworthiness of outputs, instituting audits for bias (especially in HR processes), and defining methods to stay ahead of regulatory developments and peer enforcement actions. The fourth step urges the implementation of robust data security and quality controls over inputs into Artificial Intelligence systems, with close collaboration between legal, compliance, IT, and cybersecurity to protect data integrity. Finally, comprehensive Artificial Intelligence compliance training is recommended, tailored to varying roles and including scenario-based learning, deepfakes to illustrate risk, and prominent leadership involvement. The strategy may also include tabletop exercises for executive oversight committees to evaluate organizational readiness and the alignment of Artificial Intelligence deployment with risk strategy.

By adopting these measures, companies position themselves to better manage the nuanced risks that generative Artificial Intelligence brings, ensuring compliance standards keep pace with technological advancement and regulatory expectations.


Who decides how America uses Artificial Intelligence in war

Stanford experts are divided over how the United States should govern Artificial Intelligence in defense, surveillance, and warfare. Their views converge on one point: decisions with such high stakes cannot be left to companies alone.

GPUBreach bypasses IOMMU on GDDR6-based NVIDIA GPUs

Researchers from the University of Toronto describe GPUBreach, a rowhammer attack against GDDR6-based NVIDIA GPUs that can bypass IOMMU protections. The technique enables CPU-side privilege escalation by abusing trusted GPU driver behavior on the host system.

Google Vids opens free video generation to all Google users

Google has made Google Vids available to anyone with a Google account, adding free access to video generation with its latest models. The move expands Google’s end-to-end video workflow and increases pressure on rivals that charge for similar tools.

Court warns against chatbot legal advice in Heppner case

A federal court found that chats with a publicly available generative Artificial Intelligence tool were not protected by attorney-client privilege or the work-product doctrine. The ruling highlights litigation risks when executives or employees use chatbots for legal guidance without lawyer supervision.
