Organizations adopting generative artificial intelligence (AI) face a complex landscape of emerging risks, demanding a strategic overhaul of existing compliance frameworks. According to Ankura's legal and compliance specialists, employees remain the first line of defense, responsible for identifying business risks stemming from new technologies. It is the compliance function, however, that must recalibrate processes and controls to address rapidly evolving AI-related threats and support effective risk mitigation.
The article outlines five critical steps for effective risk management. First, companies should recalibrate their compliance governance and oversight structures to account for AI's impact across business functions. This includes updating governance documents, clarifying the roles of business, IT, legal, compliance, and audit leaders, and considering the appointment of AI experts or even the establishment of an AI ethics committee. Second, organizations should refresh enterprise-wide risk assessments, drawing on subject matter expertise to identify and prioritize AI risks across all areas of operation, including those lacking internal technical proficiency. Once risks are identified, high-risk business areas should be prioritized and monitoring and preventive controls strengthened without delay.
The third step focuses on enhancing both prevention and detection measures in compliance programs by reviewing and updating policies, procedures, and controls specific to generative AI use. This includes clearly labeling AI-generated content, regularly monitoring the trustworthiness of outputs, auditing for bias (especially in HR processes), and defining methods to stay ahead of regulatory developments and peer enforcement actions. The fourth step urges robust data security and quality controls over inputs into AI systems, with close collaboration among legal, compliance, IT, and cybersecurity teams to protect data integrity. Finally, comprehensive AI compliance training is recommended, tailored to different roles and incorporating scenario-based learning, deepfake examples to illustrate risk, and visible leadership involvement. The strategy may also include tabletop exercises for executive oversight committees to evaluate organizational readiness and the alignment of AI deployment with risk strategy.
By adopting these measures, companies position themselves to manage the nuanced risks that generative AI brings, ensuring compliance standards keep pace with technological advancement and regulatory expectations.