Applying genome editing oversight lessons to artificial intelligence governance

Insights from genome editing regulation are shaping new frameworks for responsible artificial intelligence development at Microsoft and beyond.

Generative artificial intelligence is forcing a re-examination of governance strategies across the tech industry, prompting experts to look to other domains—like genome editing—for guidance on testing, evaluation, and regulatory oversight. In a recent discussion hosted by Microsoft Research’s Kathleen Sullivan, professor emerita of law and bioethics R. Alta Charo detailed how the field of genome editing has managed risk and established coordinated frameworks across multiple agencies and international borders. Rather than regulating the technology itself, Charo explained, the biotechnology sector focuses on regulating specific applications, distinguishing between inherent hazards and contextual risks. This approach has enabled flexible yet robust oversight, adapting to different uses in medicine, agriculture, and the environment.

Charo traced the evolution of genome editing regulation, noting how early ambiguity in statutory language and the involvement of numerous stakeholders—government agencies, professional licensing bodies, institutional committees, and insurers—resulted in a complex but adaptable system. She illustrated how practical regulatory challenges, such as distinguishing between veterinary drugs and genetic edits in animals, led agencies like the FDA and USDA to collaborate on shared oversight. Charo highlighted gaps that still exist, especially with cross-border issues and harmonization of standards, but emphasized the necessity of proportional response: unfamiliar or riskier applications receive more rigorous oversight, while incremental or well-understood uses are subject to lighter control. She urged future policy leaders to see ethics and regulation as essential partners in innovation rather than obstacles.

Following Charo’s insights, Daniel Kluttz, general manager in Microsoft’s Office of Responsible AI, described how these bioethical frameworks are informing Microsoft’s own risk governance for artificial intelligence. Kluttz’s team works with internal product groups to identify high-impact or sensitive uses of artificial intelligence technologies, applying customized requirements to deployments that could affect legal standing, psychological or physical well-being, or human rights. Echoing genome editing’s application-level focus, Kluttz advocates for a proportional, use-case-based approach to regulating artificial intelligence, tailoring mitigation strategies to each context instead of blanket, one-size-fits-all rules. This includes tracking post-market deployment data, learning from customer and stakeholder feedback, and updating oversight practices as the technology landscape evolves. Both Charo and Kluttz identify cross-disciplinary learning as critical for developing nuanced, resilient regulatory frameworks that balance innovation with public trust and safety.


Saudi Artificial Intelligence startup launches Arabic LLM

Misraj AI unveiled Kawn, an Arabic large language model, at AWS re:Invent and launched Workforces, a platform for creating and managing artificial intelligence agents for enterprises and public institutions.

Introducing Mistral 3: open artificial intelligence models

Mistral 3 is a family of open, multimodal, and multilingual artificial intelligence models that includes three Ministral edge models and a sparse mixture-of-experts Mistral Large 3 with 41B active and 675B total parameters, released under the Apache 2.0 license.

NVIDIA and Mistral AI partner to accelerate new family of open models

NVIDIA and Mistral AI announced a partnership to optimize the Mistral 3 family of open-source multilingual, multimodal models across NVIDIA supercomputing and edge platforms. The collaboration highlights Mistral Large 3, a mixture-of-experts model designed to improve efficiency and accuracy for enterprise artificial intelligence deployments, starting Tuesday, Dec. 2.
