Applying genome editing oversight lessons to artificial intelligence governance

Insights from genome editing regulation are shaping new frameworks for responsible AI development at Microsoft and beyond.

Generative artificial intelligence is forcing a re-examination of governance strategies across the tech industry, prompting experts to look to other domains—like genome editing—for guidance on testing, evaluation, and regulatory oversight. In a recent discussion hosted by Microsoft Research’s Kathleen Sullivan, professor emerita of law and bioethics R. Alta Charo detailed how the field of genome editing has managed risk and established coordinated frameworks across multiple agencies and international borders. Rather than regulating the technology itself, Charo explained, the biotechnology sector focuses on regulating specific applications, distinguishing between inherent hazards and contextual risks. This approach has enabled flexible yet robust oversight, adapting to different uses in medicine, agriculture, and the environment.

Charo traced the evolution of genome editing regulation, noting how early ambiguity in statutory language and the involvement of numerous stakeholders—government agencies, professional licensing bodies, institutional committees, and insurers—resulted in a complex but adaptable system. She illustrated how practical regulatory challenges, such as distinguishing between veterinary drugs and genetic edits in animals, led agencies like the FDA and USDA to collaborate on shared oversight. Charo highlighted gaps that still exist, especially with cross-border issues and harmonization of standards, but emphasized the necessity of proportional response: unfamiliar or riskier applications receive more rigorous oversight, while incremental or well-understood uses are subject to lighter control. She urged future policy leaders to see ethics and regulation as essential partners in innovation rather than obstacles.

Following Charo’s insights, Daniel Kluttz, general manager in Microsoft’s Office of Responsible AI, described how these bioethical frameworks are informing Microsoft’s own risk governance for artificial intelligence. Kluttz’s team works with internal product groups to identify high-impact or sensitive uses of artificial intelligence technologies, applying customized requirements to deployments that could affect legal standing, psychological or physical well-being, or human rights. Echoing genome editing’s application-level focus, Kluttz advocates for a proportional, use-case-based approach to regulating artificial intelligence, tailoring mitigation strategies to each context instead of blanket, one-size-fits-all rules. This includes tracking post-market deployment data, learning from customer and stakeholder feedback, and updating oversight practices as the technology landscape evolves. Both Charo and Kluttz identify cross-disciplinary learning as critical for developing nuanced, resilient regulatory frameworks that balance innovation with public trust and safety.

Impact Score: 64

ChatGPT Images adds thinking capability

OpenAI has upgraded ChatGPT Images with a new thinking mode that can search the internet, generate multiple images, and verify outputs before finalizing results. The update also improves text rendering, dense compositions, multilingual support, and style flexibility.

YouTube expands deepfake detection to Hollywood talent

YouTube is opening its likeness protection system to actors, athletes, musicians, and creators beyond its own platform. The move gives public figures a way to flag and request removal of damaging AI-generated replicas while YouTube weighs broader rules and possible future monetization.

Adobe plans outcome-based pricing for AI agents

Adobe is positioning its AI agents around performance-based pricing, charging only when the software completes useful work. The approach points to a more results-oriented model for selling generative AI tools to business customers.

Tech firms commit billions to AI infrastructure

Amazon, OpenAI, Nvidia, Meta, Google, and others are signing increasingly large cloud, chip, and data center agreements as demand for AI infrastructure accelerates. The latest wave of deals spans investments, compute purchases, chip supply agreements, and data center buildouts.
