Generative artificial intelligence is forcing a re-examination of governance strategies across the tech industry, prompting experts to look to other domains, such as genome editing, for guidance on testing, evaluation, and regulatory oversight. In a recent discussion hosted by Microsoft Research's Kathleen Sullivan, R. Alta Charo, professor emerita of law and bioethics, detailed how the field of genome editing has managed risk and built coordinated oversight frameworks across multiple agencies and international borders. Rather than regulating the technology itself, Charo explained, the biotechnology sector regulates specific applications, distinguishing between hazards inherent to a product and risks that depend on how and where it is used. This application-level approach has enabled oversight that is both flexible and robust, adapting to different uses in medicine, agriculture, and the environment.
Charo traced the evolution of genome editing regulation, noting how early ambiguity in statutory language and the involvement of numerous stakeholders (government agencies, professional licensing bodies, institutional committees, and insurers) produced a complex but adaptable system. She illustrated how practical regulatory puzzles, such as whether a genetic edit in an animal should be treated as a veterinary drug, led agencies like the FDA and USDA to negotiate shared oversight. Charo acknowledged the gaps that remain, especially around cross-border issues and the harmonization of standards, but emphasized the principle of proportional response: unfamiliar or riskier applications receive more rigorous scrutiny, while incremental or well-understood uses are subject to lighter control. She urged future policy leaders to treat ethics and regulation as essential partners in innovation rather than obstacles to it.
Following Charo's insights, Daniel Kluttz, general manager in Microsoft's Office of Responsible AI, described how these bioethical frameworks inform Microsoft's own risk governance for artificial intelligence. Kluttz's team works with internal product groups to identify high-impact or sensitive uses of AI, applying customized requirements to deployments that could affect a person's legal standing, psychological or physical well-being, or human rights. Echoing genome editing's application-level focus, Kluttz advocated a proportional, use-case-based approach to regulating AI, tailoring mitigation strategies to each context rather than imposing blanket, one-size-fits-all rules; a simplified sketch of that triage logic follows below. The approach also includes tracking post-market deployment data, learning from customer and stakeholder feedback, and updating oversight practices as the technology landscape evolves. Both Charo and Kluttz identified cross-disciplinary learning as critical to developing nuanced, resilient regulatory frameworks that balance innovation with public trust and safety.
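To make the proportional, use-case-based idea concrete, here is a minimal sketch of how such a triage might be encoded. It is purely illustrative: the impact categories mirror the sensitive-use criteria mentioned above, but the `Deployment` fields, tier names, and thresholds are invented for this example and do not describe Microsoft's actual process.

```python
from dataclasses import dataclass, field
from enum import Enum

# Hypothetical impact categories, loosely based on the sensitive-use
# criteria named in the discussion (legal standing, well-being, human rights).
class Impact(Enum):
    LEGAL_STANDING = "consequential effect on legal position or life opportunities"
    WELL_BEING = "risk of psychological or physical harm"
    HUMAN_RIGHTS = "threat to human rights"

@dataclass
class Deployment:
    name: str
    impacts: set[Impact] = field(default_factory=set)
    novel_context: bool = False  # unfamiliar uses warrant closer scrutiny

def review_tier(d: Deployment) -> str:
    """Illustrative proportional triage: oversight scales with impact and novelty."""
    if d.impacts and d.novel_context:
        return "enhanced review: tailored mitigations plus pre-deployment sign-off"
    if d.impacts:
        return "standard sensitive-use review: documented mitigations"
    return "baseline responsible-AI requirements"

if __name__ == "__main__":
    candidate = Deployment(
        name="resume-screening assistant",
        impacts={Impact.LEGAL_STANDING},
        novel_context=True,
    )
    print(f"{candidate.name}: {review_tier(candidate)}")
```

The design choice worth noting is that scrutiny scales along two axes, severity of potential impact and unfamiliarity of the use case, which mirrors Charo's point that riskier or less familiar applications receive more rigorous oversight while well-understood uses face lighter control.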