The article surveys how governments worldwide are responding to the rapid spread of generative artificial intelligence that produces text, images, video, and deepfakes, and finds a broad consensus around transparency rather than prohibition. Regulators in the European Union, the United States, China, and elsewhere are focusing on disclosure obligations, liability allocation, and platform responsibilities instead of banning artificial intelligence outputs outright. Most frameworks aim to ensure that audiences are informed when they encounter synthetic media, while outcome-based laws on fraud, defamation, and other harms continue to govern misuse of artificial intelligence content. At the same time, gaps in cross-border enforcement and uneven technical capabilities for detection make it difficult to translate these principles into consistent practice.
A central trend is the rise of labeling and watermarking rules for artificial intelligence-generated media. The European Union’s Artificial Intelligence Act uses a risk-based model under which generative systems are treated as “limited risk,” triggering obligations to ensure that artificial intelligence content is identifiable and that deepfakes or artificial intelligence-written news aimed at the public are clearly labeled. China’s deep synthesis regulations and interim measures for generative artificial intelligence require visible or metadata-based labels on synthetic media and place extensive duties on providers to secure data, filter prohibited content, and obtain consent for face or voice manipulation. The United States has no single artificial intelligence content law, but agencies such as the Federal Trade Commission have warned that deceptive deepfakes fall under existing fraud and advertising rules, while executive orders have directed work on watermarking and provenance standards even as more recent directives seek to avoid over-regulation of the private sector. Other jurisdictions, including the United Kingdom, Canada, Japan, Singapore, and Middle Eastern states, largely rely on sectoral regulators, soft-law principles, and existing consumer protection or cybercrime statutes while exploring future disclosure requirements.
Another major theme is how responsibility is split between individual creators and the large platforms that host or distribute artificial intelligence content. European rules under the Digital Services Act compel major platforms to assess and mitigate risks from manipulated media, including labeling or removing harmful deepfakes, while some states in the United States, such as California and Texas, place obligations either on platforms or on individuals who deploy synthetic videos to influence elections. China adopts a platform-centric model in which services must monitor, label, and, when necessary, censor user-generated artificial intelligence content or face serious penalties. International bodies such as the OECD, the United Nations, and the G7 are promoting interoperable norms on transparency and accountability, but the article warns that technical limits on detection, jurisdictional conflicts, and divergent liability regimes risk creating a fragmented landscape. Policymakers and companies are therefore under pressure to invest in watermarking, provenance systems, and artificial intelligence detection tools, while pursuing cross-border cooperation to keep artificial intelligence innovation compatible with safeguards against misinformation and harm.
