The briefing from the House of Commons Library explains that generative artificial intelligence (AI) refers to systems that use machine learning to create new content such as text, images, audio, video and code. It notes that AI-generated content is becoming increasingly realistic, making it harder for people to distinguish between content created with and without AI. AI content labelling is described as a set of practices for alerting people when they are engaging with content that was not created by humans: content that has been generated or altered by AI is marked so that users can better understand its origins and assess its reliability. The paper distinguishes between labels intended to warn of possible harm and those aimed at transparently describing how content was produced, reflecting a broader debate about what information users most need when encountering AI-generated material.
Deepfakes are highlighted as a key driver of concern: these are AI-generated videos, images or audio files deliberately designed to appear real, which can spread disinformation, be used to commit fraud and depict individuals in misleading ways. Labels applied to deepfakes are sometimes described as impact-based labels because they draw attention to the potential for harm, whereas process-based labels communicate how a piece of content was created, including whether AI was involved, rather than focusing on its consequences.

The briefing outlines how AI content labelling can use visible disclaimers such as text overlays, captions, watermarks or audio prompts, for example where AI chatbots are labelled as a simulation or parody so users know they are interacting with software, and how tags can be embedded in metadata to indicate the role of AI in content creation. It describes work by the Coalition for Content Provenance and Authenticity on content credentials, an open technical standard that uses cryptography to encode a content item's origin and editing history, surfaced as a watermark label, usually a speech-bubble icon containing the letters "CR", that can be clicked for provenance information. Companies such as Adobe and platforms like LinkedIn and Meta have begun to use it.
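The provenance approach can be sketched in miniature. The Python example below is not the real C2PA content credentials format; it only illustrates the underlying idea of binding a machine-readable claim about how content was produced to a cryptographic hash of that content, so that later alterations can be detected. All field names, the HMAC-based signing with a shared key and the example values are assumptions for illustration; real content credentials use certificate-based signatures and a standardised manifest.

```python
# Illustrative sketch only: a simplified provenance record, not the real
# C2PA / Content Credentials format. Field names and the HMAC signing
# scheme are assumptions for demonstration purposes.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-secret-key"  # hypothetical key; real schemes use certificate-based signatures


def make_provenance_record(content: bytes, tool: str, ai_generated: bool) -> dict:
    """Bind a claim about how content was made to a hash of the content itself."""
    claim = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": tool,  # e.g. the name of the AI model or editing tool
        # Loosely modelled on the IPTC "digital source type" idea of tagging origin.
        "digital_source_type": "trainedAlgorithmicMedia" if ai_generated else "digitalCapture",
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim


def verify_provenance_record(content: bytes, record: dict) -> bool:
    """Check that the content still matches the claim and the signature is intact."""
    claim = {k: v for k, v in record.items() if k != "signature"}
    if claim["content_sha256"] != hashlib.sha256(content).hexdigest():
        return False  # content was altered after the claim was made
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])


if __name__ == "__main__":
    image_bytes = b"...pretend these are image bytes..."
    record = make_provenance_record(image_bytes, tool="ExampleImageGenerator", ai_generated=True)
    print(verify_provenance_record(image_bytes, record))         # True
    print(verify_provenance_record(image_bytes + b"x", record))  # False: content no longer matches
```

The design point the sketch captures is that a process-based label is only trustworthy if it travels with the content and breaks when the content is changed, which is why provenance schemes rely on cryptographic binding rather than a free-standing caption.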
The paper notes that invisible digital watermarks are being developed to embed technical signals in content that reveal its origin or composition to specialised detection algorithms, even though they remain unseen by ordinary viewers. It stresses that there is not yet a consensus on the most effective design for an AI content label, and that the most appropriate format may depend on whether the goal is to highlight the use of generative AI in content creation or to alert people that a piece of content could be misleading.

In the UK, there is no legislation requiring AI-generated content to be labelled. The government's consultation on Copyright and Artificial Intelligence, launched in December 2024, acknowledged the benefits of clear AI labelling but pointed to technical challenges, while wider AI regulation has been delayed and it is unclear whether future laws will include labelling requirements. In the European Union, Article 50 of the EU AI Act sets transparency rules for content produced by generative AI: providers of AI systems that interact with humans must alert users when they are interacting with AI, and providers of AI systems that generate or manipulate content must mark outputs in a machine-readable way. These rules are due to take effect in August 2026, although the European Commission has proposed delaying implementation until 2027.

The briefing concludes that social media companies, news organisations, search engines and gaming platforms have adopted varying approaches, combining automatic detection technologies and user disclosures, and sometimes distinguishing between AI-edited and AI-generated content. Many news organisations now include AI guidance in their editorial policies, while search and gaming policies often reflect the stance of their parent companies.
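To make the briefing's description of invisible watermarking concrete, the toy sketch below hides a short bit-string in the least significant bits of an image's pixel values: an ordinary viewer sees no visible difference, but a detector that knows the scheme can recover the mark. This is a minimal illustration of the general technique, not any production watermarking system (which are designed to survive compression, cropping and re-encoding); it assumes an 8-bit greyscale image held as a NumPy array.

```python
# Toy least-significant-bit (LSB) watermark: illustrative only, not a robust
# production scheme. Assumes an 8-bit image as a NumPy array.
import numpy as np


def embed_watermark(image: np.ndarray, bits: list[int]) -> np.ndarray:
    """Write each watermark bit into the least significant bit of successive pixels."""
    flat = image.flatten().astype(np.uint8)
    if len(bits) > flat.size:
        raise ValueError("watermark longer than image capacity")
    marked = flat.copy()
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & 0xFE) | bit  # clear the LSB, then set it to the watermark bit
    return marked.reshape(image.shape)


def extract_watermark(image: np.ndarray, length: int) -> list[int]:
    """Read the watermark back from the least significant bits."""
    flat = image.flatten().astype(np.uint8)
    return [int(p & 1) for p in flat[:length]]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    original = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in for a real image
    mark = [1, 0, 1, 1, 0, 0, 1, 0]  # e.g. an "AI-generated" flag plus a tool identifier
    watermarked = embed_watermark(original, mark)

    # Pixel values change by at most 1, so the image looks unchanged to a viewer...
    print(int(np.abs(watermarked.astype(int) - original.astype(int)).max()))  # 1 (or 0)
    # ...but a detector that knows the scheme can recover the embedded mark.
    print(extract_watermark(watermarked, len(mark)))  # [1, 0, 1, 1, 0, 0, 1, 0]
```

The same property the briefing highlights for real systems applies here: the signal is machine-readable rather than visible, so it supports the kind of automated detection and machine-readable marking that Article 50-style transparency rules envisage, but it says nothing to a viewer unless a platform surfaces it as a visible label.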
