The European Union’s code of practice for general-purpose artificial intelligence has officially come into effect, marking a significant step in the phased implementation of the broader EU Artificial Intelligence Act. Launched a year after the landmark Act became law, the code sets out voluntary but influential standards for general-purpose artificial intelligence systems and was developed with input from legal and technical experts, including Wayne Cleghorn, a partner at Excello Law and a member of the code’s working group. Cleghorn emphasizes that while the code is not mandatory, organizations that voluntarily adopt its principles are likely to benefit from a simpler, more predictable enforcement process under the Act.
Despite its voluntary nature, the code’s influence stretches far beyond the European Union’s borders. Businesses, whether based in the UK, the United States, or other jurisdictions, must determine whether the artificial intelligence systems they build, deploy, or use fall within the scope of the EU legislation, particularly if their products or services enter the EU market. The code and its accompanying documentation, including the new artificial intelligence model documentation form and detailed guidance for providers, are intended to help companies categorize their systems as posing ‘unacceptable’, ‘high’, ‘limited’, or ‘minimal’ risk, with each category carrying its own set of compliance obligations under the Act.
August 2025 marks a key milestone: all 27 EU member states are now required to appoint notifying authorities and market surveillance authorities, establish national penalty frameworks, and report on the readiness of their artificial intelligence oversight infrastructure. This phase is expected to reveal how consistently, and at what scale, artificial intelligence regulation will be applied across Europe. For international businesses, it brings new clarity to enforcement priorities and underscores the need to assemble multidisciplinary teams and seek targeted legal advice to keep pace with evolving rules. According to Cleghorn, failing to engage proactively with these requirements could create significant compliance challenges as regulatory expectations for artificial intelligence rapidly become the global norm.