Artificial intelligence is now embedded across Swiss business operations, with the financial sector moving fastest. A survey by the Swiss Financial Market Supervisory Authority (FINMA) found that half of 400 supervised institutions already use or are developing artificial intelligence, averaging five applications in use and nine in development. This article consolidates what Swiss boards, legal teams, and product teams need to know about the European Union Artificial Intelligence Act’s extraterritorial scope, Switzerland’s domestic regulatory path, new FINMA guidance, and Swiss data protection requirements, and offers a practical governance checklist.
The European Union Artificial Intelligence Act applies to Swiss companies in several scenarios: when providers place artificial intelligence systems or general purpose artificial intelligence models on the European Union market or put them into service in the European Union, and when output produced by systems operated outside the bloc is intended for use within the European Union. Key dates have started to bite. Since February 2, 2025, artificial intelligence practices posing unacceptable risks are prohibited in the European Union, and organizations within scope must ensure adequate artificial intelligence literacy among their staff. Since August 2, 2025, providers of general purpose artificial intelligence models face documentation and transparency duties, with additional obligations, including on safety and security, for models classified as posing systemic risk.
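The extraterritorial triggers above amount to a simple decision rule, which can be sketched in code for a first-pass screening exercise. The field names and helper below are illustrative assumptions, not a legal test; a real assessment requires legal review of roles (provider versus deployer) and exemptions.

```python
from dataclasses import dataclass

@dataclass
class AIDeployment:
    """Simplified description of how a Swiss company deploys an AI system."""
    placed_on_eu_market: bool      # system or GPAI model offered on the EU market
    put_into_service_in_eu: bool   # operated for use in the EU
    output_used_in_eu: bool        # output produced elsewhere but intended for EU use

def eu_ai_act_in_scope(d: AIDeployment) -> bool:
    """Rough first-pass screen mirroring the scenarios described in the text."""
    return d.placed_on_eu_market or d.put_into_service_in_eu or d.output_used_in_eu

# Example: a Swiss-hosted tool whose reports are intended for EU clients
print(eu_ai_act_in_scope(AIDeployment(False, False, True)))  # → True
```

Any one trigger suffices, which is why the screen is a disjunction: a company with no EU market presence can still fall in scope purely through output destined for the European Union.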
Switzerland is pursuing a sector specific, technology neutral approach rather than a single horizontal law. The federal government signed the Council of Europe’s artificial intelligence convention on March 27, 2025, and concluded that Swiss law meets some of its principles but needs adjustments on transparency and oversight, safe innovation, remedies, and procedural safeguards. The Federal Administration must present a bill with legal and non-legal measures by the end of 2026, with options ranging from minimal updates such as public registers and targeted risk and impact assessments to more ambitious reforms or alignment with the European Union Artificial Intelligence Act. Trade frictions are looming: 12 of the 20 product sectors under the Switzerland-European Union mutual recognition agreement will be affected if products contain artificial intelligence. From August 2027, equivalence will lapse for high-risk systems, triggering additional European Union conformity assessments and costs unless the agreement is extended to cover artificial intelligence.
FINMA’s Guidance 08/2024, issued in December 2024, is the first sector specific non-legal measure for financial institutions and covers all artificial intelligence uses, from in-house models to third party and free tools. FINMA flags operational, cybersecurity, and data protection risks, as well as third party dependencies and legal or reputational exposure. The guidance sets expectations for robust governance with clear roles and responsibilities, a centralized inventory and risk classification, data quality standards, scheduled testing and monitoring, comprehensive documentation of models and controls, and explainability for investors, clients, employees, supervisors, and auditors. FINMA’s survey indicates only about half of current users have an artificial intelligence strategy, highlighting a sizable gap to close.
Swiss data protection law already applies directly to artificial intelligence that processes personal data. The Federal Data Protection Act requires adherence to data minimization, purpose limitation, accuracy, storage limitation, and security, as well as clear information about processing purposes, retention, and recipients. For automated decisions that significantly affect individuals, organizations must provide notice, enable human review, and allow people to state their position and contest outcomes. Where consent is relied on for high-risk profiling by private entities, it must be express, and data protection impact assessments are mandatory where processing poses high risks, with consultation of the Federal Data Protection and Information Commissioner if high residual risks remain. International transfers need adequate protections, and controllers must implement encryption, access controls, security testing, and safeguards for confidential business information, including trade secrets, professional secrecy, contractual protections, and technical isolation.
The regulatory direction is clear even as details evolve. Swiss companies should map their artificial intelligence use cases against the European Union Artificial Intelligence Act’s scope and timelines, align with FINMA’s governance expectations where applicable, and operationalize Federal Data Protection Act obligations. A practical starting point is to establish an enterprise artificial intelligence governance framework, maintain a risk classified inventory, implement testing and monitoring, and ensure documentation and explainability that can withstand supervisory scrutiny.
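The risk classified inventory recommended above can begin as a simple internal register. A minimal sketch in Python follows; the field names and risk tiers are illustrative assumptions, not categories prescribed by FINMA or the European Union Artificial Intelligence Act.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    PROHIBITED = "prohibited"

@dataclass
class AIUseCase:
    name: str
    owner: str                      # accountable role, per governance expectations
    third_party: bool               # flags an external or vendor dependency
    processes_personal_data: bool   # triggers data protection duties (e.g. a DPIA)
    risk: RiskTier
    documentation: list = field(default_factory=list)  # links to model and control docs

def high_priority(inventory: list[AIUseCase]) -> list[AIUseCase]:
    """Return the entries needing the closest governance attention."""
    return [u for u in inventory if u.risk in (RiskTier.HIGH, RiskTier.PROHIBITED)]

inventory = [
    AIUseCase("customer FAQ chatbot", "Operations", True, False, RiskTier.LIMITED),
    AIUseCase("credit scoring model", "Risk", False, True, RiskTier.HIGH),
]
print([u.name for u in high_priority(inventory)])  # → ['credit scoring model']
```

Even a register this simple gives a board the two views supervisors keep asking for: what artificial intelligence is in use, and which uses carry the risks that demand documentation, testing, and explainability first.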