On 3 September 2025, EPFL, ETH Zurich and the Swiss National Supercomputing Centre (CSCS) publicly released Apertus, described as Switzerland’s first large-scale open-source large language model. Trained on the Alps supercomputer in Lugano, Apertus was developed to support transparent, reproducible and sovereign generative Artificial Intelligence in Europe. The institutions position the model as inclusive, with design priorities that emphasise openness and regulatory compliance.
Apertus is released in two sizes, 8 billion and 70 billion parameters, and the project claims coverage of more than 1,000 languages. Around 40 percent of the model’s training data is non-English, a choice intended to strengthen multilingual performance and to better serve underrepresented languages, including Swiss German and Romansh. In contrast to proprietary models, the team has published the model architecture, training data recipes, model weights and documentation. The model can be downloaded from Hugging Face and will also be deployed by Swisscom on a sovereign Swiss Artificial Intelligence platform. The release is distributed under a permissive open-source licence that permits research and commercial use.
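As a concrete illustration of the Hugging Face distribution channel mentioned above, the following minimal sketch loads an Apertus checkpoint with the widely used transformers library. The repository identifier `swiss-ai/Apertus-8B` and the generation settings are illustrative assumptions, not details confirmed by the release announcement; the exact model names should be checked on the Hugging Face hub.

```python
# Minimal sketch: downloading and running an Apertus checkpoint with transformers.
# The repository id below is a placeholder for the 8-billion-parameter variant;
# substitute the actual id published by the project.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "swiss-ai/Apertus-8B"  # hypothetical repo id, shown for illustration

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Grüezi! Explain in two sentences what makes Apertus an open model."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate a short completion with ordinary default sampling settings.
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

In practice, the 8-billion-parameter variant is typically the realistic choice for local experimentation on a single GPU, while the 70-billion-parameter model generally requires multi-GPU hardware or quantised inference.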
Project leads framed Apertus as a blueprint for trustworthy, sovereign Artificial Intelligence. Martin Jaggi, professor of machine learning at EPFL and a member of the Swiss Artificial Intelligence Initiative steering committee, highlighted reproducibility and regulatory compliance as key principles. Thomas Schulthess, director of CSCS, said the model should drive innovation across society and industry. The launch coincides with the Swiss Artificial Intelligence Weeks hackathons, where developers will test and experiment with the model. The teams expect to grow the Apertus family with future versions that improve efficiency and add domain-specific capabilities in areas such as law, health, climate and education, while maintaining data quality and ethical filtering to meet Swiss and EU standards.