
Switzerland to release fully open large language model trained on national supercomputer
9 July 2025

Developed by ETH Zurich, EPFL, and CSCS, the open-source LLM is trained on the Alps supercomputer and designed for transparency, scale, and multilingual use.
EPFL and ETH Zurich, co-leaders of the Swiss AI Initiative, will soon release a new large language model (LLM) developed entirely on Swiss public infrastructure. Trained on the Alps supercomputer at the Swiss National Supercomputing Centre (CSCS) in Lugano, the model represents a major milestone in the advancement of open, multilingual AI in Europe.
The announcement followed the International Open-Source LLM Builders Summit, held in Geneva earlier this month. The event gathered around 50 global organizations committed to open-source AI development and was hosted jointly by the AI centers of ETH Zurich and EPFL. Attendees were given a preview of the forthcoming model, which will be made available under the Apache 2.0 license later this summer.
Transparency and multilingual performance by design
The new model is multilingual by design, trained on a large corpus of text spanning more than 1,500 languages. Approximately 40% of the training data is in languages other than English, supporting the model's usability across cultural and linguistic contexts. It will be released in two sizes, with 8 billion and 70 billion parameters, the larger of which is expected to rank among the most powerful fully open LLMs worldwide.
Beyond performance, the initiative places strong emphasis on transparency and accountability. The model’s source code, weights, and training data will be publicly available, enabling reproducibility and reuse across sectors such as education, science, government, and industry. According to ETH AI Center researcher Imanol Schlag, the fully open approach is designed to support trust, regulatory compliance, and collaborative innovation.
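To illustrate what a fully open, Apache 2.0 release means in practice, the short sketch below shows how such a checkpoint could be loaded and queried with the open-source Hugging Face Transformers library once the weights are published. The repository identifier is a placeholder rather than an announced name, and the exact distribution channel is an assumption.

    # Minimal sketch, assuming the weights are published in a Hugging
    # Face-compatible format; the repository id below is hypothetical.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "example-org/swiss-open-llm-8b"  # placeholder, not the real name

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,  # half precision keeps the 8B model on a single GPU
        device_map="auto",
    )

    # The model is trained on text in over 1,500 languages, so a non-English
    # prompt is a reasonable first test.
    prompt = "Quelle est la capitale de la Suisse ?"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=50)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))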
AI infrastructure enabling sovereign innovation
The project builds on Switzerland’s strategic investment in sovereign AI infrastructure. The Alps supercomputer, powered by over 10,000 NVIDIA Grace Hopper Superchips and running on 100% renewable energy, enabled efficient large-scale training. The system was developed through long-standing partnerships between CSCS, NVIDIA, and HPE.
Thomas Schulthess, Director of CSCS, notes that this collaboration showcases how public institutions and industry leaders can jointly advance independent, high-performance infrastructure to serve science and society.
A global model built for public benefit
The LLM is a flagship project of the Swiss AI Initiative, launched in December 2023 and supported by over ten academic institutions. With more than 800 researchers involved and access to 20 million GPU hours annually, the initiative represents the world’s largest open-science effort dedicated to AI foundation models. Its goal is to support trustworthy, inclusive AI development while fostering global collaboration and talent.