Legal Landscapes: Switzerland – Artificial Intelligence
1. What is the current legal landscape for Artificial Intelligence in your jurisdiction?
Switzerland has not yet enacted dedicated legislation governing artificial intelligence (AI). However, on 27 March 2025, Switzerland signed the Council of Europe’s Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law (“Council of Europe’s AI Convention”). As a consequence, Switzerland is expected to incorporate the AI Convention’s requirements into Swiss law. This is to be achieved through sector-specific legislative amendments and, where fundamental rights such as data protection are concerned, through cross-sectoral regulation. In addition, legally non-binding measures, such as self-declaration agreements or industry-led solutions, are planned to support implementation. A consultation draft for the necessary legislative amendments, along with an implementation roadmap for non-legislative measures, is expected by the end of 2026.
Even though Switzerland does not have any AI-specific legislation yet, various existing laws apply to AI, in particular the following:
- Federal Act on Data Protection (FADP): The FADP applies whenever personal data is processed and contains, among others, provisions on automated decision-making and profiling. In a communication published in May 2025, the Federal Data Protection and Information Commissioner (FDPIC) confirmed that the FADP applies directly to AI-supported data processing.
- Federal Copyright Act (CopA): The applicability of the CopA to AI systems is currently subject to legal debate. Key open questions include whether AI training involves copyright-relevant uses and whether AI-generated output can be protected under copyright. According to the “Overview of artificial intelligence regulation” established by the Federal Department of the Environment, Transport, Energy and Communications (DETEC) and the Federal Office of Communications (OFCOM) in February 2025, these issues will require further legal clarification and likely legislative amendments.
- Federal Product Liability Act (PLA): As the PLA is formulated in a technology-neutral way, it can currently be applied to liability issues involving AI. However, revisions to the PLA are anticipated to align Swiss law with technological advances and with the revised EU Product Liability Directive.
- Sector-specific laws, such as the Federal Act on Medicinal Products and Medical Devices (Therapeutic Products Act, TPA) and the Medical Devices Ordinance (MedDO), regulate the use of AI in the medical sector. Similarly, the Road Traffic Act (RTA) and the Ordinance on Automated Driving (OAD) apply to automated driving systems. Financial market regulation is another area where supervisory authorities, such as FINMA, are increasingly addressing the implications of AI.
- Other areas of law, such as labour law, criminal law and civil liability law, may also apply to AI-related use cases. These legal frameworks will be reviewed in light of the Council of Europe’s AI Convention to determine whether amendments are needed.
2. What three essential pieces of advice would you give to clients involved in Artificial Intelligence matters?
Identify and classify AI systems, determine legal roles and applicable laws, and assess the risks
The first and most important step for any organisation involved in AI is to establish internal AI governance structures. This includes allocating clear internal roles and responsibilities for managing legal, ethical, and technical AI issues. Organisations should begin by identifying and inventorying all AI systems they develop, distribute or use. For each system, it is critical to determine:
- The organisation’s role (e.g., provider, deployer, distributor, importer) under applicable frameworks such as the EU AI Act.
- Which legal and regulatory frameworks apply, considering their operational model and jurisdiction (e.g., EU AI Act, Swiss FADP, US NIST AI RMF, ISO standards, sector-specific laws such as the Therapeutic Products Act).
- The AI system’s risk level, ideally using the EU AI Act’s classification (prohibited, high-risk, limited-risk, minimal-risk), especially for Swiss companies likely to fall within its extraterritorial scope.
Each AI system should be included in an AI system inventory that documents all relevant details, including the purpose, risk category, applicable legal obligations, and governance measures. This inventory forms the foundation for compliance and accountability.
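In practice, such an inventory is often kept as a structured, machine-readable record so that risk categories and obligations can be queried and audited. The following is a minimal illustrative sketch in Python; the schema, field names and risk tiers are assumptions for illustration, not prescribed by any statute or framework:

```python
from dataclasses import dataclass, field

# Illustrative risk tiers, loosely following the EU AI Act's classification.
RISK_TIERS = ("prohibited", "high", "limited", "minimal")

@dataclass
class AISystemRecord:
    """One entry in an organisation's AI system inventory (hypothetical schema)."""
    name: str
    purpose: str
    role: str                  # e.g. "provider", "deployer", "distributor", "importer"
    risk_tier: str             # one of RISK_TIERS
    applicable_laws: list[str] = field(default_factory=list)
    governance_measures: list[str] = field(default_factory=list)

    def __post_init__(self):
        # Reject entries that do not use a recognised risk tier.
        if self.risk_tier not in RISK_TIERS:
            raise ValueError(f"unknown risk tier: {self.risk_tier}")

# Example entry: a deployed high-risk system with its obligations documented.
inventory = [
    AISystemRecord(
        name="CV screening assistant",
        purpose="Pre-sorting of job applications",
        role="deployer",
        risk_tier="high",
        applicable_laws=["EU AI Act", "Swiss FADP"],
        governance_measures=["human review of rejections", "bias audit"],
    )
]
```

A record of this kind makes it straightforward to filter the inventory (for example, listing all high-risk systems before an audit) and to keep the documented governance measures tied to the system they cover.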
Embed compliance from the outset and invest in training and governance policies
When planning the design, development and deployment of AI systems, legal and ethical considerations should be embedded into the AI lifecycle from the design phase onwards, applying principles of privacy by design, data minimisation, fairness and transparency. Compliance is significantly more effective and efficient when integrated from the outset rather than retrofitted after development.
Organisations operating in multiple jurisdictions should consider applying the strictest applicable legal standards across all systems to ensure global compliance.
Furthermore, they should develop and implement enterprise-wide AI-governance policies covering issues such as data protection, bias mitigation, transparency and documentation. These policies should be implemented, where needed, with local adaptations addressing country-specific regulatory requirements.
To operationalise these policies, employee training is essential. This includes general awareness programs for all staff and targeted training for key functions such as Legal, Compliance, Product Development, IT, and Data Science teams.
Establish a cross-functional AI governance structure and promote multi-stakeholder collaboration
Establishing a robust AI governance structure is crucial to ensure accountability and compliance across the organisation. Depending on the organisation’s size and complexity, various AI governance models can be adopted:
- Centralised model: A single governance body oversees AI policies, compliance, and risk management across the organisation and ensures uniform standards across all AI initiatives. The downside is that such an approach leaves little room for unit-level adaptation and offers limited flexibility. This model might be an option for highly regulated industries such as healthcare or finance.
- Decentralised model: Individual business units or departments manage their own AI systems and governance, allowing for agility but increasing the risk of inconsistent practices and compliance risks. This model might be suitable for organisations with a fragmented organisational structure and diverse AI applications across departments.
- Hybrid model (recommended): Comprises a core AI governance framework which ensures coherence, while departments retain flexibility through tailored guidelines.
Regardless of the model chosen, it is recommended to establish a cross-functional AI governance committee to coordinate governance, ethical AI oversight and compliance monitoring. This committee should ensure that AI practices align with applicable regulations, legal requirements, and the organisation’s broader strategic and ethical objectives. It should also oversee documentation, data quality controls, and monitoring procedures for deployed high-risk AI systems.
3. What are the greatest threats and opportunities in Artificial Intelligence law in the next 12 months?
The absence of dedicated AI legislation in Switzerland presents both an opportunity and a risk: an opportunity because it offers flexibility that may foster innovation, and a risk because it leads to legal uncertainties, for example with respect to the use of copyright-protected material for AI training, product liability for AI-generated output, and transparency obligations when individuals interact with AI systems.
As the consultation draft for implementing the Council of Europe’s AI Convention is not expected before the end of 2026, existing Swiss laws will continue to apply in the interim. Their interpretation remains uncertain in key areas, and several open questions are likely to be clarified through case law in the near future.
For example, in its decision B-2532/2024 of 4 July 2025, the Federal Administrative Court confirmed that an AI system cannot be registered as an inventor in a patent application, upholding the position of the Swiss Federal Institute of Intellectual Property (IPI), which had refused to register an AI system as an inventor in the patent register. However, the Court clarified that a natural person who makes a significant contribution to the AI-assisted invention by designing the AI process, evaluating its output, and deciding to file a patent application, can and must be named as the inventor in the patent register.
4. How do you ensure high client satisfaction levels are maintained by your practice?
At FABIAN PRIVACY LEGAL, client satisfaction is built on deep expertise, practical insight and a commitment to delivering tailored, business-oriented solutions. As a boutique law firm specialising in privacy, cybersecurity, and most recently AI-related governance, we bring more than 30 years of cross-functional experience in data protection, information law, employment law, risk and compliance, and information security. Our combined experience as both external advisors and in-house counsels enables us to offer pragmatic, cost-effective advice that aligns with the realities of day-to-day operations.
To ensure the highest quality of service, we stay at the forefront of legal, regulatory, and technological developments. We not only closely follow emerging trends and actively participate in international conferences and training, but also speak on these topics as recognised experts. Most recently, we presented on implementing the EU AI Act in the MedTech sector, demonstrating our hands-on approach to helping clients translate complex legal frameworks into actionable governance.
We also maintain a strong professional network, collaborating with technical experts in IT and cybersecurity, as well as with trusted legal partners across jurisdictions, to support clients in complex, cross-border, and highly regulated environments.
What sets us apart is our ability to develop customised, scalable legal solutions that reflect each client’s specific regulatory context, risk landscape, and strategic goals. Our clients consistently value our responsiveness, precision, and ability to integrate legal compliance with operational implementation.
5. What technological advancements are reshaping Artificial Intelligence law and how can clients benefit from them?
Technological advancements in AI are reshaping the legal landscape and offering multiple benefits to clients and legal practitioners alike. These developments not only improve the delivery of legal services, but also create new opportunities for legal advisors in high-growth areas such as AI compliance and governance. Key areas include:
- Increased efficiency in legal workflows: AI tools can significantly reduce the time required for contract review, legal research, and document summarisation, enabling lawyers and in-house counsels to focus on strategic, high-value tasks and to deliver faster and more cost-effective services to clients.
- Competitive advantage through legal innovation: Law firms and in-house counsels that effectively leverage AI can offer more agile, scalable and innovative legal services. Clients can benefit from quicker turnaround times, reduced costs, and tailored legal services that leverage the latest technologies.
- New business opportunities in AI governance and compliance: The rapid development of AI technologies and their broad commercial applications, especially under new legal frameworks like the EU AI Act, generate strong demand for legal expertise in AI-related compliance, risk management, and liability. Swiss companies operating in or targeting the EU market must navigate cross-border AI obligations, creating a growing need for legal advisors with specialist knowledge in this field.
- Optimised internal operations: AI-based solutions help law firms and in-house legal teams optimise internal processes and workflows, such as client communications, document and knowledge management, and task automation. These tools enhance operational efficiency and resource allocation, ultimately benefitting clients through higher service quality and responsiveness.
- Enhanced risk management and compliance automation: AI supports legal teams in conducting risk assessments, monitoring regulatory developments, and automating compliance tasks. With the help of predictive analytics and intelligent due diligence tools, legal departments can proactively manage complex legal and regulatory risks, improving overall client protection.
As both users and advisors in the AI space, legal professionals have a key role to play in guiding clients through this evolving technological and regulatory environment. Clients that engage with forward-looking legal advisors stand to benefit not only from improved service delivery but also from strategic insights into managing AI-related legal risks and opportunities.