Switzerland: Artificial Intelligence
This country-specific Q&A provides an overview of Artificial Intelligence laws and regulations applicable in Switzerland.
-
What are your country's legal definitions of “artificial intelligence”?
In Switzerland, there is no binding legal definition of artificial intelligence yet.
However, the report to the Federal Council providing an overview of artificial intelligence regulation relies on the definition of AI set out in the Council of Europe’s AI Convention. According to this definition, AI refers to “a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that may influence physical or virtual environments. Different artificial intelligence systems vary in their levels of autonomy and adaptiveness after deployment.”
-
Has your country developed a national strategy for artificial intelligence? If so, has there been any progress in its implementation? Are there plans for updates or revisions?
Switzerland’s approach to AI governance is integrated into its Digital Switzerland Strategy. On 12 February 2025, based on an overview of possible regulatory approaches to AI prepared on its behalf by the Federal Department of the Environment, Transport, Energy and Communications (DETEC) and the Federal Department of Foreign Affairs (FDFA), the Federal Council decided on a regulatory approach to AI guided by three objectives: (i) strengthening Switzerland as a location for innovation; (ii) safeguarding the protection of fundamental rights, including economic freedom; and (iii) increasing public trust in AI.
To implement these objectives, the Federal Council has defined the following key measures:
- The Council of Europe’s AI Convention, signed by Switzerland on 27 March 2025, shall be incorporated into Swiss law.
- Any legal changes shall be made through sector-specific legislation wherever possible, while general, cross-sector regulation shall be limited to areas affecting fundamental rights, such as data protection.
- In addition to binding legal provisions, non-binding instruments, such as self-declaration agreements or industry-led solutions, shall be developed to support implementation.
A draft of the required legal amendments, along with an implementation plan for the non-legislative measures, is expected by the end of 2026.
-
Has your country implemented rules or guidelines (including voluntary standards and ethical principles) on artificial intelligence? If so, please provide a brief overview of said rules or guidelines. If no rules on artificial intelligence are in force in your jurisdiction, please (i) provide a short overview of the existing laws that potentially could be applied to artificial intelligence and the use of artificial intelligence, (ii) briefly outline the main difficulties in interpreting such existing laws to suit the peculiarities of artificial intelligence, and (iii) summarize any draft laws, or legislative initiatives, on artificial intelligence.
Yes, Switzerland has already introduced several rules, guidelines, and voluntary standards relevant to AI, although it has not enacted a dedicated AI Act yet. A consultation draft for AI legislation to incorporate the Council of Europe’s AI Convention into Swiss law is expected by the end of 2026 (cf. question 2).
Guidelines
- AI Guidelines for the Confederation: In November 2020, the Federal Council adopted the Guidelines on Artificial Intelligence for the Confederation. These guidelines apply to federal agencies and external partners entrusted with governmental tasks and serve as a general frame of reference on the use of AI within the federal administration and the development of sectoral AI strategies or regulations, with the goal of ensuring a coherent policy. The guidelines are based on seven principles: (1) Putting people first; (2) Regulatory conditions for the development and application of AI; (3) Transparency, traceability and explainability; (4) Accountability; (5) Safety; (6) Actively shaping AI governance; and (7) Involving all relevant national and international stakeholders.
- FINMA’s Guidance 08/2024 outlines clear expectations for the use of AI in Swiss financial institutions, focusing on seven core areas: (1) AI governance, including a centrally managed inventory with a risk classification and resulting measures, the definition of responsibilities and accountabilities, requirements for model testing and system controls, documentation standards and training measures; (2) Inventory and risk classification based on a sufficiently broad definition of AI to ensure completeness; (3) Data quality assurance by defining requirements to ensure that data is complete and correct and that its integrity is preserved; (4) Tests and ongoing monitoring to ensure data quality and functionality of AI applications, including checks for accuracy, robustness, stability and bias, definition of performance indicators and monitoring of changes; (5) Documentation including purpose, data selection and preparation, model selection, performance measures, assumptions, limitations, testing, controls and fallback solutions; (6) Explainability, including an understanding of the drivers of the AI applications and their behaviour under different conditions, so as to be able to assess the plausibility and robustness of the results; and (7) Independent review of the entire model development process by qualified personnel, with the results of that review taken into account in the development of the application.
Institutions are expected to apply these principles proactively, while FINMA continues to monitor developments and may refine its expectations as supervisory experience and international standards evolve.
(i) Existing laws that apply to artificial intelligence
In the absence of a specific AI law, AI systems in Switzerland are governed by existing statutes, including:
- Federal Act on Data Protection (FADP): The FADP applies whenever personal data is processed and contains, among other things, provisions on automated decision-making and profiling. In a communication published in May 2025, the Federal Data Protection and Information Commissioner (FDPIC) emphasised that the FADP is directly applicable to AI-supported data processing.
- Federal Copyright Act (CopA): The application of the CopA to AI systems is currently a matter of legal controversy, the main questions being whether AI training is relevant under copyright law and whether AI output can be protected by copyright. According to the report “Overview of artificial intelligence regulation” prepared by the Federal Department of the Environment, Transport, Energy and Communications (DETEC) and the Federal Office of Communications (OFCOM) in February 2025, these issues, in particular the use of copyright-protected material for AI training, will have to be examined and will necessitate legislative amendments to clarify the situation.
- Federal Product Liability Act (PLA): Since the PLA is formulated in a technology-neutral way, it can be applied in its current form to liability issues related to AI. However, the PLA is expected to be revised in the near future to align it with technological developments and with the revised EU Product Liability Directive.
- Sector-specific laws, such as the Federal Act on Medicinal Products and Medical Devices (Therapeutic Products Act, TPA) and the Medical Devices Ordinance (MedDO), which regulate the use of AI in the medical sector, or the Road Traffic Act (RTA) and the Ordinance on Automated Driving (OAD), which govern the use of motor vehicles with an automated driving system.
- Other laws, such as labour law or criminal law, can also apply to the use of artificial intelligence, and possible revisions will be examined in the course of determining the legislative amendments needed to incorporate the Council of Europe’s AI Convention into Swiss law (cf. question 2).
(ii) Difficulties in interpreting such existing laws to suit the peculiarities of artificial intelligence
As mentioned in section (i), there is some legal uncertainty as to the application of existing laws to artificial intelligence; the main difficulties identified are the use of copyright-protected material for AI training (cf. question 13) and liability issues (cf. questions 4-7).
(iii) Draft laws and legislative initiatives
Following the Federal Council’s 12 February 2025 decision, Switzerland is advancing toward implementing the Council of Europe’s AI Convention and strengthening AI governance by:
- Drafting cross-sectoral amendments to existing laws in areas of particular importance, such as data protection, product liability and copyright;
- Drafting sector-specific amendments to existing laws (e.g., in digital health, finance, autonomous vehicles);
- Introducing non-binding measures — such as self-declaration schemes and industry-led ethics frameworks;
- Preparing a draft legislative package and implementation plan by the end of 2026.
-
Which rules apply to defective artificial intelligence systems, i.e. artificial intelligence systems that do not provide the safety that the public at large is entitled to expect?
In Switzerland, there are currently no AI-specific liability rules in force. However, general product safety and liability frameworks apply to artificial intelligence systems, particularly where such systems cause harm or fail to meet the safety expectations that the public is entitled to rely upon.
According to prevailing legal opinion, software—including AI systems—qualifies as a product within the meaning of the Swiss Product Liability Act (PLA). As such, the PLA is applicable to defective AI systems, subjecting producers to strict liability for damages caused by defects in their products.
In addition, the Federal Product Safety Act (PSA) and relevant sector-specific legislation—such as the laws governing machinery, medical devices, and other regulated technologies—apply to AI systems where appropriate, particularly in cases involving risks to health or safety.
Moreover, where AI systems process personal data, the Federal Act on Data Protection (FADP) applies. This includes provisions on data security, automated individual decision-making, and personal data breach obligations, all of which may be relevant in the context of AI malfunction or misuse.
-
Please describe any civil and criminal liability rules that may apply in case of damages caused by artificial intelligence systems. Have there been any court decisions or legislative developments clarifying liability frameworks applied to artificial intelligence?
Product liability under the Product Liability Act (PLA)
There are no specific liability rules for AI systems. Since Swiss law treats software as a product, especially when embedded in physical products such as medical devices or autonomous systems, the PLA applies. Consequently, manufacturers (including software providers) may face strict liability if their AI systems are defective and cause harm or fail to provide the safety the public is entitled to expect.
A revision of the PLA is expected in the near future to align it with technological developments and with the revised EU Product Liability Directive.
Contractual and tort liability under the Code of Obligations (CO)
Where the PLA does not apply, AI providers may still be liable under the provisions of:
- Contractual liability: Art. 97 CO for failing to deliver the functionality or performance promised under a contract;
- Tort liability: Art. 41 CO for damage caused by negligence or intentional misconduct, which requires proof of fault and causation.
Sector-specific liability regimes
Sector-specific rules apply in regulated industries such as healthcare, autonomous vehicles and finance.
Data protection offenses under the Federal Act on Data Protection (FADP)
When AI processes personal data, failures involving automated decisions, data breaches or inadequate security can trigger criminal sanctions under the FADP, with fines of up to CHF 250,000 for the responsible individuals.
As of the end of June 2025, there are no published Swiss court decisions that directly address liability arising from the use of artificial intelligence systems.
-
Who is responsible for any harm caused by an AI system? And how is the liability allocated between the developer, the deployer, the user and the victim?
As of July 2025, Swiss law does not provide for an AI-specific liability regime. Responsibility for harm caused by an AI system is allocated under general principles of civil liability, product liability, and, where applicable, sector-specific regulations. The allocation of liability depends on the role of the party involved (developer, deployer, user) and the type of harm (personal injury, property damage, data breach, etc.). Multiple actors may share responsibility. Swiss law allows for joint and several liability in such cases (Art. 50 CO).
The manufacturer or developer may be liable according to Art. 1-5 Product Liability Act (PLA) if the AI system is defective and causes harm, or according to Art. 41 ff. and 97 ff. Code of Obligations (CO) for a breach of contract or negligence in development, for example for design flaws, inadequate testing or failure to comply with safety standards.
The deployer or provider may be liable under contractual and tort liability, or under sector-specific laws, if applicable, for example for incorrect configuration or integration of the AI system, misuse of the AI system beyond the intended purpose and instructions, or for failure to implement appropriate human oversight or safeguards.
The user of the AI system may be liable for negligent use under the CO.
-
What burden of proof will have to be satisfied for the victim of the damage to obtain compensation?
The burden of proof for victims of damage caused by an AI system depends on the legal basis under which the claim for compensation is constituted (cf. question 5). Since there is no AI-specific liability regime yet, existing rules under the Swiss Code of Obligations (CO) and the Product Liability Act (PLA) apply.
The Product Liability Act (PLA) provides for strict liability: the claimant must prove the defect in the product, the damage suffered and the causal link between the defect and the damage.
The Swiss Code of Obligations (CO) provides for fault-based (tort) or contractual liability, depending on the context:
- Fault-based liability (Art. 41 CO): The victim must demonstrate the damage, the unlawful act, causation, and fault.
- Contractual liability (Art. 97 CO): The victim (contractual partner) must prove that the service or product was not delivered as agreed (non-performance or defective performance). The service provider must then prove that they are not at fault.
-
Is the use of artificial intelligence insured and/or insurable in your jurisdiction?
Yes, risks related to the use of AI can be insured. Depending on the insurance terms and conditions, they may be covered by standard insurance products such as professional liability insurance or cyber insurance.
The Swiss insurance market has responded to AI-related risks through existing insurance products, and some Swiss insurers increasingly offer customised AI risk policies tailored to the insured’s risk profile and sector.
-
Can artificial intelligence be named an inventor in a patent application filed in your jurisdiction?
No, artificial intelligence cannot be named as an inventor in a patent application in Switzerland. Under current Swiss law, only natural persons (i.e., human beings) can be listed as inventors.
This position was recently confirmed by the Federal Administrative Court in its decision B-2532/2024 of 4 July 2025. The Court upheld the decision of the Swiss Federal Institute of Intellectual Property (IPI), which had refused to register an AI system as the inventor in a patent application.
However, the Court clarified that a natural person who makes a significant contribution to an AI-assisted invention, for example by designing the AI process, evaluating its output, and deciding to file a patent application, can and must be named as the inventor in the patent register.
-
Do images generated by and/or with artificial intelligence benefit from copyright protection in your jurisdiction? If so, who is the authorship attributed to?
In Switzerland, images generated by or with artificial intelligence (AI) generally do not benefit from copyright protection unless human creativity plays a decisive role in their creation. The key criterion under Swiss copyright law is an “individual character” resulting from human intellectual creation (Art. 2 para. 1 of the Federal Act on Copyright and Related Rights (Copyright Act, CopA)).
However, if a human designs prompts with creative specificity, selects, curates, or modifies AI-generated images with artistic judgment, or otherwise guides the AI output meaningfully, the resulting work may qualify as a copyright-protected work. In this case, the human who provided this creative input would be considered the author, which is defined as the natural person who has created the work.
Generic prompting (e.g., “generate an image of a dog”) is not considered sufficient to establish authorship, as it lacks the required creative contribution.
-
What are the main issues to consider when using artificial intelligence systems in the workplace? Have any new regulations been introduced regarding AI-driven hiring, performance assessment, or employee monitoring?
When using artificial intelligence (AI) systems in the workplace in Switzerland, several legal and ethical issues must be considered—especially in areas like hiring, performance evaluation, and employee monitoring. While Switzerland has not yet introduced AI regulation relating to the workplace, existing laws and evolving European standards (notably the EU AI Act) strongly influence compliance expectations for Swiss companies, particularly those operating across borders.
Employee monitoring
Employee monitoring is regulated under labour law. Art. 328 of the Swiss Code of Obligations (CO) contains a general provision that the employer must acknowledge and safeguard the employees’ personality rights, while Art. 26 of Ordinance 3 to the Employment Act (EmpO 3) explicitly prohibits the use of monitoring or control systems for the (sole) purpose of monitoring the behaviour of employees in the workplace. Where monitoring or control systems are necessary for other reasons, they must be designed and installed in such a way as not to affect the employees’ health and ability to move around normally without being under constant surveillance.
When employee monitoring involves the processing of personal data, the employer must comply with the principles of the Federal Act on Data Protection (FADP) and the Ordinance on Data Protection (DPO), in particular with the principles of lawfulness, transparency and proportionality.
Hiring and performance assessment
The FADP contains provisions on automated individual decisions that apply, irrespective of whether AI is involved or not. In particular, data subjects must be informed about any decision based exclusively on automated processing that has a legal consequence or a considerable adverse effect on them. Furthermore, they have the right to express their point of view and to request that the automated individual decision be reviewed by a natural person. These provisions do not apply when the data subject has explicitly consented to the decision being automated.
-
What privacy issues arise from the development (including training) and use of artificial intelligence?
The development and use of artificial intelligence (AI) systems raise a number of privacy issues, particularly under Swiss data protection law (FADP) and—in cross-border contexts—the EU General Data Protection Regulation (GDPR). These issues arise throughout the AI lifecycle, from data collection and training to deployment and decision-making.
- Lawfulness of data processing: Processing must comply with Art. 6 and 8 FADP, meaning it must be lawful, proportionate, for a specific and evident purpose, and protected by appropriate technical and organisational measures.
- Transparency and automated individual decisions: Under the FADP, data subjects must be informed transparently about the data processing and automated individual decisions. For automated individual decisions with legal or significant effects, data subjects can request human review and challenge the outcome.
- Training on personal and sensitive personal data: When training AI on personal or sensitive personal data, developers must either anonymise the data (ensuring no re-identification is reasonably possible) or ensure a valid legal basis and compliance with all obligations, including purpose limitation and transparency.
- Purpose limitation: Personal data may only be used for the purpose originally communicated at the time of collection, unless a new justification exists.
- International data transfers: Where personal data is transferred abroad, appropriate safeguards must be in place, such as adequacy decisions or standard contractual clauses.
-
How is data scraping regulated in your jurisdiction from an IP, privacy and competition point of view? Are there any recent precedents addressing the legality of data scraping for AI training?
Regulations under the Federal Act on Copyright and Related Rights (CopA)
Under Swiss copyright law, only works with individual character are protected (Art. 2 CopA); raw data or factual information (e.g., basic listings or unstructured datasets) do not enjoy copyright protection. As a result, the scraping of non-protected data is generally permitted. However, scraping may infringe copyright if it involves the use of protected works, such as creatively structured websites or databases. According to the report “Overview of artificial intelligence regulation” prepared by the Federal Department of the Environment, Transport, Energy and Communications (DETEC) and the Federal Office of Communications (OFCOM) in February 2025, the permissibility of using copyright-protected material for AI training remains unclear and may require future legislative clarification.
Regulations under the Federal Act on Data Protection (FADP)
Where scraped data includes personal data, its use is subject to the FADP. Any processing, including for AI training, must comply with the core principles of the FADP, in particular lawfulness, transparency, proportionality, and purpose limitation. The FADP generally does not require a legal basis for data processing, provided it adheres to those principles, is not carried out against the express wishes of the data subject and no sensitive personal data is disclosed to third parties. Therefore, the data subjects’ explicit consent is not mandatory for AI training involving personal data, but the controller must inform data subjects about such use and, depending on the context, provide an opportunity to object or opt out (cf. question 14).
Regulations under the Federal Act on Unfair Competition (UCA)
The UCA may apply to data scraping in the context of AI training. In particular, Art. 5c UCA prohibits any person from taking over and exploiting, by means of technical reproduction processes and without any reasonable effort of their own, another person’s work product that is ready for the market. Depending on the circumstances, the mass scraping of commercially valuable datasets could constitute unfair exploitation under Swiss unfair competition law.
As of July 2025, there are no Swiss court decisions directly addressing the legality of data scraping in the context of AI training.
-
To what extent is the prohibition of data scraping in the terms of use of a website enforceable?
In Switzerland, the enforceability of a prohibition of data scraping in a website’s terms of use depends on the legal relationship between the parties and the circumstances of access.
If a scraper has accepted the website’s terms of use, for example by registering an account, placing an order, or otherwise manifesting consent, the terms form part of a binding contractual agreement under Swiss contract law. In these cases, a prohibition of scraping in the terms of use is contractually enforceable, and a breach may give rise to civil claims.
-
Have the privacy authorities of your jurisdiction issued guidelines on artificial intelligence?
The Federal Data Protection and Information Commissioner (FDPIC) has not issued AI-specific guidelines to date. However, the FDPIC has consistently affirmed that the Federal Act on Data Protection (FADP) applies directly to data processing involving artificial intelligence and must be embedded in both the design and operation of AI systems.
In his most recent press release on the subject, published in May 2025, the FDPIC emphasised that manufacturers, providers and users of AI systems must (a) ensure transparency regarding the purpose, functionality and data sources of AI-based processing, (b) comply with data subjects’ rights with respect to automated individual decisions (cf. question 11), and (c) in the case of AI-powered conversational systems (e.g., intelligent language models), inform users that they are interacting with a machine and disclose whether input is being used to train or further develop the system.
-
Have the privacy authorities of your jurisdiction discussed cases involving artificial intelligence? If yes, what are the key takeaways from these cases?
Yes, in early summer 2024, the Federal Data Protection and Information Commissioner (FDPIC) opened a preliminary investigation into the use of personal data by Twitter International Unlimited Company (TIUC), the operator of Platform X (formerly Twitter), for the training of the AI model Grok. The FDPIC examined whether the use of public user posts on Platform X for the training and fine-tuning of Grok, a machine learning-based chatbot, complied with the Federal Act on Data Protection (FADP).
During the preliminary investigation, TIUC appointed a representative in Switzerland and disclosed that public posts were used to train Grok and related models. In July 2024, TIUC introduced an opt-out mechanism, allowing users to object to the use of their content for training purposes through the platform’s data protection settings.
The FDPIC concluded that the introduction of an opt-out mechanism met the transparency and objection rights requirements under the FADP and, as a result, closed the preliminary investigation.
This case highlights the importance of transparency, proportionality and user autonomy in the context of personal data processing for AI training purposes, as well as the FDPIC’s determination to enforce these privacy principles and rights. AI developers are therefore strongly advised to consider and implement data protection requirements from the beginning of the development phase and throughout the lifecycle of the AI system.
-
Have your national courts already managed cases involving artificial intelligence? If yes, what are the key takeaways from these cases?
As of July 2025, Swiss courts have not yet issued landmark rulings specifically addressing artificial intelligence (AI).
There has been a preliminary investigation by the Federal Data Protection and Information Commissioner (FDPIC) into the operator of Platform X regarding the use of personal data for AI training purposes, but it was concluded without any sanctions, as the operator implemented the necessary measures to comply with data protection requirements (cf. question 16).
Furthermore, in its decision B-2532/2024 of 4 July 2025, the Federal Administrative Court confirmed that artificial intelligence cannot be named as an inventor in a patent application, thus upholding the decision of the Swiss Federal Institute of Intellectual Property (IPI), which had refused to register an AI system as the inventor in a patent application (cf. question 9).
-
Does your country have a regulator or authority responsible for supervising the use and development of artificial intelligence?
No, as of July 2025, there is no central authority for monitoring the use and development of AI in Switzerland. However, various sectoral authorities supervise the use of AI in their respective sectors, e.g.:
- The Financial Market Supervisory Authority (FINMA) monitors the use of AI by financial institutions under its supervision;
- The Swiss Agency for Therapeutic Products (Swissmedic) supervises the development and use of AI in medical devices;
- The Federal Roads Office (ASTRA) oversees the regulations on automated driving;
- The Federal Data Protection and Information Commissioner (FDPIC) is responsible for any issues involving the processing of personal data.
-
How would you define the use of artificial intelligence by businesses in your jurisdiction? Is it widespread or limited? Which sectors have seen the most rapid adoption of AI technologies?
Switzerland has a long-standing reputation for excellence in research and innovation and is home to an ecosystem of multinational corporations, SMEs and start-ups. The adoption of AI is broadly established and growing rapidly across industries.
According to Microsoft’s 2025 Work Trend Index, 52% of Swiss companies use AI agents to automate business processes, compared to 46% globally and 43% in Europe. Furthermore, 72% of Swiss business leaders indicated plans to deploy AI agents as digital team members within the next 12 to 18 months.[1] According to the Tortoise Media Global AI Index 2024, Switzerland ranks fourth in terms of AI intensity, which indicates AI capacity relative to the country’s population or economy, and fifth in talent and research.[2]
Among the sectors with the most rapid adoption and/or development of AI technologies, the following can be highlighted:
- Healthcare and life sciences: Switzerland’s strong pharmaceutical, biotech and medtech sectors are key drivers of AI innovation, particularly in diagnostics, clinical analytics and medical devices. Leading academic institutions, such as ETH Zurich and the University of Bern, host dedicated research centres for AI in medicine.
- Finance and insurance: AI technologies are already widely used in Switzerland’s banking and insurance sectors for applications such as fraud detection, risk scoring and customer service automation. A survey conducted by the Swiss Financial Market Supervisory Authority (FINMA) between November 2024 and January 2025 found that around 50% of the financial institutions surveyed were either using AI or had initial development projects underway. A further 25% intended to introduce AI systems within the next three years.
Footnote(s):
[1] Source: Microsoft, 2025 Work Trend Index.
[2] Source: https://www.tortoisemedia.com/data/global-ai#rankings
-
Is artificial intelligence being used in the legal sector, by lawyers and/or in-house counsels? If so, how? Are AI-driven legal tools widely adopted, and what are the main regulatory concerns surrounding them?
Yes, AI is increasingly used in the legal sector in Switzerland, both by law firms and in-house legal departments. While adoption is still moderate compared to more data-driven industries, it is growing steadily, particularly in large law firms, multinational legal teams, and legal tech providers.
AI tools are used mainly for legal research and drafting, document review, contract analysis and automation, e-discovery and litigation support, text generation, process optimisation, and compliance and risk monitoring.
The main concerns are compliance with data protection regulations whenever personal data is involved, as well as maintaining professional confidentiality and ethical standards when deploying AI tools.
-
What are the 5 key challenges and the 5 key opportunities raised by artificial intelligence for lawyers in your jurisdiction?
Key challenges:
- Data protection and professional secrecy: Lawyers must ensure that AI tools comply with the Swiss Federal Act on Data Protection (FADP) and the obligations of professional secrecy.
- Bias and discrimination: AI tools may reproduce biases contained in their training data; without adequate review, lawyers may inadvertently rely on biased outputs.
- Lack of transparency and explainability: When using AI systems that operate as a “black box,” there is a risk that their outputs cannot be adequately explained or justified in legally defensible terms. This poses a significant challenge in contexts where accountability, transparency, and traceability are essential, such as litigation, regulatory compliance, or contractual interpretation.
- Legal uncertainty: As Switzerland has not yet adopted any AI-specific legislation, there are several areas of legal uncertainty with respect to the use of AI systems, notably copyright issues when using data for AI training and product liability for AI systems and their output. This legal ambiguity presents a challenge for lawyers advising clients on the development, deployment, and commercial use of AI technologies.
- Implementation costs and resources: Especially for small to medium-sized organisations, the cost of high-quality AI tools and the resources needed to integrate them into existing infrastructure can be a challenge.
Key opportunities:
- Increased efficiency in legal workflows: AI can significantly reduce the time required for contract review, legal research and document summarisation, enabling lawyers to focus on high-value tasks.
- Competitive advantage: Lawyers and law firms that effectively leverage AI can offer clients faster, more cost-efficient and innovative legal services, differentiating them in a competitive legal landscape.
- New business opportunities: The rapid development of AI technologies and their broad commercial applications create significant advisory opportunities for lawyers. Legal practitioners with expertise in AI governance, compliance and liability are increasingly in demand, particularly as many Swiss companies fall within the scope of the EU AI Act. This expands the role of Swiss lawyers with cross-border regulatory knowledge.
- Streamlined internal processes: AI tools can help law firms and in-house legal teams optimise internal processes and workflows, such as client communications, data, document and knowledge management, and task automation. This leads to improved operational efficiency and better resource allocation.
- Risk management and compliance automation: AI enhances risk assessments and legal strategy through predictive analytics and can assist in regulatory monitoring, compliance management and due diligence, helping legal departments in managing complex regulatory and legal risks more effectively.
-
Where do you see the most significant legal developments in artificial intelligence in your jurisdiction in the next 12 months? Are there any ongoing initiatives that could reshape AI governance?
Switzerland signed the Council of Europe’s AI Convention on 27 March 2025. As a result, Switzerland will incorporate its requirements into Swiss law through a combination of sector-specific legislative amendments and cross-sectoral regulation, particularly in areas involving fundamental rights such as data protection. The implementation of the Convention will further be supported by legally non-binding measures, such as self-declaration agreements, codes of conduct, and industry-driven solutions (cf. question 2).
A consultation draft of the necessary legislative amendments, along with an implementation roadmap for the non-legislative measures, is expected by the end of 2026.