-
What are your country's legal definitions of “artificial intelligence”?
There is no universally accepted definition of artificial intelligence (“AI”), as definitions that have been formulated usually make reference to human intelligence – as opposed to intelligence demonstrated by machines. Such definitions are however problematic as they imply the need to define “intelligence”, and specifically human intelligence.
In light of this consideration, the Swiss administration has not, to this day, established a clear definition of AI, applicable to all sectors.
In its report “Artificial intelligence and international rules” dated 13 April 2022, the Federal Department of Foreign Affairs (“FDFA”) defined AI in the following way: “Artificial intelligence (AI), also known as “machine learning”, refers to building or programming computers to do things that normally require human or biological intelligence, for example visual perception (image recognition), speech recognition, translation between languages, visual translation and game playing (with set rules). AI is concerned with building smart machines capable of performing tasks normally undertaken by humans, i.e. self-learning machines that function in an “intelligent” manner”.
In addition, the “Guidelines on Artificial Intelligence for the Confederation” established by the Federal Department of Economic Affairs, Education and Research (“EAER”), in coordination with the Federal Department of the Environment, Transport, Energy and Communications (“DETEC”) and the Interdepartmental Working Group on Artificial Intelligence, define AI through reference to the various structural features that are typically found in the current applications of AI systems. AI systems are capable of:
- Evaluating the complexity and quantity of data in a way that would not be possible with other technologies currently available;
- Making predictions as an essential basis for (automated) decision-making;
- Replicating abilities associated with human cognition and intelligence; and
- Acting largely autonomously on this basis.
Switzerland actively participates in international AI governance and discussions, including at the Council of Europe, where a representative of Switzerland has been appointed as the chair of the Committee on Artificial Intelligence (“CAI”). The CAI aims to negotiate the first binding convention on AI with global scope. The guidelines developed by the Swiss administration were taken into account in establishing Switzerland's mandate within the CAI.
In its first public draft of the “Convention on artificial intelligence, human rights, democracy and the rule of law” dated 6 January 2023, the CAI gives the following definition of “artificial intelligence system”:
any algorithmic system or a combination of such systems that, as defined herein and in the domestic law of each Party, uses computational methods derived from statistics or other mathematical techniques to carry out functions that are commonly associated with, or would otherwise require, human intelligence and that either assists or replaces the judgement of human decision-makers in carrying out those functions. Such functions include, but are not limited to, prediction, planning, classification, pattern recognition, organisation, perception, speech/sound/image recognition, text/sound/image generation, language translation, communication, learning, representation, and problem solving.
-
Has your country developed a national strategy for artificial intelligence?
By way of introduction, it should be highlighted that Switzerland's approach to the strategy and regulation of AI applications is one of self-regulation and adaptation at the margins of the existing legal framework.
In light of the development of AI in recent years, the Swiss Federal Council made AI a core theme of the “Digital Switzerland Strategy” in 2018 and set up an interdepartmental working group under the guidance of the State Secretariat for Education, Research and Innovation (“SERI”). This group drew up guidelines on the use of AI within the Federal Administration.
The national strategy (i.e. a programmatic norm) revolves around seven guidelines, which set the framework for the use of AI:
- Putting people first: when developing and using AI, the dignity and well-being of the individual, as well as the common good, must remain paramount;
- Regulatory conditions for the development and application of AI: ensuring the best possible regulatory conditions so that the opportunities presented by AI for increasing value creation and promoting sustainable development can be exploited;
- Transparency, traceability and explainability;
- Accountability;
- Safety;
- Actively shape AI governance;
- Involve all relevant national and international stakeholders.
The guidelines and strategy are written in an abstract and general manner, which ensures that they remain valid over time but can also make them difficult to apply. The guidelines are also broad-based so as to cover all types of AI-related projects. They are not mandatory, but they set out the principles that all stakeholders, public (at both federal and cantonal level) and private, should apply when using AI.
-
Has your country implemented rules or guidelines (including voluntary standards and ethical principles) on artificial intelligence? If so, please provide a brief overview of said rules or guidelines. If no rules on artificial intelligence are in force in your jurisdiction, please (i) provide a short overview of the existing laws that potentially could be applied to artificial intelligence and the use of artificial intelligence, (ii) briefly outline the main difficulties in interpreting such existing laws to suit the peculiarities of artificial intelligence, and (iii) summarize any draft laws, or legislative initiatives, on artificial intelligence.
As of today, there is no binding AI-specific legislation in Switzerland. For the moment, artificial intelligence is subject to traditional laws, whose provisions are general enough to cover AI-related activities and to be interpreted broadly by the courts. Where there are gaps in the law, it is left to self-regulation to fill them, in particular through technical or other voluntary standards and ethical principles.
For the sake of conciseness, foreign standards with extraterritorial effect are excluded from the scope of this analysis. Our responses also focus on private rather than public players.
However, the most concrete example of legislative adaptation in light of the evolution of AI is probably in the field of data protection: the new Federal Act on Data Protection (“nFADP”; SR 235.1) will enter into force on 1 September 2023. The nFADP includes several provisions specific to automated processing (discussed further at Q12). Moreover, the nFADP creates interesting co-regulation mechanisms (i.e. the regulation of self-regulation) by governing the development of codes of conduct by professional, industry and trade associations (art. 11 nFADP) and certification by independent certification bodies (art. 13 nFADP).
For the rest, the following general laws may also be relevant to the regulation of artificial intelligence in the private sector:
- Data protection: Data Protection Act and ordinances (regulation). Major difficulty: definition of personal data in the context of AI.
- Certification and codes of conduct (co-regulation). Major difficulty: compliance on a voluntary basis.
- Technical, ethical and other standards, e.g. ISO norms (self-regulation). Major difficulties: fragmentation of the standards and difficulties in enforcement.
- Civil law: SCC and SLPL (regulation). Major difficulties: determination of fault and responsibility (subjective liability) and identification of the stakeholders involved (objective liability).
- Anti-discrimination: Federal Constitution, Convention for the Protection of Human Rights and Fundamental Freedoms, Swiss Gender Equality Act, Swiss Disability Discrimination Act (regulation). Major difficulty: enforcement and burden of proof for discrimination.
- Criminal law: SCC and administrative criminal law (regulation). Major difficulties: punishability of offenders and enforcement of the law.
-
Which rules apply to defective artificial intelligence systems, i.e. artificial intelligence systems that do not provide the safety that the public at large is entitled to expect?
The Federal Council's “Artificial Intelligence” interdepartmental working group believes that current Swiss liability law is flexible and technology-neutral, as it is very general in nature. The general liability rules therefore apply to new technologies. Since AI can involve a large number of components, a large number of stakeholders may be involved in its development. The analysis below is therefore necessarily general and in abstracto. The specific case of civil liability arising from the use of free or open-source artificial intelligence is excluded from the scope of this answer [1], and the specific question of the civil liability of autonomous AI will not be explored further.
There are no specific criminal provisions applicable solely to AI activities and services. Criminally punishable offences against protected legal interests are governed by the general and specific criminal provisions. The question of the identity of the perpetrator (the AI itself or its provider) and the resulting liability will be discussed further at Q5.
The nFADP also includes criminal provisions: on complaint, violations of the nFADP can lead to fines of up to CHF 250,000.
Footnotes:
- On this topic, see Michel José Reymond, Questions de responsabilité civile et contractuelle soulevées par la distribution de « logiciels libres » (open source), 2022, pp. 69-76.
-
Please describe any civil and criminal liability rules that may apply in case of damages caused by artificial intelligence systems.
The areas of tort and product liability represent important mechanisms to mitigate algorithm-induced harms and are also highly illustrative of the challenges and limitations existing legal approaches are facing [1]. Complications may arise due to AI-specific features such as an AI’s autonomy, its frequent appearance as a “service” (thus not subject to product liability law), potentially multi-layered third-party involvement and the interface between human and AI.
Thus, the possible presence of several stakeholders will make it difficult to assign responsibility in the event of harm caused by an AI system. The precise attribution of liability to the stakeholders involved will depend on the civil liability regime in question. The following three civil liability regimes may come into play:
1. Tort liability:
Under Swiss law, any person who unlawfully causes damage to another, whether wilfully or negligently, is obliged to provide compensation (art. 41 par. 1 of the Swiss Code of Obligations (“SCO”; SR 220)). Four conditions must be met: an unlawful act, fault on the part of the tortfeasor, damage, and a causal relationship (natural and adequate) between the wrongful act and the damage [2].
The plaintiff bears the burden of proof for the four aforementioned conditions and must prove the faulty behavior, the damage suffered and the causal link between the fault and the damage in order to obtain compensation [3]. The fault of the tortfeasor is not presumed.
The degree of responsibility may vary according to the degree of autonomy of the AI system. The doctrine identifies the following three cases in particular: “human in the loop” (a human being is involved in the system as a responsible person and decision-maker), “human on the loop” (a human is involved in both implementing and monitoring the activities of an independent AI system) and “human off the loop” (the AI system runs completely autonomously in every process) [4]. As mentioned earlier, the last case, that of autonomous AI, will not be examined further.
Depending on the constellation involved, a fault on the part of the user or the provider of the AI system can therefore arise [5].
2. Product (or strict) liability:
The producer is liable for damage caused by a defect in his product (art. 1 par. 1 of the Swiss Law on Product Liability (“SLPL”)). Thus, the following five cumulative conditions must be satisfied: a product, a producer, a defect, damage, and a natural and adequate causal relationship between the defect, the product and the damage. This provision creates an objective liability based solely on the defect in the product; the victim does not have to establish fault on the part of the producer [6]. Only a defect in the product has to be proven (art. 4 SLPL).
Regarding AI, product liability can emerge if the AI is a component of a movable thing (art. 3 par. 1 lit. a SLPL). Conversely, when the AI-based software and the thing on which the AI runs are separate, the producer of the product cannot bear responsibility for the software component [7]. However, a large part of the doctrine advocates that the creator of an AI solution should be held liable as a producer under product liability law for property damage and personal injury [8].
In case of property damage, product liability is limited to privately used objects (art. 1 par. 1 SLPL). Liability for death or personal injury, by contrast, applies regardless of the purpose of the AI system; where AI-based systems are used for business purposes, only death or personal injury is covered [9].
3. Contractual liability:
An obligor who fails to discharge an obligation at all or as required must make amends for the resulting damage unless he can prove that he was not at fault (art. 97 par. 1 SCO).
Liability based on art. 97 para. 1 SCO is subject to four conditions: breach of contract, damage, a causal relationship (natural and adequate) between the breach of contract and the damage, and fault. The creditor bears the burden of proof (art. 8 SCC) for the first three conditions (or relevant facts), which means that if the court is not convinced or is unable to determine whether each of these facts occurred, it must rule against the creditor [10]. On the other hand, it is for the debtor, whose fault is presumed, to prove the fourth condition, namely that no fault is attributable to him; he thus bears the burden of proving the discharging facts where the court is convinced neither of the existence of fault nor of its absence (reversal of the burden of proof) [11].
In this case, liability for damage caused by an AI system will depend on the contractual context surrounding the AI system. In this context, liability could arise for AI-using service providers and for AI providers [12].
If the debtor of an obligation uses an AI system to perform it and the creditor suffers damage as a result, the question of the debtor’s legal liability arises. The fact that the debtor can no longer be blamed for having breached his duty of care could create a gap in terms of liability. Although controversial, this gap could, however, be filled by applying the liability for auxiliaries provided for in art. 101 para. 1 SCO to digital systems [13]. Among other things, it will have to be clarified in which cases digital systems should be treated as vicarious agents and when they only have the character of tools [14]. Then it will have to be examined which “misconduct” of the digital system should be attributed to the contract debtor via art. 101 para. 1 SCO and whether there can be a kind of “digital standard of care” in the digital age [15]. However, the prevailing doctrine in Switzerland rejects this extension of liability for auxiliaries, pointing out that digital “auxiliaries”, like AI systems, do not enjoy civil rights [16].
It should be noted that considerations of civil liability are potentially subject to change, depending in particular on the degree of autonomy that artificial intelligence acquires – a parameter that mostly depends on future technological developments. In this respect, as noted by the Federal Council’s interdepartmental working group on artificial intelligence, a “risk approach” to liability tends to be adopted when considering civil liability for new technologies; those who benefit from the new technology tend to bear its risk [17].
Finally, when it comes to potential criminal liability of AI systems, it should be noted that the Swiss Criminal Code (“SCC”; SR 311.0) does not provide any specific provisions regarding criminally relevant behavior of AI systems. According to the general principles of Swiss criminal law, the following elements are required for criminal liability: (i) the constituent elements of the criminal offence in question are fulfilled; (ii) the unlawfulness of the act; and (iii) the culpability of the offender.
The possibility of AI applications acting culpably is currently denied, as they have neither legal capacity nor legal personality. Indeed, in the recent decision 6B_1201/2021 dated 9 February 2023, the Swiss Federal Court concluded that the driver of a smart car who exceeded a speed limit was fully liable and that no responsibility could be attributed to the failure of the car's speed-limit recognition system.
Footnotes:
- White Paper, “Artificial Intelligence and Algorithmic liability – A technology and risk engineering perspective from Zurich Insurance Group and Microsoft Corp.”, July 2021, p. 14.
- ATF 132 III 122 recital 4.1 and references.
- Yaniv Benhamou/Justine Ferland, Artificial Intelligence and Damages: Assessing Liability and Calculating Damages, p. 168.
- Mauro Quadroni, Künstliche Intelligenz – praktische Haftungsfragen, 2021, p. 347ff.
- For more details, see Mauro Quadroni, op. cit., 2021, p. 347ff.
- Yaniv Benhamou/Justine Ferland, op. cit., p. 168.
- Mauro Quadroni, op. cit., p. 352.
- Mauro Quadroni, op. cit., p. 352.
- Ibid.
- ATF 132 III 689, recital 4.5; ATF 129 III 18, recital 2.6, p. 24; ATF 126 III 189, recital 2b.
- Ibid.
- For more details, see Mauro Quadroni, op. cit., p. 352ff.
- Christapor Yacoubian, Digitale Systeme als «Erfüllungsgehilfen» – Relevanz der fehlenden Rechtsfähigkeit?, 2023, p. 412.
- Ibid.
- Ibid.
- Ibid.
- Conseil fédéral, Défis de l’intelligence artificielle, Rapport du groupe de travail interdépartemental « Intelligence artificielle » au Conseil fédéral, 2019, p. 36ss.
-
Who is responsible for any harm caused by an AI system? And how is the liability allocated between the developer, the user and the victim?
The answer will depend on which liability regime is triggered – see developments under Q5.
The question of liability is becoming increasingly important as artificial intelligence becomes more and more autonomous. From a legal standpoint, however, artificial intelligence is considered a “thing”, not a “person”. No matter how autonomous it may be, an artificial intelligence has no legal personality under Swiss civil law; only a natural or legal person can be held liable in the event of harm caused by an AI system [1]. This reasoning applies even when artificial intelligence has no direct human supervision [2]. Under Swiss law, an AI system is considered to lack the capacity of discernment and to be unable to act either intentionally or negligently [3]. No fault can therefore be imputed to an AI system.
Footnotes:
- Conseil fédéral, Défis de l’intelligence artificielle, Rapport du groupe de travail interdépartemental « Intelligence artificielle » au Conseil fédéral, 2019, p. 36ss; Yaniv Benhamou/Justine Ferland, op. cit., p. 165.
- Conseil fédéral, Défis de l’intelligence artificielle, Rapport du groupe de travail interdépartemental « Intelligence artificielle » au Conseil fédéral, 2019, p. 36ss.
- Ibid.
-
What burden of proof will have to be satisfied for the victim of the damage to obtain compensation?
The answer will depend on which liability regime is triggered – see developments under Q5.
-
Is the use of artificial intelligence insured and/or insurable in your jurisdiction?
AI liability is at the core of ensuring a trustworthy AI legal framework. However, the issues and challenges presented by AI do not fall neatly within the current liability systems.
The insurance industry is in the early stages of understanding AI algorithmic risks and developing coverages and service propositions that effectively manage those risks, due to the lack of loss experience data as well as models that estimate the potential frequency and severity of AI risks [1]. Thus, there are few AI insurance policies commercially available and AI losses are not explicitly covered by traditional insurance products.
The only recent example of an insurance product for companies developing AI systems guarantees the AI solution's performance: if the AI solution underperforms, the customer receives a payout from the AI provider, which is then reimbursed by the insurer.
At the same time, entities using AI cannot simply wait for formal legislation to safeguard against potential liability and should conduct risk assessments on an autonomous and voluntary basis. Indeed, because of the interconnectedness of businesses, losses associated with AI risks may spread quickly across the world, substantially increasing the accumulation of risk and raising insurability issues due to the lack of risk diversification. Companies offering AI-based products and services need to be aware of the legal uncertainty and its possible evolution, and be ready to integrate emerging regulations while also considering liability risks.
Footnotes:
- White paper “Artificial Intelligence and algorithmic liability – A technology and risk engineering perspective from Zurich Insurance Group and Microsoft Corp.”, 2021, p. 22.
-
Can artificial intelligence be named an inventor in a patent application filed in your jurisdiction?
The Federal Council's Interdepartmental Working Group on Artificial Intelligence believes that, at least for the time being, there is no need to amend intellectual property law. Its report (“Rapport du groupe de travail interdépartemental « Intelligence artificielle » au Conseil fédéral” of 13 December 2019; available in French only) states that the legal framework in Switzerland is a priori adequate, including with regard to the integration of emerging applications and new business models using AI.
As the legislation is flexible, the administrative and judicial authorities have, a priori, sufficient “room for maneuver” to adapt intellectual property law to artificial intelligence [1].
The question of whether an artificial intelligence can be considered the inventor or co-inventor of a patent application is not explicitly codified in Swiss law.
However, various provisions exclude or prevent the consideration of an AI entity as an inventor in a patent application filed in Switzerland.
On a general note, the Federal Act on Patents for Inventions (“PatA”; SR 232.14) provides that “the inventor, his successor in title, or a third party owning the invention under any other title has the right to the grant of the patent” (art. 1 para. 1 PatA). The inventor is the natural person who creates the technique that constitutes the invention [2].
The obligation for the inventor to be a natural person can be deduced from the following provisions and procedural conditions:
- Art. 5 para. 2 PatA provides that the [natural] person named by the patent applicant shall be mentioned as the inventor in the Patent Register;
- Art. 34 para. 1 of the Ordinance on Patents for Inventions (“PatO”; SR 232.141) provides that the inventor is to be designated in a separate document together with his/her given name, family name and domicile;
- Various acts before the Swiss Federal Institute of Intellectual Property (“IPI”) or the Swiss courts require the signature of the inventor, as does the transfer of rights from the inventor to the applicant.
Nevertheless, it should be noted that the involvement of an AI during the inventive process does not exclude the invention from patentability in Switzerland.
Footnotes:
- Conseil fédéral, Défis de l’intelligence artificielle, Rapport du groupe de travail interdépartemental « Intelligence artificielle » au Conseil fédéral, 2019, p. 101ss.
- Decision S2018_003 of the Federal Patent Court, 24 August 2018, §9.
-
Do images generated by and/or with artificial intelligence benefit from copyright protection in your jurisdiction? If so, who is the authorship attributed to?
For the reasons set out in answer 9 above (ab initio), this question should be answered by analysing traditional copyright law.
In general, works are literary and artistic intellectual creations with individual character, irrespective of their value or purpose (art. 2 par. 1 of the Federal Act on Copyright and Related Rights (“CopA”; SR 231.1)). Photographic depictions and depictions of three-dimensional objects produced by a process similar to that of photography are considered works, even if they do not have individual character (art. 2 par. 3bis CopA).
Derivative works, defined as intellectual creations with individual character that are based upon pre-existing works in such a way that the individual character of the latter remains identifiable, are also protected as such (art. 3 par. 1 and 3 CopA). If copyright-protected input is used, the work generated by artificial intelligence may be able to meet the conditions for protection as a derivative work [1].
The author is the natural person who has created the work (art. 6 CopA). Consequently, the status of “original” author can only be attributed to a natural person. A legal entity may, however, acquire ownership of copyright – generally property rights – as a result of a transfer of rights or inheritance (art. 16 CopA).
As far as the protection of an image generated by artificial intelligence is concerned, the question of the protection of the output arises. More specifically, the issue at stake revolves around the recognition of a creation that does not originate from the human mind. Consequently, the distinction between creations generated by AI with human intervention or direction (i.e. AI-assisted works) and creations generated by AI without human intervention (i.e. AI-generated works) is of importance in Swiss copyright law [2]. This distinction generally depends on the creative value added by a human to the output. In other words, “the more autonomous the AI, the less likely it is to have a causal link between the developer and/or the user” [3].
As it stands, in the absence of human intervention (i.e. AI-generated works), a generated image falls into the public domain, unless it is protected by another legal regime, such as trade secrets [4].
In the case of a creative human intervention (i.e. AI assisted works), the image generated could benefit from copyright protection. A difficulty will arise, however, in the potential plurality of individuals (e.g. data engineer, programmer, user) involved in the creation of the image, who ultimately may have the status of co-authors (art. 7 CopA) [5]. In this respect, the Federal Council’s Interdepartmental Working Group on Artificial Intelligence stresses that “if it is no longer possible to attribute a creative or inventive act with certainty to a human being or to an AI system, the criteria of “creation of the mind” and “inventive activity”, intrinsically linked to human nature, will no longer be truly usable and a change of system may then be necessary” [6].
Footnotes:
- Yaniv Benhamou, Art génératif, prompt art et intelligence artificielle: que dit le droit d’auteur? [online], 2022.
- Yaniv Benhamou, Big Data and the Law: a holistic analysis based on a three-step approach – Mapping property-like rights, their exceptions and licensing practices, 2020, p. 407; WIPO, Conversation on intellectual property (IP) and artificial intelligence (AI), WIPO/IP/AI/2/GE/20/1, 21 May 2022, p. 4.
- Yaniv Benhamou, Big Data and the Law: a holistic analysis based on a three-step approach – Mapping property-like rights, their exceptions and licensing practices, 2020, p. 407.
- Yaniv Benhamou, Art génératif, prompt art et intelligence artificielle: que dit le droit d’auteur? [online], 2022.
- Ibid.
- Conseil fédéral, Défis de l’intelligence artificielle, Rapport du groupe de travail interdépartemental « Intelligence artificielle » au Conseil fédéral, 2019, p. 101ss.
-
What are the main issues to consider when using artificial intelligence systems in the workplace?
The use of AI by employees to accomplish their tasks is subject to the general principles applicable to all uses of AI, and raises the same questions, especially in terms of liability and the qualification of the AI system. However, the use of AI by employers towards their employees, especially for recruitment or surveillance, raises a number of additional issues in terms of the protection of employees' personality rights.
The use of automated decision-making processes and people analytics is subject to the provisions of the nFADP (see Q12).
Art. 328 SCO enshrines the protection of the employee's personality rights. This protection is additionally highlighted in the Federal Act on Work in Industry, Trade and Commerce (SR 822.11) and its ordinances, which aim to protect the physical and mental health of workers and to respect their personality. More specifically: (1) it is forbidden to use surveillance or control systems designed to monitor the behavior of workers at their workstations; and (2) where monitoring or control systems are necessary for other reasons, they must be designed and arranged in such a way as not to interfere with workers' health and freedom of movement.
There is no general anti-discrimination law in Switzerland. However, the Swiss Constitution (SR 101) enshrines the prohibition of discrimination as a general principle of the Swiss legal system (art. 8 Constitution). Additional specific non-discrimination provisions apply in the context of employment:
- Swiss Gender Equality Act (SR 151.1): prohibits direct and indirect discrimination based on gender;
- Swiss Disability Discrimination Act (SR 151.3): only applicable to the public sector;
- Agreement on the Free Movement of Persons between the EU and Switzerland (SR 0.142.112.681): prohibits discrimination against European migrant workers with regard to recruitment, employment and working conditions.
In summary, it is paramount that AI applications used in the employment context are designed by default so as not to discriminate, whether directly or indirectly.
-
What privacy issues arise from the use of artificial intelligence?
The main issues that arise from the use of AI with respect to personal data are transparency as to the purpose of the processing and the retention of the data.
The new Federal Act on Data Protection (“nFADP”; SR 235.1) will enter into force on 1 September 2023. This welcome revision of the data protection regulation brings Switzerland's legal framework up to date with the increasing use of automated decision-making.
The nFADP sets as general principles that personal data may only be processed in a manner that is compatible with the specific purpose (recognisable for the data subject), and that personal data must be destroyed or anonymised as soon as they are no longer required for the purpose of processing (art. 6 para. 3 and 4 nFADP). In addition, the controller must provide the data subject, when collecting the data, with the information required for the data subject to exercise their rights and to guarantee transparent data processing (art. 19 nFADP). With respect to the cross-border disclosure of personal data (which may be the case where servers are located abroad), personal data may be disclosed abroad if the Federal Council has decided that the legislation of the State concerned or the international body guarantees an adequate level of protection (art. 16 para. 1 nFADP).
The nFADP provides for the duty to carry out a data protection impact assessment if planned data processing presents a high risk to privacy or fundamental rights (art. 22 nFADP). In addition, a duty to provide information is introduced for decision-making based entirely on automated data processing (art. 21 nFADP). In such case, the data subject may request that the automated individual decision be reviewed by a natural person.
-
What are the rules applicable to the use of personal data to train artificial intelligence systems?
In theory, numerous rules may apply to the use of personal data to train artificial intelligence systems. In essence, this question comes down to defining and determining the property rights over the data; this topic is still the subject of much controversy [1].
However, the most prominent rules applicable to the use of personal data are the following:
- Data Protection Act;
- Copyright law;
- Patent law;
- Trade secret protection; and
- Contract law.
In such a case, there is an overlap between data protection law, intellectual property law and contractual rights [2].
Footnotes:
- On this topic, see Yaniv Benhamou, Big Data and the Law: a holistic analysis based on a three-step approach – Mapping property-like rights, their exceptions and licensing practices, 2020.
- Ibid.
-
Have the privacy authorities of your jurisdiction issued guidelines on artificial intelligence?
In Switzerland, the “privacy authorities” are the Federal Data Protection and Information Commissioner (“FDPIC”) and the Cantonal/Communal Data Protection and Transparency Commissioners. The FDPIC supervises the application of the federal data protection regulations (art. 4 nFADP); its supervision covers data processed by federal bodies, private individuals and organisations. Data processing carried out by communal and cantonal authorities falls within the remit of the Cantonal or Communal Data Protection and Transparency Commissioners.
The FDPIC and the Cantonal or Communal Data Protection and Transparency Commissioners only have advisory powers and thus cannot issue binding decisions; they mostly issue recommendations and/or public communications (e.g. the FDPIC’s communication of 30 March 2023 on the Helvetia chatbot; the FDPIC’s communication of 4 April 2023 on ChatGPT). If its recommendations are not followed, the FDPIC may at most file a complaint with the Federal Administrative Court.
Moreover, since the regulation of AI involves, for the time being, a variety of stakeholders – private and public, national and international, sectoral and pluridisciplinary – several of these actors also issue non-mandatory guidelines on AI.
-
Have the privacy authorities of your jurisdiction discussed cases involving artificial intelligence?
Cf. ad question 14.
-
Have your national courts already managed cases involving artificial intelligence?
Yes. As mentioned above, the national strategy (cf. ad question 1) considers the courts to be key actors in adapting existing laws to the development of new technologies.
-
Does your country have a regulator or authority responsible for supervising the use and development of artificial intelligence?
Switzerland does not have a specific office or department for the supervision of the use and development of AI. As the use of AI extends to multiple domains, Switzerland relies, for the moment, on existing sectoral laws and private regulation (i.e. self-regulation) regarding specific questions about the use and the development of AI.
As Switzerland has a multistakeholder approach to the regulation of artificial intelligence, several regulators and several authorities are involved in the use and development of artificial intelligence.
-
How would you define the use of artificial intelligence by businesses in your jurisdiction? Is it widespread or limited?
AI is increasingly adopted by businesses in Switzerland. Several factors contribute to this trend, including the availability of skilled professionals, government support, and the culture of innovation. The country is known for its advanced technology sector, with renowned institutions like ETH Zurich and EPFL contributing to AI advancements. These institutions foster collaboration between academia and industry, facilitating the implementation of AI in the industry.
It is important to note that although many businesses make use of AI, the extent of usage varies across industries and individual businesses. AI is already widely used in the finance, insurance and healthcare sectors. In the financial market, the most common areas of use for AI applications include client and transaction monitoring; detection of credit card misuse and payment transaction fraud; portfolio and suitability analysis; trading systems and trading strategies; process automation for document processing, IT or human resource management; and deployment in marketing and sales promotion. Insurers use AI mainly in customer interactions, as well as in claims processing and distribution.
These sectors have begun to set up committees to professionalise and further develop AI-specific processes.
-
Is artificial intelligence being used in the legal sector, by lawyers and/or in-house counsels? If so, how?
While the adoption of AI in the legal field may not be as advanced as in some other sectors (see Q18), there have been notable developments. Recently, a company started offering an AI service that automates lawyers’ preliminary work: based on a dataset of previously resolved cases, the AI provides a preliminary legal assessment that is then reviewed by the legal team, which in turn presents the client with the appropriate strategy.
In 2019, the “Justitia 4.0” project was launched. Under this project, electronic communication in the legal field will become mandatory for users, lawyers, courts, public prosecutors and administrative authorities. The management of registers (criminal records, commercial registers, civil status registers, etc.), the extrajudicial stages of debt collection and bankruptcy proceedings, and administrative procedures within the Confederation and cantons are not part of the Justitia 4.0 project.
Although this project has a limited scope and does not involve artificial intelligence, it does reflect a genuine desire on the part of the justice sector to keep up with technological development. This is a first step, and we can look forward to further progressive advances in the months and years to come.
-
What are the 5 key challenges and the 5 key opportunities raised by artificial intelligence for lawyers in your jurisdiction?
Five key challenges raised by AI for lawyers in Switzerland:
- Multiplicity of stakeholders and regulations: e.g. overlapping and conflicting regulations, legal fragmentation;
- Lack of access to justice for users and enforcement of laws: e.g. legal qualification of AI for questions of responsibility and liability, contracts and technological protection measures (TPM);
- Untrustworthiness of AI: e.g. data quality and avoidance of bias and discrimination, mass surveillance;
- Rapid technological change: e.g. obsolescence of legal norms and standards;
- Diplomatic stakes and conflicting AI strategies: e.g. extra-territorial effect of laws, Brussels effect (EU), Digital Silk Road (China).
Five key opportunities raised by AI for lawyers in Switzerland:
- Creation of innovative models of AI governance and regulation: e.g. bottom-up approach;
- Enhanced possibilities of alternative dispute resolution mechanisms for AI-related actions: e.g. mediation, arbitration;
- Trustworthiness of AI: e.g. improving the transparency of artificial intelligence;
- Development and adaptation of legislative methods: e.g. use of technology-neutral language, creation of regulations based on a multisectoral and pluridisciplinary approach;
- Enhanced international cooperation: e.g. creation of international platforms and international standards.
-
Where do you see the most significant legal developments in artificial intelligence in your jurisdiction in the next 12 months?
It is unlikely that there will be any significant changes or acceleration in terms of the regulatory process in Switzerland, especially when taking into account the length of the legislative procedure.
The rules laid down by the European Union, in particular within the AI Act, will also apply to companies established outside the European Union as soon as they wish to offer their products on its territory. As a result, the Brussels effect created by the new European legislation governing artificial intelligence is likely to have a normative effect on Switzerland.
There will undoubtedly be an increase in the number of legal disputes involving AI, which will give a better insight into how courts, authorities, parties and stakeholders approach dispute resolution and interpret the current legal framework. The resolution of such disputes will be influenced by the evolution of European legislation, especially with the likely entry into force of the Artificial Intelligence Act (“AI Act”).
However, certification bodies and codes of conduct (preferably designed under state supervision) might help users make informed choices when it comes to AI services (e.g. a CE-style marking for trustworthy AI).
In terms of the approach to AI, we believe that the risk-based approach adopted by the European Union in the AI Act will become more widespread: those who benefit from the new technology tend to bear its risks. Questions of liability will therefore be at the core of the coming legal developments, closely linked to the evolution of AI’s capacity for autonomy – a parameter that mostly depends on future technological developments. We might head towards recognition of AI’s legal personality or the creation of a new legal concept of personality specific to AI – which would first require answering the philosophical question of where personality begins. In response to the uncertainties associated with liability issues, the demand for insurance is likely to increase.
Finally, the question of property rights over the data fed to AI systems will require further developments.
Switzerland: Artificial Intelligence
This country-specific Q&A provides an overview of Artificial Intelligence laws and regulations applicable in Switzerland.