Canada: Artificial Intelligence
This country-specific Q&A provides an overview of Artificial Intelligence laws and regulations applicable in Canada.
-
What are your country's legal definitions of “artificial intelligence”?
As of the date of this writing, July 13, 2023, Canada does not have a statutory definition of artificial intelligence (“AI”). However, Bill C-27, the Digital Charter Implementation Act (“DCIA”)1, pending in Canada’s House of Commons, proposes to define an “artificial intelligence system” as: “a technological system that, autonomously or partly autonomously, processes data related to human activities through the use of a genetic algorithm, a neural network, machine learning or another technique in order to generate content or make decisions, recommendations or predictions.”
1 Bill C-27, Digital Charter Implementation Act, 2022, 1st Sess, 44th Parl, 2021 [DCIA].
-
Has your country developed a national strategy for artificial intelligence?
In 2019, the federal government of Canada unveiled Canada’s “Digital Charter”,2 which includes a “Pan-Canadian AI Strategy” comprised of three pillars: (1) commercialization through the financial support of three national AI institutes and five innovation clusters; (2) financial support for the Standards Council of Canada to develop standards related to AI; and (3) attracting AI development talent by supporting three centres of academic training and research as well as organizations providing dedicated computing capacity for AI researchers.
The DCIA, referenced above, includes the Artificial Intelligence and Data Act (“AIDA”), applicable to the use of AI by the private sector.
In July 2021, the federal government launched the Consultation on Modern Copyright Framework for Artificial Intelligence and the Internet of Things (the “Consultation”).3 The Consultation sought comments and information to help the government consider copyright policy in view of the challenges posed by AI following a 2018–2019 parliamentary review of the Copyright Act. The Consultation received comments from over 65 stakeholders and closed in September 2021. To date, the government has not announced further steps relating to the Consultation.
2 Government of Canada, “Canada’s Digital Charter”, (last modified 13 March 2023), online: <https://ised-isde.canada.ca/site/innovation-better-canada/en/canadas-digital-charter-trust-digital-world>.
3 Government of Canada, “A Consultation on a Modern Copyright Framework for Artificial Intelligence and the Internet of Things”, (last modified 16 July 2021), online: <https://ised-isde.canada.ca/site/strategic-policy-sector/en/marketplace-framework-policy/copyright-policy/consultation-modern-copyright-framework-artificial-intelligence-and-internet-things-0> [Consultation].
-
Has your country implemented rules or guidelines (including voluntary standards and ethical principles) on artificial intelligence? If so, please provide a brief overview of said rules or guidelines. If no rules on artificial intelligence are in force in your jurisdiction, please (i) provide a short overview of the existing laws that potentially could be applied to artificial intelligence and the use of artificial intelligence, (ii) briefly outline the main difficulties in interpreting such existing laws to suit the peculiarities of artificial intelligence, and (iii) summarize any draft laws, or legislative initiatives, on artificial intelligence.
Guidelines
The federal government and the province of Ontario have issued guiding principles on the use of AI by government ministries/agencies. These guidelines focus on issues such as system impacts, transparency, “explainability”, sharing of source code and training data, user training, ethical and legal use, risk assessment, safety, and governance. The Office of the Superintendent of Financial Institutions has also published a framework in support of safe AI development.
Legislation
AIDA
In its present form, AIDA is quite rudimentary and leaves the substance of the law to regulations to be enacted at a later date. AIDA seeks to regulate “high impact” AI systems and requires the person “responsible” for the AI system to identify, assess, and mitigate the risks of using the system and to monitor compliance with mitigation measures. Under section 5(2) of AIDA, a person is responsible for an AI system “if, in the course of international or interprovincial trade and commerce, they design, develop or make available for use the artificial intelligence system or manage its operation”. AIDA would also require that persons carrying out a regulated activity establish measures regarding the manner in which data are anonymized, used, and managed. AIDA leaves the establishment of an administrative monetary penalty scheme to later regulations but does provide for offences with penalties of up to the greater of $10 million CAD or 3% of global annual gross revenues.
Copyright Act
The application of Canada’s Copyright Act4 to AI systems and to works generated or assisted by AI raises several issues and uncertainties, many of which are addressed in the Consultation. To cite just a few examples, these issues include:
- whether it is an infringement to reproduce copyrighted material via text and data mining activity to train or develop an AI model, or whether text and data mining are, or should be, covered by an exception;
- uncertainty about the authorship and first ownership of AI-generated/assisted works;
- uncertainty about the circumstances in which a human or humans (e.g., an AI programmer, the end-user, etc.) might be liable for an AI work that is found to be infringing; and
- potential challenges in establishing infringement due to the “black box” nature of many AI systems.
4 Copyright Act, RSC 1985, c C-42 [Copyright Act].
-
Which rules apply to defective artificial intelligence systems, i.e. artificial intelligence systems that do not provide the safety that the public at large is entitled to expect?
No AI-specific rules currently apply to defective AI systems. If enacted, AIDA would provide for administrative monetary penalties and for regulatory and criminal prosecution in relation to defective AI systems.
-
Please describe any civil and criminal liability rules that may apply in case of damages caused by artificial intelligence systems.
At present, there are no novel or specific rules or causes of action in civil litigation that would apply in the case of damages caused by AI systems. Claims would have to be made under existing legal frameworks, such as tort law (including negligence and intentional or strict liability torts), intellectual property law, and human rights and privacy laws.5
Broadly speaking, negligence holds a person liable for damages caused by their failure to exercise reasonable care. A plaintiff must prove that: (i) the defendant owed a duty of care to the plaintiff to avoid the kind of loss alleged; (ii) the defendant breached that duty by failing to observe the applicable standard of care; (iii) the plaintiff has suffered damages; and (iv) the damages were caused, in fact and in law, by the defendant’s breach. AI will likely pose challenges to the negligence analysis. For example, if an AI system operates autonomously, such as in the case of an automated vehicle, it might raise questions as to whether a duty of care is owed by any of the parties involved in the manufacture, distribution, or use of the system. Due to the “black box” nature of AI systems, it might also be technically difficult to prove that an AI system “malfunctioned” or that it did so because of a lack of reasonable care by any one or more of the parties involved, including the victim.
Certain uses of AI could potentially constitute one or more intentional torts. For example, creating a “deepfake” image or video of another person could give rise to liability for various intentional torts, including: (i) portraying the person in a false light; (ii) publication of embarrassing facts; (iii) appropriation of that person’s likeness; (iv) non-consensual sharing of intimate images; and (v) intentional infliction of emotional distress.
In Canada, strict liability in tort is limited. Although future plaintiffs might seek to hold the operator of an AI system strictly liable for damages caused by the system,6 it remains to be seen how that issue will develop in Canada.
In addition, the training or use of an AI system may give rise to a claim for copyright or moral rights infringement under the Copyright Act, or a discrimination claim under the Canadian Human Rights Act7 or provincial and territorial human rights statutes.
With respect to criminal liability, Canada’s Criminal Code8 does not explicitly address AI. It does contain several provisions related to the unlawful use of computers, computer systems, and data. For example, section 342.1 pertains to the unauthorized use of a computer and contains language related to obtaining “directly or indirectly any computer service”. A “computer service” is defined as including “data processing and the storage or retrieval of computer data.” While this provision seems to be aimed at “hacking”, it could potentially capture illegal activities of AI and, by extension, its operators or inventors. Section 342.1 is a hybrid offence, which may be punishable by up to ten years in prison. Various sections of the Criminal Code could also apply where AI is used to create a “deepfake” of another person, depending on the nature of the deepfake.
5 Common law torts would not be available in Quebec, which is a civil law jurisdiction. Principles of liability set out in the Civil Code of Québec would apply. See Civil Code of Québec, CQLR c CCQ-1991.
6 By way of analogy, see: Del Giudice v Thompson, 2021 ONSC 5379 (dismissing a claim of strict liability for a data breach).
7 Canadian Human Rights Act, RSC 1985, c H-6.
8 Criminal Code, RSC 1985, c C-46.
-
Who is responsible for any harm caused by an AI system? And how is the liability allocated between the developer, the user and the victim?
Assessing responsibility for harm caused by an AI system will be one of the challenges of litigating AI system claims, particularly in negligence cases, where a plaintiff must prove the elements described in response to question 5, above. In the case of AI, where damages have been caused by an AI system acting autonomously, establishing the duty of care may be problematic unless the court is prepared to hold any of the parties involved in the commercialization of the system liable or to recognize a novel duty of care or cause of action. How liability will be allocated as between such parties and the victim will not be straightforward, as it may be difficult to assess which, if any, of the parties involved failed to exercise reasonable care and to establish factual and legal causation between their actions and the ultimate damages.
-
What burden of proof will have to be satisfied for the victim of the damage to obtain compensation?
In a negligence action, the plaintiff must prove the elements of negligence on the civil standard of proof, namely on a balance of probabilities. Among other things, the plaintiff must prove that the defendant’s failure to exercise reasonable care caused the plaintiff’s damages. However, it remains to be seen whether Canadian courts might adopt a strict liability approach to damages caused by AI systems.
-
Is the use of artificial intelligence insured and/or insurable in your jurisdiction?
At present, losses related to the use of AI are not routinely listed as exclusions in commercial insurance policies. As AI advances and uses become more routine and widespread, we can expect that AI-specific policy exclusions for certain types of AI-related losses may become more common.
-
Can artificial intelligence be named an inventor in a patent application filed in your jurisdiction?
The Manual of Patent Office Practice is silent on the issue, which has not yet been specifically addressed by Canadian case law. Canada’s Patent Act does not have a provision that expressly provides that an inventor must be an “individual”. While the Supreme Court of Canada has stated that “the inventor is the person or persons who conceived of” the invention and who is “responsible for the inventive concept”,9 the Court did not explicitly consider whether this is limited to a natural person.
9 Apotex Inc v Wellcome Foundation Ltd, 2002 SCC 77.
-
Do images generated by and/or with artificial intelligence benefit from copyright protection in your jurisdiction? If so, who is the authorship attributed to?
Canadian courts have not yet considered whether copyright subsists in AI-generated or AI-assisted works, or who would be the author(s) of such works.
To receive copyright protection, a work must be “original”. An original work is one that originates from an author, is not copied from another work, and is the product of the author’s exercise of skill and judgment, which must not be so trivial that it could be characterized as a purely mechanical exercise.10 There is also a geographic requirement, which may be satisfied based on, among other criteria, the citizenship or residency of the author of the work at the time the work was made.11
The Copyright Act does not define the term “author”. However, courts have held that an author must be a natural person,12 since the term of copyright protection is tied to the author’s life and death.13 The Copyright Act grants certain moral rights to the author of a work, which, due to their personal nature, might also suggest that an author must be a natural person.
In the Consultation, the federal government identified three possible approaches to clarifying these issues in the Copyright Act, namely: (1) attributing authorship to the person who arranged for the work to be created; (2) clarifying that copyright and authorship apply only to works generated by humans or involving some form of human participation; or (3) creating a new and unique set of rights for AI-generated works.
In December 2021, the Canadian Intellectual Property Office (“CIPO”) issued a copyright registration for a painting that lists a human and an AI program as co-authors (no. 1188619). However, because CIPO does not conduct substantive examinations of copyright registration applications, this is not necessarily indicative of CIPO’s position. In addition, although registration creates a statutory presumption as to the subsistence and ownership of copyright,14 the presumption is rebuttable. Therefore, these issues remain to be determined in court or clarified through legislative reform.
10 CCH Canadian Ltd v Law Society of Upper Canada, 2004 SCC 13, at paras 16, 25.
11 Copyright Act, s 5.
12 P.S. Knight Co Ltd v Canadian Standards Association, 2018 FCA 222, at para 147; Setana Sport Limited v 2049630 Ontario Inc (Verde Minho Tapas & Lounge), 2007 FC 899, at para 4.
13 Copyright Act, ss 6, 9.
14 Copyright Act, s 53(2).
-
What are the main issues to consider when using artificial intelligence systems in the workplace?
The Canadian Human Rights Act as well as provincial and territorial human rights statutes protect against discrimination in employment on the basis of various protected grounds, including race, gender identity, and age. The data used to develop AI systems and algorithms can, in some cases, reflect unconscious bias and unintentional discrimination, which can, in turn, create biased outputs contrary to human rights legislation.
-
What privacy issues arise from the use of artificial intelligence?
The Personal Information Protection and Electronic Documents Act (“PIPEDA”)15 defines personal information as “information about an identifiable individual”. If AI systems use anonymized or de-identified personal information without obtaining consent for that use, individuals’ privacy rights may be affected, particularly if the information can subsequently be re-identified.
Canadian privacy laws are generally based on the concept of informed and meaningful consent to the collection, use, and disclosure of personal information, unless an exception applies. In an AI world, obtaining a valid consent may not be feasible, particularly since the consent request must explain the consequences of granting consent. AI can analyze, infer, and predict individuals’ behaviour in ways that could affect a person’s ability to obtain credit, employment, insurance, or other benefits. AI could make unfair, biased, incorrect, or discriminatory decisions about individuals. The individual may not have provided an informed consent to this use of their personal information, particularly if the information that AI generates about the individual is considered by that individual to be untrue.
AI systems often use training data collected from public sources. However, under PIPEDA, only certain “publicly available” personal information may be used without consent, and the permitted uses are strictly limited. Permitting AI systems to go beyond those limits to train on publicly available personal information and generate new information from it without consent could result in individuals effectively losing control over their personal information and possibly their identities.
PIPEDA is based on the principle of minimizing the collection of personal information to that which is necessary to achieve the purposes of collection and to limit retention. AI models harvesting vast quantities of data to learn and draw inferences fly in the face of those principles. Additionally, as an overarching principle, PIPEDA requires that collection, use, and disclosure of personal information must be for purposes that a reasonable person would consider appropriate in the circumstances. If AI systems are making unfair, biased, incorrect, or discriminatory decisions about individuals, then the use of the personal information by such AI systems would not meet that threshold test.
Canadian privacy laws are also based on the concept of access rights and transparency. Under PIPEDA, individuals have a right to access their personal information, to receive an accounting of its existence, uses, and disclosures, and to correct any information they can demonstrate is inaccurate. How will individuals be able to exercise this right of correction with respect to information that AI generates about them? Individuals also have the right to withdraw consent, subject to legal/contractual limits, and reasonable notice. In an AI world where personal information has been used to generate further insights and information, how far will that right of withdrawal extend? At present, these are unanswered questions.
15 Personal Information Protection and Electronic Documents Act, SC 2000, c 5 [PIPEDA].
-
What are the rules applicable to the use of personal data to train artificial intelligence systems?
At present, there are no statutory rules that are specific to the use of personal data to train AI systems. However, under Canada’s private sector privacy legislation,16 the use of personal information to train an AI system would be a use for which meaningful prior consent must be obtained, and the individual must understand the consequences of granting consent.
16 Ibid.
-
Have the privacy authorities of your jurisdiction issued guidelines on artificial intelligence?
The Office of the Privacy Commissioner of Canada (“OPC”) has not issued guidelines on AI, but, based on a 2020 public consultation, the OPC published sixteen recommendations for amendments to PIPEDA as well as “A Regulatory Framework for AI: Recommendations for PIPEDA Reform”.17 This framework was based on the following principles:
- Exception to consent – using data for socially beneficial and legitimate commercial purposes;
- Recognizing privacy as a human right;
- Amending the law to regulate AI’s impact on privacy rights by providing specific protections in relation to automated decision-making; and
- Accountability to the regulator.
The OPC has also commissioned a number of research studies into AI.
17 Office of the Privacy Commissioner of Canada, “A Regulatory Framework for AI: Recommendations for PIPEDA Reform” (November 2020), online: <https://www.priv.gc.ca/en/about-the-opc/what-we-do/consultations/completed-consultations/consultation-ai/reg-fw_202011/>.
-
Have the privacy authorities of your jurisdiction discussed cases involving artificial intelligence?
On May 25, 2023, the OPC and its provincial counterparts in Alberta, British Columbia, and Quebec, announced a joint investigation into OpenAI’s ChatGPT in response to a complaint that OpenAI collected, used, and disclosed personal information without consent. The investigation will also consider whether OpenAI has met its obligations of openness and transparency, access, accuracy, and accountability and whether it has met the overarching principle under PIPEDA that it has “collected, used and/or disclosed personal information for purposes that a reasonable person would consider appropriate, reasonable or legitimate in the circumstances, and whether this collection is limited to information that is necessary for these purposes”.18
In 2021, the same regulators conducted an investigation into the use of Clearview AI, Inc.’s (“Clearview”) facial recognition technology in Canada, considering the same issues regarding consent and appropriate purposes. The regulators concluded that Clearview did not collect valid consents and that its collection, use, and disclosure of personal information was “neither appropriate nor legitimate”.19 Clearview disagreed with the findings and ultimately withdrew from the Canadian market.
18 Office of the Privacy Commissioner of Canada, “OPC to investigate ChatGPT jointly with provincial privacy authorities” (May 25, 2023), online: <https://www.priv.gc.ca/en/opc-news/news-and-announcements/2023/an_230525-2/>.
19 Office of the Privacy Commissioner of Canada, “Joint investigation of Clearview AI, Inc. by the Office of the Privacy Commissioner of Canada, the Commission d’accès à l’information du Québec, the Information and Privacy Commissioner for British Columbia, and the Information Privacy Commissioner of Alberta” (February 2, 2021), online: <https://www.priv.gc.ca/en/opc-actions-and-decisions/investigations/investigations-into-businesses/2021/pipeda-2021-001/#toc6-2>.
-
Have your national courts already managed cases involving artificial intelligence?
In Haghshenas v Canada (Citizenship and Immigration),20 the Federal Court considered the use of AI in the administrative decision-making process. The case involved an application for judicial review of a decision by a Canadian immigration officer, which had denied a work permit application. The officer’s decision involved input assembled by an AI system, known as Chinook. On judicial review, the Federal Court held that the decision was procedurally fair because it had been made by the immigration officer, not the software. The court also rejected the argument that the officer’s use of the software rendered the decision substantively unreasonable.
In Orpheus Medica v Deep Biologics Inc., the plaintiff sought an interlocutory injunction against former employees who it alleged had misappropriated confidential information.21 The plaintiff claimed that its confidential information included its approach of using artificial intelligence to analyze a database of certain types of antibodies. The Ontario Superior Court of Justice dismissed the motion, finding that the concept of using AI for that purpose was not “unique” to the plaintiff or confidential. In addition, the AI system was not proprietary to the plaintiff. Rather, the plaintiff had been using open source, publicly available computer programs.
In James v Amazon.com.ca, Inc.,22 the Federal Court denied the applicant’s request for a declaration that the respondent’s AI-based automated data request decision-making process did not comply with PIPEDA. The Court dismissed the application on the basis that: (i) the relief sought was beyond the scope of the applicant’s original complaint to the OPC; (ii) the allegation was not considered by the OPC in its investigation; and (iii) there was no basis in the record upon which to consider the issue.
Other decisions have involved parties that develop or use AI systems, but the claims in those decisions were not directly related to the AI systems. In one case, an online market research firm that used AI for web analytics and related purposes infringed copyright by posting the plaintiff’s photographs on its website. However, the decision does not indicate that the photographs were generated by AI, and AI was not directly relevant to the decision.23 Another decision involved a commercial dispute between an AI developer and one of its customers, but the AI system was not the subject of the claims.24
20 2023 FC 464.
21 Orpheus Medica v Deep Biologics Inc, 2020 ONSC 4974.
22 2023 FC 166.
23 Stross v Trend Hunter Inc, 2020 FC 201, aff’d 2021 FC 955.
24 Core Insight Strategies Inc v Advanced Symbolics (2015) Inc, 2021 ONSC 1717.
-
Does your country have a regulator or authority responsible for supervising the use and development of artificial intelligence?
Not at present. If AIDA in its present form becomes law, the Ministry of Innovation, Science and Economic Development, or another ministry to be designated, would be responsible for the enforcement of the legislation.
-
How would you define the use of artificial intelligence by businesses in your jurisdiction? Is it widespread or limited?
According to a 2022 Borealis AI study,25 54% of Canadian companies use AI, primarily for: (i) data collection and analysis; (ii) fraud detection, CRM analysis, and improving sales and marketing decisions; and (iii) helping with bookkeeping/accounting.
A recent KPMG Canada survey26 found that only 35% of Canadian businesses surveyed said that they use AI in their operations. More than 40% of the Canadian companies surveyed stated that they are using AI in their call centres, and 37% stated that they are experimenting with ChatGPT.
25 Borealis AI, “2022 Report: Canadian businesses’ use of AI” (December 13, 2022), online: <https://www.borealisai.com/news/report-canadian-businesses-use-of-ai/>.
26 KPMG Canada, “More than one third of Canadian businesses experimenting with ChatGPT, KPMG Canada survey” (April 19, 2023), online: <https://kpmg.com/ca/en/home/media/press-releases/2023/04/us-outpacing-canada-in-business-adoption-of-ai.html>.
-
Is artificial intelligence being used in the legal sector, by lawyers and/or in-house counsels? If so, how?
AI is being used in the legal sector. Several Canadian legal technology companies offer machine learning–based legal analytics products, which can be used to, among other things, analyze content in contracts and documents, and predict outcomes in future legal proceedings based on past judicial decisions. AI is also used by legal resource databases, including to classify, summarize, and analyze case law. In litigation, e-discovery platforms offer technology-assisted review to streamline the review of large sets of documents.
In June 2023, the Court of King’s Bench of Manitoba issued a Practice Direction requiring the disclosure to the court of the use of AI “in the preparation of materials filed with the court”. The Supreme Court of Yukon issued a similar Practice Direction, also in June 2023, requiring disclosure if AI is used for “legal research or submissions in any matter and in any form before the Court”.
-
What are the 5 key challenges and the 5 key opportunities raised by artificial intelligence for lawyers in your jurisdiction?
AI will raise many challenges and opportunities for lawyers and the practice of law. Some of the more notable challenges include the following:
- Advising clients about the use of AI or assessing the risks and merits of disputes involving the use of AI while the law is evolving.
- The selection of training datasets for AI systems used in law firms may pose challenges, which include the risk of confidential client information being inappropriately used and potentially disclosed, as well as the risk of performing analyses based on outdated datasets. This could also pose potential issues for ethical walls established within firms.
- Over-reliance on AI without appropriate review of the results risks reduction in opportunities for legal training and development of lawyers and staff and could result in a corresponding decline in critical thinking and practical legal skills. This may also result in a failure to meet the expected standard of care for legal professionals.
- The use of AI tools in law firms creates uncertainties relating to professional liability insurance coverage.
- Powerful AI systems designed specifically for the legal field may be beyond the financial reach of some law firms.
Some of the more notable opportunities include:
- The appropriate use of AI tools in a law firm may enhance efficiency and productivity without jeopardizing the quality of legal services. Examples include automating legal research, due diligence review, contract legal review, and document assembly.
- Depending on the dataset used to train an AI tool, a user will have easier access to a breadth of information that exceeds what might be possible to access via manual search methods.
- The possible use of an AI tool to evaluate the costs and benefits of litigation.
- The use of low- or no-cost open source AI tools may allow smaller law firms to effectively compete.
- The cost savings offered by the use of AI in law firms may make legal services more widely available and accessible, including for small- and medium-sized businesses.
-
Where do you see the most significant legal developments in artificial intelligence in your jurisdiction in the next 12 months?
Key developments are likely to include: (i) the enactment of AIDA; (ii) reform of privacy law, including amendments to PIPEDA to address AI and strengthen regulators’ oversight; (iii) AI-related copyright litigation; (iv) government monitoring of the approaches to AI taken by Canada’s major trading partners; (v) further consideration of the use of AI in judicial and administrative decision-making; and (vi) regulation of autonomous vehicles and other products incorporating AI systems.