-
What are your country's legal definitions of “artificial intelligence”?
The UK does not have a statutory definition of artificial intelligence (AI).
In a White Paper issued by the UK government in March 2023 (“A pro-innovation approach to AI regulation”) it was noted that “there is no general definition of AI that enjoys widespread consensus”. The government has sought instead to define AI by reference to two key characteristics: (1) the ‘adaptivity’ of AI, whereby AI systems are trained to operate by “inferring patterns and connections in data which are often not easily discernible to humans”; and (2) the ‘autonomy’ of AI, whereby AI systems are able to make decisions without a human’s intent or control.
-
Has your country developed a national strategy for artificial intelligence?
The UK government published its National AI Strategy in September 2021, which it states has been developed with a view to making the UK a “global AI superpower” in the next decade and to translate the “potential of AI into better growth, prosperity and social benefits for the UK”. The strategy does not set out the detail of any future legal principles or planned statutory regime, but the aims of the plan are stated to be to:
- “invest and plan for the long-term needs of the AI ecosystem”;
- “support the transition to an AI-enabled economy”; and
- “ensure the UK gets the national and international governance of AI technologies right”.
The Strategy says that this will best be achieved “through broad public trust and support, and by the involvement of the diverse talents and views of society”.
In July 2022, the UK also published its AI Action Plan, which outlines the government’s further activities to advance the National AI Strategy.
-
Has your country implemented rules or guidelines (including voluntary standards and ethical principles) on artificial intelligence? If so, please provide a brief overview of said rules or guidelines. If no rules on artificial intelligence are in force in your jurisdiction, please (i) provide a short overview of the existing laws that potentially could be applied to artificial intelligence and the use of artificial intelligence, (ii) briefly outline the main difficulties in interpreting such existing laws to suit the peculiarities of artificial intelligence, and (iii) summarize any draft laws, or legislative initiatives, on artificial intelligence.
The UK has not yet implemented clear standalone rules or guidelines on AI. The UK government’s March 2023 White Paper does, however, set out a framework for a regulatory regime that aims to be “pro-innovation, proportionate, trustworthy, adaptable, clear and collaborative”. A consultation on the White Paper closed on 21 June 2023.
The White Paper identifies five “values-focused cross-sectoral principles” by which regulators will be expected to implement AI regulation. These are: (1) safety, security and robustness; (2) appropriate transparency and explainability; (3) fairness; (4) accountability and governance; and (5) contestability and redress. These principles are intended to guide businesses in designing, developing and using AI in a responsible manner. Regulators will be required to refer to the principles in order to publish their own more detailed sectoral guidance.
It should be noted that the White Paper remains predominantly a policy document, and therefore any attempts to align AI compliance strategies with its positions and recommendations should be approached with caution, as the White Paper does not itself implement binding rules or detailed guidelines on AI. At the time of writing, however, the UK government does not appear to be planning to bring forward specific legislation around AI in the near future.
To the limited extent that AI is currently specifically regulated, this is implemented through existing legal frameworks such as the UK’s data protection regime (see Questions 12 – 15) or (in relation to materials accessed by AI systems or generated as outputs by those systems) existing UK intellectual property laws and principles (see Questions 9 & 10).
In addition to the principles and guidance expounded in the White Paper, there are a number of other indicative standards that may have a bearing on the deployment of AI in the UK, such as:
- The “AI Standards Hub”, led by The Alan Turing Institute and supported by the British Standards Institution, which has been adopted by the UK government as part of its National AI Strategy; and
- The Institute for Ethical AI & Machine Learning’s “Responsible Machine Learning Principles”, which support the responsible development, deployment and operation of machine learning systems.
The above are not, however, rules with an overarching statutory footing, and do not therefore provide a definitive legislative framework for AI compliance or liability, as outlined elsewhere in this UK chapter.
-
Which rules apply to defective artificial intelligence systems, i.e. artificial intelligence systems that do not provide the safety that the public at large is entitled to expect?
As the UK does not have an overarching AI legislative framework, defective AI systems will be dealt with by general causes of action available under UK law, on a fact-specific basis in the context of the deployment and use of those systems and the nature of harm caused in each case. The potential routes for liability include contractual liability, the tort of negligence (if a duty of care is owed between parties), and product safety legislation (where the AI is integrated into a product) under the Consumer Protection Act 1987.
The Consumer Rights Act 2015 may also protect consumers where they have entered into a contract for AI-based products and services.
Finally, discriminatory outcomes from the use of AI systems may contravene the protections in the Equality Act 2010.
-
Please describe any civil and criminal liability rules that may apply in case of damages caused by artificial intelligence systems.
Civil liability rules are set out at Question 4. Criminal liability may arise from harm caused by an AI system if that harm can be attributed to a legal person. For example, in the most extreme cases, a corporate entity can be liable for corporate manslaughter under the Corporate Manslaughter and Corporate Homicide Act 2007.
-
Who is responsible for any harm caused by an AI system? And how is the liability allocated between the developer, the user and the victim?
AI systems present a potentially complex nexus of liability between the different parties within the AI supply chain, from developers through to corporate customers and any ultimate end-users. However, there is no current AI-specific statutory basis on which responsibility or liability for claims related to harm caused by AI is allocated between parties in the UK. As such, claims will be managed in accordance with the general rights and causes of action outlined above (in Questions 4 & 5).
Any contractual claims will sit with the contracting party. Similarly, any claim of negligence would sit with the party to whom a duty of care is owed (which in turn would be very fact-specific), and would be brought against the party owing that duty of care. If the AI system is embedded in a product, a claim can be pursued against any of the following: (i) the producer (i.e., the manufacturer), (ii) a person who holds themselves out as a producer, or (iii) the importer of the product into the UK pursuant to the Consumer Protection Act 1987.
-
What burden of proof will have to be satisfied for the victim of the damage to obtain compensation?
As there is no current AI-specific statutory basis for determining claims in the UK, the burden of proof in AI-related cases will depend on the cause of action (as with general civil claims).
The most common burden of proof in civil claims in England and Wales is for the claimant to prove their case on the balance of probabilities. However, various statutory causes of action may differ. For example, under the Consumer Protection Act 1987, the claimant must prove that the product is defective and that the defect caused the damage which is the subject of the claim. Claimants should therefore seek advice as to the specific merits of an individual case, and any necessary burden of proof that needs to be met for that claim to be brought, in respect of any prospective claim involving AI.
-
Is the use of artificial intelligence insured and/or insurable in your jurisdiction?
The use of AI is insurable and, as is always the case with technological developments or emerging risks, insurers will consider very carefully the nature of the risk(s) during the underwriting process and price accordingly. Insurers will also continue to carefully monitor the potential shift in liability (towards manufacturers, software developers etc.) and adapt their policy wordings, which set out the scope of cover provided, applicable exclusions, and rights to bring subrogated actions, as they see fit.
A recent example of how developments in technology laws are affecting how insurers operate is the Automated and Electric Vehicles Act 2018. This Act extended compulsory motor vehicle insurance to cover the use of vehicles in automated mode, so that all victims of an accident caused by a fault in the automated vehicle will be covered. The insurer is initially liable to pay compensation to the victims, but can recover costs from the party who is ultimately liable. The Act also clarified that insurers may exclude or limit their liability in respect of damage resulting from software alterations made without authorisation or from failure to install safety-critical software updates. This may give an indication of how the law might evolve to treat insurance of AI-related technology.
-
Can artificial intelligence be named an inventor in a patent application filed in your jurisdiction?
In the UK, patent applications must name a human as the inventor or inventors.
Although patent applications naming an AI system as the inventor have been submitted in two cases, the courts have refused to recognise inventorship in those cases.
The UK Intellectual Property Office has recognised that developments in technology mean that AI is making significant contributions to innovation, and held a consultation to consider whether the current rule for inventorship in the UK could potentially be improved to better support innovation and incentivise the generation of new AI-devised inventions as the capability of AI increases.
The outcome of the consultation, published in June 2022, determined that for the time being there would be no change to the rule that patent applications must name a human as the inventor or inventors.
-
Do images generated by and/or with artificial intelligence benefit from copyright protection in your jurisdiction? If so, who is the authorship attributed to?
In the UK, images or artistic works may benefit from copyright protection to the extent they are original, i.e. the author’s own intellectual creation. The threshold for originality in the UK is very low and does not require particular creativity, but the author must have given the work their “personal touch”.
Images may certainly be created by a human with assistance from AI and, provided the work meets the usual threshold for originality, it will benefit from copyright protection like a work created using any other tool.
However, issues of copyright ownership may arise because the technology underpinning AI must be trained and improved through exposure to large datasets, including vast numbers of images available on the Internet. These images will already be protected as artistic works with the copyright owned by a third party. Although UK copyright law generally permits text and data mining of copyright works for non-commercial purposes, the commercial aspect of many AI platforms means this exception cannot necessarily be relied upon. For example, if AI is directed to create “an image in the style of David Hockney”, the AI may look to its source data and return an image similar or identical to an existing David Hockney work, therefore calling the originality and ownership of the AI-generated image into question and putting the human creator at risk of copyright infringement.
In the UK, images generated by a computer where there is no human creator are capable of copyright protection. The “author” of a “computer-generated work” is defined as “the person by whom the arrangements necessary for the creation of the work are undertaken”. However, there is a degree of ambiguity in this. In the case of images generated through an AI platform, this could mean that the person directing the AI through keywords or instructions would be deemed to be the author. Alternatively, the creator of the AI platform itself could assert ownership, although many AI platforms clarify through their terms and conditions that ownership of any AI generated work vests in the user, thereby passing over to the user any risks of third-party claims of infringement.
-
What are the main issues to consider when using artificial intelligence systems in the workplace?
There are four main issues for businesses to consider when using AI in the workplace. These are:
- Discriminatory outcomes: The risk of AI producing discriminatory and/or biased outcomes that are contrary to what a business wants and may expose the business to potentially expensive discrimination claims and significant PR damage (discrimination/bias may occur as a result of a failing in the data or training of the AI system);
- Changes in established working practices and roles: Increased use of AI is expected to result in increased efficiency and cost savings for businesses. This is likely to lead to: employees working more with AI systems as opposed to people; opportunities for some employees to carry out higher-value or more interesting work; and, potentially, redundancies where AI performs all or part of certain existing work functions within a business;
- Managing communication with employees: Businesses will need to communicate clearly with employees to allay concerns regarding privacy and monitoring when implementing AI systems in the workplace. Employees may be concerned that “Big Brother” is watching, or that AI will in certain scenarios replace their roles, so businesses may lose valued employees who choose to leave before that happens, when in fact the business may have had no plan to replace such roles;
- Changes in business processes: Businesses may become increasingly dependent on AI systems, so will need to develop and maintain adequate operational plans to address situations where these systems fail or are temporarily unavailable. Businesses will also need to be satisfied that there is sufficient human involvement and oversight of the AI systems, both at the time they are being designed and implemented, and on an ongoing basis to ensure their use remains valid, accurate and delivers appropriate outcomes both internally and for external users or customers.
Existing UK employment laws will apply in the normal way in relation to the employment and treatment of personnel within a business, irrespective of whether their role interacts with or involves the use of AI technologies.
-
What privacy issues arise from the use of artificial intelligence?
Privacy issues which arise from the use of AI include:
- Automated Decision Making. AI can be used to make automated decisions about individuals. Unless an exemption applies, the UK General Data Protection Regulation (“UK GDPR”) gives individuals the right not to be subject to a solely automated decision, including profiling, which produces a legal or similarly significant effect. If an exemption applies, organisations must implement suitable measures to safeguard the rights, freedoms and legitimate interests of individuals. This will include providing meaningful human intervention so that individuals can express their point of view and contest a decision.
- Transparency. The UK GDPR requires organisations to provide individuals with “meaningful information about the logic involved, as well as the significance and the envisaged consequences of” any automated decision-making. This can be challenging for organisations given the complexities of AI algorithms. If the information is too technical, individuals may struggle to understand anything meaningful within the information. Organisations will need to provide information in a clear and comprehensible fashion so that individuals can fully understand the reasoning behind any automated decision-making.
- Data Protection Impact Assessments (“DPIA”). Under the UK GDPR, a DPIA is mandatory if the processing of personal data is likely to result in a high risk to the rights and freedoms of individuals. A DPIA’s purpose is to identify and minimise the data protection risks associated with a project. It is highly likely that the use of AI will trigger the need for a DPIA where this involves the processing of personal data. Additionally, a prior consultation with the ICO may be required if the DPIA indicates that the processing would result in a high risk to individuals which cannot be suitably mitigated.
- Data Minimisation. Processing large amounts of data is central to the development and use of AI. Organisations will need to balance this need with the requirement of data minimisation under the UK GDPR. Data minimisation means that organisations must only process personal data to the extent it is adequate, relevant, and limited to what is necessary. There is a risk of function creep with AI, which would threaten the principle of data minimisation.
- Vendor Due Diligence. Most AI systems will likely be provided by a third party, which means vendor due diligence will play a crucial role in ensuring organisations can comply with their data protection obligations. Organisations should only engage third parties that provide sufficient guarantees to implement appropriate technical and organisational measures in accordance with the UK GDPR. Organisations will also need to ascertain the data protection roles of vendors and, where relevant, put in place compliant data processing terms with third parties that process personal data on their behalf as processors.
-
What are the rules applicable to the use of personal data to train artificial intelligence systems?
There are no specific rules applicable to the use of personal data to train AI systems, but the general rules under the UK GDPR and the Data Protection Act 2018 will apply to any personal data used with or created by AI systems, including training data.
-
Have the privacy authorities of your jurisdiction issued guidelines on artificial intelligence?
Yes, the Information Commissioner’s Office (“ICO”) has published the following guidance:
- Artificial intelligence and data protection – Guidance on AI and data protection | ICO;
- Explaining decisions made with AI – Explaining decisions made with AI | ICO;
- How to use AI and personal data appropriately and lawfully – how-to-use-ai-and-personal-data.pdf (ico.org.uk);
- Commentary on generative AI – Don’t be blind to AI risks in rush to see opportunity – ICO reviewing key businesses’ use of generative AI | ICO.
-
Have the privacy authorities of your jurisdiction discussed cases involving artificial intelligence?
The most significant enforcement action by the ICO to date has been in relation to Clearview AI.
Clearview AI provides a service that allows customers, including the police, to upload an image of a person to the company’s app, which is then checked for a match against all images in the company’s global online database. The app provides a list of images that have similar characteristics to the photo provided by the customer, with a link to websites from which those images were sourced.
The ICO fined Clearview AI Inc £7,552,800 in May 2022 for using images of people in the UK and elsewhere that were collected from the web and social media, and which were stored in the company’s database and could be used for facial recognition. The individuals were not informed that their images were being collected or used in this way.
The ICO found that Clearview AI Inc breached UK data protection laws by:
- failing to use the information of people in the UK in a way that was fair and transparent, given that individuals were not made aware or would not reasonably expect their personal data to be used in this way;
- failing to have a lawful reason for collecting people’s information;
- failing to have a process in place to stop the data being retained indefinitely;
- failing to meet the higher data protection standards required for biometric data (classed as ‘special category personal data’ under the GDPR and UK GDPR); and
- asking for additional personal information, including photos, when members of the public enquired whether they were on the company’s database. This may have acted as a disincentive to individuals who wished to object to their data being collected and used.
-
Have your national courts already managed cases involving artificial intelligence?
Without bespoke AI legislation, most of the court cases around AI stem from either technology disputes (primarily contractual claims) or product claims around defective systems embedded in products.
However, in 2023, the Supreme Court considered issues around intellectual property rights created by AI systems in Thaler v Comptroller-General of Patents, Designs and Trade Marks, with judgment (at the time of writing) expected to be handed down shortly.
-
Does your country have a regulator or authority responsible for supervising the use and development of artificial intelligence?
There is no central regulator or authority responsible for supervising the use and development of AI in the UK. The Office for Artificial Intelligence (“OAI”), set up by the UK government to maintain general oversight of and input to issues involving AI, currently has no statutory enforcement powers and is not directly responsible for supervision or regulation. Instead, sectoral regulators are expected to supervise AI within their area of jurisdiction. According to guidance from the OAI, the regulators overseeing AI at the sectoral level are likely to include:
- The Information Commissioner’s Office (“ICO”) in respect of personal data. The ICO has already published guidance on AI systems;
- The Equality and Human Rights Commission (“EHRC”). The EHRC’s role may be important in setting standards for mitigation of bias or similar unfair, discriminatory treatment;
- The Employment Agency Standards Inspectorate (“EAS”) in respect of the employment sector;
- The Financial Conduct Authority (“FCA”) in respect of the use of AI within regulated financial services settings; and
- The Intellectual Property Office (“IPO”) in respect of intellectual property issues around AI.
Other bodies that have, or may have, a role in supervising AI in the UK include:
- The Competition and Markets Authority (“CMA”): the CMA is responsible for promoting competition and preventing anti-competitive behaviour; and
- The Medicines and Healthcare products Regulatory Agency (“MHRA”): the MHRA has already published guidance on how AI systems can be used in healthcare and medical devices.
Many other organisations in the UK are also currently working to develop standards and best practices for the use of AI, including The Alan Turing Institute. Such organisations may play an advisory role in the development of policy and future regulation of AI in the UK, and businesses operating in one or more specific sectors (particularly where those sectors are regulated) should take steps to liaise with relevant industry bodies, regulators or other organisations with industry oversight, in order to understand current best practice for the use of AI in a particular sector context.
-
How would you define the use of artificial intelligence by businesses in your jurisdiction? Is it widespread or limited?
The use of AI by businesses in the UK varies, depending on the sector, market or industry in which businesses operate, although use of AI is generally thought to be growing. For example, the Competition and Markets Authority, in its initial review of AI models (in 2023), noted that “thousands of scientists, engineers and researchers are incorporating AI models into products and services spanning search and productivity software to medical research and scientific discovery”. It is, however, difficult to determine precise statistics.
A 2022 survey by a leading global IT service provider found that, even within the IT sector, only 26% of IT professionals in the UK reported the active use of AI in their organisations. This compared with almost 60% in both China and India. The UK figure does, however, sit above AI adoption rates in other OECD nations; for example, in the US, 25% of IT professionals reported active use of AI in their organisations, while in Australia the figure was 24%, and in South Korea, 22%.
A 2021 report by one of the UK’s largest consultancy firms also noted that within the private sector, 90% of ‘large organisations’ have planned or already adopted AI, whereas the figure was considerably smaller, at only 48%, for SMEs.
-
Is artificial intelligence being used in the legal sector, by lawyers and/or in-house counsels? If so, how?
AI is being used by the legal sector in the UK.
A June 2023 report by the Law Society found that adoption of “lawtech” (defined as “technologies which aim to support, supplement or replace traditional methods for delivering legal services”, including AI) was reasonably high for “legal databases” (28.2% of respondents noted ‘regular usage’). However, outside this area, regular usage fell to between 0.6% and 16.2% depending on the area of application. An earlier report from the University of Oxford in 2020 found that take up of lawtech technology assisted by AI was “especially modest”.
The use of AI in the legal sector in the UK does however appear to be growing. One ‘Magic Circle’ firm has adopted a generative AI platform into its operating model since November 2022, and at least one of the ‘Big Four’ accounting firms (which also provides legal services) has publicly stated its investment into AI. A wide-ranging report by a leading UK university found that the most popular AI-assisted technology in law firms (and the respective adoption rates) were:
- legal research at 25.0%;
- due diligence at 18.2%;
- eDiscovery, eDisclosure and technology assisted review at 14.0%;
- regulatory compliance at 12.3%;
- contract analysis at 10.2%;
- fee-earner utilisation analytics and/or predictive billing, also at 10.2%;
- “other” at 5.1%; and
- predictive analytics for litigation at 2.1%.
Legal publishers and resource databases are also rolling out products which deploy AI in their legal research tools.
-
What are the 5 key challenges and the 5 key opportunities raised by artificial intelligence for lawyers in your jurisdiction?
AI seems likely to be among the most disruptive technologies to be deployed in law firms (and the legal industry more generally) in the coming years. Five key challenges linked to this increased usage include:
- A lack of ‘traditional’ rules and regulation for AI within the legal industry, when compared with the way lawyers and their client businesses have delivered and accessed legal advisory services over a prolonged period of time. The ‘pacing problem’ (i.e., the inability of institutions to keep up with technological change) means that the legal industry (like many other industries) risks operating under standards which do not keep up with technological advances such as AI, and may continue to struggle to do so in the near term.
- A lack of specific rules for particular scenarios or use cases for AI deployment. Lawyers and businesses will need to become accustomed to there being limited or no established or specific rules, and may have to anticipate future rules by horizon-scanning and trying to future-proof compliance processes in line with existing corporate and legal frameworks for functions such as data privacy (see Questions 12 – 15) or information security.
- Uncertainty if a ‘patchwork’ of laws is created, and a possible risk of duplication or gaps in the law. The Digital Regulation Cooperation Forum, comprising the CMA, FCA, ICO and Ofcom, was established in June 2020 to ensure greater cooperation on online regulatory matters, but it remains to be seen whether such bodies will be effective in ensuring a uniform approach to AI, particularly as the UK government’s White Paper does not anticipate a significant ‘horizontal’ oversight body to ensure consistency in regulation between individual sector regulators.
- Indifference in the legal sector towards adopting AI technology, or a simple lack of understanding. A June 2023 report by the University of Manchester, University College London and The Law Society found that the adoption of lawtech generally remains relatively limited, and noted that “lawyers have mixed perceptions as to whether top managers consider lawtech a strategic priority and therefore worthy of investment”. Further, the report found that, although lawyers see “the positive benefits” of increasing productivity, “they were generally less convinced of the benefits to them personally”. Legal professionals may continue to be wary of lawtech in general (including AI).
- Timing in adoption of the technology. Many sectors appear to be waiting for the ‘AI tipping point’, where a critical mass of businesses have adopted and are using the technology. In the legal sector, law firms and businesses may be holding off adopting AI through fear that they may enter the market too early. Conversely, leaving adoption too late may risk missing commercial opportunities. Finding the best time to adopt technology will be a key challenge for businesses in all sectors.
Five key opportunities include:
- New areas of legal advice. Although it is unclear how the law on AI will develop, a corpus of rules (and supporting case law) is likely to come into being over the coming years. Lawyers should be well-placed to advise on the resulting laws and regulations. The strength of the UK’s reputation for regulation and consistent application of the rule of law means UK lawyers should have the opportunity to help develop these rules in a way which can help build trust in AI within and beyond the UK.
- New business models. Businesses may be able to use AI to develop new business models, perhaps operated through alternative business structures, which could change how legal services are produced and priced. Alternative legal services providers, accountants and retailers already operate in the legal sector, but there should be opportunities for development and implementation of new AI-powered products and solutions.
- The opportunity to ‘go global’. UK legal businesses may be able to leverage the UK’s traditional reputation as a global influence on both legal and technological development and regulation to use AI to create products and solutions which can be rolled out internationally, particularly in countries that have equivalent common law legal systems.
- The opportunity to ‘add value’. The increased use of AI in traditional tasks may mean that legal professionals can focus more on where they can add real value for their clients. Lawyers may increasingly struggle to compete against hyper-efficient AI-powered computers or systems for certain tasks. This should lead to a greater focus on specific client needs and more complex tasks where lawyers can provide personalised and bespoke support, supplemented by AI.
- Greater access to legal advice. A 2021 Lawtech UK report put the unmet legal needs of SMEs and consumers in the UK at £11.4 billion. According to the Legal Services Board in 2020, “every year 3.6 million people had an unmet legal need involving a dispute”. Unmet legal needs in many areas may potentially be better served by taking advantage of AI.
-
Where do you see the most significant legal developments in artificial intelligence in your jurisdiction in the next 12 months?
In respect of overarching UK statutory regulation of AI, there is unlikely to be significant change within the next 12 months. Instead of giving responsibility for AI governance to a new single regulator, the government will empower existing regulators in different sectors to develop their own sector-specific approaches. As a result, we may see regulators issuing their own evolving regulatory guidance for AI in their sector in the next 12 months. However, it remains unclear how different regulators will work together to regulate AI in practice, or how those approaches may be influenced by, or align with, broader international standards or regulation.
We can also expect to see the government’s response to the consultation it set out in its March 2023 White Paper, which may provide further detail on the ‘AI regulation roadmap’.
Beyond this, we anticipate further developments around how the UK’s AI regime interacts with the various international approaches to AI regulation. So far, the UK’s approach appears to differ from that of the EU, whose forthcoming AI Act (which will have extra-territorial application to providers delivering AI technology solutions into the EU from non-member states) looks set to provide a more prescriptive regulatory regime. Similarly, the US is in the early stages of developing its own ‘AI Bill of Rights’, which may evolve into more concrete legislative protections. Whether or not the UK chooses to follow either of these approaches, developments in those emerging regulatory regimes are likely to influence the UK’s direction on AI regulation in the coming years.
United Kingdom: Artificial Intelligence
This country-specific Q&A provides an overview of Artificial Intelligence laws and regulations applicable in United Kingdom.