-
What are your country’s legal definitions of “artificial intelligence”?
The White Paper issued by the UK government in March 2023 (last updated in August 2023 – “A pro-innovation approach to AI regulation”) noted “there is no general definition of AI that enjoys widespread consensus”. This is still the case in the UK at the time of writing. The UK government’s approach instead focuses on the following core characteristics of AI: “adaptivity”, whereby AI systems can infer patterns and connections in data which are not easily discernible to humans; and “autonomy”, whereby AI systems possess the capability to make decisions independently of human input.
The draft UK Artificial Intelligence (Regulation) Private Members’ Bill defines AI as “Technology enabling the programming or training of a device or software to perceive environments and use data to make decisions or take actions”.
-
Has your country developed a national strategy for artificial intelligence? If so, has there been any progress in its implementation? Are there plans for updates or revisions?
In January 2025, the UK government published a policy paper (“AI Opportunities Action Plan: Government Response”), endorsing all recommendations set out in the related AI Opportunities Action Plan (commissioned in July 2024 and authored by Matt Clifford CBE). The 50 advisory recommendations within the Action Plan focus on investment and cross-sector collaboration, and are intended to position the UK as an “AI maker”. The UK government has shown firm commitment to implementing the Action Plan, having recently published the Industrial Strategy Digital and Technologies Sector Plan, which outlines the actions taken by the government to date and builds on proposals to be implemented over the coming decade.
-
Has your country implemented rules or guidelines (including voluntary standards and ethical principles) on artificial intelligence? If so, please provide a brief overview of said rules or guidelines. If no rules on artificial intelligence are in force in your jurisdiction, please (i) provide a short overview of the existing laws that potentially could be applied to artificial intelligence and the use of artificial intelligence, (ii) briefly outline the main difficulties in interpreting such existing laws to suit the peculiarities of artificial intelligence, and (iii) summarize any draft laws, or legislative initiatives, on artificial intelligence.
The UK has not yet implemented clear rules or guidelines on AI. The government’s White Paper (see Question 1) does, however, set out a regulatory framework that aims to be “pro-innovation, proportionate, trustworthy, adaptable, clear and collaborative”. It identifies five “values-focused cross-sectoral principles” for AI regulation. These are: (1) safety and security; (2) transparency and explainability; (3) fairness; (4) accountability and governance; and (5) contestability and redress. These principles are intended to guide businesses in designing, developing, and using AI in a responsible manner.
Under this framework, regulators are expected to publish their own sectoral guidance. Indeed, in February 2024, the UK government said that some regulators had already started to act in line with the White Paper’s recommendations, including the Competition and Markets Authority (“CMA”) and the Information Commissioner’s Office (“ICO”).
The July 2024 briefing notes to the King’s Speech stated that the new Labour government would “seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models”, but did not contain any associated timelines. At present, the Artificial Intelligence (Regulation) Bill, a private members’ bill re-introduced into the House of Lords in March 2025, is still under consideration. As a private members’ bill, it requires significant parliamentary support to progress.
To the limited extent that AI is currently specifically regulated in the UK, this is implemented through existing legal frameworks such as the data protection regime (see Questions 12 – 15) and intellectual property laws (see Questions 9 & 10).
-
Which rules apply to defective artificial intelligence systems, i.e. artificial intelligence systems that do not provide the safety that the public at large is entitled to expect?
As the UK does not have an overarching AI legislative framework, defective AI systems will be dealt with through the general causes of action available under UK law, on a fact-specific basis reflecting the deployment and use of those systems and the nature of the harm caused in each case. The potential routes to liability include contractual liability, the tort of negligence (where a duty of care is owed between the parties), and product liability under the Consumer Protection Act 1987 (where the AI is integrated into a product).
The Consumer Rights Act 2015 may also protect consumers where they have entered into a contract for AI-based products and services. Finally, discriminatory outcomes from the use of AI systems may contravene the protections in the Equality Act 2010.
-
Please describe any civil and criminal liability rules that may apply in case of damages caused by artificial intelligence systems. Have there been any court decisions or legislative developments clarifying liability frameworks applied to artificial intelligence?
There have not yet been any court decisions or legislative developments clarifying liability frameworks as applied to artificial intelligence, nor any in which judgment is imminent.
Civil liability rules are set out at Question 4. Criminal liability may arise from harm caused by an AI system if that harm can be attributed to a legal person. For example, in the most extreme cases, a corporate entity can be liable for corporate manslaughter under the Corporate Manslaughter and Corporate Homicide Act 2007.
-
Who is responsible for any harm caused by an AI system? And how is the liability allocated between the developer, the deployer, the user and the victim?
AI systems present a potentially complex nexus of liability between the different parties within the AI supply chain, ranging from developers through to corporate customers and any ultimate end-users. There is, however, no current AI-specific statutory basis on which responsibility or liability for claims related to harm caused by AI is allocated between parties in the UK. As such, claims will be managed in accordance with the general rights and causes of action outlined in Questions 4 & 5.
Any contractual claims will sit with the contracting party. Similarly, any claim in negligence sits with the party to whom a duty of care is owed, and would be brought against the party owing that duty of care. If the AI system is embedded in a product, a claim can be pursued under the Consumer Protection Act 1987 against any of the following: (i) the producer (i.e., the manufacturer), (ii) a person who holds themselves out as a producer, or (iii) the importer of the product into the UK. In any product liability claim, the defendant may seek a contribution from a third party where that third party is liable to the claimant for the same damage or loss. Such contribution claims must be brought within two years of judgment or settlement.
-
What burden of proof will have to be satisfied for the victim of the damage to obtain compensation?
As there is no current AI-specific statutory basis for determining claims in the UK, the burden of proof in AI-related cases will depend on the cause of action (as with general civil claims). The most common burden of proof in civil claims in England and Wales is for the claimant to prove their case on the balance of probabilities. However, various statutory causes of action may differ. For example, under the Consumer Protection Act 1987, the claimant must prove that the product is defective and that the defect caused the damage which is the subject of the claim. Claimants should therefore seek advice as to the specific merits of an individual case, and any necessary burden of proof that needs to be met for that claim to be brought, in respect of any prospective claim involving AI.
-
Is the use of artificial intelligence insured and/or insurable in your jurisdiction?
The use of AI is insurable in the UK, and insurers are increasingly offering tailored products to address AI-related risks. The market has seen the development of specific policies covering AI performance risk, including third-party liability arising from the operation or failure of AI systems. These policies are particularly relevant in high-risk sectors such as healthcare, finance and autonomous technologies. In addition to bespoke AI policies, general liability policies, such as professional indemnity, directors’ and officers’ liability, cyber, and technology errors and omissions, may also respond to AI-related losses, even if not explicitly referenced. Insurers are, however, increasingly reviewing policy wordings and introducing exclusions to limit unintended exposure to so-called “silent AI” risks.
Insurers will consider carefully the nature of the technology, its intended use and the insured’s risk mitigation strategies during the underwriting process, and will price accordingly. Particular attention is paid to the transparency, explainability, and governance of AI systems, especially where decision-making is automated or semi-automated.
-
Can artificial intelligence be named an inventor in a patent application filed in your jurisdiction?
In the UK, patent applications must name a human as the inventor or inventors. Although patent applications naming an AI as the inventor have been submitted in two cases, the courts have refused to recognise AI inventorship. The Supreme Court maintained this position, ruling unanimously that a patent application naming an AI machine, rather than a natural person, as the inventor is invalid under the UK Patents Act 1977.
The Intellectual Property Office has recognised that technological developments mean that AI is making significant contributions to innovation, and held a consultation to consider whether the current rule on inventorship could be improved to better support innovation and incentivise the generation of new AI-devised inventions. The outcome was that, for the time being, there will be no change to the rule that patent applications must name a human as the inventor.
-
Do images generated by and/or with artificial intelligence benefit from copyright protection in your jurisdiction? If so, who is the authorship attributed to?
In the UK, images or artistic works may benefit from copyright protection to the extent they are original, i.e., the author’s own intellectual creation. The threshold for originality in the UK is low and does not require particular creativity, although the author must have given the work their “personal touch”.
Images may, of course, be created by a human with the assistance of AI and, provided the work meets the usual threshold for originality, it will benefit from copyright protection like a work created using any other tool.
Issues of copyright ownership may arise, however, because the technology underpinning AI must be trained and improved through exposure to large datasets, including the vast number of images available on the internet, many of which are protected by copyright as artistic works. Although UK copyright law permits text and data mining of copyright works for non-commercial purposes, the commercial nature of many AI platforms means this exception cannot necessarily be relied upon. For example, if AI is directed to create “an image in the style of David Hockney”, it may return an image similar or identical to an existing David Hockney work, calling into question the originality and ownership of the AI-generated image and risking copyright infringement for the human creator.
Images generated by a computer where there is no human creator are capable of copyright protection. The “author” of a “computer-generated work” is defined as “the person by whom the arrangements necessary for the creation of the work are undertaken”. There is a degree of ambiguity, however. In the case of images generated through an AI platform, this could mean that the person directing the AI through keywords or instructions is deemed the author. Alternatively, the creator of the AI platform itself could assert ownership, although many AI platforms clarify through their terms and conditions that ownership of any AI-generated work vests in the user, thereby passing over to the user any risks of third-party claims of infringement.
-
What are the main issues to consider when using artificial intelligence systems in the workplace? Have any new regulations been introduced regarding AI-driven hiring, performance assessment, or employee monitoring?
Four main issues for businesses to consider when using AI in the workplace are:
- Discriminatory outcomes: The risk of AI producing discriminatory and/or biased outcomes that are contrary to what a business wants and that may expose the business to potentially expensive discrimination claims and PR damage;
- Changes in established working practices and roles: Increased use of AI is expected to result in increased efficiency and cost savings for businesses. This is likely to lead to: employees working more with AI systems as opposed to people; opportunities for some employees to carry out higher-value or more interesting work; and, potentially, redundancies where AI performs all or part of certain existing work functions within a business;
- Managing communication with employees: Businesses will need to communicate clearly with employees to allay concerns regarding privacy and monitoring when implementing AI systems in the workplace. Employees who fear that their roles will be replaced may choose to leave pre-emptively, when in fact the business may have no plan to replace those roles;
- Changes in business processes: Businesses may become increasingly dependent on AI systems, so will need to develop and maintain adequate operational plans to address situations where these systems fail or are temporarily unavailable. Businesses will also need to be satisfied that there is sufficient human involvement and oversight of the AI systems, both at the time they are being designed and implemented, and on an ongoing basis to ensure their use remains valid, accurate and delivers appropriate outcomes both internally and for external users or customers.
Existing UK employment laws apply in the normal way in relation to the employment and treatment of personnel within a business, irrespective of whether their role interacts with or involves the use of AI.
-
What privacy issues arise from the development (including training) and use of artificial intelligence?
a. Automated Decision Making. AI can be used to make automated decisions about individuals. Unless an exemption applies, the UK General Data Protection Regulation (“UK GDPR”) gives individuals the right not to be subject to a solely automated decision, including profiling, which produces a legal or similarly significant effect. If an exemption applies, organisations must implement suitable measures to safeguard individuals’ rights, freedoms and legitimate interests, including by providing meaningful human intervention so individuals can contest a decision. The Data (Use and Access) Act 2025 narrows the restrictions on solely automated decision-making so that they apply only where the processing involves special categories of data under Article 9 UK GDPR. This change will come into force on a date to be determined by commencement regulations.
b. Transparency. The UK GDPR requires organisations to provide individuals with meaningful information about the logic involved, as well as the significance and the envisaged consequences of automated decision-making. This can present challenges given the complexities of AI algorithms. If the information provided is too technical, individuals may struggle to interpret it. Organisations must therefore deliver information in a clear fashion.
Further challenges arise when AI is trained using personal data scraped from the internet. Providing the information required under Article 14 UK GDPR to data subjects in this context can be operationally challenging. Controllers often seek to rely on one of the exemptions under the UK GDPR and UK Data Protection Act 2018 (“DPA 2018”) to the right to be informed, such as impossibility or disproportionate effort. However, determining whether such an exemption applies is not always clear cut. Furthermore, controllers who seek to rely on an exemption must consider the effect of such reliance on the overall lawfulness, fairness and transparency of the processing and whether additional safeguards are required. This area is likely to see further regulatory guidance in light of the Data (Use and Access) Act 2025.
The ICO cautions that processing of this nature, i.e., “invisible processing”, results in additional risks to a data subject as they cannot exercise control over the use of their data. In such circumstances, privacy information should still be published on the controller’s website and the controller should carry out a DPIA.
c. Data Protection Impact Assessments (“DPIAs”). Under the UK GDPR, a DPIA is mandatory if the processing of personal data is likely to result in a high risk to the rights and freedoms of individuals. A DPIA’s purpose is to identify and minimise the data protection risks associated with a project. It is likely that the use of AI will trigger the need for a DPIA where this involves the processing of personal data. Additionally, a prior consultation with the ICO may be required if the DPIA indicates that the processing would result in a high risk to individuals which cannot be suitably mitigated. The ICO has shown a tendency to closely examine DPIAs in the context of AI systems, and has published guidance on DPIAs in the context of AI which makes it clear that the “vast majority” of AI use cases will require a DPIA.
d. Data Minimisation. Processing large amounts of data is central to AI. Organisations will need to balance this need with the data minimisation requirement under the UK GDPR, which requires that organisations only process personal data to the extent it is adequate, relevant, and necessary. There is a risk of function creep with AI, which threatens the principle of data minimisation.
e. Vendor Due Diligence. Most AI systems are provided by third parties, which means vendor due diligence plays a crucial role in ensuring organisations can comply with their data protection obligations. The ICO cautions that assurances should be sought from AI vendors about any bias testing they have conducted, or that the controller should test the model itself. Organisations must also ascertain the data protection roles of vendors and, where relevant, implement compliant data processing terms with third parties that process personal data on their behalf as processors.
f. Controller/processor/joint controller roles. Identifying controller, joint controller, and processor roles in the context of AI can be complex, not least because many parties are involved in the development and deployment of AI systems. The ICO has published initial guidance and scenarios to assist with the assessment, which includes indicators of when an organisation may act as a controller in the context of an AI system.
g. Lawful basis for training data. Most AI systems rely on publicly accessible sources for their training data. Where training data contains personal data, the processing is subject to the UK GDPR. It can, however, be difficult to identify an applicable lawful basis for such web scraping activities. There are arguments that “legitimate interests” may not be an available basis if data is processed in ways the data subject does not reasonably expect or where privacy information is not provided.
Obtaining training data via web scraping, in most cases, will be invisible processing. The ICO issued draft consultation guidance specifically on how to determine whether there is a valid lawful basis for web scraping (in the context of training generative AI). The ICO states “five of the six lawful bases are unlikely to be available for training generative AI on web-scraped data” and therefore the draft guidance “focuses on the legitimate interests lawful basis (Article 6(1)(f) of the UK GDPR).”
Following the consultation, the Open Rights Group lodged a complaint with the ICO against Meta over UK GDPR violations related to Meta’s plans to use personal data for AI model training. The complaint alleges a “clear intentional breach of the law” including lack of legitimate interest and a lack of transparency. It requests a legally binding decision under Article 58(2) to prevent unauthorised data processing and a prohibition on the use of personal data for AI technologies without consent. This contradicts the ICO’s position in respect of legitimate interests in its draft guidance. We await a final decision from the ICO on its position, which may itself be subject to legal challenge. The ICO issued a statement in October 2024 in response to Meta’s re-use of UK social media posts.
h. Fairness and accuracy. Under the UK GDPR’s fairness principle, controllers must only process personal data in ways that people would expect and not use it in ways that may cause unjustified adverse effects. Ensuring the statistical accuracy of AI outputs is part of this fairness principle. The UK GDPR (at Recital 71) highlights the need for statistical accuracy in automated decision-making, and states organisations should put in place “appropriate mathematical and statistical procedures” for the profiling of individuals. The ICO provides guidelines for technical specialists and compliance professionals on fairness.
-
How is data scraping regulated in your jurisdiction from an IP, privacy and competition point of view? Are there any recent precedents addressing the legality of data scraping for AI training?
Data scraping by UK entities may be prohibited as follows:
Database right: Depending on the geographical location where the database holding any source data was made, extraction and reutilisation of all or a substantial portion of data from that database may be a violation of the EU sui generis database right and/or its UK equivalent.
The EU sui generis database right protects data held in databases which were either (i) made in an EU member state where there has been a “substantial investment” in obtaining, verifying, or presenting the contents of the database; or (ii) made in the UK prior to 1 January 2021. Following Brexit, a similar right exists in the UK in respect of databases made in the UK.
A person resident in the UK who conducts widespread data scraping may violate the rights of an EU organisation or a UK organisation which has invested in its database.
Privacy: As a processing activity, data scraping is regulated by the ICO and subject to UK data protection laws, ICO guidance and applicable case law.
In August 2023 the ICO and 11 other Data Protection Authorities published a joint statement calling for the protection of personal data from unlawful data scraping on social media sites. The statement sets out expectations for how social media companies should protect personal data from unlawful data scraping, which may lead to increased vigilance in this context from social media companies.
Following consultation, the ICO finalised its position on whether there is an applicable lawful basis for data scraping. It confirms that “legitimate interests” is the only available lawful basis and will require developers to pass the necessity test and the balancing exercise against data subjects’ rights. The ICO has stated that, to carry out data scraping in compliance with the UK GDPR, and for any lawful basis to be available, the relevant controller must do so in compliance with the law (and particularly the lawfulness principle under Article 5 UK GDPR), including any applicable website terms and conditions. Refreshed guidance reflecting clarifications in the Data (Use and Access) Act 2025, including the applicability of new Article 84A on processing for scientific research, is awaited.
Copyright: Copyright may protect the contents of a database, or individual items of source data, where the data in question are considered copyright works under the Copyright, Designs and Patents Act 1988. Where a person resident in the UK undertakes data scraping in respect of the collection of images, photographs, articles or similar without the permission of the owner of those data items, this may infringe copyright and leave the scraper facing legal action.
Developments in IP law may influence the legality of data scraping in the UK for AI training. The landmark case of Getty v Stability AI, the trial of which commenced in the High Court in June 2025, had the potential to shape how UK copyright laws are applied in an AI context, particularly to the collection and use of copyright materials for training purposes, but the primary copyright infringement complaint was dropped for jurisdictional reasons. That said, the case has heightened awareness and increased pressure on the government to consider reforming copyright laws around AI training. An ongoing consultation is expected to lead to a legal framework for the use of copyright materials in an AI context and to influence best practice. Proposals aim to balance developer access to data for AI training, by way of a text and data mining exemption, with protection of rights for copyright owners through transparency and compensation mechanisms. The outcome of the consultation is expected later in 2025.
Breach of terms and conditions: Many database owners specifically reserve all rights in their source data and apply terms and conditions which specifically prohibit the collection or use of any data gathered in this way. Any UK person who scrapes data from any source, whether UK or overseas, may face a claim under one or more of the aforementioned grounds.
Competition: In a report published in 2024, the CMA observed that limitations on the ability to use web-scraped data could benefit those holding the data or with resources to purchase access to data. The CMA also noted that imbalances could emerge between early movers and later entrants in the ability to train models on web-scraped data, and that developments in firms’ abilities to use web-scraped data could impact the range of models available to deployers and users.
These concerns appear increasingly relevant in light of reports of a recent complaint to the CMA alleging that Google has abused its dominant position in search by using publishers’ content to produce AI-generated overviews via its large language model, Gemini, which compete with publishers and make it harder for them to reach their readers. According to the complaint, the Gemini summaries are based on news articles written by journalists and scraped from publishers’ websites. The complaint contends that publishers should have the right to exclude their content from these AI-generated summaries without facing removal from Google’s search results.
In June 2025, the CMA published its proposed decision under the Digital Markets, Competition and Consumers Act 2024 in relation to Google’s general search and search advertising services. The CMA proposes designating Google as having Strategic Market Status in relation to general search services. If adopted, this designation would subject Google to conduct requirements. Alongside the proposed decision, the CMA has also published a “Roadmap of possible measures to improve competition in search”. The Roadmap identifies “Category 1 measures”, on which the CMA is expected to consult in the autumn; these include measures to ensure publishers have effective transparency, attribution and choice in how their content collected by Google for search is used in AI-generated responses (including AI Overviews and the Gemini AI Assistant), without affecting if and how they appear in Google Search results.
-
To what extent is the prohibition of data scraping in the terms of use of a website enforceable?
There are no reported cases in England and Wales on the enforceability of terms prohibiting data scraping. However, a European Court of Justice case prior to Brexit (Ryanair Ltd v PR Aviation BV) supports the proposition that such terms would be enforceable. In the absence of directly applicable reported judgments, enforceability would depend on the principles of English contract law.
Separately from any potential actions for breach of contract, the ICO’s position at the time of writing is that data scraping carried out in breach of website terms and conditions cannot comply with the lawfulness requirements under Article 5 UK GDPR, and will therefore be in breach (see Question 13). In terms of enforceability:
- A data subject may complain to the ICO, or bring a court claim, in respect of damage they have suffered due to the breach of the UK GDPR;
- The ICO could take enforcement action, issue fines, order the controller to cease the processing activity, or bring a court claim against the relevant controller; and
- Criminal liability may arise under section 170 of the DPA 2018, which sets out an offence of obtaining or disclosing personal data without the consent of the controller.
-
Have the privacy authorities of your jurisdiction issued guidelines on artificial intelligence?
The ICO has published the following guidance:
- Artificial intelligence and data protection;
- Explaining decisions made with AI;
- How to use AI and personal data appropriately and lawfully;
- How data protection law applies to biometric recognition;
- Tools, including its AI and data protection risk toolkit and its toolkit for organisations considering using data analytics;
- Commentary on generative AI, including its warning against being blind to AI risks in the rush to see opportunity and its review of key businesses’ use of generative AI; and
- A consultation series on generative AI and data protection.
The ICO has collated the various materials it has published on AI, such as guidance and thought pieces, in the “Our work on Artificial Intelligence” section of its website.
-
Have the privacy authorities of your jurisdiction discussed cases involving artificial intelligence? If yes, what are the key takeaways from these cases?
The ICO’s most significant enforcement action to date has been against Clearview AI, a company that allows customers to upload images to its app for facial recognition. The ICO fined Clearview AI £7,552,800 in May 2022 for collecting images from the web without informing individuals, storing them in its database, and using them for facial recognition purposes without consent.
The ICO found that Clearview AI Inc breached UK data protection laws by:
- failing to use the information of people in the UK in a way that was fair and transparent;
- lacking a lawful reason for collecting people’s information;
- lacking a process in place to stop the data being retained indefinitely;
- failing to meet the higher data protection standards required for biometric data; and
- asking for additional personal information, including photos, when questioned by members of the public about whether they were on the company’s database, which may have acted as a disincentive for individuals to object to their data being collected.
In October 2023, the First-tier Tribunal (Information Rights) upheld Clearview AI’s appeal against the ICO’s findings on jurisdictional grounds, holding that Clearview AI’s data processing fell outside the UK GDPR due to the law enforcement exemption, as it served foreign law enforcement agencies.
In November 2023 the ICO published a statement disagreeing with the First-tier Tribunal’s judgment, reasoning instead that “Clearview itself was not processing for foreign law enforcement purposes and should not be shielded from the scope of UK law on that basis.” The ICO is seeking permission to appeal the First-tier Tribunal’s finding.
The ICO has taken additional enforcement action since Clearview AI, and appears to be paying close attention to DPIAs in this context:
- Chelmer Valley High School: in July 2024 the ICO issued a reprimand to Chelmer Valley High School in respect of Article 35(1) UK GDPR. The school had introduced facial recognition technology to take cashless canteen payments from students but failed to complete a DPIA before doing so. The ICO found that facial recognition technology is likely to result in high data protection risks and that a DPIA is essential for identifying and managing those risks.
- Serco Leisure: in February 2024 the ICO issued enforcement notices against Serco Leisure Operating Limited and various associated entities (“Serco”) ordering them to stop using facial recognition technology and fingerprint scanning for employee attendance monitoring purposes. Serco had failed to demonstrate that its use of facial recognition technology and fingerprint scanning was necessary or proportionate when other, less intrusive means were available.
- Snap: in October 2023 the ICO issued a preliminary enforcement notice to Snap, Inc and Snap Group Limited (“Snap”) over potential failures to adequately address privacy risks associated with Snap’s generative AI chatbot “My AI”. The ICO found that the DPIA Snap carried out prior to introducing My AI did not adequately consider the risks involved, particularly to children whose data would be processed. In May 2024 the ICO published a statement and decision following its investigation, stating that Snap had taken “significant steps” to review the risks associated with My AI and to demonstrate that it had implemented appropriate mitigations, and that, following review of the fifth version of the DPIA, the ICO was satisfied that the DPIA was now compliant.
-
Have your national courts already managed cases involving artificial intelligence? If yes, what are the key takeaways from these cases?
In the absence of bespoke AI legislation, most court cases involving AI have arisen from technology disputes (primarily contractual claims) and product claims concerning defective systems embedded in products.
In 2023, the Supreme Court considered issues around intellectual property rights created by AI systems in Thaler v Comptroller-General of Patents, Designs and Trade Marks, concluding that an AI cannot be an inventor for the purposes of English patent law. Issues around the enforcement of intellectual property rights where generative AI models are “taught” are currently proceeding through the courts, including Getty v Stability AI (see Question 13).
-
Does your country have a regulator or authority responsible for supervising the use and development of artificial intelligence?
The UK does not have a central AI regulator or authority. That said, the Artificial Intelligence (Regulation) Bill (the “AI Bill”) would require the creation of a central AI Authority to oversee regulation and ensure alignment across sectors. The AI Bill is, however, only a private members’ bill, meaning it does not currently have the backing of the government and is not guaranteed support in the House of Commons.
In April 2024, in response to a government request, certain regulators published updates on AI regulation in their respective areas, including the Bank of England, the CMA, the Equality and Human Rights Commission, the Financial Conduct Authority (“FCA”) and the ICO. Other bodies have also issued guidance around AI, including the Intellectual Property Office.
In the public sector, responsibility for AI sits across the Department for Science, Innovation & Technology (“DSIT”) and the Cabinet Office. The government is expanding the role of the AI Safety Institute (“AISI”), a directorate of DSIT involved in research into AI safety and the development of the infrastructure needed for effective AI governance. Although AISI is a research institution rather than a regulatory authority, the government’s AI Opportunities Action Plan notes that the AISI will “provide clarity on how frontier models will be regulated”.
-
How would you define the use of artificial intelligence by businesses in your jurisdiction? Is it widespread or limited? Which sectors have seen the most rapid adoption of AI technologies?
In 2024, the UK AI market was valued at more than £72 billion, and the US International Trade Administration estimates that it will be worth over £1 trillion by 2035. After the US and China, the UK is the third-largest AI market in the world.
The UK government has been attempting to ramp up AI adoption across the UK through the AI Opportunities Action Plan (January 2025) and the publication of an AI Playbook (February 2025), the latter of which provides public sector organisations with technical guidance on the use of AI and on mitigating its risks.
-
Is artificial intelligence being used in the legal sector, by lawyers and/or in-house counsels? If so, how? Are AI-driven legal tools widely adopted, and what are the main regulatory concerns surrounding them?
AI use in the UK legal sector appears to be rising. In November 2023, the Solicitors Regulation Authority (“SRA”) reported that, at the end of 2022, three-quarters of the largest solicitors’ firms were using AI, with over 60% of large law firms exploring the potential of generative AI systems.
LawtechUK categorises legal AI into several areas:
- risk identification and prediction, including automating compliance tasks and predicting case outcomes;
- administration, such as information gathering and client communication, often through chatbots;
- profiling, involving analysing documents for clarity and prioritising cases;
- searching (i.e., automating document discovery and precedent identification); and
- text generation (i.e., producing legal documents and summaries).
The SRA acknowledges that “responsible use of AI by law firms could improve legal services, while making them easier to access and more affordable” but that “trust and confidence in regulated legal services depends on the public knowing that high professional standards are being met.”
In May 2025, the SRA approved the first AI-driven law firm in the UK, making it the first firm authorised to provide regulated legal services purely through AI.
-
What are the 5 key challenges and the 5 key opportunities raised by artificial intelligence for lawyers in your jurisdiction?
Five key challenges include:
- The rapid pace of AI development, which can outstrip the creation of traditional regulation (as illustrated by the slow progress of the Artificial Intelligence (Regulation) Bill). This “pacing problem” means that legal standards do not typically reflect AI’s latest technological capabilities, presenting challenges for lawyers advising clients on the law in this area. Law firms may also lack guidelines for internal AI use cases.
- Trust in adopting AI technology. If lawyers do not trust AI and its output, they are less likely to use it. As AI deployment progresses, the issue of trust is likely to grow and will need to be managed carefully, particularly following well-publicised instances where fictitious case law and fake citations have been included in AI-drafted legal documents.
- Uncertainty if a “patchwork” of laws is created in the UK, and a possible risk of duplication or gaps in the law, particularly as the government’s proposals do not anticipate a significant consolidated oversight body to ensure regulatory consistency.
- Timing in adoption of technology. Players in the legal sector may be holding off adopting AI through fear that they may enter the market too early, or too boldly, although this may change following the SRA’s approval of an AI-driven law firm.
- Regulators may be under-resourced. The House of Commons Science, Innovation and Technology Committee noted in May 2024 that the government’s February 2024 announcement of £10 million to support regulators was “insufficient to meet the challenge, particularly when compared to the UK revenues of leading AI developers”. Having said this, the government’s 2025 AI Opportunities Action Plan included a commitment to funding regulators to scale up their AI capabilities, acknowledging that this needs “urgent addressing”.
Five key opportunities include:
- New areas of legal advice. Lawyers should be well-placed to advise on new laws and regulations that seem likely to come into being over the coming years. UK lawyers should have the opportunity to help develop these rules in a way which can help build trust in AI within the UK and beyond.
- New business models. Businesses may be able to use AI to develop new business models, which could change how legal services are delivered, a prime example being the SRA’s 2025 approval of an AI-driven law firm. There should be opportunities for development and implementation of new AI-powered products and solutions.
- The opportunity to “go global”. UK legal businesses may be able to leverage the UK’s global reputation and use AI to create products and solutions which can be rolled out internationally, particularly in countries with common law legal systems.
- The opportunity to “add value”. Lawyers may increasingly struggle to compete against AI for certain tasks. This should lead to a greater focus on specific client needs and more complex tasks where lawyers can provide personalised and bespoke support, supplemented by AI.
- Greater access to legal advice. Unmet legal needs in many areas may potentially benefit from AI. In a March 2024 speech, the Master of the Rolls said, “AI has great potential within the digital justice system which promises to provide quicker, cheaper and more efficient ways to resolve the millions of disputes that arise in British society every year”.
-
Where do you see the most significant legal developments in artificial intelligence in your jurisdiction in the next 12 months? Are there any ongoing initiatives that could reshape AI governance?
While there is still no clear direction on the regulation of AI in the UK, recent developments, such as the reintroduction of the Artificial Intelligence (Regulation) Private Members’ Bill and certain commitments in the AI Opportunities Action Plan, indicate that some form of regulation – whether sector-specific or statutory – is increasingly likely within the next 12 months. The Bill, which is not government-sponsored and has yet to receive Royal Assent, may struggle to pass into law. That said, it reflects growing pressure for formal oversight of AI development in the UK.
The Action Plan signals that the government will preserve its light-touch regulatory approach, calling this a “source of strength relative to other more regulated jurisdictions”. The Plan does, however, note that “well-designed and implemented regulation” can fuel the development and adoption of AI. The Plan appears to maintain the approach of deferring to regulators to introduce regulation at a sectoral level, and does not suggest the government will install a new single regulator in the near future. Uncertainty remains, therefore, as to how regulators will work together and how their sector-specific approaches may align with broader international standards or regulation.
We anticipate developments around how the UK’s AI regime interacts with the various international approaches to AI regulation. So far, the UK’s approach has differed from the EU’s, for example, but developing international regimes are nevertheless likely to influence the UK’s direction in the coming years.
Explanatory note:
For the purposes of this Guide, we refer to UK-wide approaches to AI. Scotland has published its own separate AI strategy; more information is available at www.scottishai.com.