-
What are your country’s legal definitions of “artificial intelligence”?
Artificial intelligence (AI) has been defined in 15 U.S.C. § 9401(3) as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.” This definition was part of the National Artificial Intelligence Initiative Act of 2020 and has been used and referenced (sometimes with context-specific additions) in other proposals, laws, and executive orders since then, including the 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Apart from implementation of the National Artificial Intelligence Initiative Act (and other laws that expressly adopt this definition), the definition is not necessarily binding on courts or intellectual property offices like the U.S. Patent and Trademark Office. This definition is similar to the one used by the Organisation for Economic Co-operation and Development, which is often incorporated into the laws of other countries.
-
Has your country developed a national strategy for artificial intelligence? If so, has there been any progress in its implementation? Are there plans for updates or revisions?
The United States has developed several recent national strategies for AI, focusing on different aspects of the development, use, and regulation of AI. These strategies are under active revision by the Trump administration, which is reworking strategies and guidance issued by the previous administration.
- The Trump administration rescinded the 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence and has not replaced it with a similarly comprehensive order. Instead, a number of executive orders have been released, including one calling for an AI action plan, expected in July 2025, to define future policy efforts around AI.
- A number of strategies are also under development, including a National Artificial Intelligence Research and Development Strategic Plan, produced in collaboration between the Office of Science and Technology Policy and a number of other cross-government committees and working groups. This strategy is updated every few years. Additionally, the 2023 National Standards Strategy for Critical and Emerging Technology may be updated to reflect a focus on competitiveness for American AI technologies.
-
Has your country implemented rules or guidelines (including voluntary standards and ethical principles) on artificial intelligence? If so, please provide a brief overview of said rules or guidelines. If no rules on artificial intelligence are in force in your jurisdiction, please (i) provide a short overview of the existing laws that potentially could be applied to artificial intelligence and the use of artificial intelligence, (ii) briefly outline the main difficulties in interpreting such existing laws to suit the peculiarities of artificial intelligence, and (iii) summarize any draft laws, or legislative initiatives, on artificial intelligence.
The U.S. has in place a range of guidelines and principles on artificial intelligence, but most applicable laws are not specific to artificial intelligence technologies. Agencies with AI-specific or AI-related guidelines are currently re-evaluating these regulations based on a 2025 Office of Management and Budget memo outlining government rules for the use of AI, Accelerating Federal Use of AI through Innovation, Governance, and Public Trust (Memorandum M-25-21).
Prior to the current administration, several federal agencies had already issued AI-related guidelines. Many of the guidelines and rules released before 2025 are now being reviewed under the new administration’s priorities.
- NIST AI Risk Management Framework: The National Institute of Standards and Technology (NIST) released its AI Risk Management Framework in January 2023, providing voluntary guidance to organizations on managing risks associated with AI systems. More recently, the Center for AI Standards and Innovation has focused on security and national security risks in both American and non-American AI systems.
- White House Voluntary AI Security Commitments: Companies developing frontier models have entered into voluntary commitments to the safe, secure, and trustworthy development of AI systems, focusing on ensuring the safety of AI systems, protecting against cybersecurity threats to AI, and developing watermarking systems to help detect AI-generated content.
- FDA’s AI/ML-Based Software as a Medical Device Action Plan: Released in January 2021, this plan outlines the FDA’s approach to regulating adaptive AI and machine learning in medical devices as part of existing premarket submission processes. The approach emphasizes a “total product life cycle” regulatory approach, which includes clear expectations for quality systems and “good machine learning practices,” and focuses on the importance of real-world performance monitoring.
- U.S. Copyright Office: Since launching an initiative in early 2023, the Copyright Office has been examining the copyright law and policy issues raised by AI. The Copyright Office has been publishing its findings in a multi-part report, Copyright and Artificial Intelligence. Part 1 was published on July 31, 2024 and addresses the topic of digital replicas. Part 2 was published on January 29, 2025 and addresses the copyrightability of outputs created using generative AI. On May 9, 2025, the Office released a pre-publication version of Part 3, which addresses the use of copyrighted works to train generative AI, in response to congressional inquiries and expressions of interest from stakeholders. A final version of Part 3 will be published in the future, and it is not expected that there will be substantive changes in the analysis or conclusions.
- Federal Trade Commission: The FTC has published guidance on using AI and algorithms fairly, emphasizing transparency, explainability, and accountability. More recently, it has undertaken a number of enforcement actions against deceptive claims related to AI.
- State Regulations: States such as Tennessee, Colorado, Texas, and Utah have implemented their own AI-related laws. It is likely that additional states will follow. These AI-focused laws supplement state data protection laws that are broader in reach but may also impact AI, such as provisions related to automated decision making.
Additionally, many relevant U.S. laws were intended to be “technology neutral” and apply to processes and outcomes rather than to the methods used to achieve them, and they are likely to apply regardless of whether AI is used. Some examples include:
- Financial: The Fair Credit Reporting Act and Fair and Accurate Credit Transactions Act, the Equal Credit Opportunity Act, the Dodd-Frank Wall Street Reform Act, and the Consumer Financial Protection Act impose well-established rules for eligibility decision making, credit reporting, and eligibility explainability, which provide a lens through which to examine AI.
- Health: The Department of Health and Human Services regulates discriminatory outcomes under a number of laws, including the Civil Rights Act, the Rehabilitation Act, the Age Discrimination Act, and the Affordable Care Act.
- Insurance: Insurance is regulated primarily by states; however, anti-discrimination rules at the federal level came into force with the Civil Rights Act.
- Housing: The Civil Rights Act and the Fair Housing Act both address discriminatory housing practices, including the use of background screening.
- Employment: The Equal Employment Opportunity Commission has rules around employment and hiring policies and practices that prohibit discrimination throughout the hiring and employment life cycle. These rules are increasingly applied to automated hiring and employment processes.
The U.S. Supreme Court issued a landmark decision in Loper Bright Enterprises v. Raimondo, 603 U.S. 369 (2024), that may greatly curtail the ability of federal government agencies to promulgate strategy and regulation relating to AI. It remains to be seen how Congress and agencies will respond to this development.
-
Which rules apply to defective artificial intelligence systems, i.e. artificial intelligence systems that do not provide the safety that the public at large is entitled to expect?
Defective AI systems in the United States may be subject to various legal rules and regulations that are intended to ensure safety, consumer protection, and liability for damages. The legal framework for defective AI systems is evolving, but it currently involves aspects of product liability, negligence, and consumer protection laws (in addition to the laws, regulations, and guidelines identified in response to Question No. 3 above).
a. Product Liability
Product liability laws hold manufacturers, distributors, and sellers accountable for placing defective products into the hands of consumers. AI systems, as products, may be subject to these laws. There are three main types of product defects:
- Design Defects: If the AI system’s design is inherently unsafe.
- Manufacturing Defects: If the AI system deviates from its intended design during production.
- Marketing Defects: If there are inadequate instructions or warnings regarding the AI system’s use.
Under product liability law, if strict liability applies, a plaintiff does not need to prove negligence, only that the product was defective and caused harm.
b. Negligence
A cause of action based on negligence may be available if it can be demonstrated that the developer or provider of an AI system failed to exercise reasonable care in the design, development, testing, or deployment of the system. A plaintiff would need to establish these elements:
- Duty of Care: The AI developer owes a duty to the user and the public to create a safe product.
- Breach of Duty: The developer failed to meet the standard of care.
- Causation: The breach of duty caused harm.
- Damages: There are measurable damages as a result.
c. Consumer Protection Laws
Consumer protection laws, such as those enforced by the Federal Trade Commission (FTC), can apply to AI systems. The FTC Act prohibits unfair or deceptive acts or practices. If an AI system fails to perform as advertised or poses safety risks, the FTC may take action against a company making or offering the system.
d. Federal and State Regulations
Several federal and state agencies have regulations that can apply to AI systems. These include:
- Federal Trade Commission: Monitors and enforces consumer protection and privacy standards.
- Food and Drug Administration: Regulates AI systems used in healthcare and medical devices.
- National Highway Traffic Safety Administration: Regulates AI in automotive technologies, including autonomous vehicles.
- State Laws: Various states have their own consumer protection laws and may have specific regulations pertaining to AI. For example, Colorado has passed a comprehensive law around high-risk AI systems and any resulting consequential decisions.
As noted in response to Question No. 3, the U.S. Supreme Court issued a landmark decision in Loper Bright Enterprises v. Raimondo, 603 U.S. 369 (2024), that may greatly curtail the ability of federal government agencies to promulgate strategy and regulation relating to AI. It remains to be seen how Congress and agencies will respond to this development.
-
Please describe any civil and criminal liability rules that may apply in case of damages caused by artificial intelligence systems. Have there been any court decisions or legislative developments clarifying liability frameworks applied to artificial intelligence?
The civil and criminal liability that may follow in case of damages caused by AI systems tracks the underlying substantive laws that give rise to the liability. That is to say, a person or company that commits a crime or tort (or breach of contract) by means of an AI system will face the same liability as if the person or company had performed the culpable conduct without the use of an AI system. The use of an AI system in no way excuses or avoids liability and, at present, does not enhance or expand liability. There have been some recent decisions evaluating the availability of fair use defenses to copyright infringement claims relating to the use of protected works to train large language models, like Bartz v. Anthropic PBC, No. 3:24-cv-05417-WHA (N.D. Cal.) (ECF No. 231 June 23, 2025), and Kadrey v. Meta Platforms, Inc., No. 3:23-cv-03417-VC (N.D. Cal.) (ECF No. 598 June 25, 2025), but these cases have not yet reached decisions on damages.
-
Who is responsible for any harm caused by an AI system? And how is the liability allocated between the developer, the deployer, the user and the victim?
It is unclear as of July 2025 how liability will be allocated between and among various parties. Although numerous class actions have been filed against companies building large language models and AI-related products (as detailed in responses below), few, if any, have progressed to a point where courts have provided substantive guidance on the allocation of liability for harm caused by an AI system.
-
What burden of proof will have to be satisfied for the victim of the damage to obtain compensation?
There is no indication yet that the burden of proof in cases arising from or related to the use of AI systems will be different from the burden of proof in other cases brought under the same substantive laws. Criminal prosecutions will always require proof of guilt beyond a reasonable doubt. Civil liability will usually follow where a preponderance of the evidence (i.e. proof that a thing is more likely than not) indicates the defendant performed or was responsible for the actions giving rise to liability. There are some frameworks in U.S. jurisprudence where liability may be premised upon “strict liability,” where liability follows from the mere fact of performing the act that caused the harm (even in the absence of fault or criminal intent). There are no statutes or cases yet that have applied strict liability in relation to the use of AI systems. Even in the absence of new statutes that impose strict liability in relation to AI systems, it is foreseeable that courts may apply strict liability to AI-related cases where that is the standard applied by the underlying substantive law (e.g. certain construction and products liability cases).
-
Is the use of artificial intelligence insured and/or insurable in your jurisdiction?
The answer to this question is still unfolding in 2025. It is not yet clear whether and how insurers will cover the range of risks posed by AI. Insurers may cover some AI-related risks under existing insurance policies; alternatively, they may add endorsements or exclusions that expressly address AI-related risks. Currently, many insurers are asking more questions about prospective policyholders’ use of AI during the underwriting process.
a. Cyber Liability Insurance
So-called cyber liability insurance policies generally provide coverage for first-party losses and third-party liabilities arising out of cyber incidents like network security events, data breaches, and ransomware attacks, but some cyber liability policies also provide coverage for less-common exposures that have heightened importance with the rise of AI, such as regulatory liability and media liability. In contrast, cyber liability policies generally do not provide coverage for breach of contract claims.
b. Technology Errors and Omissions Insurance
Technology errors and omissions (TE&O) insurance policies provide coverage for third-party claims that allege a wrongful act, error, or omission in the performance of technology services or the failure of a technology product to perform as intended. Unlike cyber liability policies, TE&O policies generally provide coverage for breach of contract claims. TE&O policies, however, typically exclude coverage for liability arising out of bodily injury or property damage. Thus, a key issue for companies providing AI-powered products or services that expose them to bodily injury or property damage liability will be whether their general liability insurance policies provide coverage for bodily injury or property damage claims arising out of AI-powered products or services.
c. General Liability Insurance
Commercial general liability (CGL) insurance policies generally provide coverage for third-party claims that allege bodily injury or property damage. CGL policies, however, often exclude coverage for professional liability claims. For example, a standard professional liability exclusion bars coverage for third-party claims that allege bodily injury or property damage arising out of the selling, licensing, or furnishing of computer software. As a result, there may be a gap in coverage for companies that sell AI-powered software products or services that lead to bodily injury or property damage claims.
-
Can artificial intelligence be named an inventor in a patent application filed in your jurisdiction?
No, not at present. Both the U.S. Patent and Trademark Office (USPTO) and courts that have considered the issue have agreed that an inventor must be a “natural person,” which excludes AI systems from being identified as inventors. However, while AI systems cannot be identified as inventors, USPTO guidance indicates that use of such systems by natural persons does not preclude the possibility of those natural persons obtaining a patent so long as the natural persons have contributed inventive subject matter to the invention. Subsequent litigation to date has affirmed the USPTO’s position.
-
Do images generated by and/or with artificial intelligence benefit from copyright protection in your jurisdiction? If so, who is the authorship attributed to?
No, not at present. Both the U.S. Copyright Office and courts that have considered the issue have agreed that an author must be a “natural person,” which excludes AI systems from being identified as authors. However, while AI systems cannot be identified as authors, the Copyright Office issued guidance in March 2023 indicating that use of such systems by natural persons does not preclude the possibility of those natural persons securing copyright protection so long as the natural persons have contributed original subject matter to the work. In other words, AI-generated images can benefit from copyright protection only to the extent that a human contributes original, creative input to the work. Fully autonomous AI-created content is not protected. Authorship is attributed to the human who exercises creative control, whether through extensive prompt engineering, curation, selection, arrangement, or post-processing.
-
What are the main issues to consider when using artificial intelligence systems in the workplace? Have any new regulations been introduced regarding AI-driven hiring, performance assessment, or employee monitoring?
There are many issues to consider when integrating AI systems into the workplace. While these issues span a range of areas, the following are some key ethical and legal considerations.
Bias and Fairness: AI systems can perpetuate and amplify biases present in training data, leading to unfair treatment of employees or applicants. It is crucial to ensure that AI algorithms are developed and trained on diverse and representative data sets.
Transparency and Explainability: Employees and stakeholders should understand how AI systems make decisions, especially in critical areas like hiring, performance evaluations, and promotions. AI systems should be explainable and transparent.
Privacy and Data Protection: The use of AI systems often involves collecting and processing large amounts of data. It is essential to ensure compliance with data protection laws and regulations such as the General Data Protection Regulation in the EU or the California Consumer Privacy Act in the U.S. These laws govern how personal data is collected and processed.
Employment Laws: AI systems must comply with existing employment laws and regulations, such as non-discrimination laws and labor standards.
Intellectual Property: Companies must consider the infringement exposure that may follow, both in relation to the technical operation of an AI system and in relation to a system’s output. Companies ought also to consider the protectability of their own inventions and works.
-
What privacy issues arise from the development (including training) and use of artificial intelligence?
AI operates on data, often at all points in its development life cycle. Thus, the use of AI can raise questions about the appropriate use of data and unauthorized disclosures of personal information based on the data these systems may collect, process, and analyze. This may include biometric data, financial records, health information, and behavioral patterns.
Additionally, the use of AI in decision-making processes, particularly in areas like employment, credit scoring, or law enforcement, can raise concerns about transparency. These types of processing activities are subject to federal and state laws.
-
How is data scraping regulated in your jurisdiction from an IP, privacy and competition point of view? Are there any recent precedents addressing the legality of data scraping for AI training?
There is no unified framework for regulation of data scraping in the U.S. However, several types of laws may apply. These include copyright, contract, and misappropriation law.
Copyright. There is neither per se liability for copyright infringement nor a blanket “fair use” exception to copyright liability for copying and/or making derivative works of copyrightable works that are collected by means of data scraping. Operators of large language models are arguing in pending litigation in the U.S. that data scraping to train their models is a form of fair use. It will be some years before the courts reach a final determination of this issue under current copyright law, and it is possible the U.S. Congress will amend copyright law in the meantime or after such a final court determination. It bears mentioning that copyright law in the U.S. protects the expression of ideas, not ideas themselves or unarranged data. So, a threshold determination in a copyright claim brought in relation to data scraping is whether the scraped data is protectable under copyright in the first instance. Note that the Copyright Office recently addressed the issue of data scraping for AI training purposes in Part 3 of its Copyright and Artificial Intelligence report of May 9, 2025. This pre-publication report raised significant legal concerns surrounding some aspects of data scraping under current copyright law. Specifically, it highlights that unauthorized copying of copyrighted works, even if publicly accessible, may constitute infringement. The report advises caution and recommends that AI developers consider licensing agreements and other legal mechanisms to ensure compliance. Even more recently, two decisions have issued in cases evaluating the availability of a fair use defense to copyright infringement relating to the use of protected works to train large language models. Bartz v. Anthropic PBC, No. 3:24-cv-05417-WHA (N.D. Cal.) (ECF No. 231 June 23, 2025), and Kadrey v. Meta Platforms, Inc., No. 3:23-cv-03417-VC (N.D. Cal.) (ECF No. 598 June 25, 2025). The decisions suggest that the defense may be available where the works used as training data were lawfully acquired, but it will be some time before the law settles in this area.
Contract. Information made available pursuant to a contract is governed by the terms of the contract, and parties to a contract may agree that certain copying is not permitted, even if copyright law would otherwise allow it. There have been cases in the U.S. that found liability for data scraping on the basis of a trespass theory (specifically “trespass to chattels”). These cases generally have been premised on a website proprietor alerting visitors that scraping was prohibited; that prohibition, coupled with proof of harm, made the scraping a trespass.
Misappropriation. Some states in the U.S. have found liability for misappropriation of information, but this doctrine is quite limited because federal copyright law preempts inconsistent state laws. Put another way, the federal framework that authorizes or declines to authorize the “owner” or publisher of certain information to bring a copyright infringement claim is exclusive, and unless liability under state law includes an added element (beyond mere copying), the state law will not be permitted to prohibit what is authorized by federal copyright law.
Computer Fraud and Abuse Act. Some website proprietors have argued over the years that data scraping is a violation of the 1986 federal Computer Fraud and Abuse Act, which provides for both criminal and civil liability, depending on the conduct and circumstances. In the present context, the Act generally prohibits actions that damage protected computers, that involve taking of certain financial information, and that involve committing fraud using a computer. Additional protections are available for government computers. Recent case law has significantly limited the applicability of the Act in cases of data scraping of private commercial websites on the open internet. It is currently unclear whether a cause of action under the Act remains for this conduct.
Privacy. Depending on the nature of the information that is subject to scraping on the open internet, state consumer data privacy laws may apply.
Competition. Competition law (or antitrust law in the U.S.) does not have any particular applicability to data scraping. Acts that give rise to liability under antitrust law will do so regardless of the technical means involved to perform the acts.
-
To what extent is the prohibition of data scraping in the terms of use of a website enforceable?
The enforceability of website terms of use in the U.S. tracks general principles of contract law. A binding contract may be formed where terms are offered by one party and accepted by another party. The challenge in terms-of-use cases is that website visitors may be unaware of proposed contract terms and may not have accepted them, including whatever restrictions may be included in regard to data scraping. This challenge may be overcome by presenting contractual terms in a more prominent manner. For example, requiring a party to scroll through contractual terms and select an “I agree” checkbox will more likely result in an enforceable contract than terms of use under a link to “Legal Terms” in small text at the bottom of a web page. A prohibition on data scraping in an otherwise enforceable contract will be enforceable; it is not the case that such a prohibition would be prohibited by current law as, for example, contrary to public policy. Some websites use the Robots Exclusion Protocol by publishing a robots.txt file on their sites. This file is an electronic signal to web crawlers that they are not authorized to scrape a site. Compliance with the protocol is voluntary, and there is no enforcement mechanism. However, the operator of a web crawler that ignores this instruction may be on notice that data scraping is not authorized, and this may help support other legal claims, such as trespass to chattels (discussed above).
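For illustration, a minimal robots.txt file might look like the following (a hypothetical sketch; the crawler name and directory paths are invented for the example):

```
# Hypothetical example of the Robots Exclusion Protocol.
# Disallow all crawlers from the entire site...
User-agent: *
Disallow: /

# ...but permit one specific, named crawler to access public pages only.
User-agent: ExampleBot
Allow: /public/
Disallow: /
```

As noted above, compliance with these directives is voluntary; the file’s practical value is largely evidentiary, putting crawler operators on notice that scraping is not authorized.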
The court in Bartz v. Anthropic PBC, No. 3:24-cv-05417-WHA (N.D. Cal.) (ECF No. 231 June 23, 2025), held that training large language models using purchased copyrighted books is a fair use, while doing so with pirated books is not. This may lead parties to argue that using a licensed work in violation of license terms is not a fair use. It will take more time for case law to develop on this issue.
-
Have the privacy authorities of your jurisdiction issued guidelines on artificial intelligence?
Several U.S. agencies with jurisdiction over privacy have issued guidelines on artificial intelligence.
- The Federal Trade Commission: The FTC has been particularly active in this area. In April 2020, the FTC released guidance on using AI and algorithms, emphasizing the need for transparency, explainability, fairness, accuracy, and accountability. The guidance warns against exacerbating bias or unfairness and highlights the importance of robust data security measures. More recently, FTC leadership has made it clear that it intends to scrutinize the use of AI, companies developing advanced AI systems, and companies making AI investments and partnerships.
- The National Institute of Standards and Technology: NIST has developed a risk management framework for AI systems, which includes privacy considerations and is consistent with the NIST Privacy Framework.
- White House: Many White House actions, including the 2022 Blueprint for an AI Bill of Rights and the AI Executive Order (discussed above), include principles on data privacy and algorithmic discrimination.
- States: The California Privacy Protection Agency is developing regulations that will include provisions on automated decision making and profiling that will apply to AI. Similarly, the Colorado Attorney General’s Office has issued draft rules under the Colorado Privacy Act that address AI-driven profiling. Other states, such as New York and Washington, have task forces or working groups examining the implications of AI, including privacy concerns.
As noted in response to Question No. 3, the U.S. Supreme Court issued a landmark decision in Loper Bright Enterprises v. Raimondo, 603 U.S. 369 (2024), that may greatly curtail the ability of federal government agencies to promulgate strategy and regulation relating to AI. It remains to be seen how Congress and agencies will respond to this development.
-
Have the privacy authorities of your jurisdiction discussed cases involving artificial intelligence? If yes, what are the key takeaways from these cases?
The Federal Trade Commission has been active in addressing cases involving artificial intelligence. In 2025, the FTC has pursued a number of cases alleging that companies made false or misleading claims related to AI products or services, including AI-driven business opportunities, security software, and legal services. In 2023, the FTC brought a case against Rite Aid alleging that its use of AI facial recognition technologies did not include reasonable safeguards and falsely tagged people, primarily women and people of color, as shoplifters. In 2021, the FTC brought a case against Everalbum alleging unlawful use of facial recognition technology and deceptive practices regarding users’ ability to opt out of this AI-driven feature. This case resulted in a settlement requiring the company to delete models and algorithms developed using the allegedly improperly obtained biometric data. The FTC has also investigated and settled cases involving AI-powered credit scoring systems, such as a 2020 case against Ascension Data & Analytics for failing to ensure the security of personal information used in its AI models.
-
Have your national courts already managed cases involving artificial intelligence? If yes, what are the key takeaways from these cases?
There are many cases pending on a variety of issues. Some have reached initial decisions, but the law is still developing in this area. In addition, following some widely reported examples of parties filing error-riddled briefs prepared by generative AI systems, several courts and judges have issued standing orders that require parties to disclose when they have used generative AI in preparing court filings. The following are some examples of pending cases involving AI, though there are many more examples. Several of these cases are putative class actions, though no classes have yet been certified.
Patent
- Thaler v. Vidal, 43 F.4th 1207, 1213 (Fed. Cir. 2022), cert. denied, 143 S. Ct. 1783 (2023): The USPTO took the position that inventors must be natural persons, and the Federal Circuit affirmed. The Supreme Court declined to review the case, possibly to allow the law in this space to develop further before weighing in.
- Recentive Analytics, Inc. v. Fox Corp., No. 2023-2437, slip op. at 18 (Fed. Cir. Apr. 18, 2025): The Federal Circuit invalidated patents that used machine learning to dynamically optimize schedules for live broadcast events. In the court’s opinion, the claims were too generic and did not cover any technological improvement or particular implementation of machine learning.
Copyright
- Within a one-week period in June 2025, the federal court for the Northern District of California issued two orders relating to whether use of copyrighted content for training generative AI qualifies as a fair use.
- First, Judge Alsup ruled that Anthropic’s use of legally obtained books for training its LLMs was fair use. Bartz v. Anthropic PBC, No. 3:24-cv-05417-WHA (N.D. Cal.) (ECF No. 231 June 23, 2025). In ruling on a summary judgment motion, the court held that (1) the use of copyrighted works for LLM training was transformative fair use (particularly where there was no evidence of infringing outputs); (2) digitization of lawfully purchased print books, where the new digital copy replaces the print original, for the creation of an internal digital library was fair use; and (3) acquisition and retention of pirated copies to build a permanent, general-purpose digital library was not fair use.
- Second, Judge Chhabria, of the same court, ordered summary judgment in favor of Meta Platforms Inc., finding that Meta’s copying of a group of 13 bestselling authors’ books as training data for use in Meta’s LLM “Llama” was a fair use. Kadrey v. Meta Platforms, Inc., No. 3:23-cv-03417-VC (N.D. Cal.) (ECF No. 598 June 25, 2025). The judge cautioned, however, that his ruling did not stand for the proposition that Meta’s use of copyrighted materials to train its language model is lawful and that it stood only for the proposition that the plaintiffs made the wrong argument and failed to develop a record in support of the right one (i.e. sufficient market harm).
- The orders in both cases determined that the use by LLMs of copyrighted data for training generative AI was “highly transformative” and that the first copyright fair use factor therefore weighed heavily in favor of the AI developers. In both cases, the plaintiffs were unable to demonstrate sufficient market harm to overcome the heavy weight placed on the transformative nature of the AI models. The decisions, however, differed notably as to each judge’s consideration of the source of the copyrighted works and whether the works were obtained through authorized channels or from “pirate websites.” These are trial court decisions that are likely to be appealed, and it will be necessary to continue to monitor these cases and others.
- Only a few weeks later, the Bartz court certified a class of plaintiff authors whose books were alleged to have been downloaded by Anthropic from pirate websites. Bartz v. Anthropic PBC, No. 3:24-cv-05417-WHA (N.D. Cal.) (ECF No. 244 July 17, 2025). Specifically, the class is defined to include: “All beneficial or legal copyright owners of the exclusive right to reproduce copies of any book in the versions of LibGen or PiLiMi downloaded by Anthropic.” Slip Op. at 11.
- Thomson Reuters Enter. Ctr. GmbH v. Ross Intel. Inc., 765 F. Supp. 3d 382 (D. Del. 2025): Court held on Feb. 11, 2025 that Ross’s AI legal research tool improperly relied on Westlaw’s headnotes and Key Number System, finding that Ross’s actions did not qualify for the fair use defense. Ross filed an appeal on June 24, 2025, which is pending.
- Thaler v. Perlmutter, No. 23-5233 (D.C. Cir. 2025): Court held on Mar. 18, 2025 that a copyrighted work cannot be authored exclusively by an AI system. While the court acknowledged the Copyright Act does not define “author,” it found that multiple provisions within the act, and the act “taken as a whole,” make clear authors must be humans, not machines.
- Disney Enters. Inc. v. Midjourney Inc., No. 2:25-cv-05275 (C.D. Cal., filed June 11, 2025): Plaintiffs allege Midjourney’s generative AI tools unlawfully reproduce and distribute images incorporating the plaintiffs’ copyrighted characters without authorization.
- Alcon Entertainment, LLC v. Tesla, Inc., No. 2:24-cv-09033 (C.D. Cal., filed Oct. 21, 2024): Plaintiffs allege Tesla improperly created and used an AI-generated advertising image that infringes their copyright in the film Blade Runner 2049. Court granted-in-part and denied-in-part Defendants’ motion to dismiss on Apr. 7, 2025. Second amended complaint filed June 16, 2025.
- Dow Jones & Co., Inc. v. Perplexity AI, Inc., No. 1:24-cv-07984 (S.D.N.Y., filed Oct. 21, 2024): Publisher alleges Perplexity AI infringes copyrights by copying/using material to train LLMs, notably through its “retrieval-augmented generation” index. Amended complaint filed Dec. 11, 2024.
- Allen v. Perlmutter, No. 1:24-cv-02665 (D. Colo., filed Sept. 26, 2024): Artist asks court to reverse Copyright Office decision rejecting copyright registration of work created using generative AI. Answer filed Jan. 28, 2025.
- UMG Recordings, Inc. v. Suno, Inc., No. 1:24-cv-11611 (D. Mass., filed June 24, 2024): Multiple record companies coordinated by Recording Industry Association of America allege Defendants infringed sound recording copyrights by creating an AI platform that produces digital music files that sound like well-known musical artists. Answer filed August 1, 2024.
- Concord Music Group, Inc. v. Anthropic PBC, No. 3:23-cv-01092 (M.D. Tenn.), transferred to N.D. Cal. as No. 5:24-cv-03811 on June 26, 2024: Music publishers assert direct and secondary copyright infringement and 17 U.S.C. § 1202(b) violations, alleging that Anthropic improperly (1) created and used unauthorized copies of copyrighted lyrics to train Claude, its generative AI product; and (2) copied, distributed, and publicly displayed those lyrics through Claude’s outputs without copyright management information. Court granted-in-part and denied-in-part Defendants’ motion to dismiss on June 24, 2024. Amended complaint filed April 25, 2025.
- Nazemian v. Nvidia Corp., No. 4:24-cv-01454 (N.D. Cal., filed Mar. 8, 2024), consolidated with No. 4:24-cv-2655: Authors allege Nvidia copied and used their protected works to train LLMs.
- O’Nan v. Databricks Inc., No. 3:24-cv-01451 (N.D. Cal., filed Mar. 8, 2024) consolidated with 3:24-cv-02653: Authors allege Defendants copied and used their protected works to train LLMs. Amended complaint filed June 27, 2025.
- Raw Story Media, Inc. v. OpenAI, Inc., No. 1:24-cv-01514 (S.D.N.Y., filed Feb. 28, 2024); The Intercept Media, Inc. v. OpenAI, Inc., No. 1:24-cv-01515 (S.D.N.Y., filed Feb. 28, 2024): Two suits were filed by news organizations against OpenAI, alleging that OpenAI violated the Digital Millennium Copyright Act by training the ChatGPT LLM with copies of their works from which copyright management information had been removed. Motion to dismiss granted Nov. 7, 2024. Motion for leave to amend complaint denied Apr. 3, 2025. Motion for reconsideration denied on June 18, 2025.
- New York Times Co. v. Microsoft Corp., No. 1:23-cv-11195 (S.D.N.Y., filed Dec. 27, 2023): Alleges Microsoft and OpenAI extensively copied New York Times reporting to train Defendants’ large language models. Motion to dismiss denied Apr. 4, 2025. Answer to second amended complaint filed June 11, 2025.
- J.L. v. Alphabet Inc., No. 3:23-cv-03440 (N.D. Cal., filed July 11, 2023): Alleges Google stole content created by “hundreds of millions of Americans” to develop its AI chatbot Bard and other AI systems, giving Google an unfair advantage over competitors that obtain data legally for AI training. Hearing held re consolidated amended complaint on Apr. 23, 2025.
- In re OpenAI, Inc. Copyright Infringement Litigation, MDL No. 3143-SHS (S.D.N.Y.): Consolidated cases from N.D. Cal. and S.D.N.Y.; multiple amended complaints and motions to dismiss have been filed relating to use of books and other publications to train LLMs. Putative class actions. Consolidated cases include: Tremblay v. OpenAI, Inc.; Chabon v. OpenAI, Inc.; Silverman v. OpenAI, Inc.; Millette v. OpenAI, Inc.; Authors Guild v. OpenAI Inc.; Alter v. OpenAI Inc.; Basbanes v. Microsoft Corp.; New York Times Co. v. Microsoft Corp.; Daily News LP v. Microsoft Corp.; Center for Investigative Reporting, Inc. v. OpenAI, Inc.; Raw Story Media, Inc. v. OpenAI Inc.; Intercept Media, Inc. v. OpenAI, Inc.; and Ziff Davis, Inc. v. OpenAI, Inc.
- Getty Images (US), Inc. v. Stability AI, Inc., No. 1:23-cv-0135 (D. Del., filed Feb. 3, 2023): Alleges that Stability AI’s image generator, Stable Diffusion, infringed Getty’s copyrights in over 12 million photographs copied from Getty’s website, removed or altered copyright management information (CMI), provided false CMI, and infringed its trademarks, all despite terms of use on Getty’s website expressly prohibiting such uses. Defendants’ motion to dismiss and to transfer to N.D. Cal. filed July 29, 2024.
- Andersen v. Stability AI Ltd., No. 3:23-cv-00201 (N.D. Cal., filed Jan. 13, 2023): Plaintiff artists allege their works were used without permission as input materials to train and develop various AI image generators that create works in the style of the artists, which the artists argue are unauthorized derivative works. Plaintiffs also claim Defendants are liable for vicarious copyright infringement and for altering or removing CMI from the images owned by Plaintiffs. Defendants include Stability AI, Inc., Midjourney, Inc., and DeviantArt, Inc. Court granted-in-part and denied-in-part Defendants’ motion to dismiss first amended complaint on Aug. 12, 2024. Second amended complaint filed Oct. 31, 2024 and answer filed Dec. 6, 2024.
- Doe v. GitHub, Inc., No. 4:22-cv-06823 (N.D. Cal., filed Nov. 3, 2022): Alleges a violation of 17 U.S.C. § 1202 (removal or alteration of copyright management information) in connection with unauthorized use of Plaintiff programmers’ software code to develop Defendants’ AI machines, Codex and Copilot. Defendants include GitHub, Inc., Microsoft Corp., and OpenAI, Inc. Second amended complaint filed Jan. 1, 2024. Court granted-in-part and denied-in-part Defendants’ motion to dismiss on June 24, 2024. Answer filed July 22, 2024. Interlocutory appeal pending before 9th Circuit (No. 24-7700).
Privacy
- J.L. v. Alphabet Inc., No. 3:23-cv-03440 (N.D. Cal., filed July 11, 2023): Cited above regarding copyright claims, this action also brings privacy-related claims. Case pending.
- P.M. v. OpenAI LP, No. 3:23-cv-03199 (N.D. Cal., filed June 28, 2023): Claims the improper collection, storage, tracking, and sharing of individuals’ private information through web scraping without consent misappropriates personal data on an “unprecedented scale.” Notice of voluntary dismissal filed Sept. 15, 2023.
Tort
- Walters v. OpenAI, LLC, No. 23-A-04860-2 (Ga. Super. Ct. Gwinnett Cty., filed June 5, 2023): Alleges OpenAI defamed Plaintiff by fabricating a story that Plaintiff was involved in certain litigation. Court granted summary judgment to OpenAI on May 19, 2025, holding OpenAI not responsible for the allegedly defamatory statement because of (1) lack of publication by OpenAI, where the statement was generated in response to a user prompt; (2) lack of actual malice by OpenAI; and (3) sufficient user warnings about ChatGPT’s limitations, including the possibility of hallucinations.
Discrimination
- Mobley v. Workday, Inc., No. 3:23-cv-00770 (N.D. Cal., filed Feb. 21, 2023): Claims that AI systems used by Workday, which rely on algorithms and inputs created by humans, disproportionately impact and disqualify Black, disabled, and older job applicants. Second amended complaint filed Feb. 20, 2024. Court granted-in-part and denied-in-part Workday’s motion to dismiss on July 12, 2024. Answer filed Aug. 2, 2024.
- Huskey v. State Farm Fire & Casualty Co., No. 1:22-cv-07014 (N.D. Ill., filed Dec. 14, 2022): Claims State Farm’s algorithms and tools display bias in the way they analyze data. Amended complaint filed Mar. 31, 2023. Court granted-in-part and denied-in-part State Farm’s motion to dismiss on Sept. 11, 2023. Answer filed Oct. 9, 2023.
-
Does your country have a regulator or authority responsible for supervising the use and development of artificial intelligence?
The U.S. does not have a single, dedicated regulator responsible for overseeing the use and development of artificial intelligence across all sectors. For now, the U.S. approach to AI governance remains largely sector-specific and decentralized, with various agencies adapting existing regulatory frameworks and pursuing new rules; the 2025 Office of Management and Budget memo M-25-21 directs agencies to adopt deregulatory approaches to AI. For instance, the Federal Trade Commission has taken a leading role in addressing AI-related consumer protection and competition issues. The Equal Employment Opportunity Commission has begun to tackle AI’s impact on workplace discrimination. The Food and Drug Administration is developing frameworks for AI in medical devices, while the National Highway Traffic Safety Administration is addressing AI in autonomous vehicles. Additionally, the Center for AI Standards and Innovation at the National Institute of Standards and Technology has been tasked with developing guidelines and best practices to measure and improve the security of AI systems; while not regulatory, these provide guidance for the development and protection of AI systems. Efforts among these agencies have been coordinated by the White House Office of Science and Technology Policy’s National AI Initiative Office, which does not have regulatory authority.
-
How would you define the use of artificial intelligence by businesses in your jurisdiction? Is it widespread or limited? Which sectors have seen the most rapid adoption of AI technologies?
The use of artificial intelligence by businesses in the U.S. is widespread and growing rapidly. Many business software platforms, including email, word processing, and research services, have incorporated AI-enhanced functions into their products, such as drafting short messages and editing or correcting text. It bears mentioning that the extent of use varies greatly, depending on the type of AI under consideration and the industry. Use of generative AI to create images, software, and completed documents may not be widespread yet across all industries, but use of autocorrect and voice-operated systems like Siri and Alexa, to the extent these are considered forms of AI, is pervasive. At the other end of the spectrum, many companies are embracing agentic AI, building AI agents: autonomous programs capable of making decisions and interacting with their environment with minimal or no human intervention. Furthermore, several companies are actively researching and pursuing hypothetical advanced AI, such as Artificial General Intelligence.
-
Is artificial intelligence being used in the legal sector, by lawyers and/or in-house counsels? If so, how? Are AI-driven legal tools widely adopted, and what are the main regulatory concerns surrounding them?
Lawyers in firms and at companies are exploring and making use of AI technologies in their practices. Many legal research and document and information management providers are integrating AI functions into their offerings. Many of these platforms have been cautiously trained on licensed or public domain information. After some well-publicized incidents of lawyers filing error-filled court papers created by ChatGPT, some lawyers are leery of adopting AI in their practices. Lawyers are also concerned about protection of confidentiality and attorney-client privilege and, as a result, may be slower to adopt these technologies than some other industries. That being said, even a casual observer of the legal field can see that AI tools are transforming the legal industry and impacting the way attorneys work, how back-office functions within law firms are managed, and the evidence and factors courts weigh in making decisions regarding AI.
Here are some illustrative examples of how AI is being used in the legal sector:
- Due Diligence and Document Review: AI can quickly review vast amounts of data and documents, identify key points, and draw attention to relevant provisions. AI tools can process contracts and flag clauses responsive to diligence requests or disclosure requirements. This significantly reduces the time and effort needed for legal professionals to review documents.
- Legal Research and Predictive Analysis: Related to document review, AI can sift through many cases, regulations, and rules to identify relevant precedent and clauses. AI can also analyze prior decisions and judgments to predict possible outcomes of ongoing disputes to assist in devising legal strategy.
- Contract Generation: AI tools can be used to automate the creation of legal agreements based on set parameters or letters of intent, and they can flag non-standard clauses, check compliance with legal requirements, or highlight critical agreements that are due for renewal or require re-negotiation.
- Chatbots and Ideation: AI-powered chatbots can provide legal direction on simple matters, reducing the time lawyers need to spend on routine queries or producing general client communications.
- Administrative Matters: AI can automate administrative tasks such as billing and time-tracking, reducing errors and freeing up more time for legal professionals to produce higher-level, complex legal work.
Law firms and attorneys can be expected to adopt AI tools from commercial providers that are fine-tuned specifically for legal work. Even with the availability of “safer” AI tools, law firms and attorneys will still need to consider frameworks for ethical and responsible use of AI, training of individual attorneys, and careful review of outputs for relevancy, accuracy, truthfulness, and completeness.
-
What are the 5 key challenges and the 5 key opportunities raised by artificial intelligence for lawyers in your jurisdiction?
Challenges
- Learning about and training on the many types of specialized AI systems that are available and being used by clients and other lawyers. This implicates legal ethics and even malpractice issues relating to competence, confidentiality, and other duties.
- Understanding the operational details of AI systems, including the corpus of original training data used in an AI system, how the training data is processed and used, whether prompts are used for further training, and whether and how confidentiality is preserved.
- Tracking the many laws promulgated by legislatures and the case law developing in federal and state courts (not to mention internationally) that are relevant to advising clients and to guiding lawyers’ own practices.
- Avoiding unintentional bias and lack of transparency in the use of AI systems, which may lead to unfair or discriminatory outcomes.
- Balancing cost and time with risk and benefit while keeping pace with peers and properly serving clients.
Opportunities
- There is great client demand for counseling, negotiation, and in some cases litigation related to AI issues, and this can be expected to continue for some years.
- AI tools will provide a wide variety of efficiencies in lawyers’ own practices, including review and drafting of documents, analysis of large collections of documents (e.g. in discovery in litigation), and evaluation of potential case outcomes. These efficiencies should enable lawyers to spend more time on strategic thinking and to handle a greater number of matters.
- AI tools may raise both the floor and the ceiling in terms of the quality of legal services lawyers are able to provide.
- AI tools may reduce the cost of some types of legal services, making legal counsel available to people who could not previously afford it.
- AI tools may help drive lawyer and client satisfaction, as certain routine tasks are automated and more time is available for attorneys to focus on “higher-level” tasks.
-
Where do you see the most significant legal developments in artificial intelligence in your jurisdiction in the next 12 months? Are there any ongoing initiatives that could reshape AI governance?
The most significant legal developments in the next 12 months are likely to come in the form of legal regulation, whether by executive orders, agency action, or legislation. Litigation is slow-moving, and the principles established in pending cases will not gel until appeals are exhausted, different states and circuits have their say, and (potentially) the Supreme Court weighs in on major issues. It can be expected that these developments will address a wide range of topics, including intellectual property law, privacy, consumer protection, public safety, antidiscrimination, and employment practices.