-
What are your country's legal definitions of “artificial intelligence”?
As of today, German law does not provide a definition of “artificial intelligence” (AI). However, as Germany is a member of the European Union, the definition in Art. 3(1) AI Act (Regulation (EU) 2024/1689) applies. Accordingly, an “AI system” means “a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” The definition of artificial intelligence initially proposed by the European Commission focused on an enumeration of specific technologies. In contrast, the final definition adopted in the AI Act, developed by the European Parliament, is technology-neutral and instead hinges on the functional characteristics of AI systems.
Under the AI Act, AI systems are characterized by their capacity to operate with varying degrees of autonomy, enabling them to perform tasks independently of human control or direct input. Central to this autonomy is their ability to infer—a core capability highlighted in Recital 12—which allows them to move beyond basic data processing and engage in more advanced functions such as learning, reasoning, and modelling.
This broader, functional definition is intentionally designed to offer flexibility, ensuring that the regulation remains responsive to the rapid evolution of AI technologies. However, its broad scope may also introduce a degree of legal uncertainty, potentially requiring case-by-case interpretation in practical application.
As of today, the German Federal Office for Information Security (Bundesamt für Sicherheit in der Informationstechnik – “BSI”) refers to the definition developed by the EU Commission’s High-Level Expert Group on AI (HLEG), describing AI systems as “software and hardware systems that use artificial intelligence to act ‘rationally’ in the physical or digital world. Based on perception and analysis of their environment, they act with a certain degree of autonomy to achieve certain goals”.
-
Has your country developed a national strategy for artificial intelligence? If so, has there been any progress in its implementation? Are there plans for updates or revisions?
In November 2018, Germany introduced its AI Strategy, establishing a political framework to guide AI development, which was updated in December 2020. The strategy aims to position “AI Made in Germany” as a global label for trustworthy, secure, and public-interest-oriented AI based on European values. To this end, the federal government has committed €5 billion by 2025.
Implementation is supported by the Plattform Lernende Systeme, which serves as a central hub for coordinating AI activities.
Key Objectives:
- Establish technological leadership and promote the “AI Made in Germany” brand;
- Ensure responsible, value-based AI development;
- Leverage AI for environmental and climate goals;
- Promote societal dialogue on AI;
- Build a European AI ecosystem that strengthens competitiveness and aligns with fundamental rights.
In 2023, the Federal Ministry of Education and Research published an “Action Plan Artificial Intelligence” in order to “translate Germany’s excellent foundations in the areas of research and skills into visible and measurable economic success and tangible benefits for society” and in order to “most effectively interlink AI with [Germany’s] existing assets.”
AI Competence Centers, established several years ago, are leading research hubs that advance fundamental and applied artificial intelligence through interdisciplinary collaboration. In the coming years, their mission is to strengthen Germany’s position in global AI research by promoting innovation, fostering talent, ensuring ethical AI development, and facilitating technology transfer to industry and society.
Furthermore, all German states have implemented AI strategies, with a focus on strengthening the collaboration between academic research and local industries/SMEs and on additional funding for AI research and product development.
The AI strategy is designed as a learning strategy that is continuously evolving, e.g. taking the recommendations of the OECD Artificial Intelligence Review of Germany into account.
-
Has your country implemented rules or guidelines (including voluntary standards and ethical principles) on artificial intelligence? If so, please provide a brief overview of said rules or guidelines. If no rules on artificial intelligence are in force in your jurisdiction, please (i) provide a short overview of the existing laws that potentially could be applied to artificial intelligence and the use of artificial intelligence, (ii) briefly outline the main difficulties in interpreting such existing laws to suit the peculiarities of artificial intelligence, and (iii) summarize any draft laws, or legislative initiatives, on artificial intelligence.
European approaches
As a member of the European Union, Germany is subject to the EU Artificial Intelligence Act, which is binding in its entirety and directly applicable in all Member States. The AI Act has been in force since August 1, 2024, and introduces a phased application timeline, with key provisions applying as follows:
- 2 February 2025: The prohibition on AI systems presenting an unacceptable risk (such as social scoring, manipulative AI, or certain biometric surveillance) enters into effect.
- 2 August 2025: General-purpose AI models (GPAI), including foundation models, become subject to transparency and documentation obligations.
- 2 August 2026: The majority of requirements for high-risk AI systems – particularly those deployed in sensitive sectors such as healthcare, education, and employment – begin to apply.
- 2 August 2027: A deferred application date applies to high-risk AI systems that fall under specific EU harmonisation legislation, allowing additional time for compliance.
The AI Act provides clear requirements and obligations regarding specific uses of AI for, inter alia:
- providers (Art. 3 No. 3 AI Act) placing on the market or putting into service AI systems or general-purpose AI models in the EU, irrespective of whether those providers are established or located within the EU or in a third country,
- deployers (Art. 3 No. 4 AI Act) of AI systems that have their place of establishment or are located within the EU, or in a third country where the output produced by the AI system is used in the EU, and
- importers and distributors of AI systems.
The AI Act follows a risk-based approach, meaning that the scope of regulation depends on the intensity of the risks posed by the respective AI system: Whereas some artificial intelligence practices (e.g. social scoring) are entirely prohibited due to their unacceptable risk (Art. 5 AI Act) and strict technical and organizational requirements apply to high-risk AI systems (Art. 6 et seq. AI Act), other AI systems with lower risk are only subject to certain transparency and information obligations. In addition, there are specific rules for general-purpose AI models (Art. 51 et seq. AI Act), including those generating synthetic audio, image, video or text content. Further obligations may apply to providers of general-purpose AI models with systemic risk (Art. 55 et seq. AI Act) once the EU AI Office has developed codes of practice together with providers of general-purpose AI models as well as national authorities and other relevant stakeholders (these codes of practice are to be expected nine months after entry into force of the AI Act). In order to support innovation, great consideration is given to the interests and needs of SMEs and start-ups (Art. 57 et seq. AI Act).
National approaches
Germany has been preparing national legislation to implement the AI Act. As an EU regulation, the AI Act is directly applicable, but Member States must set up enforcement structures by August 2025. The Federal Ministry for Economic Affairs and Climate Action (BMWK) and the Federal Ministry of Justice (BMJ) share responsibility for the AI Act’s implementation. In late 2024, a draft KI-Marktüberwachungsgesetz (KIMÜG, AI Market Surveillance Act) was published, designating national authorities and enforcement procedures. Implementation of the AI Act in Germany was delayed by early federal elections. The new coalition agreement between conservatives and social democrats reaffirms a commitment to a swift, business-friendly rollout, emphasizing an “innovation-friendly and low-bureaucracy” approach that avoids additional burdens on the economy. The Bundesnetzagentur (Federal Network Agency) is expected to serve as Germany’s central AI regulator under the AI Act.
Beyond the AI Act, the use of AI-based technologies and information systems is not subject to any specific laws and regulations in Germany but is governed by general rules (e.g. the General Data Protection Regulation (GDPR), the Civil Code (BGB), the Act against Unfair Competition (UWG), the Act on Copyright and Related Rights (UrhG), the Administrative Procedure Act (VwVfG), the Act on Liability of Defective Products (ProdHaftG), the Road Traffic Act (StVG), the General Act on Equal Treatment (AGG) and the Works Constitution Act (BetrVG)).
Still, limited sector-specific regulation exists. In the healthcare sector, persons with statutory health insurance are entitled to be provided with medical devices of lower and higher risk whose main function is essentially based on digital technologies and which are intended to support the detection, monitoring, treatment or alleviation of illnesses (so-called digital health applications). Statutory health insurance providers are also allowed to develop digital innovations in order to improve the quality and cost-effectiveness of care.
Furthermore, specific rules were introduced in the automotive sector, allowing the operation of autonomous vehicles (SAE Level 4). Systems that permanently take over the guidance of the vehicle and can also cover longer distances within a defined operating zone without human intervention are permitted. Thus far, the actual use of AI in SAE Level 4 vehicles is limited, but it would be permitted in Germany.
Germany also played a leading role in the development of the “Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems” of the Group of Seven (G7). The Code of Conduct aims to promote safe, secure, and trustworthy AI worldwide and provides voluntary guidance for actions by developers of advanced AI systems. The non-exhaustive list of actions includes, inter alia, early identification and mitigation of risks and vulnerabilities, transparency about the capabilities and limitations of AI, responsible information sharing and reporting of incidents, development of AI governance and risk management policies, implementation of security controls, labelling of AI-generated content and prioritization of AI development for global benefit. In view of the rapidly evolving technology, the Code of Conduct is to be further elaborated on the basis of specific requirements. Furthermore, Germany supports trustworthy AI through structured standardization efforts. The national standardization bodies DIN and DKE collaborate closely to develop AI-specific standards that align with international norms such as those of ISO/IEC JTC 1/SC 42, focusing on areas like transparency, safety, and interoperability. Additionally, Germany actively contributes to the European CEN-CENELEC framework and helps shape global AI policy through engagement with the OECD AI guidelines, promoting ethical and human-centric AI development worldwide.
-
Which rules apply to defective artificial intelligence systems, i.e. artificial intelligence systems that do not provide the safety that the public at large is entitled to expect?
In 1989, Germany implemented a special liability regime for defective products based on the European Product Liability Directive 85/374/EEC. Accordingly, any defective AI system embodied in a product that causes the product to be defective may be subject to the Produkthaftungsgesetz (ProdHaftG, Act on Liability of Defective Products).
The requirements for liability according to Section 1 ProdHaftG are:
- Damage to a protected legal interest (person’s death, injury to person’s body or health, damage to an item of property)
- Caused by a defect in a product
- Resulting in (financial) damages, and
- No legal exception applies.
In light of the increasing use of AI technologies, the EU has adopted new rules to address liability issues related to AI systems. The new Directive on Liability for Defective Products entered into force on December 9, 2024. EU Member States must transpose the directive into national law by December 9, 2026. The updated directive inter alia extends the definition of “product” to digital manufacturing files and software (except free and open-source software) and simplifies the burden of proof for people claiming compensation: While the injured person would usually have to prove that the product was defective, the damage suffered and the causal link between the defectiveness and the damage, the court may now presume that the product was defective in certain circumstances (e.g. if the injured person faces excessive difficulties, in particular due to technical or scientific complexity, in proving the defectiveness of the product, the causal link between its defectiveness and the damage, or both). Furthermore, the court may, upon request of the injured person, order the defendant to disclose relevant evidence.
Besides specific product liability, claims may be based on contractual obligations (if such exist) and on liability in damages according to Section 823 of the German Civil Code (BGB). Violations of requirements under the AI Act, e.g. regarding high-risk AI systems according to Art. 8 et seq. AI Act, may lead to damage claims (Sec. 823 para. 2 BGB).
-
Please describe any civil and criminal liability rules that may apply in case of damages caused by artificial intelligence systems. Have there been any court decisions or legislative developments clarifying liability frameworks applied to artificial intelligence?
The applicable civil rules depend significantly on the actual damage caused and legal interest concerned.
If privacy issues are concerned, the GDPR and the supplementary German national act, the Bundesdatenschutzgesetz (BDSG), provide for the basic remedies (see topics …).
If damage is caused to a person, the Produkthaftungsgesetz (ProdHaftG) and “standard” civil liability according to e.g. Section 823 BGB (German Civil Code) apply in parallel to any contractual obligations (see topic no. 4). The Regional Court of Kiel (docket no. 6 O 151/23) held the operator of a business information service liable for publishing AI-generated false business data, ruling that the company had made the AI’s output its own and was therefore directly responsible for the misinformation. Liability was limited to cease-and-desist and attorney fee reimbursement; no damages were awarded.
In terms of intellectual property rights, the use of AI systems can result in copyright infringement, trademark infringement, design infringement, patent infringement etc. It may also raise issues concerning the right of publicity and other personality rights, e.g. if images of persons are used without consent.
The AI Liability Directive (AILD) was originally proposed to complement the EU AI Act by establishing rules for non-contractual civil liability related to AI systems. It sought to ease the legal burden on victims by introducing a presumption of causality and improving access to evidence. However, the proposal was withdrawn by the European Commission in early 2025 due to overlaps with the revised Product Liability Directive and insufficient political support.
In terms of criminal law, the use of AI as an instrument in criminal activities does not exclude liability. Those responsible for the manufacture of AI products have the same duties of care as for conventional technical products, whereby even the slightest possibility that autonomous actions of an AI might lead to criminally relevant conduct increases those duties of care.
-
Who is responsible for any harm caused by an AI system? And how is the liability allocated between the developer, the deployer, the user and the victim?
Developers. In terms of product liability (see topic no. 4), the new Directive on Liability for Defective Products provides for a tiered system of liability. Accordingly, the manufacturer, the quasi-manufacturer and any party that substantially modifies the product are liable. Moreover, for products manufactured outside the EU, the importer, the authorized representative of the manufacturer and, where those are not available, fulfilment service providers are liable. Liability can extend under certain circumstances to all distributors involved and even to online platforms. Finally, developers may also face fault-based liability under general tort law (Sec. 823 BGB) if they breach safety duties in design, training, or maintenance.
Deployers (e.g., companies integrating or commercially using AI) are generally liable for how AI is applied in practice. In the cited Regional Court of Kiel case, an operator using AI in its services for publishing information was held fully responsible for its output. Deployers must supervise and validate AI results to avoid negligence.
The user of AI can also be responsible for any harm caused by the AI system. In professional contexts, liability typically remains with the employer or system operator unless the user acts with gross negligence or intent.
According to German law, each party in the “liability chain” may be entitled to seek recourse against the party at the preceding level, e.g. the user of the AI from the deployer, the deployer from the developer.
-
What burden of proof will have to be satisfied for the victim of the damage to obtain compensation?
In general, the party claiming damages has to show not only that it has suffered damage but also that the other party is responsible for the damage caused. This is, in particular, the case if claims are asserted against the user of the AI, who is not the manufacturer.
Given that this is typically difficult, at least in product liability matters the claimant can rely on certain means to ease the burden of proof. For instance, the claimant of a product liability claim does not have to show that the other side acted intentionally or negligently; it is sufficient to show that the product was defective. Given the specifics of AI, however, this can still be an issue, since showing that the system/product is defective requires not only access to the program but also insider knowledge about its functioning.
In light of the issues that accompany showing that a product has a defect, German case law has developed the concept of so-called “manufacturer liability” according to Section 823 BGB. On the basis of manufacturer liability, the injured party can rely on a reversal of the burden of proof: the manufacturer has to show that its product is actually not defective. AI systems may require a further development of this case law, since the question will arise at what point in time the defect is no longer in the sphere of the manufacturer, considering the learning process of the AI.
The new Directive on Liability for Defective Products will simplify the burden of proof for people claiming compensation (see topic no. 4 above). Similarly, the withdrawn AI Liability Directive was intended to ease the burden of proof for victims harmed by AI systems.
-
Is the use of artificial intelligence insured and/or insurable in your jurisdiction?
Yes, the use of AI can be subject to insurance. Certain insurance companies in Germany already offer specific AI insurance products, such as insurance-backed performance guarantees. While there is no mandatory AI-specific insurance regime under German law, various existing insurance products already provide coverage for AI-related risks, particularly in the areas of liability, cyber, and professional indemnity insurance.
As AI regulation (e.g., the EU AI Act) evolves, insurance is expected to play a growing role in demonstrating risk management and legal compliance, especially for high-risk or safety-critical applications.
-
Can artificial intelligence be named an inventor in a patent application filed in your jurisdiction?
In a landmark decision dated 11 June 2024 (X ZB 5/22 – DABUS), the German Federal Court of Justice (BGH) reaffirmed the long-standing legal position that only natural persons can be named as inventors under German patent law.
The case arose from a patent application that initially listed an AI system known as DABUS as the sole inventor. DABUS, described as an autonomously operating neural network, had purportedly generated the invention without human input. The applicant, the owner of the AI, argued that patent rights should transfer via ownership of the system. The patent office rejected the application, and the decision was upheld through the appeals process.
The BGH clarified four key points:
1. Inventorship is Reserved for Natural Persons
Under Sec. 37(1) of the German Patent Act (PatG), only natural persons qualify as inventors. AI systems, even those mimicking human creativity, cannot hold this status.
2. Human Involvement Remains Essential
Even if AI was used in the inventive process, a human must have contributed significantly to the final technical teaching. The Court emphasized that minimal human input suffices, provided it influences the outcome materially.
3. Descriptive Additions Do Not Remedy Formal Defects
Merely appending a note that an AI system generated the invention does not satisfy legal requirements if the inventor designation implies otherwise. A human inventor must be clearly and unambiguously named.
4. Supplementary AI Disclosure is Legally Permissible but Irrelevant
Indicating that an AI tool supported the inventive process is acceptable and does not invalidate the application, so long as it remains clear that the AI is not the inventor.
The BGH’s ruling aligns with the prevailing international approach—consistent with decisions from the EPO, UK Supreme Court, and courts in the US, Australia, and New Zealand—emphasizing a human-centric framework for patent attribution. While recognizing AI’s growing role in innovation, the ruling confirms that legal rights in patent law remain anchored in human agency.
-
Do images generated by and/or with artificial intelligence benefit from copyright protection in your jurisdiction? If so, who is the authorship attributed to?
German copyright law is based on harmonized EU law. A copyright-protected work according to the CJEU (C-683/17, Cofemel) requires an “intellectual creation reflecting the freedom of choice and personality of its author”, which effectively excludes copyright protection for an image created by AI.
-
What are the main issues to consider when using artificial intelligence systems in the workplace? Have any new regulations been introduced regarding AI-driven hiring, performance assessment, or employee monitoring?
Issues may arise in particular from the usage of information, specifically when information is uploaded on a cloud-based service operated by a third party. In this context, “accidental” or uncontrolled disclosure of trade secrets and other confidential data can be an issue. Should personal data be concerned, obligations under the GDPR also have to be considered. German data protection authorities require a data protection impact assessment (Art. 35 GDPR) for the processing of personal data using AI.
In the legal context, problems may arise with respect to professional codes of conduct and any additional confidentiality obligations arising therefrom. Transparency concerning the use of AI, as well as the respective sources and the question of who owns the work product, can become problematic.
Another issue is liability, as specific information about functionality or the training data used, and therefore also about the validity/accuracy of work results, may not be available.
A possible dependence on AI could also become an overall problem, especially regarding critical processes. In terms of sustainability, one will also have to consider the question of energy consumption of intense server processing. Concerning the latter, costs could become an issue too.
The Works Council has to be informed in good time should the employer plan to introduce AI. According to the Hamburg Labour Court (24 BVGa 1/24), the introduction of certain AI tools such as ChatGPT does not require the Works Council’s consent, as it would not allow the employer to monitor the behaviour or performance of employees.
High-Risk AI in Employment: Under Annex III of the AI Act, AI systems used in HR and workforce management are designated high-risk. This covers AI intended for: recruitment or selection of persons, making decisions on promotions or terminations, deciding on task allocations based on behavior or traits, or monitoring and evaluating the performance of employees. In essence, hiring algorithms, performance scoring tools and monitoring systems will fall into the high-risk category, whereby this applies to all “work-related relationships”, including freelance contractors. Employers using any such systems should be aware that by August 2, 2026, these AI systems must comply with a detailed set of requirements before and while being used.
Prohibited AI Practices: As of February 2025, the AI Act’s provisions on prohibited practices are in force. Notably for employers, AI systems that “evaluate or classify the trustworthiness of people based on social behavior or personality traits” (social scoring) are banned, as are AI systems that infer the emotions of individuals in the workplace (with very narrow exceptions). Using AI to monitor workers’ facial expressions or tone of voice for emotions (stress, mood, truthfulness) is illegal. Likewise, any AI that uses biometric data to infer sensitive attributes such as ethnicity, gender, or political affiliation is prohibited.
-
What privacy issues arise from the development (including training) and use of artificial intelligence?
The development and training of AI systems raise key privacy concerns under the GDPR, particularly regarding the lawful collection and use of personal data. AI training often involves large datasets, requiring a valid legal basis such as consent or legitimate interest, and must respect principles like data minimization and purpose limitation. Transparency is a major challenge, as developers must explain how personal data is used and ensure individuals understand and can challenge AI-driven outcomes. Additionally, developers must avoid processing sensitive data without legal justification and conduct Data Protection Impact Assessments (DPIAs) when high risks to privacy arise.
Processing vast amounts of personal data scraped from the internet for training AI, in particular Large Language Models (LLMs), significantly affects privacy rights. The application of AI will cause a plethora of further privacy issues, which we can only begin to recognize today. Already, the unique ability of an AI to take autonomous decisions clashes with the general human expectation to be subjected only to decisions made by other humans. This is all the more problematic as recent incidents show that AI is not immune to “bias” either, depending on the quality of the training data. The GDPR accordingly prohibits, or at least severely restricts, decisions based solely on automated processing. Similarly, the Digital Single Market Directive and the Digital Services Act require that content moderation measures on internet platforms be at least subject to human review. Possibly even more problematic is AI that does not make active decisions but “merely” monitors human behavior or analyses personal data, limited only by allocated computing power. The German Constitutional Court, in its ground-breaking Volkszählungsurteil (census judgment), developed the right to informational self-determination, which would not be compatible “with a social order and a legal order that enables it, in which citizens can no longer know who knows what, when and on what occasion about them”. Transparency obligations are therefore a key element of AI regulation, but beyond that, “bans” on specifically intrusive and discriminatory uses of AI systems are necessary, e.g. on “real-time” biometric identification systems, which the AI Act permits only for specific law enforcement purposes.
-
How is data scraping regulated in your jurisdiction from an IP, privacy and competition point of view? Are there any recent precedents addressing the legality of data scraping for AI training?
Data scraping in Germany is subject to a complex regulatory framework encompassing intellectual property and privacy laws because, in contrast to data crawling, the scraping software not only reads out the requested information from particular websites but usually also stores it in a file that may be used for other purposes. This storing requires the (temporary) reproduction of content that may or may not be copyright protected and/or contain personal data. Relevant from an IP perspective, specifically copyright law, are the text and data mining exceptions provided by Art. 3 and 4 of the Digital Single Market Directive (DSM Directive), which were also implemented into German law. Text and data mining is defined there as the automated analysis of individual or several digital or digitized works for the purpose of gathering information, in particular regarding patterns, trends and correlations. Aimed at scientific and commercial users, these exceptions permit the reproduction of copyright-protected material for data mining purposes. However, they are subject to certain conditions, in particular the legitimacy of the source and the rightholder’s right to opt out. The recitals of the Directive stress the significance of text and data mining for the development of new applications or technologies. Art. 53(1)(c) AI Act requires that providers of general-purpose AI models adopt a policy to identify and comply with an opt-out expressed pursuant to Art. 4(3) of the DSM Directive. This should settle the discussion as to whether the data mining exception applies to the collection of training data.
German copyright law provides protection to databases under the Database Directive (Directive 96/9/EC), implemented in the UrhG (German Copyright Act). Databases that constitute the author’s own intellectual creation are protected by copyright. Even if a database does not meet this threshold, it may still be protected under the sui generis right if a substantial investment has been made in obtaining, verifying, or presenting its contents. However, the text and data mining exceptions apply to databases as well.
The GDPR, applicable across the EU including Germany, regulates the processing of personal data. Scraping personal data without consent or other legal basis can violate the GDPR provisions.
Unfair Commercial Practices: The UWG (German Unfair Competition Act) prohibits unfair commercial practices. Scraping information from a website in order to give users direct access to that content is not an unfair commercial practice, unless it requires overcoming technical barriers such as a paywall.
The legality of data scraping for AI training was addressed by two notable German court decisions:
Regional Court of Hamburg (27.09.2024 – docket no. 310 O 227/23 – LAION)
In September 2024, the Hamburg Regional Court ruled on a case involving the non-profit organization LAION, which had compiled a dataset of approximately 5.8 billion image-text pairs by scraping publicly available internet content. The court determined that LAION’s actions fell under the “text and data mining” (TDM) exception for scientific research as outlined in Sec. 60d of the German Copyright Act (UrhG), which implements Article 3 of the EU Directive on Copyright in the Digital Single Market (DSM Directive).
The court emphasized that LAION’s non-commercial status and the free availability of its dataset qualified the activity as scientific research. It also noted that the mere downloading and analysis of images to create the dataset did not constitute copyright infringement under this exception.
However, the court did not conclusively address whether such TDM activities for commercial purposes would be permissible, especially if rights holders have explicitly reserved their rights. It suggested that reservations of rights expressed in natural language on websites might be considered machine-readable and thus effective, potentially limiting the applicability of the TDM exception in commercial contexts. Moreover, although the Court suggested that the use of scraped data for actual AI training might fall within the scope of the text and data mining (TDM) exceptions, it did not have to decide this question.
The Laion ruling is currently under appeal; nevertheless, it is noteworthy for its recognition of reservations of rights expressed in natural language in the context of AI training. Ultimately, however, this issue will need to be resolved by the Court of Justice of the European Union (CJEU), where a preliminary reference is already pending from the Regional Court of Budapest concerning AI training and the scope of the text and data mining (TDM) exceptions (Like Company v Google, Case C-250/25).
Court of Appeal of Cologne (23.05.2025 – docket no. UKl 2/25 – Meta)
In this significant and controversial interim ruling, the Court held that Meta may use data from all adult Facebook and Instagram users in Europe to train its AI models, including its large language model LLaMA.
The Court found that such use does not violate the GDPR’s Article 9 prohibition on processing special category data, provided the user has “manifestly made the data public.” This threshold, according to the Court, is met when users post sensitive information (such as political views or health details) in public profiles or posts, thereby making it accessible to anyone, including via search engines.
Notably, even sensitive data of third parties contained in public posts were deemed admissible, unless the affected individuals actively request removal. The Court acknowledged some uncertainty on this point and indicated it may seek a preliminary ruling from the CJEU in full proceedings.
The judges justified their stance by referring to the EU AI Act’s recognition of the necessity of large-scale data—including text, images, and videos—for the development of generative AI. The use of web scraping, even where incidental sensitive data may be captured, was deemed foreseeable and broadly acceptable under current law.
Meta also demonstrated that it employs deidentification measures, including tokenization of names, emails, and identifiers. Although these steps fall short of full anonymization, the Court held they adequately reduce risk, even as facial images remain unblurred.
Importantly, users may opt out and request exclusion of their public posts from Meta’s AI training datasets.
-
To what extent is the prohibition of data scraping in the terms of use of a website enforceable?
The German Federal Supreme Court has dealt repeatedly with data scraping but, for procedural reasons, focused on unfair commercial practices and copyright (the database sui generis right), whereby scraping in violation of the website terms was not sufficient to qualify the collection and use of the data as an unfair commercial practice. Still, this does not exclude that data scraping could be validly prohibited by platform terms. Those terms would have to comply with the requirements for general terms and conditions under German law. Art. 11 Data Act explicitly permits the “application of technical protection measures (…), to prevent unauthorised access to data, and to ensure compliance (…) with the agreed contractual terms for making data available”, so, assuming it can be shown that terms prohibiting data scraping were validly agreed upon, it should be possible to limit data scraping.
-
Have the privacy authorities of your jurisdiction issued guidelines on artificial intelligence?
The Data Protection Conference (Datenschutzkonferenz – “DSK”) is composed of the federal and all 16 independent state data protection authorities. Already in April 2019, the DSK published the “Hambach Declaration on Artificial Intelligence”, which sets the following seven requirements for the use of AI:
1. AI must not objectify people.
2. AI must be used only for constitutionally legitimate purposes and must not circumvent the principle of purpose limitation.
3. AI must be transparent, accountable and explainable.
4. AI must avoid discrimination.
5. The principle of data minimization applies to AI.
6. Responsibilities for the use of an AI system must be identified and clearly communicated.
7. AI requires technical and organizational standards.
Following this, the DSK issued a position paper on “recommended technical and organizational measures for the development and operation of AI systems” in November 2019, which addresses the whole lifecycle of an AI system, starting with the design of the AI and its components, the process of selecting raw data to create training data, the training process itself, validation and examination of the trained system, the use of the AI system and, finally, feedback and optimization mechanisms.
In May 2024, the DSK issued guidance on artificial intelligence and data protection primarily for those responsible for implementing AI applications. The publication may serve as a guide for the selection, implementation and use of AI applications and provides an overview of relevant criteria to take into account for the data protection-compliant use of AI applications.
Building upon this, in June 2025, the DSK published a more detailed 28-page framework outlining technical and organizational measures for AI systems throughout their entire lifecycle—from design and development to implementation and operation. This guidance is particularly relevant for manufacturers, developers, and organizations deploying AI systems, ensuring that AI systems are developed and operated in compliance with data protection regulations.
-
Have the privacy authorities of your jurisdiction discussed cases involving artificial intelligence? If yes, what are the key takeaways from these cases?
Concurrently with the cited Court of Appeal of Cologne case rejecting the request for a preliminary injunction (see topic no. 13), the Hamburg Data Protection Authority initiated an “urgency procedure” under Article 66 GDPR, challenging Meta’s data processing practices and urging immediate action to protect users’ rights.
In June 2025, Berlin’s Commissioner for Data Protection and Freedom of Information, Meike Kamp, requested that Apple and Google remove the Chinese AI app DeepSeek from their app stores. The app was found to be transferring German users’ personal data to servers in China without ensuring adequate data protection measures, contravening EU data protection standards. Despite prior warnings, DeepSeek failed to comply with EU data transfer requirements, leading to the enforcement action.
German data protection authorities emphasize that AI systems must comply with GDPR principles, especially regarding consent and transparency when processing personal data. Explicit, informed consent is essential, particularly for AI training involving user data, as seen in cases like Meta’s data practices. Organizations must ensure lawful cross-border data transfers, with adequate safeguards, or risk regulatory enforcement, as demonstrated by the DeepSeek app case. Overall, Germany’s privacy regulators are actively monitoring AI use and expect developers to prioritize accountability, fairness, and legal compliance throughout the AI lifecycle.
-
Have your national courts already managed cases involving artificial intelligence? If yes, what are the key takeaways from these cases?
German courts have addressed significant cases involving artificial intelligence (AI) in areas such as copyright, data protection, and patent law. In addition to the LAION and Meta cases (see topic no. 13), the Federal Court of Justice (Bundesgerichtshof) ruled on the question of whether an AI system can be named as an inventor in a patent application (see topic no. 9).
In its final ruling in the DABUS case, the BGH unequivocally established that inventorship under German patent law is a legal status reserved exclusively for natural persons, even in the age of autonomous AI. This position aligns with those adopted by the EPO, the UK Supreme Court, and courts in the United States, Australia, and New Zealand.
By contrast, the LAION decision is a first-instance ruling, and while the Meta case was decided at the appellate level, it remains a summary judgment in preliminary injunction proceedings. Nonetheless, both cases demonstrate that German courts acknowledge the growing importance of AI and the practical necessity of training data – a practice with potentially far-reaching implications for the rights of third parties.
-
Does your country have a regulator or authority responsible for supervising the use and development of artificial intelligence?
At present, Germany does not have a single AI regulator. Instead, regulatory functions are distributed across existing authorities—such as the BfDI, Bundesnetzagentur, and BSI—depending on the context (data, competition, cybersecurity, etc.). However, under the EU AI Act, Germany is required to formally appoint a national supervisory authority and clarify regulatory responsibilities for AI development and deployment by August 2, 2025. The Bundesnetzagentur (Federal Network Agency) is expected to assume the role of Germany’s central AI regulator under the new framework.
-
How would you define the use of artificial intelligence by businesses in your jurisdiction? Is it widespread or limited? Which sectors have seen the most rapid adoption of AI technologies?
AI adoption among German businesses has accelerated markedly in recent years, transitioning from a niche technology to a mainstream tool across various sectors. As of 2025, according to an ifo survey, approximately 40.9% of German companies have integrated AI into their operations, a considerable increase from 27% the previous year, reflecting a significant shift towards digital transformation.
The survey points to differences between sectors.
“Companies in advertising and market research use AI particularly often, with the figure now at 84.3%. IT service providers (73.7%) are driving the use of intelligent systems at full speed. At 70.4%, the automotive industry is also relying heavily on data-based processes in production. Around every second company in the chemical industry and among manufacturers of machinery and equipment uses artificial intelligence. The hospitality sector (31.3%), food and beverage manufacturers (around 21%), and textile producers (18.8%) are still more reluctant about using AI.”
There is also a clear correlation with company size:
“While 56% of large companies use AI, the figure is 38% for small and medium-sized companies, and only 31% for microenterprises. Nevertheless, growing interest can also be seen among smaller companies, with many of them in the planning or discussion phase.” (Source: ifo Business Survey, 16 June 2025)
-
Is artificial intelligence being used in the legal sector, by lawyers and/or in-house counsels? If so, how? Are AI-driven legal tools widely adopted, and what are the main regulatory concerns surrounding them?
The legal sector in Germany is increasingly embracing AI tools, both in private law firms and in corporate (in-house) legal departments.
- Legal Research & Q&A: Generative AI tools – often trained on legal databases – support lawyers in researching legal questions, drafting memos, and summarizing case law using natural language queries. In particular, legal publishers are starting to offer AI tools, incorporating them into their legal databases or offering specific LLMs trained on legal commentaries.
- Contract Review & Due Diligence: Traditional AI tools streamline M&A and compliance workflows by analyzing large contract sets, extracting key clauses, and flagging anomalies, significantly accelerating document review.
- Document Drafting & Automation: Generative AI assists with first drafts of legal documents (e.g. contracts, cease-and-desist letters), while enhanced automation platforms integrate firm-specific clause libraries for tailored drafting.
- Summarization & Translation: AI is widely used to condense complex documents—contracts, court rulings, evidence—and to compare text versions. Machine translation tools help manage multilingual matters, though quality checks remain essential.
Regulatory Considerations for AI Use in the German Legal Sector
Confidentiality and Professional Secrecy
German lawyers are bound by strict confidentiality under Sec. 43a(2) BRAO (Federal Lawyers’ Act) and Sec. 203 StGB (Criminal Code). The Federal Bar (BRAK) has clarified that this duty fully applies when using AI. Sensitive client data may only be shared with AI providers under strict conditions—ideally anonymized, using abstract prompts, and only when necessary. Firms must have proper data processing agreements in place, and the use of free public AI tools (e.g. ChatGPT) is discouraged unless adequate safeguards are ensured.
Data Protection and GDPR Compliance
AI tools must comply with the GDPR and German data protection laws. This includes assessing the legal basis for data processing, especially when personal data may be transferred to non-EU providers. The Datenschutzkonferenz warns that many AI systems store data externally, posing risks to both privacy and legal confidentiality. Firms are advised to use secure, EU-hosted AI solutions, with strict access and anonymization protocols.
Duty of Care and Oversight
Lawyers remain fully responsible for any AI-assisted work. Under Sec. 43 BRAO, the use of AI does not absolve the lawyer’s duty of independent judgment. The BRAK emphasizes that all AI-generated outputs must be reviewed and validated by the attorney to ensure legal accuracy and professional integrity. Blind reliance on AI, especially for drafting or legal advice, is considered unethical and potentially negligent.
-
What are the 5 key challenges and the 5 key opportunities raised by artificial intelligence for lawyers in your jurisdiction?
Among the 5 key challenges, we see:
- Data privacy and security: AI tools must adhere to strict GDPR requirements and discretion obligations in attorneys’ code of conduct, ensuring data protection and privacy.
- Costs of reliable AI systems: Developing or licensing robust AI systems can be costly, posing significant challenges for small and medium-sized firms that handle standard cases well-suited for AI.
- Lack of knowledge and need for adoption and integration: Highly qualified lawyers and engineers may resist AI, doubting the technology’s reliability due to unclear functionality and unknown training data. Lawyers must seek knowledge and develop a deeper understanding of the possibilities and limitations of AI systems, also to prevent false trust in the system and misinformation caused by training data biases or inaccurate AI content.
- Ethical and moral considerations: AI systems can perpetuate or exacerbate biases, leading to potential discrimination. Lawyers must ensure that AI applications are transparent, adhere to ethical standards and do not violate anti-discrimination laws.
- Loss of business: Simple legal advice, consulting, and routine tasks may be handled by AI, potentially reducing the need for human lawyers and driving down fees.
Among the 5 key opportunities, on the other hand, we see:
- Countering workforce shortages and addressing talent gaps: AI may help mitigate the shortage of qualified legal professionals, reducing personnel costs and filling skill gaps.
- Managing complexity: AI can assist in managing increasing amounts of case law, knowledge, and data, facilitating a more effective legal practice and supporting more informed, strategic decisions.
- Focusing on strategic work: AI can automate tedious tasks, allowing lawyers to focus on more complex legal work.
- Increasing efficiency: AI can enhance efficiency, improving responsiveness and aligning with clients’ budget expectations and demands for quick turnarounds.
- Facilitating creative solutions and enhancing work quality: AI can inspire new, creative legal solutions, potentially leading to higher-quality work products and innovative legal strategies.
-
Where do you see the most significant legal developments in artificial intelligence in your jurisdiction in the next 12 months? Are there any ongoing initiatives that could reshape AI governance?
Over the next 12 months, Germany’s most significant legal development in artificial intelligence will be the implementation of the EU AI Act, which mandates new oversight structures and strict compliance requirements for high-risk AI systems. Germany plans to designate the Federal Network Agency (BNetzA) as the central AI supervisory authority, while specialized agencies will oversee sector-specific applications. A national AI implementation bill is currently being drafted to operationalize the AI Act domestically, including provisions for human rights oversight and regulatory coordination. At the EU level, a voluntary Code of Practice for general-purpose AI is also in development, though its release has been delayed until late 2025.
Germany’s 2025 coalition agreement reaffirms the country’s strategic ambition to lead in AI innovation through public-private collaboration and infrastructure investment. A key component of this effort is the InvestAI initiative, which includes plans to build AI gigafactories to support large-scale model training. Germany is also participating in the formation of an international AI Safety Institute network to develop testing protocols and ensure responsible AI deployment. Collectively, these regulatory and strategic initiatives are set to reshape Germany’s AI governance, aiming to balance technological leadership with robust legal and ethical safeguards.