Artificial Intelligence and Data Protection: Call for Firm GDPR Enforcement in the Age of Algorithms
Bradu Neagu & Associates
Cristina Săvulescu – Head of Whistleblowing & GDPR, Bradu Neagu & Associates
In the context of the exponential development of artificial intelligence (AI), rigorous enforcement of the General Data Protection Regulation (GDPR) is becoming more important than ever. The emergence of advanced algorithmic systems – from facial recognition to conversational chatbots – has brought innovative benefits, but also significant risks to privacy and fundamental rights. The launch of generative AI services such as ChatGPT (considered the fastest-adopted app in history) has triggered increased attention from regulators, highlighting the need for GDPR principles to be strictly applied in the algorithmic era. The European Union is therefore strengthening its legislative framework (through the preparation of the AI Act) and gearing its supervisory authorities towards increased cooperation to ensure compliance with data protection rules in the development and use of AI systems.
AI data protection risks
The widespread deployment of artificial intelligence systems poses a number of inherent risks
to the protection of personal data. Among the most important are:
- Massive data collection and lawfulness of processing – Many AI models are trained on huge volumes of data (including personal data) processed automatically, sometimes without an adequate legal basis and without informing or obtaining the consent of data subjects. The Irish DPC, for example, has noted the tendency to use vast amounts of personal data in the training phase of algorithms without the knowledge or consent of the individuals concerned. This is contrary to the principles of lawfulness, fairness and transparency imposed by the GDPR.
- Lack of transparency and explainability – Complex algorithms (in particular “black box” machine learning models) make it difficult for operators to explain in an understandable way how they make decisions or process data. This opacity makes it difficult for data subjects to exercise their rights and conflicts with the information obligations under the GDPR (Art. 13-14). The authorities emphasize that, both at the development and at the implementation stage, operators must ensure transparency – including explaining the logic behind automated decisions – in order to comply with the GDPR and maintain public trust.
- Data inaccuracy and bias – The quality of input data directly influences the quality of AI results. Erroneous, incomplete or biased training data can lead to factual errors and systematic biases in algorithmic decisions, affecting individuals or groups. In the ChatGPT case, for example, the Italian authority found that the information provided may be inaccurate and not correspond to the actual data, violating the principle of accuracy (Art. 5(1)(d) GDPR). Algorithmic bias may also lead to unlawful discrimination, which conflicts both with the GDPR (principle of fairness) and with other non-discrimination rules.
- Indefinite storage and unintended reuse of data – AI models can retain large volumes of data for long periods of time, sometimes beyond what is necessary, in violation of the GDPR’s storage limitation principle. Furthermore, the personal data initially used to train a model could later be reused in new contexts without the data subjects’ awareness or consent. For example, an AI model trained on personal data may be shared or sold to third parties who use it for other purposes, exposing the data to unanticipated processing. This contravenes both the purpose limitation principle and the obligation to obtain a new legal basis for secondary uses.
- Impact on children and vulnerable groups – Without adequate safeguards, AI can expose minors to serious risks. One example is the Replika app, an AI chatbot used as a “virtual friend”: in February 2023, the Garante (the Italian authority) found that the lack of age verification allowed minors access to inappropriate interactions and data processing without parental consent. ChatGPT was also temporarily blocked in Italy because it did not have an effective mechanism to restrict access for those under 13. Protecting children in AI environments is essential, and failure to comply with GDPR requirements on parental consent and on minimizing data about minors can result in severe sanctions.
- Automated decisions with legal or similarly significant effects – More and more organizations are using AI for profiling and automated decision-making (e.g. resume filtering, credit granting, risk assessments) that may significantly affect individuals. The GDPR (Art. 22) prohibits decisions based solely on automated processing that produce legal or similarly significant effects on individuals, in the absence of safeguards (such as explicit consent or contractual necessity, plus the right to human intervention). The use of AI for such purposes raises questions of lawfulness (is there a valid basis or applicable exception?), as well as the obligation to provide for the possibility of human intervention and to conduct a Data Protection Impact Assessment (DPIA) prior to implementation. Failure to comply with these requirements may constitute a serious breach of the GDPR – imagine, for example, a supervisory authority fining a bank for automated lending decisions taken without a prior assessment and without proper customer information (an illustrative hypothetical case).
In essence, AI risks in the area of personal data stem from the volume and complexity of processing, which may be beyond the individual’s traditional control over his or her data. These risks call for a firm application of the GDPR principles – lawfulness, transparency, fairness, minimization, accuracy, purpose limitation and storage limitation – from the very beginning of the design of AI systems (privacy by design). The authorities emphasize that the development of AI is compatible with data protection only if these principles are strictly respected: “the development of AI systems is compatible with the challenges of privacy protection… only then will citizens be able to put their trust in these technologies”. For operators, this translates into an obligation to identify and mitigate risks through appropriate technical and organizational measures (including impact assessments and regular audits), while ensuring that data subjects’ rights (access, erasure, objection, etc.) can be exercised even in complex algorithmic contexts.
Recent examples of AI enforcement
European data protection authorities have demonstrated a firm and proactive approach in recent years in sanctioning GDPR breaches related to AI-based systems. Below, some relevant and exemplary cases are presented, illustrating the enforcement trend and serving as lessons for the technology industry:
Clearview AI – An emblematic case on facial recognition and massive biometric data collection without consent. Clearview AI, an American company, scraped more than 20 billion images from the internet (including social media) to build a facial identification tool, which was then sold to law enforcement authorities. This practice was considered a blatant breach of the GDPR in multiple EU jurisdictions: the Italian authority (Garante) fined Clearview €20 million, banned it from further processing of Italian individuals’ data and ordered the deletion of all illegally collected data. Similarly, the French authority (CNIL) in October 2022 imposed a maximum fine of €20 million and ordered the company to delete the data of French citizens from its database. For non-compliance with these orders, the CNIL subsequently imposed an additional penalty of €5.2 million (May 2023). Other authorities, such as the UK’s ICO and the Greek authority, have also sanctioned Clearview, with fines totaling tens of millions of euros across Europe. The Clearview case highlights that the processing of facial images (sensitive biometric data) without a valid legal basis and without informing data subjects is considered unlawful – and the fact that the controller was non-EU did not exempt it from liability, as the GDPR has extraterritorial effect (Art. 3(2)). The message from the authorities is clear: mass biometric surveillance technologies, if they violate privacy rights, will be severely sanctioned in order to protect citizens from a “surveillance society” incompatible with European values.
OpenAI – ChatGPT – The emergence of the ChatGPT generative language model (launched by OpenAI) tested the limits of GDPR application to generative AI and triggered the first coordinated actions at European level. In March 2023, after a security breach incident that exposed some users’ data, the Italian data protection authority issued an urgent order temporarily suspending ChatGPT in Italy. The Garante invoked several alleged breaches of the GDPR by OpenAI: failure to adequately inform users and the individuals whose data was collected (breach of Art. 13-14), the absence of a legal basis for the processing of personal data for the purpose of algorithm training (legitimate interest could not be invoked in this context), the inaccuracy of the processed data (the model often providing erroneous information about individuals) and the failure to implement an age verification mechanism (allowing minors under 13 access to potentially inappropriate content). This was the first case of a nationwide restriction of an AI service on data protection grounds, marking a bold approach by the Garante, which gave the company 20 days to remedy the deficiencies.
OpenAI responded by quickly implementing a number of measures: it updated its privacy policies and user notices, introduced a form through which individuals (including non-users) can request the deletion of their data from the training set, and implemented a (self-declaratory) age filter to block access by minors. As a result, on April 28, 2023, the Italian authority lifted the ban, allowing ChatGPT to resume in Italy, conditional on continued compliance with the agreed compliance plan. This plan included, among other things, carrying out a public information campaign in Italy about how ChatGPT works and how data is used to train algorithms – a novel measure designed to increase transparency towards the public.
In parallel, other European authorities launched similar actions: for example, the Spanish authority (AEPD) opened its own investigation into OpenAI in April 2023 for potential breaches of the GDPR, making Spain the second country to formally initiate an investigation in this regard. At the same time, the EDPS asked the European Data Protection Board (EDPB) to put the ChatGPT topic on the European agenda – an initiative that led to the creation of a task force at EDPB level to coordinate the authorities’ actions on generative AI tools.
The Italian investigation into ChatGPT was finalized in December 2024, when the Garante announced important results: OpenAI was fined €15 million for violating the GDPR. The authority concluded that OpenAI had processed users’ personal data (and that of other individuals, extracted from online sources) without an adequate legal basis for algorithm training and in breach of its transparency obligations towards individuals. The failure to implement a robust age verification system, exposing minors to inappropriate content, was also noted. In addition to the fine, the Garante imposed an obligation on OpenAI to run a six-month campaign on Italian media channels to educate the public on how ChatGPT collects and uses data – a measure designed to improve transparency and public awareness. OpenAI has announced that it will contest the sanction as “disproportionate”, while emphasizing that it has taken compliance measures and that its revenues in Italy were modest compared to the amount of the fine. The ChatGPT case confirms that AI models that heavily process personal data fall directly under the scope of the GDPR, and innovation does not exempt companies from fundamental legal obligations: transparency, consent (or another valid basis), protection of minors and data security.
Replika – Another relevant example is the authorities’ intervention against AI chatbots designed to interact with users, which can process sensitive data and affect vulnerable groups. Replika is an AI app that provides users with a “virtual friend” with whom they can converse freely, including on intimate topics. In February 2023, the Garante (Italy) issued an emergency measure banning Replika from processing the personal data of users in Italy. The decision came following complaints about the negative impact on minors (who could access the app and receive age-inappropriate replies) and the finding that Replika was processing sensitive data (emotions, psychological states) without any legal basis or proper consent mechanisms. The Italian authority considered that such a service, although based on innovative AI, cannot operate in a vacuum of legal accountability: the operator is obliged to comply with the GDPR from the design phase, by ensuring the filtering of underage users, clearly informing adults about how their conversations are stored and used, and obtaining explicit consent if special categories of data are processed (e.g. mental health data inferable from chats). The Replika case highlights that “experimenting” with AI on real users, without safeguards, calls for prompt intervention by the authorities to prevent potential emotional or privacy abuses, especially when minors are involved.
X (formerly Twitter) – Social networks are a rich source of personal data used to train AI algorithms, and the practices of collecting and using this data are closely scrutinized by authorities. A recent example concerns the X/Twitter platform and its plans to use users’ public posts to train AI models (including its own model, Grok). In July 2024, under new management, X changed its privacy policy so that EU users would have to explicitly opt out if they did not want their public posts to be used by the company for AI training purposes. The Irish authority (DPC), being the lead authority for X in the EU, expressed concern that such massive processing of European users’ data could violate the GDPR (especially the principles of lawfulness and transparency). The DPC initiated an emergency action and referred the matter to the Irish High Court, arguing that these changes could violate citizens’ privacy rights. In August 2024, an unprecedented agreement was reached: X agreed to suspend all collection and use of EU user data for AI training. Essentially, the company promised to stop including EU citizens’ tweets in the datasets for its Grok model unless it obtained their consent.
The DPC welcomed the outcome, stressing that EU users’ rights had been protected. However, the case did not end there. The Irish authority raised the issue with the EDPB, calling for a unified EU-wide position on the legality of using public social media posts to train AI. The EDPB has been asked to determine whether or not X’s actions constituted a breach of the GDPR (proceeding under Art. 64-65 GDPR). At the same time, the NOYB organization filed separate complaints in several countries against xAI (the sister company developing Grok), alleging violations of 16 articles of EU privacy law. X called the DPC order “unjustified”, arguing that it discriminates against the platform relative to other companies, since many AI companies mine data from the internet. Regardless of these objections, the X/Twitter case demonstrates that European authorities are willing to take immediate legal action to prevent massive data processing without a legal basis, even forcing tech giants to comply. It also highlights the importance of cooperation between authorities at EU level to uniformly tackle such practices affecting users in multiple countries.
Meta (Facebook/Instagram) – Major social media and technology platforms have also been forced to adapt their business models as a result of GDPR enforcement, especially around algorithmic profiling for targeted advertising and international data transfers – issues closely related to the use of AI in personalization. A turning point was the January 2023 decision by the Irish Data Protection Commission (DPC), in conjunction with the EDPB, regarding the legality of personalized ads delivered by Meta on Facebook and Instagram. Following strategic complaints (made by activist Max Schrems’ organization NOYB), Meta was found to have forced user consent for behavioural advertising, effectively hiding the data processing under the guise of “necessity for the performance of the contract” (acceptance of the terms of use). The EDPB determined in a binding decision that this practice is unlawful – the personalization of advertisements is not necessary for the provision of the contractual social network service, but is a separate processing operation that requires a different legal basis (in essence, explicit consent). As a result, the DPC issued the final decision fining Meta €390 million (€210 million for Facebook and €180 million for Instagram) and ordered the company to bring its processing in line with GDPR requirements within 3 months. This major penalty forced Meta to phase out its “contract-based” model for personalized advertising in the EU – by the end of 2023 Meta announced opt-out options from targeted advertising and its intention to seek consent from European users for such data processing.
Also in the Meta case, another high-profile action concerned transatlantic data transfers (the issue of GDPR compliance of the transfer of Europeans’ data to the U.S., following the invalidation of the Privacy Shield mechanism by the CJEU’s Schrems II decision). After a lengthy process of investigation and coordination between authorities, in May 2023 the Irish DPC – bound by the EDPB through the dispute resolution procedure – fined Meta Platforms a record €1.2 billion, the largest GDPR fine ever. The sanction was justified by the seriousness of the breach: Meta continued to transfer personal data of EU Facebook users to the US in a systematic and repetitive manner without adequate safeguards, in breach of Article 46 of the GDPR. The company was also ordered to suspend any future transfers and to bring its operations into compliance with Chapter V of the GDPR (protection of data transferred outside the EEA) within 6 months. EDPB Chairwoman Andrea Jelinek emphasized the significance of this outcome: “The unprecedented fine is a strong signal that serious breaches have far-reaching consequences.” The message to the industry is that even global corporations are not above European data law – on the contrary, the greater the volume of data and the impact on citizens, the stronger the authorities will react. The Meta case highlights that compliance with data protection requirements needs to be built into the way business algorithms operate, whether it is behavioral targeting or international transfers in global AI networks.
The above examples, along with many others (such as investigations into content recommendation algorithms or AI-based credit scoring systems), confirm a clear trend: EU data protection authorities do not hesitate to apply unprecedented maximum sanctions and remedies when AI-based technologies violate the GDPR. From fines in the hundreds of millions of euros to orders to suspend services, the palette of enforcement tools is being used to the full to ensure that individuals’ rights are also protected in the algorithmic digital age. These actions have a dual role – to sanction illegalities and to act as a deterrent (an “enforcement by example” effect)
– and send a strong call to developers and operators: innovation must be responsible, and compliance with the GDPR is a necessary minimum, not an optional obstacle.
Legal implications for data controllers and AI developers
The increasing enforcement of the GDPR in relation to AI technologies brings to the forefront a number of practical legal implications that data controllers and developers of algorithmic solutions need to consider. In this section, we look at the main legal obligations and compliance issues highlighted by the cases discussed and authorities’ guidance.
- Choice and documentation of the legal basis for processing. AI systems that process personal data (whether in the training phase or in the use/service phase) must rely on a valid legal basis under Art. 6 GDPR. Recent cases show that inappropriate reliance on legitimate interest or performance of a contract will be challenged by authorities. For example, training an AI model on data collected from public sources cannot simply be based on the commercial interest of the developer if it disproportionately affects the rights of individuals (as established in the Clearview AI case, where the “legitimate interest” invoked by the company was rejected). Similarly, the personalization of advertisements through profiling cannot be hidden under a contractual pretext, but requires explicit consent. The practical implication is that operators need to conduct a careful analysis of the legal basis: in many cases, the informed consent of data subjects (Art. 6(1)(a)) will be the most appropriate, especially if the processing is invasive or not strictly necessary for the underlying service. Where legitimate interest is invoked (Art. 6(1)(f)), it is imperative to carry out a rigorous balancing test and implement safeguards (e.g. opt-out options, data minimization) to pass the authorities’ scrutiny. At the same time, if special categories of data are involved (Art. 9, such as biometrics or health data inferred by AI), a valid exception (explicit consent, substantial public interest provided by law, etc.) must also be identified, otherwise processing is prohibited.
- Compliance with the principle of transparency and adequate information. Operators that develop or use AI systems must provide clear, accessible and complete information about the processing carried out, as required by Articles 13 and 14 GDPR. This includes: what data the algorithm collects and uses, for what purpose, what the legal basis is, how long the data is stored, to whom it is disclosed (including whether it is shared with other providers or transferred outside the EU) and the existence of any significant automated decision-making. The cases show that a lack of such transparency is severely penalized: OpenAI was fined mainly for failing to inform individuals that it was using their online data to train ChatGPT, and Meta was penalized because its terms of service did not clearly explain to users how their data is processed for advertising purposes. To comply, companies should draft and make available AI-specific privacy policies, possibly explanatory FAQ sections or even interactive tools that show users, in an understandable way, the general logic of algorithmic processing (within the limits of trade secrets). Also, in the case of data collected from third-party sources (e.g. web scraping), the obligation to inform data subjects from indirect sources (Art. 14 GDPR) becomes applicable, which implies notifying them (if possible) or publishing general information and offering the possibility to opt out. Failure to comply with these requirements can lead to data erasure orders and fines (CNIL, for example, ordered Clearview to delete citizens’ data for failing to inform them and honor their rights).
- Ensuring data subjects’ rights in the context of AI. Even though the technology is complex, data subjects do not lose their GDPR rights over their data processed by AI systems. These rights include, among others: the right of access to one’s own data (including the data entered into the system and, to a certain extent, the outputs relating to it), the right to rectification of inaccurate data, the right to erasure (especially if the data has been processed unlawfully), the right to object to processing based on legitimate interest, and the right not to be subject to an automated decision (under the conditions of Art. 22). AI operators must put in place procedures and technical means to facilitate the exercise of these rights. For example, in the case of a generative model, they should provide users and non-users with a channel through which they can request the removal of information about them from the training dataset or block the generation of content about them. Clearview AI severely violated the rights of access and objection – ignoring people’s requests to check and remove their photos – which weighed in the authorities’ decisions to impose sanctions and bans. Effective implementation of rights (including through automated processes where data volumes are large) is therefore a key obligation. Moreover, in the context of automated decisions, human intervention on demand must be guaranteed, together with the possibility for the person to express his or her point of view or contest the decision (Art. 22(3)), even if the original decision was generated by the AI.
- Conducting impact assessments (DPIA) and risk management. The GDPR requires a Data Protection Impact Assessment (DPIA) to be carried out when a type of processing is likely to result in a high risk to the rights and freedoms of individuals (Art. 35). Many AI applications fall into this category – e.g. use of new or innovative technologies, large-scale processing of sensitive data, systematic profiling – situations which explicitly appear on the lists of processing operations requiring DPIAs. Developers and operators should therefore formally assess the risks to personal data and document mitigation measures before implementing an AI system. A robust DPIA will consider, among other things: the proportionality of processing (is AI necessary for the proposed purpose?), measures to reduce data volume (e.g. anonymization or synthesization of training data where possible), the potential impact on individuals in case of algorithm failure, the risk of discrimination or exclusion, and plans to remedy any potential harms. Supervisors may require prior consultation (Art. 36) if the DPIA indicates high residual risk. Failure to conduct a DPIA when it was required constitutes a breach in itself – for example, if a bank implements an AI credit scoring system without a DPIA, the authority can impose fines even independently of the existence of an incident. In addition, the privacy by design approach (Art. 25) requires AI developers to integrate data minimization and protection measures (e.g. pseudonymization of training data – see the sketch after this list – inclusion of erasure mechanisms, auditability of algorithmic decisions) from the design phase. The compliance measures presented by OpenAI to the Garante – such as the introduction of erasure and information options – can be seen as the belated result of a DPIA that was initially lacking. Ideally, such measures should be proactively anticipated by operators, not reactively imposed by authorities after a breach.
- Responsibility and delineation of roles in the AI chain. The AI ecosystem often involves several actors: the developer of the underlying model (provider), the entity that customizes it or offers it as a service to other operators (secondary provider or processor), and the end operator that integrates it into its business (user). Under the GDPR, it is essential to correctly establish the roles of each actor – data controller, processor or joint controllers – in order to allocate compliance responsibilities. For example, if a company uses an AI service from a third-party (cloud) provider that processes the company’s customer data, then the company is the data controller and the AI provider is the processor, requiring an Art. 28 contract that imposes data protection clauses. On the other hand, if the provider reuses the data of multiple customers to improve its own model, it could become a joint controller with those companies, sharing responsibility. CNIL’s recent guidance emphasizes the importance of determining the applicable legal regime and the legal classification of actors (controller vs. processor) at the outset of AI projects. Lack of clarity in this regard can lead to liability gaps: for example, after the ChatGPT security incident in March 2023, it was initially unclear whether OpenAI (as provider) would notify end-users, and companies using the ChatGPT API also had to assess whether the incident affected their own breach notification obligations. Therefore, all parties involved in an AI system need to understand and document their role under the GDPR, enter into processing agreements where appropriate, and each ensure their respective share of compliance.
- International data transfers and foreign providers. Many providers of advanced AI solutions are based outside the European Economic Area (e.g. OpenAI in the U.S.). Under Chapter V of the GDPR, the transfer of personal data to third countries is restricted and requires an adequate level of protection. The Meta case – with its €1.2 billion fine – underlines the consequences of non-compliance. For AI developers, the implication is that if they train models on personal data collected in the EU and stored on servers outside the EU, they must have legal mechanisms in place such as approved Standard Contractual Clauses (SCCs) or Binding Corporate Rules (BCRs), or ensure that the recipient is in a country with an adequacy decision. Without these, processing is illegal. Authorities are also vigilant about government access to data – for example, if a US AI provider can allow US authorities access to Europeans’ data, this (under the Schrems II case law) must be counterbalanced by additional safeguards or may lead to a ban on the use of that service in the EU. Any cross-border flow of data in the context of AI must therefore be carefully mapped and covered by contractual clauses and case-by-case assessments (so-called transfer impact assessments, TIAs). In addition, if a model possibly containing personal data (or from which stored personal data could be extracted) is intended to be published or open-sourced, this would amount to a transfer to any global recipient, raising serious compatibility issues with GDPR rules.
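To make the privacy-by-design measures referred to above tangible, the following is a minimal sketch of keyed pseudonymization of direct identifiers before data enters a training pipeline. It is an illustration only, using Python’s standard library; the field names and key handling are hypothetical, and pseudonymized data remains personal data under the GDPR (Recital 26).

```python
# Minimal privacy-by-design sketch: keyed pseudonymization of direct
# identifiers before records enter a training dataset. Illustrative only.
import hmac
import hashlib

SECRET_KEY = b"replace-me-and-store-in-a-kms"  # hypothetical; never shipped with the dataset

def pseudonymize(value: str) -> str:
    """Deterministic keyed hash: same input -> same token; not linkable without the key."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

record = {"customer_id": "RO-000123", "email": "ana.pop@example.com", "amount": 149.90}
training_row = {
    "customer_id": pseudonymize(record["customer_id"]),  # direct identifier -> token
    "email": pseudonymize(record["email"]),              # direct identifier -> token
    "amount": record["amount"],                          # non-identifying feature kept as-is
}
print(training_row)
```

Because the tokens are only linkable to identities through the key (kept outside the dataset, e.g. in a key management service), deleting or rotating the key weakens that link and can support erasure workflows – although whether the result qualifies as anonymization must be assessed case by case.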
In the light of the above, it is clear that AI developers and operators in the EU (or targeting EU individuals) need to include GDPR compliance as an integral part of their development and business strategy. Moreover, data-related legal risk management should be a central concern – not only to avoid hefty fines, but also to ensure sustainability and social acceptance of AI products. As recent guidelines show, compliance with the GDPR is not a barrier to innovation, but a prerequisite for trusted AI aligned with European values. Operators who ignore these obligations expose themselves not only to sanctions, but also to the loss of trust of users and partners, which in the long run can be as damaging as a financial penalty. By contrast, those that implement privacy by design principles, that transparently document and justify how they use data and cooperate with the authorities, will be better positioned in an increasingly demanding legal and reputational environment.
Recent guidance from EU authorities on AI and data protection
To support both industry and citizens’ rights, EU data protection authorities have stepped up efforts in the last two years to clarify the rules and best practices on the application of GDPR in the AI context. This recent guidance provides a compliance framework and outlines regulators’ expectations. Below, we review the most notable initiatives and guidance:
- CNIL action plan and guidelines (France) – CNIL, the reputed French data protection authority, started a broad program focused on responsible AI as early as 2022. In May 2023, CNIL published an AI action plan and launched stakeholder consultations to identify the challenges of applying the GDPR to AI. As a result, in October 2023 and later in June 2024, the CNIL issued the first official recommendations on the development of AI systems in compliance with the GDPR. The CNIL guidance emphasizes that “the development of AI systems can be reconciled with privacy” and provides principles and practical sheets (“AI how-to sheets”) for operators. Key recommendations include:
- Respect the purpose limitation: AI systems should be developed for specific and legitimate purposes, clearly defined from the outset – a vague purpose such as “improving the algorithm” is not considered valid. Any extension of the original purpose requires re-evaluation and possibly a new legal basis.
- Data minimization: it is recommended to use as little personal data as possible when training and operating AI. For example, the CNIL suggests anonymizing or synthesizing data whenever the purpose can be achieved in this way, reducing reliance on raw personal data.
- Determining roles and responsibilities: the guide provides criteria for determining who is a data controller and who is a processor in the different configurations of AI development, avoiding the gray area of processing without accountability.
- Choosing the right legal basis: the CNIL reiterates that in many situations of use of AI (especially those secondary to the main purpose), consent remains the preferable basis, and when legitimate interest is invoked, it must be well documented. Attention is also drawn to the processing of special categories of data in the context of AI (e.g. emotion recognition is considered as processing of sensitive data requiring special grounds).
- Individual rights: CNIL devotes specific recommendations to how data subjects’ rights can be facilitated in AI projects. It states that these rights must be respected both at the level of training datasets (individuals have the right to know whether and which of their data has been used) and at the level of models (individuals can request, for example, the deletion of their data from a trained model, where identification is possible).
- Impact assessments: for AI projects, the CNIL recommends early DPIAs, tailored to the specifics of the technology, and the involvement of the DPO at all stages of the project.
- Security and risk prevention: cybersecurity measures are also covered, as AI models can be the target of attacks (such as model inversion or membership inference, which can extract personal data from the model). Protection against unauthorized access and logging of access to training data are essential (a minimal illustration of membership inference follows this list).
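To show why the CNIL flags such attacks, below is a minimal, illustrative sketch of a loss-threshold membership inference test against a deliberately overfitted model. It assumes scikit-learn is available and uses purely synthetic data; real-world audits use considerably stronger attacks.

```python
# Membership-inference sketch: a model that memorizes its training set assigns
# lower loss to training members, which an observer can exploit.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_mem, X_non, y_mem, y_non = train_test_split(X, y, test_size=0.5, random_state=0)

# An unconstrained decision tree overfits heavily, making the leak easy to see.
model = DecisionTreeClassifier(random_state=0).fit(X_mem, y_mem)

def per_example_loss(model, X, y):
    # Cross-entropy of the true label under the model's predicted probabilities.
    probs = np.clip(model.predict_proba(X)[np.arange(len(y)), y], 1e-12, None)
    return -np.log(probs)

# Attack score: lower loss -> more likely to have been in the training set.
scores = np.concatenate([-per_example_loss(model, X_mem, y_mem),
                         -per_example_loss(model, X_non, y_non)])
labels = np.concatenate([np.ones(len(y_mem)), np.zeros(len(y_non))])
print("membership-inference AUC:", roc_auc_score(labels, scores))
# An AUC well above 0.5 means an observer can tell who was in the training
# data -- itself a disclosure of personal data if the set relates to individuals.
```

The design point is simple: the more a model memorizes its training examples, the more its behavior discloses who was in the training set; regularization, data minimization and, where feasible, differentially private training are the mitigations commonly cited.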
The CNIL’s recommendations are accompanied by concrete examples and case studies and are intended to provide a practical benchmark for AI companies. These guidelines essentially confirm the interpretation that the GDPR already provides a sufficiently comprehensive framework to cover most of the challenges brought by AI – the key is to apply the existing principles to new technological contexts. The CNIL has announced that it will continue to issue further sets of “factsheets” on specific topics (e.g. anonymization in the context of AI, assessment and prevention of algorithmic bias, etc.), thus strengthening the compliance arsenal available to professionals.
- Garante initiatives (Italy) – Although best known for its enforcement actions, the Italian authority has also provided informal post-factum guidance following the ChatGPT and Replika cases. After the block on ChatGPT was lifted, the Garante published details of the requirements imposed on OpenAI, which can serve as best practices: implementing an online form through which anyone (including non-users) can object to the use of their data in training; including dedicated information in the service interface that conversational data can be used to refine the model; conducting a public education campaign on AI; and accelerating the development of a robust age verification system. The Garante also initiated in 2023 a national working group on AI, bringing together experts to develop guidelines on AI risk assessment, which will align with the upcoming EU regulation. Italy emphasizes the importance of cooperation between the data protection authority and other sectoral authorities (e.g. the Competition Authority, the Communications Authority) when AI affects related areas, an approach that anticipates the cooperation mechanisms in the AI Act.
- DPC Ireland Guidance on AI and LLMs – As the authority responsible for the oversight of large technology companies, the Irish DPC published on July 18, 2024 an analysis and guidance on “AI, Large Language Models and Data Protection”. The document (in the form of an article on the DPC blog) describes in accessible language the risks involved in using LLM products, both for organizations and individuals, and provides practical recommendations. The DPC highlights risks such as:
- Excessive use of personal data to train models without the knowledge or consent of the data subjects.
- Accuracy and data retention issues – models that may generate erroneous content about people or that may retain data entered by users longer than necessary.
- The possibility of disseminating personal data to third parties by sharing the models (without individuals knowing or consenting to the new purposes).
- Bias and discrimination resulting from incomplete or biased datasets, which may lead to incorrect decisions affecting individuals or groups.
For organizations that plan to use AI tools, the DPC recommends a diligent approach: conduct an internal audit of the data used by the tool (what data are we inputting? where does this data end up? is it stored by the provider? is it used for retraining?), review the terms of the provider (whether they act as a controller or processor, and whether they offer privacy guarantees), and implement control processes. For example, the DPC suggests that if employees enter customers’ personal data into a service such as ChatGPT, the organization becomes the controller and must ensure GDPR compliance – possibly through policies prohibiting the entry of sensitive or identifiable data into such insecure cloud services (a simple technical control of this kind is sketched below). The Irish guidance also emphasizes the need to facilitate the rights of individuals: companies using AI must be prepared to respond if a person asks them “what data about me did you put into an AI model and what did you do with it?”. In addition, the DPC mentions the importance of accountability – i.e. documenting decisions related to AI implementation, training staff on the ethical use of new tools, and updating the organization’s data protection policies to reflect any AI processing.
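As an illustration of such a policy control, here is a minimal sketch of a pre-submission redaction filter that strips obvious direct identifiers before text leaves the organization. The patterns are deliberately simplistic and purely illustrative; names and other free-text identifiers would require NER-based tooling on top, and no real AI-service API is invoked.

```python
# Minimal sketch of a pre-submission redaction filter. Patterns are
# illustrative, not exhaustive.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"(?<!\w)\+?\d[\d\s().-]{7,}\d"),
    "IBAN":  re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a typed placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Customer Ana Pop, ana.pop@example.com, +40 721 123 456, complained about billing."
print(redact(prompt))
# -> Customer Ana Pop, [EMAIL], [PHONE], complained about billing.
# Only the redacted text would then be sent to the external AI service.
```

Such a filter does not replace the legal analysis – the organization may still act as controller for whatever it submits – but it materially narrows the personal data actually exposed to the provider.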
- Actions by other European authorities – Many European authorities and bodies have recently issued statements and recommendations on AI:
- The European Data Protection Board (EDPB) has shown increased interest: after the creation of the ChatGPT taskforce in 2023, in February 2025 the EDPB decided to expand its mandate into a general taskforce on AI enforcement. The aim is to ensure cooperation between authorities in investigating AI systems and to prepare common guidelines. Moreover, in July 2024 the EDPB adopted a Statement on the role of Data Protection Authorities in the AI Act, emphasizing their expertise in fundamental rights and recommending that Data Protection Authorities be designated as Market Surveillance Authorities (MSAs) for AI areas involving the processing of personal data. At the same time, the EDPB insisted on the need to establish clear procedures for cooperation between future AI supervisory authorities and data protection authorities, as well as close cooperation between the EU AI Office and the EDPB itself. These positions foreshadow how the GDPR regime and the future AI Act regime will interact (as detailed in the next section).
- The Spanish authority (AEPD) published as early as February 2020 detailed guidance on adapting systems incorporating AI to the GDPR (aimed at engineers and IT managers). The AEPD has also developed free tools, such as an impact assessment generator for AI systems and a methodology for identifying bias, showing a very practical approach.
- The Dutch authority (AP) published in 2023 a report on the risks and impact of algorithms, recommending the involvement of independent experts in the evaluation of government AI systems as part of transparent decision-making.
- ICO (UK) – even though the UK is no longer part of the EU, the ICO’s experience is worth mentioning: the ICO issued in 2020 an extensive AI and Data Protection Guidance, which remains a technical landmark, addressing topics such as fairness, explanations for algorithmic decisions and machine learning security. Many EU authorities have leveraged these concepts in their own guides.
- The Council of Europe (a separate organization from the EU) has been working on a treaty on artificial intelligence (the Framework Convention on AI), which includes strong references to data protection compliance and the role of specialized authorities. European views and standards thus align: data protection is recognized as a central element of the AI governance ecosystem.
Through these multiple guidelines – from the national to the EDPB level – a set of best practices and normative expectations is taking shape. The common message is that GDPR already provides the basic principles, and AI actors need to operationalize these principles in the design and use of systems. At the same time, the ground is being prepared for the future AI- specific legal framework (AI Act), with data protection authorities taking an active role in defining interpretations and synergies. For professionals in the field, familiarization with these guidelines is essential: they not only spell out the legal obligations, but also offer practical solutions (e.g. how to ensure algorithmic transparency, how to grant rights in a black box model, which internal procedures to implement when adopting an AI service). A rigorous bibliography, presented at the end of the article, brings together
the main documents – regulations, opinions, reports, decisions – which underpin these guidelines and can serve as a reference for further study.
GDPR – AI Act relationship: complementary regulations in a common ecosystem
As the European Union prepares to adopt the Artificial Intelligence Regulation (AI Act) – the first legal framework dedicated to AI at a global level – it becomes crucial to understand how it will interact with the GDPR and what practical implications will arise from the coexistence of the two regulatory regimes. In essence, the GDPR and the AI Act are complementary: they approach AI from different angles, but converge in the common goal of ensuring that the development and use of algorithms respect the fundamental rights of individuals.
Scope and objectives: the GDPR (in force since 2018) is a horizontal regulation, covering any processing of personal data, regardless of technology or sector, with the aim of protecting privacy and personal data. The AI Act, currently in the final stages of adoption (a political agreement on the text was reached in December 2023), is a sector-specific regulation focused on AI systems, regardless of whether they process personal data or not, with the main objective of ensuring that AI is safe, transparent, ethical and in line with EU values. The AI Act classifies AI systems by risk level (unacceptable, high, limited, minimal) and imposes differentiated requirements: from bans on certain practices (e.g. social scoring by public authorities, real-time biometric surveillance in public spaces – except in strictly defined cases) to strict compliance requirements for high-risk systems (such as those used in recruitment, credit, education, health, critical infrastructure, law enforcement, etc.), transparency requirements for limited-risk AI (e.g. the obligation to signal to users when they interact with a chatbot or when content is AI-generated) and, finally, voluntary codes of conduct for minimal-risk AI.
Overlaps and differences: In terms of overlap, any AI system that processes personal data (which in practice will be the case for many systems, especially high-risk systems that make decisions about individuals) will be subject to both regulations simultaneously. For example, an AI-based recruitment algorithm is considered high-risk by the AI Act, so it must fulfill requirements such as evaluating training data to reduce bias, keeping technical documentation, being transparent towards the user (the employer), and notifying candidates that they have been evaluated by an automated system. In parallel, the same algorithm, processing candidates’ CVs (personal data), falls under the GDPR: the company using it must have a legal basis (probably candidate consent or legitimate interest with safeguards), carry out a DPIA, and ensure the candidate’s right to request human intervention in the decision (as per Art. 22 GDPR). Compliance with the AI Act does not derogate from and does not guarantee compliance with the GDPR, and vice versa – operators will have to comply cumulatively with both sets of obligations. Incidentally, the AI Act contains an explicit clause that its provisions apply without prejudice to data protection rules (the initial articles of the draft reaffirm the applicability of the GDPR).
However, the AI Act also brings to the fore issues that the GDPR only implicitly addresses. For example, the AI Act’s requirement to ensure the quality of datasets used by high-risk systems (representativeness, correctness, absence of systematic bias) is complementary to the GDPR requirement of data accuracy, but goes further by requiring analysis of potential algorithmic bias. Similarly, the AI Act requires for high-risk systems the creation of a risk management system and extensive technical documentation (including audit logs), which are not directly required by the GDPR, but which will support demonstrability of compliance also from a data protection perspective (e.g. logs can be used to respond to access requests or to investigate a breach).
Another key element of the AI Act is the introduction of the concept of a European AI Office and national AI Market Surveillance Authorities (MSAs). These bodies will oversee compliance with the AI Act (similar to the role of data protection authorities under the GDPR). To avoid duplication of efforts and confusion, the need for close coordination between them has been recognized. The EDPB recommended that Data Protection Authorities (DPAs) be designated as MSAs for AI areas related to the processing of personal data and even for other high-risk systems affecting individuals’ rights. The EDPB also calls for the creation of formal mechanisms for the AI Office and the EDPB (as well as the future AI Board) to collaborate, share information and issue joint guidance on intersecting topics. In November 2024, the EDPB formally responded to a letter from the AI Office, reaffirming its commitment to cooperation and the importance of aligning requirements. Basically, we can expect that once the AI Act becomes applicable (with obligations phasing in over a roughly two-year transition period), some DPAs will become “dual regulators” – wearing two hats: one as DPA (applying the GDPR) and one as MSA (applying the AI Act) – at least in certain sectors (such as law enforcement, justice and border management, where the AI Act already explicitly states that DPAs will be co-regulators). This makes sense, as in these sectors (and beyond) most AI systems involve the processing of large volumes of personal data, and DPAs’ expertise in assessing the impact on individuals will be crucial.
Practical implications of the GDPR–AI Act duality: Economic operators and public entities developing or using AI in the EU will have to take both regulations into account simultaneously. This implies, in addition to the GDPR obligations discussed above, new obligations introduced by the AI Act, such as: registering high-risk systems in an EU database (for public transparency), achieving technical compliance (CE marking of high-risk AI, similar to CE for product safety), providing user guides enabling users to operate the system in a compliant way (including warnings on limitations), notifying authorities in case of serious AI incidents, etc. It is worth noting that AI Act violations will also be punishable by substantial fines (the draft provides for penalties of up to 6% of global turnover for certain serious offenses, similar to the GDPR). However, where a breach also constitutes a violation of the GDPR (which is likely to be the case when it comes to unlawful processing of personal data), companies could theoretically be exposed to cumulative penalties – for example, a producer of high-risk AI using biometric data without consent could be penalized under both the AI Act (for failing to comply with data governance requirements) and the GDPR (for lacking a legal basis for processing sensitive data). It remains to be seen to what extent the authorities will apply a principle of proportionality to avoid double penalization for the same act, but the legal framework does not exclude parallel sanctions.
On the other hand, the AI Act also offers synergy opportunities with GDPR. For example, the AI Act’s requirement that high-risk systems be accompanied by information and explanations can de facto improve the transparency required by GDPR, helping operators to better inform data subjects about the logic of how they work (a difficult task so far, given the opacity of many models). The AI Act’s risk management requirement also aligns well with the GDPR’s accountability approach – companies can develop a single integrated impact assessment process that covers both AI Act risks (security, non-discrimination) and GDPR risks (privacy), saving resources and ensuring holistic compliance. Ideally, the Data Protection Officer (DPO) and possibly a future AI Officer will work together to create a unified internal governance framework for data and algorithms.
In conclusion, the relationship between the GDPR and the AI Act should be seen as one of necessary complementarity. The GDPR remains the basic shield protecting personal data and guaranteeing individual rights, while the AI Act adds an additional layer of product- and technology-oriented requirements to address AI-specific risks (including those beyond the scope of personal data, such as physical safety or algorithmic transparency). Together, the two regulations will form a single European regulatory ecosystem for the digital economy, in which innovation is only allowed under conditions of respect for human dignity, privacy and democratic values. Operators and developers will have to navigate this dual framework, but they are not starting from scratch: the GDPR compliance experience of recent years already provides them with useful procedures and reflexes, and the new guidelines (EDPB and national) provide bridges between the two regimes. Close cooperation between the authorities (the EDPB and the future AI Office) will hopefully ensure that conflicts of interpretation are avoided and common guidance is issued, so that companies receive a unified regulatory message. Symbolically, if the GDPR was the “first stage” in which the EU established that personal data belongs to individuals, the AI Act will be the “second stage” in which the EU affirms that the algorithms that influence our lives must obey society’s rules, not the other way around. And these two stages are mutually supportive.
Conclusions
In today’s algorithmic era of spectacular advances in artificial intelligence, from generative models to autonomous decision-making systems, data protection is not just a bureaucratic hurdle, but a fundamental pillar for developing trust and sustainability of new technologies.
Recent cases – from record fines for companies that have ignored basic principles of legality and transparency, to rapid interventions to stop dangerous algorithmic processing – send a clear call: technological innovation must respect the dignity and rights of the individual.
In this way, the GDPR does not hinder the development of AI, but sets acceptable limits, ensuring that “just because something can be done with data does not necessarily mean that it is allowed”. On the contrary, the strict application of GDPR in the field of AI is beneficial in the long term even for the industry, as it creates a climate of trust. Both users and society at large will only embrace algorithmic solutions if they come with privacy safeguards,
security and non-discrimination – elements that the GDPR and the forthcoming AI Regulation actively promote.
For legal and technology regulatory professionals, the major challenge is to translate legal principles into concrete practices: developing clear internal policies on the use of data in AI, conducting regular risk assessments, consulting the DPO and compliance teams from the product design stage, and keeping up to date with the authorities’ guidance. This article has summarized recent guidance (CNIL, Garante, DPC, EDPB) – these documents should be seen as indispensable working tools, providing benchmarks on what authorities expect from AI operators.
Particular emphasis should be placed on the practical implications: for example, tech companies should implement GDPR compliance checklists before launching a new AI model; financial or healthcare institutions adopting high-risk AI should involve multidisciplinary teams (lawyers, ethicists, technical experts) to validate GDPR and AI Act alignment; and AI start-ups should plan their growth by considering compliance requirements from the outset (thus avoiding the costs and negative reputation associated with late corrections imposed by authorities).
Cooperation at European level is another essential aspect highlighted. AI, by its nature, knows no borders and a fragmented application of the rules would be inefficient. Initiatives such as the EDPB’s AI Taskforce and the future coordination mechanism with the European AI Office will contribute to a uniform application of the GDPR in cases involving artificial intelligence, ensuring both predictability for industry and equal protection for citizens, regardless of which Member State they are in.
As we approach the 7-year anniversary of the GDPR’s implementation (May 25, 2025) – and the entry into a new phase with the adoption of the AI Act – we can say that Europe remains steadfast in its commitment to taming the algorithmic revolution through the rule of law. It is an ongoing effort that will require resources, education and adaptability. But the benefits are commensurate: a digital marketplace where innovation and human rights coexist harmoniously, where companies can thrive on trusting users, not exploiting their data, and where citizens can embrace new technologies knowing there are strong safeguards against abuse.
In conclusion, the call for strong enforcement of the GDPR in the algorithmic era is not an obstacle to progress, but a necessary condition for sustainable and ethical progress. Through rigorous regulation, dissuasive sanctions and proactive guidance, European authorities are sending the message that “high-tech” must not become the “wild west”. It is the responsibility of data controllers and AI developers to respond to this call – by embedding data protection in the DNA of every project and treating privacy not as forced compliance, but as a core value of their products. Only then can artificial intelligence reach its full potential, serving society without sacrificing individual rights.
Bibliography
- Regulation (EU) 2016/679 (GDPR) – the General Data Protection Regulation, applicable as of May 25, 2018, which establishes the European legal framework for the processing of personal data.
- Proposal for an EU Regulation on Artificial Intelligence (AI Act) – European Commission legislative initiative COM(2021) 206, on which political agreement was reached on December 8, 2023 (pending formal adoption). It aims to establish a uniform legal framework for the development and use of AI systems in the EU, with a regime based on risk assessment of AI applications.
- EDPB – Statement on the Role of Data Protection Authorities in the AI Act (July 17, 2024) – document adopted by the European Data Protection Board highlighting the expertise of DPAs in the field of AI and recommending that they be directly involved in the oversight of the future AI Regulation. It calls for the establishment of cooperation mechanisms between the future EU AI Office and the EDPB/DPAs to ensure the consistent application of the GDPR and the AI Act.
- EDPB – Decision to create the ChatGPT Task Force (April 13, 2023) – EDPB announcement on the formation of a task force dedicated to coordinating investigations and enforcement actions related to ChatGPT services and subsequently extending the mandate to the general application of the GDPR in the context of AI.
- EDPB ChatGPT Taskforce Report (24 May 2024) – report summarizing the preliminary findings of member authorities on the ChatGPT service and its compliance with the GDPR principles. It highlights the challenges regarding the legal basis for model training, transparency obligations, user rights and corrective measures implemented by OpenAI (EDPB, Report on the work undertaken by the ChatGPT Taskforce).
• Italian Authority (Garante) decision in the ChatGPT case:
- Interim measure March 30, 2023 – order temporarily suspending data processing by ChatGPT in Italy, citing lack of legal basis, lack of information and insufficient protection of minors.
- Final decision of December 20, 2024 – the conclusion of the Garante inquiry, imposing a €15 million fine on OpenAI together with corrective measures (including a public information campaign) for breach of Art. 5, 6, 13 and 14 GDPR (unlawful and non-transparent processing of personal data in training ChatGPT, including data of minors).
- Decision of the Italian Authority (Garante) – Replika case (February 2023) – emergency measure banning the Replika app (AI-based chatbot) from processing the data of individuals in Italy. Grounds: lack of valid legal basis, serious risk to minors and processing of special categories of data without safeguards. (Garante Communication No 9870847/2023, available in English translation).
• CNIL (France) decisions in the Clearview AI case:
- CNIL Deliberation SAN-2022-019 (October 17, 2022) – decision sanctioning Clearview AI with a fine of €20,000,000 and ordering the cessation of the collection and deletion of data of individuals in France for multiple GDPR violations: unlawful (unfounded) processing of biometric and public online data, failure to comply with information obligations and failure to ensure access/deletion rights.
- CNIL decision of May 2023 – imposition of a €5.2 million penalty payment on Clearview AI, following non-compliance with the 2022 decision (the company had not ceased processing, had not deleted the data, and had not appointed a representative in the EU). It confirms CNIL’s cooperation with other authorities (Italy, Greece, UK) in this case of international scope.
• Decision of the Italian Authority (Garante) – Clearview AI case (February 10, 2022)
– Garante penalizes Clearview AI with €20 million and bans the processing of data of individuals in Italy. Among the breaches: lack of a legal basis (legitimate interest invoked by Clearview was rejected), failure to comply with purpose limitation and storage principles, failure to inform data subjects and to ensure their rights, and failure to comply with Art. 27 GDPR (lack of EU representative).
- DPC Ireland decision – Meta (Facebook/Instagram) case – behavioral advertising (4 January 2023) – decision resulting from the dispute resolution mechanism (Art. 65 GDPR) confirming that Meta violated Art. 5(1)(a) and Art. 6 GDPR by processing user data for personalized advertisements without an adequate legal basis. The DPC imposed a total fine of €390 million and ordered Meta to comply (stop processing based on “contract” as the basis for ads) (References: DPC Press Release 04.01.2023; DPC/EDPB Final Decision 2022).
- DPC Ireland decision – Meta (Facebook) case – international transfers (May 22, 2023) – the DPC fines Meta Platforms Ireland €1.2 billion (the largest GDPR fine to date) and orders the suspension of data transfers to the US, pursuant to an EDPB binding decision. Reason: violation of Art. 46 GDPR, Meta continuing transfers on the basis of SCCs despite the Privacy Shield invalidation and the risks identified by the CJEU, affecting a massive amount of user data (EDPB News, “€1.2 billion fine for Facebook as a result of EDPB binding decision”).
- X/Twitter case – DPC investigation into the use of data for AI training (2024) – the Irish authority launched an investigation and legal action against X (formerly Twitter) after the platform introduced clauses allowing the use of EU users’ public data to train its own AI models. Result: X agreed (August 2024) to permanently suspend the collection and use of EU users’ tweets for AI training purposes. The DPC welcomed the agreement as protecting citizens’ rights and referred the matter to the EDPB for a determination on whether the GDPR was breached (Euronews story, 06.09.2024).
- CNIL Guidelines – “Development of privacy-aware AI systems”:
- CNIL Recommendations of October 12, 2023 – the initial set of 7 “AI & GDPR” factsheets published by CNIL, confirming the compatibility of AI development with the GDPR and providing practical guidance: purpose definition, data minimization and quality, rights assurance, legitimate interest assessment, privacy by design, etc. It emphasizes that developers should anticipate compliance with GDPR principles from the design phase.
- CNIL Recommendations of June 7, 2024 – updated version after public consultation, with additional examples and clarifications. It emphasizes the accountability of actors in the AI ecosystem (identification of controllers and processors) and provides solutions for concrete issues such as reusing datasets, testing algorithms with real data under controlled conditions, performing DPIAs, etc.
- DPC Ireland Guidance – “AI, Large Language Models and Data Protection” (July 2024) – explanatory document from the DPC outlining the risks of data processing in the context of large language models and providing recommendations for organizations and the public. It identifies potential breaches (e.g. the use, without knowledge or consent, of large amounts of personal data for training; potential errors and bias propagated by models) and advises operators to carefully check the terms and conditions of AI providers, limit the input of sensitive data into such tools and ensure responsive processes for exercising data subjects’ rights.
• Communications and guidance from other authorities:
- AEPD Spain – Guide “Adecuación al RGPD de tratamientos con IA” (Feb. 2020)
– an early guidance document that addresses the questions raised by AI in the data protection context and recalls the key elements of the GDPR to be taken into account by controllers incorporating AI components.
- ICO UK – AI Guidance (2020) and AI Auditing Framework – although non-mandatory in the EU post-Brexit, these resources have been influential in shaping standards on algorithmic transparency, explainability of automated decisions and data governance in the context of AI.
- EDPS (European Data Protection Supervisor) – EDPB–EDPS Joint Opinion 5/2021 on the proposed AI Act – a document which recommended strengthening prohibitions on mass surveillance practices, banning real-time biometric identification and social scoring, and which emphasized the importance of aligning the AI Act with the GDPR and the role of DPAs in its enforcement.