News and developments
Artificial Intelligence and Data Protection: Call for Firm GDPR Enforcement in the Age of Algorithms
Cristina Săvulescu - Head of Whistleblowing & GDPR, Bradu Neagu & Associates
In the context of the exponential development of artificial intelligence (AI), rigorous enforcement of the General Data Protection Regulation (GDPR) is becoming more important than ever. The emergence of advanced algorithmic systems - from facial recognition to conversational chatbots - has brought innovative benefits, but also significant risks to privacy and fundamental rights. The launch of generative AI services such as ChatGPT (considered the fastest-adopted app in history) has triggered increased attention from regulators, highlighting the need for GDPR principles to be strictly applied in the algorithmic era. Thus, the European Union is strengthening its legislative framework (through the preparation of the AI Act) and gearing its supervisory authorities towards increased cooperation to ensure compliance with data protection rules in the development and use of AI systems.
AI data protection risks
The widespread deployment of artificial intelligence systems poses a number of inherent risks to the protection of personal data.
In essence, AI risks in the area of personal data stem from the volume and complexity of processing, which may exceed the individual's traditional control over his or her data. These risks call for a firm application of the GDPR principles - lawfulness, transparency, fairness, data minimization, accuracy, purpose limitation and storage limitation - from the very beginning of the design of AI systems (privacy by design). The authorities emphasize that the development of AI is compatible with data protection only if these principles are strictly respected: "the development of AI systems is compatible with the challenges of privacy protection... only then will citizens be able to put their trust in these technologies". For operators, this translates into an obligation to identify and mitigate risks through appropriate technical and organizational measures (including impact assessments and regular audits), while ensuring that data subjects' rights (access, erasure, objection, etc.) can be exercised even in complex algorithmic contexts.
Recent examples of AI enforcement
European data protection authorities have demonstrated a firm and proactive approach in recent years in sanctioning GDPR abuses and breaches related to AI-based systems. Below, some relevant and illustrative cases are presented, reflecting the enforcement trend and serving as lessons for the technology industry:
Clearview AI - An emblematic case on facial recognition and massive biometric data collection without consent. Clearview AI, an American company, scraped more than 20 billion images from the internet (including social media) to build a facial identification tool, which was then sold to law enforcement authorities. This practice was considered a blatant breach of the GDPR in multiple EU jurisdictions: the Italian authority (Garante) fined Clearview €20 million, banned it from further processing of Italian individuals' data and ordered the deletion of all illegally collected data. Similarly, in October 2022 the French authority (CNIL) imposed the maximum fine of €20 million and ordered the company to delete the data of French citizens from its database. For non-compliance with these orders, the CNIL subsequently imposed an additional penalty of €5.2 million (May 2023). Other authorities, such as the UK's ICO and the Greek authority, have also sanctioned Clearview, totaling tens of millions of euros in fines across Europe. The Clearview case highlights that the processing of facial images (sensitive biometric data) without a valid legal basis and without informing data subjects is considered unlawful - and the fact that the controller was non-EU did not exempt it from liability, as the GDPR has extraterritorial effect (Art. 3(2)). The message from the authorities is clear: mass biometric surveillance technologies, if they violate privacy rights, will be severely sanctioned in order to protect citizens from a "surveillance society" incompatible with European values.
OpenAI - ChatGPT - The emergence of the ChatGPT generative language model (launched by OpenAI) tested the limits of GDPR application to generative AI and triggered the first coordinated actions at the European level. In March 2023, after a security breach incident that exposed some users' data, the Italian Data Protection Authority issued an urgent order temporarily suspending ChatGPT in Italy. The Garante invoked several alleged breaches of the GDPR by OpenAI: failure to adequately inform users and the individuals whose data was collected (breach of Arts. 13-14), the absence of a legal basis for the processing of personal data for the purpose of algorithm training (legitimate interest could not be invoked in this context), the inaccuracy of the processed data (the model often providing erroneous information about individuals) and the failure to implement an age verification mechanism (allowing minors under 13 access to potentially inappropriate content). This was the first case of a nationwide restriction of an AI service on data protection grounds, marking a bold approach by the Garante, which gave the company 20 days to remedy the deficiencies.
OpenAI responded by quickly implementing a number of measures: it updated privacy policies and user notices, introduced a form through which individuals (including non-users) can request deletion of their data from the training set, and implemented a (self-declared) age filter to block access by minors. As a result, on April 28, 2023, the Italian authority lifted the ban, allowing ChatGPT to resume in Italy, conditional on continued compliance with the agreed compliance plan. This plan included, among other things, carrying out a public information campaign in Italy about how ChatGPT works and how data is used to train algorithms - a novel measure designed to increase transparency towards the public.
In parallel, other European authorities launched similar actions: for example, the Spanish authority (AEPD) opened its own investigation into OpenAI in April 2023 for potential breaches of the GDPR, making Spain the second country to formally initiate an investigation in this regard. At the same time, the AEPD asked the European Data Protection Board (EDPB) to put the ChatGPT topic on the European agenda - an initiative that led to the creation of a task force at EDPB level to coordinate the authorities' actions on generative AI tools.
The Italian investigation into ChatGPT was finalized in December 2024, when the Garante announced important results: OpenAI was fined €15 million for violating the GDPR. The authority concluded that OpenAI had processed users' personal data (and that of other individuals, extracted from online sources) without an adequate legal basis for algorithm training and in breach of its transparency obligations towards individuals. The failure to implement a robust age verification system, exposing minors to inappropriate content, was also noted. In addition to the fine, the Garante imposed an obligation on OpenAI to run a six-month campaign on Italian media channels to educate the public on how ChatGPT collects and uses data - a measure designed to improve transparency and public awareness. OpenAI announced that it will contest the sanction as "disproportionate", while emphasizing that it has taken compliance measures and that its revenues in Italy were modest compared to the amount of the fine. The ChatGPT case confirms that AI models that heavily process personal data fall directly under the scope of the GDPR, and innovation does not exempt companies from fundamental legal obligations: transparency, consent (or another valid basis), protection of minors and data security.
Replika - Another relevant example is the authorities' intervention against AI chatbots designed to interact with users, which can process sensitive data and affect vulnerable groups. Replika is an AI app that provides users with a "virtual friend" with whom they can converse freely, including on intimate topics. In February 2023, the Garante (Italy) issued an emergency measure banning Replika from processing the personal data of users in Italy. The decision came following complaints about the negative impact on minors (who could access the app and receive age-inappropriate replies) and the finding that Replika was processing sensitive data (emotions, psychological states) without any legal basis or proper consent mechanisms. The Italian authority considered that such a service, although based on innovative AI, cannot operate in a vacuum of legal accountability: the operator is obliged to comply with the GDPR from the design phase, by ensuring the filtering of underage users, clearly informing adults about how their conversations are stored and used, and obtaining explicit consent if special categories of data are processed (e.g. mental health data inferable from chats). The Replika case highlights that "experimenting" with AI on real users, without safeguards, calls for prompt intervention by authorities to prevent potential emotional or privacy abuses, especially when minors are involved.
X (formerly Twitter) - Social networks are a rich breeding ground for personal data used to train AI algorithms, and the practices of collecting and using this data are closely scrutinized by authorities. A recent example concerns the X/Twitter platform and its plans to use users' public posts to train AI models (including its own model, called Grok). In July 2024, under new management, X changed its privacy policy so that EU users would have to explicitly opt out if they did not want their public posts to be used by the company for AI training purposes. The Irish authority (DPC), being the lead authority for X in the EU, expressed concern that such massive processing of European users' data could violate the GDPR (especially the principles of lawfulness and transparency). The DPC initiated an emergency action and brought proceedings before the Irish High Court, arguing that these changes could violate citizens' privacy rights. In August 2024, an unprecedented agreement was reached: X agreed to suspend all collection and use of EU user data for AI training. Essentially, the company promised to stop including EU citizens' tweets in the datasets for its Grok model unless it obtained their consent.
The DPC welcomed the outcome, stressing that EU users' rights had been protected. However, the case did not end there. The Irish authority raised the issue with the EDPB, calling for a unified EU-wide position on the legality of using public social media posts to train AI. The EDPB was asked to determine whether or not X's actions constituted a breach of the GDPR (proceeding under Arts. 64-65 GDPR). At the same time, the NOYB organization filed separate complaints in several countries against xAI (the sister company developing Grok), alleging violations of 16 articles of EU privacy law. X labeled the DPC's order as unjustified and as discriminating against the platform compared with other companies that scrape data, citing the fact that many AI companies mine data from the internet. Regardless of these objections, the X/Twitter case demonstrates that European authorities are willing to take immediate legal action to prevent massive data processing without a legal basis, even forcing tech giants to comply. It also highlights the importance of cooperation between authorities at EU level to uniformly tackle such practices affecting users in multiple countries.
Meta (Facebook/Instagram) - Major social media and technology platforms have also been forced to adapt their business models as a result of GDPR enforcement, especially around algorithmic profiling for targeted advertising and international data transfers - issues closely related to the use of AI in personalization. A turning point was the January 2023 decision by the Irish Data Protection Commission (DPC), in conjunction with the EDPB, regarding the legality of personalized ads delivered by Meta on Facebook and Instagram. Following strategic complaints (made by activist Max Schrems' organization NOYB), Meta was found to have forced user consent for behavioural advertising, effectively hiding the data processing under the guise of "necessity for the performance of the contract" (acceptance of the terms of use). The EDPB determined in a binding decision that this practice is unlawful - the personalization of advertisements is not necessary for the provision of the contractual social network service, but is a separate processing that requires a different legal basis (essentially, explicit consent). As a result, the DPC issued the final decision fining Meta €390 million (€210 million for Facebook and €180 million for Instagram) and ordered the company to bring its processing in line with GDPR requirements within 3 months. This major penalty forced Meta to phase out its "contract-based" model for personalized advertising in the EU - by the end of 2023, Meta announced opt-out options from targeted advertising and its intention to seek consent from European users for such data processing.
Also in the Meta case, another high-profile action concerned transatlantic data transfers (the issue of GDPR compliance of transfers of Europeans' data to the U.S., following the invalidation of the Privacy Shield mechanism by the CJEU's Schrems II decision). After a lengthy process of investigation and coordination between authorities, in May 2023 the Irish DPC - bound by the EDPB through the dispute resolution procedure - fined Meta Platforms a record €1.2 billion, the largest GDPR fine ever. The sanction was justified by the seriousness of the breach: Meta continued to transfer personal data of EU Facebook users to the US in a systematic and repetitive manner without adequate safeguards, in breach of Article 46 of the GDPR. The company was also ordered to suspend any future transfers and to bring its operations into compliance with Chapter V of the GDPR (protection of data transferred outside the EEA) within 6 months. EDPB Chairwoman Andrea Jelinek emphasized the significance of this outcome: "The unprecedented fine is a strong signal that serious breaches have far-reaching consequences." The message to the industry is that even global corporations are not above European data law - on the contrary, the greater the volume of data and the impact on citizens, the stronger the authorities will react. The Meta case highlights that compliance with data protection requirements needs to be built into the way business algorithms operate, whether it is behavioral targeting or international transfers in global AI networks.
The above examples, along with many others (such as investigations into content recommendation algorithms or AI-based credit scoring systems), confirm a clear trend: EU data protection authorities do not hesitate to apply unprecedented maximum sanctions and remedies when AI-based technologies violate the GDPR. From fines in the hundreds of millions of euros to orders to suspend services, the palette of enforcement tools is being used to the full to ensure that individuals' rights are also protected in the algorithmic digital age. These actions have a dual role - to sanction unlawful conduct and to act as a deterrent (an "enforcement by example" effect) - and send a strong call to developers and operators: innovation must be responsible, and compliance with the GDPR is a necessary minimum, not an optional obstacle.
Legal implications for data controllers and AI developers
The increasing enforcement of the GDPR in relation to AI technologies brings to the forefront a number of practical legal implications that data controllers and developers of algorithmic solutions need to consider. In this section, we look at the main legal obligations and compliance issues highlighted by the cases discussed and authorities' guidance.
As the Meta decision showed, profiling cannot be hidden under a contractual pretext but requires explicit consent. The practical implication is that operators need to conduct a careful analysis of the legal basis: in many cases, the informed consent of data subjects (Art. 6(1)(a)) will be the most appropriate, especially if the processing is invasive or not strictly necessary for the underlying service. Where legitimate interest is invoked (Art. 6(1)(f)), it is imperative to carry out a rigorous balancing test and implement safeguards (e.g. opt-out options, data minimization) to pass the authorities' scrutiny. At the same time, if special categories of data are involved (Art. 9, such as biometrics or health data inferred by AI), a valid exception (explicit consent, substantial public interest provided by law, etc.) must also be identified, otherwise processing is prohibited.
Furthermore, where decisions with significant effects are made solely by automated means, the controller must provide safeguards: the data subject has the right to obtain human intervention, to express his or her point of view and to contest the decision (Art. 22(3)), even if the original decision was generated by the AI.
In the light of the above, it is clear that AI developers and operators in the EU (or targeting EU individuals) need to include GDPR compliance as an integral part of their development and business strategy. Moreover, data-related legal risk management should be a central concern - not only to avoid hefty fines, but also to ensure sustainability and social acceptance of AI products. As recent guidelines show, compliance with the GDPR is not a barrier to innovation, but a prerequisite for trusted AI aligned with European values. Operators who ignore these obligations expose themselves not only to sanctions, but also to the loss of trust of users and partners, which in the long run can be as damaging as a financial penalty. By contrast, those that implement privacy by design principles, that transparently document and justify how they use data and cooperate with the authorities, will be better positioned in an increasingly demanding legal and reputational environment.
Recent guidance from EU authorities on AI and data protection
To support both industry and citizens' rights, EU data protection authorities have stepped up efforts in the last two years to clarify the rules and best practices on the application of GDPR in the AI context. This recent guidance provides a compliance framework and outlines regulators' expectations. Below, we review the most notable initiatives and guidance:
In October 2023 and later in June 2024, the CNIL issued the first official recommendations on the development of AI systems in compliance with the GDPR. The CNIL guidance emphasizes that "the development of AI systems can be reconciled with privacy" and provides principles and practical "AI how-to sheets" for operators.
The CNIL's recommendations are accompanied by concrete examples and case studies and are intended to provide a practical benchmark for AI companies. These guidelines essentially confirm the interpretation that the GDPR already provides a sufficiently comprehensive framework to cover most of the challenges brought by AI - the key is to apply the existing principles to new technological contexts. The CNIL has announced that it will continue to issue further sets of "factsheets" on specific topics (e.g. anonymization in the context of AI, assessment and prevention of algorithmic bias, etc.), thus strengthening the compliance arsenal available to professionals.
For organizations that plan to use AI tools, the DPC recommends a diligent approach: conduct an internal audit of the data used by the tool (what data are we inputting? where does this data end up? is it stored by the provider? is it used for retraining?) (dataprotection.ie), review the terms of the provider (whether they act as a controller or processor, and whether they offer privacy guarantees), and implement control processes. For example, the DPC suggests that if employees enter customers' personal data into a service such as ChatGPT, the organization becomes the controller and must ensure GDPR compliance - possibly through policies prohibiting the entry of sensitive or identifiable data into such insecure cloud services. The Irish guidance also emphasizes the need to facilitate the rights of individuals: companies using AI must be prepared to respond if a person asks them "what data about me did you put into an AI model and what did you do with it?". In addition, the DPC mentions the importance of accountability - i.e. documenting decisions related to AI implementation, training staff on the ethical use of new tools, and updating the organization's data protection policies to reflect any AI processing.
Through these multiple guidelines - from the national to the EDPB level - a set of best practices and normative expectations is taking shape. The common message is that the GDPR already provides the basic principles, and AI actors need to operationalize these principles in the design and use of systems. At the same time, the ground is being prepared for the future AI-specific legal framework (AI Act), with data protection authorities taking an active role in defining interpretations and synergies. For professionals in the field, familiarization with these guidelines is essential: they not only spell out the legal obligations, but also offer practical solutions (e.g. how to ensure algorithmic transparency, how to give effect to rights in a black-box model, which internal procedures to implement when adopting an AI service). A rigorous bibliography, presented at the end of the article, brings together
the main documents - regulations, opinions, reports, decisions - which underpin these guidelines and can serve as a reference for further study.
GDPR - AI Act relationship: complementary regulations in a common ecosystem
As the European Union prepares to adopt the Artificial Intelligence Regulation (AI Act) - the first legal framework dedicated to AI at a global level - it becomes crucial to understand how it will interact with the GDPR and what practical implications will arise from the coexistence of the two regulatory regimes. In essence, the GDPR and the AI Act are complementary: they approach AI from different angles, but converge in the common goal of ensuring that the development and use of algorithms respect the fundamental rights of individuals.
Scope and objectives: the GDPR (in force since 2018) is a horizontal regulation, covering any processing of personal data, regardless of technology or sector, with the aim of protecting privacy and data protection. The AI Act, currently in the final stages of adoption (a political agreement on the text was reached in December 2023), is a sector-specific regulation focused on AI systems, regardless of whether they process personal data or not, with the main objective of ensuring that AI is safe, transparent, ethical and in line with EU values. The AI Act classifies AI systems by risk level (unacceptable, high, limited, minimal) and imposes differentiated requirements. These range from bans on certain practices (e.g. social scoring by public authorities, or real-time biometric surveillance in public spaces, except in strictly defined cases), through strict compliance requirements for high-risk systems (such as those used in recruitment, credit, education, health, critical infrastructure, law enforcement, etc.) and transparency requirements for limited-risk AI (e.g. the obligation to signal to users when they interact with a chatbot or when content is AI-generated), down to voluntary codes of conduct for minimal-risk AI.
Overlaps and differences: In terms of overlap, any AI system that processes personal data (which in practice will be the case for many systems, especially high-risk systems that make decisions about individuals) will be subject to both regulations simultaneously. For example, an AI-based recruitment algorithm is considered high-risk by the AI Act, so it must fulfill requirements such as evaluating training data to reduce bias, keeping technical documentation, being transparent to the user (the employer), and notifying candidates that they have been evaluated by an automated system. In parallel, the same algorithm, processing candidates' CVs (personal data), falls under the GDPR: the company using it must have a legal basis (probably candidate consent or legitimate interest with safeguards), carry out a DPIA, ensure the candidate's right to request human intervention in the decision (as per Art. 22 GDPR), etc. Compliance with the AI Act does not derogate from and does not guarantee compliance with the GDPR, and vice versa - operators will have to comply cumulatively with both sets of obligations. Indeed, the AI Act contains an explicit clause that its provisions apply without prejudice to data protection rules (the initial articles of the draft reaffirm the applicability of the GDPR).
However, the AI Act also brings to the fore issues that the GDPR only implicitly addresses. For example, the AI Act's requirement to ensure the quality of datasets used by high-risk systems (representativeness, correctness, absence of systematic bias) is complementary to the GDPR requirement of data accuracy (dataprotection.ie), but goes further by requiring analysis of potential algorithmic bias. Similarly, the AI Act requires for high-risk systems the creation of a risk management system and extensive technical documentation (including audit logs), which are not directly required by the GDPR, but which will support the demonstrability of compliance also from a data protection perspective (e.g. logs can be used to respond to access requests or to investigate a breach).
Another key element of the AI Act is the introduction of the concept of a European AI Office and national AI market surveillance authorities (MSAs). These bodies will oversee compliance with the AI Act (similar to the role of data protection authorities under the GDPR). To avoid duplication of efforts and confusion, the need for close coordination between them has been recognized. The EDPB recommended that data protection authorities (DPAs) be designated as MSAs for AI areas related to the processing of personal data and even for other high-risk systems affecting individuals' rights. The EDPB also calls for the creation of formal mechanisms for the AI Office and the EDPB (as well as the future AI Board) to collaborate, share information and issue joint guidance on intersecting topics. In November 2024, the EDPB formally responded to a letter from the AI Office, reaffirming its commitment to cooperation and the importance of aligning requirements. Essentially, we can expect that once the AI Act becomes applicable (possibly in 2025, with full application after a roughly two-year transition period), some DPAs will become "dual regulators" - wearing two "hats": one of DPA (applying the GDPR) and one of MSA (applying the AI Act) - at least in certain sectors (such as law enforcement, justice and border management, where the AI Act already explicitly states that DPAs will be co-regulators). This makes sense, as in these sectors (and beyond) most AI systems involve the processing of large volumes of personal data, and the DPAs' expertise in assessing the impact on individuals will be crucial.
Practical implications of the GDPR-AI Act duality: Economic operators and public entities developing or using AI in the EU will have to take both regulations into account simultaneously. This implies, in addition to the GDPR obligations discussed above, new obligations introduced by the AI Act, such as: registering high-risk systems in an EU database (for public transparency), achieving technical compliance (CE marking of high-risk AI, similar to CE marking for product safety), providing user guides enabling users to operate the system in a compliant way (including warnings on limitations), notifying authorities in case of serious AI incidents, etc. It is worth noting that AI Act violations will also be punishable by substantial fines (the draft provides for penalties of up to 6% of global turnover for certain serious offenses, similar to the GDPR). Moreover, where a breach also constitutes a violation of the GDPR (which is likely to be the case when it comes to unlawful processing of personal data), companies could theoretically be exposed to cumulative penalties - for example, a high-risk AI producer using biometric data without consent could be penalized under both the AI Act (for failing to comply with data governance requirements) and the GDPR (for lacking a legal basis for processing sensitive data). It remains to be seen to what extent the authorities will apply a principle of proportionality to avoid double penalization for the same act, but the legal framework does not exclude parallel sanctions.
On the other hand, the AI Act also offers synergy opportunities with GDPR. For example, the AI Act's requirement that high-risk systems be accompanied by information and explanations can de facto improve the transparency required by GDPR, helping operators to better inform data subjects about the logic of how they work (a difficult task so far, given the opacity of many models). The AI Act's risk management requirement also aligns well with the GDPR's accountability approach - companies can develop a single integrated impact assessment process that covers both AI Act risks (security, non-discrimination) and GDPR risks (privacy), saving resources and ensuring holistic compliance. Ideally, the Data Protection Officer (DPO) and possibly a future AI Officer will work together to create a unified internal governance framework for data and algorithms.
In conclusion, the relationship between the GDPR and the AI Act should be seen as one of necessary complementarity. The GDPR remains the basic shield protecting personal data and guaranteeing individual rights, while the AI Act adds an additional layer of product- and technology-oriented requirements to address AI-specific risks (including those beyond the scope of personal data, such as physical security or algorithmic transparency). Together, the two regulations will form a single European regulatory ecosystem for the digital economy, in which innovation is only allowed under conditions of respect for human dignity, privacy and democratic values. Operators and developers will have to navigate this dual framework, but they are not starting from scratch: the GDPR compliance experience of recent years already provides them with useful procedures and reflexes, and the new guidelines (EDPB and national) provide bridges between the two regimes. Close cooperation between the authorities (the EDPB and the future AI Office) will hopefully ensure that conflicts of interpretation are avoided and common guidance is issued, so that companies receive a unified regulatory message. Symbolically, if the GDPR was the "first stage", in which the EU established that personal data belongs to individuals, the AI Act will be the "second stage", in which the EU affirms that the algorithms that influence our lives must obey society's rules, not the other way around. And these two stages are mutually supportive.
Conclusions
In today's algorithmic era of spectacular advances in artificial intelligence, from generative models to autonomous decision-making systems, data protection is not just a bureaucratic hurdle, but a fundamental pillar for building trust in, and ensuring the sustainability of, new technologies.
Recent cases - from record fines for companies that have ignored basic principles of legality and transparency, to rapid interventions to stop dangerous algorithmic processing - send a clear call: technological innovation must respect the dignity and rights of the individual.
In this way, the GDPR does not hinder the development of AI, but sets acceptable limits, ensuring that "just because something can be done with data does not necessarily mean that it is allowed". On the contrary, the strict application of the GDPR in the field of AI is beneficial in the long term even for the industry, as it creates a climate of trust. Both users and society at large will only embrace algorithmic solutions if they come with safeguards for privacy, security and non-discrimination - elements that the GDPR and the forthcoming AI Act actively promote.
For legal and technology regulatory professionals, the major challenge is to translate legal principles into concrete practices: developing clear internal policies on the use of data in AI, conducting regular risk assessments, consulting the DPO and compliance teams from the product design stage, and keeping up to date with authorities' guidance. This article has summarized recent guidance (CNIL, Garante, DPC, EDPB) - these documents should be seen as indispensable working tools, providing benchmarks on what authorities expect from AI operators.
Particular emphasis should be placed on the practical implications: for example, tech companies should implement GDPR compliance checklists before launching a new AI model; financial or healthcare institutions adopting high-risk AI should involve multidisciplinary teams (lawyers, ethicists, technical experts) to validate GDPR and AI Act alignment; and AI start-ups should plan their growth by considering compliance requirements from the outset (thus avoiding the costs and negative reputation associated with late corrections imposed by authorities).
Cooperation at European level is another essential aspect highlighted. AI, by its nature, knows no borders and a fragmented application of the rules would be inefficient. Initiatives such as the EDPB's AI Taskforce and the future coordination mechanism with the European AI Office will contribute to a uniform application of the GDPR in cases involving artificial intelligence, ensuring both predictability for industry and equal protection for citizens, regardless of which Member State they are in.
As we approach the 7-year anniversary of the GDPR's implementation (May 25, 2025) - and the entry into a new phase with the adoption of the AI Act - we can say that Europe remains steadfast in its commitment to taming the algorithmic revolution through the rule of law. It is an ongoing effort that will require resources, education and adaptability. But the benefits are commensurate: a digital marketplace where innovation and human rights coexist harmoniously, where companies can thrive on trusting users, not exploiting their data, and where citizens can embrace new technologies knowing there are strong safeguards against abuse.
In conclusion, the call for strong enforcement of the GDPR in the algorithmic era is not an obstacle to progress, but a necessary condition for sustainable and ethical progress. Through rigorous regulation, dissuasive sanctions and proactive guidance, European authorities are sending the message that "high-tech" must not become the "wild west". It is the responsibility of data controllers and AI developers to respond to this call - by embedding data protection in the DNA of every project and treating privacy not as forced compliance, but as a core value of their products. Only then can artificial intelligence reach its full potential, serving society without sacrificing individual rights.
Bibliography
• Decision of the Italian Authority (Garante) in the ChatGPT case (December 2024) - OpenAI fined €15 million for processing personal data without an adequate legal basis for algorithm training and for breaching its transparency obligations; the company was also ordered to run a six-month public information campaign in Italian media.
• Decisions of the CNIL (France) in the Clearview AI case (October 2022 and May 2023) - a maximum fine of €20 million with an order to delete the data of French citizens, followed by an additional penalty of €5.2 million against Clearview following non-compliance with the 2022 decision (the company had not ceased processing, had not deleted the data, nor appointed a representative in the EU). Confirms the CNIL's cooperation with other authorities (Italy, Greece, UK) in this case of international scope.
• Decision of the Italian Authority (Garante) - Clearview AI case (February 10, 2022) - the Garante penalizes Clearview AI with €20 million and bans the processing of data of individuals in Italy. Among the breaches: lack of a legal basis (the legitimate interest invoked by Clearview was rejected), failure to comply with the purpose limitation and storage limitation principles, failure to inform data subjects and to ensure their rights, and failure to comply with Art. 27 GDPR (lack of an EU representative).
• Communications and guidance from other authorities:
- an early guidance document that addresses the questions raised by AI in the data protection context and recalls the key elements of the GDPR to be taken into account by controllers incorporating AI components.
