-
What are your country's legal definitions of “artificial intelligence”?
Greek legislation does not provide a definition of the term “artificial intelligence”. However, the definition of an “AI system” given in Regulation (EU) 2024/1689 (the AI Act) is accepted, according to which an “AI system” means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
-
Has your country developed a national strategy for artificial intelligence? If so, has there been any progress in its implementation? Are there plans for updates or revisions?
Greece is in the process of developing its national AI strategy, coordinated by the Hellenic Ministry of Digital Governance. More specifically, Article 11 of Law 4961/2022 provides for the establishment of a Coordinating Committee for AI, whose mission is to coordinate the implementation of the National Strategy for the development of artificial intelligence. Its responsibilities include: a) making decisions for the implementation and improvement of the National Strategy, b) shaping national priorities and directions for this purpose, and c) submitting proposals and recommendations for corrective measures if deviations are identified regarding the implementation of the National Strategy or impacts on the fundamental rights of individuals. According to Article 13, the Executive Body of the Coordinating Committee is designated as the Committee of the National Strategy for the development of artificial intelligence, which consists of officials from the Ministry of Digital Governance and is responsible for the implementation of the National Strategy. Finally, Article 14 of Law 4961/2022 provides for the establishment of an Artificial Intelligence Observatory within the Ministry of Digital Governance, under the General Secretariat for Digital Governance and Simplification of Procedures, with the primary mission of collecting data related to the implementation of the National Strategy for the development of artificial intelligence.
In 2025, the Ministry of Digital Governance established a new Special Secretariat for Artificial Intelligence and Data Governance.
The Special Secretariat is part of the Government’s broader strategy aimed at balancing innovation and regulation, promoting the responsible use of new technologies by the Public Administration, and developing a national AI ecosystem characterised by extroversion, sustainability and digital sovereignty.
The Special Secretariat for Artificial Intelligence and Data Governance, among other tasks:
- Supports the Commission, which operates under the Prime Minister, on AI projects in the public sector, as well as the design and implementation of the National Strategy for AI and data.
- Coordinates the relevant stakeholders with a view to achieving optimal results.
- Acts as a pole of know-how and capacity building in AI and data, following the example of gov.gr.
- Develops AI projects and contributes know-how to entities wishing to deploy AI projects, where required.
-
Has your country implemented rules or guidelines (including voluntary standards and ethical principles) on artificial intelligence? If so, please provide a brief overview of said rules or guidelines. If no rules on artificial intelligence are in force in your jurisdiction, please (i) provide a short overview of the existing laws that potentially could be applied to artificial intelligence and the use of artificial intelligence, (ii) briefly outline the main difficulties in interpreting such existing laws to suit the peculiarities of artificial intelligence, and (iii) summarize any draft laws, or legislative initiatives, on artificial intelligence.
Law 4961/2022 introduces, inter alia, a national framework for regulating AI technologies in the public and private sectors, imposing the following obligations on different categories of entities:
A. Public Entities
- Algorithmic Impact Assessment: In addition to conducting an impact assessment under Regulation (EU) 2016/679 (GDPR), entities must prepare an algorithmic impact assessment to evaluate risks to the rights, freedoms, and legitimate interests of individuals affected by the system. Safeguards will be specified by Presidential Decree.
- Transparency of Operation: Public entities must provide information about the AI system, including start time, operational parameters, and decisions made or supported by it. Complaints regarding transparency violations are examined by the National Transparency Authority.
- AI System Register: Public entities must maintain a register of the AI systems they use.
B. Private Entities
- AI in Employment: Before using an AI system that affects decision-making processes regarding employees or job applicants, impacting working conditions, selection, hiring, or evaluation, enterprises must provide relevant information to the employees concerned. This also applies to digital platforms engaging persons under contracts of dependent work, independent services, or project-based work. Employers must also conduct an impact assessment to safeguard employees’ rights, with potential sanctions for non-compliance imposed by the Hellenic Labour Inspectorate.
- Ethical Use of Data: Medium or large private sector entities, as defined in Article 2 of Law 4308/2014 (“Greek Accounting Standards”), must adopt a data ethics policy outlining measures, actions, and procedures related to data ethics in AI use. Entities preparing a corporate governance statement under Article 152 of Law 4548/2018 (Government Gazette A’ 104) must include information about their data ethics policy. The content of these policies will be specified by a Joint Ministerial Decision.
- Registry of AI Systems: Medium or large private sector entities must maintain a register of the AI systems they use.
- Public Contracts: Public contracts for AI system design or development must include the following obligations for the contractor:
- Provide the contracting authority with information ensuring transparent operation of the system, respecting military, commercial, and industrial secrecy.
- Deliver the AI system under conditions allowing the contracting authority to study its functionality and parameters, make improvements, and publish or distribute those improvements.
- Ensure the system complies with the legal framework, particularly regarding human dignity, privacy and personal data protection, non-discrimination, gender equality, freedom of expression, universal access for persons with disabilities, employee rights, and good governance principles.
The provisions of Law 4961/2022 regarding AI technologies do not affect the rights and obligations established under the General Data Protection Regulation (GDPR) and its implementing law no. 4624/2019 for personal data protection.
The AI Act, published on 12 July 2024, is binding and directly applicable in all Member States, including Greece. Each Member State must, however, establish or designate at least one notifying authority and at least one market surveillance authority as national competent authorities for the purposes of the Regulation within 12 months of its entry into force. The AI Act adopts a risk-based approach, dividing AI systems into four categories according to the type and level of risk they pose: unacceptable risk, high risk, limited risk, and minimal risk. AI systems that fall into the unacceptable risk category are prohibited outright, high-risk systems must comply with specific requirements, while limited or minimal risk systems are subject to fewer or no requirements at all. Finally, the Act lays down specific requirements for general-purpose AI models.
-
Which rules apply to defective artificial intelligence systems, i.e. artificial intelligence systems that do not provide the safety that the public at large is entitled to expect?
The Consumer Protection Law establishes a strict liability regime under which producers of defective products are held liable when such products cause damage to natural persons or their property, while injured consumers are not required to prove the fault of the producer. Producer liability is non-contractual, meaning that no prior contract needs to have been concluded between the producer and the injured party. Under Greek consumer protection legislation, for a producer to be held liable for damage suffered by a consumer, the following conditions must be met: a) the product placed on the market by the producer is defective, b) the consumer has incurred damage, and c) a causal link is established between the defect and the damage.
Provided that AI systems qualify as “products” under Greek law, applying the strict liability regime of producers is best suited to the operation of AI systems because of the difficulty in proving the cause of any defect due to the complex construction of the autonomous system. In addition, if personal data is processed by the AI system in violation of the applicable legal framework, relevant liability is triggered.
The existing framework for product liability, introduced by the Product Liability Directive (Directive 85/374/EEC) and implemented through amendments to the Greek Consumer Protection Law 2251/1994, also applies to new digital technologies. Directive 85/374/EEC has been replaced by Directive (EU) 2024/2853, which provides an EU-wide system of compensation for persons who suffer bodily injury or material damage due to defective products. It introduces a revised and more comprehensive legal framework for liability for defective products, widening the circle of persons who can be held responsible for the safety of a product. Liability now extends to other critical actors, such as software providers, importers, and providers of maintenance or upgrade services. The integration of artificial intelligence into products also introduces new standards of responsibility: AI-based systems must be accompanied by documentation demonstrating their safety and reliability, and the Directive obliges manufacturers to incorporate risk prevention mechanisms.
Furthermore, the Directive introduces a number of important innovations concerning the burden of proof and exemptions from liability, the main one being the strengthening of consumer protection in complex technological cases. In cases where the causal link between the defect and the damage cannot be clearly demonstrated due to the complexity of technologies (such as AI-containing products), the Directive allows for the use of circumstantial evidence.
The Directive has not yet been incorporated into the Greek legal system, as the EU deadline for the transposition of the Directive into the national law of each Member State expires on 9 December 2026.
-
Please describe any civil and criminal liability rules that may apply in case of damages caused by artificial intelligence systems. Have there been any court decisions or legislative developments clarifying liability frameworks applied to artificial intelligence?
A legal framework for civil liability arising from the use of artificial intelligence systems has not yet been established in Greece, and no court decisions have been issued clarifying the liability frameworks applicable to artificial intelligence. The general provisions of the Civil Code (Articles 914 and 932) apply.
In accordance with Article 914 and Article 932 of the Greek Civil Code, if an injured party demonstrates that they have incurred damages due to the operation of an AI system, they may be entitled to compensation. Such damage may encompass financial loss, physical injuries, property damage, and non-financial harm such as pain and suffering. The Greek Civil Code sets out five conditions that need to be fulfilled for tortious liability to be attributable to a party: human behaviour; illegal action; fault; damage; causal link between the behaviour and the damage.
Regarding criminal liability, this matter presents the most challenges for legal systems with regard to the operation of AI systems. Specific criminal law provisions regulating AI have not yet been introduced to Greek criminal law, however provisions of the Greek Criminal Code and special criminal laws could be applicable to the use of AI systems on a case-by-case basis.
-
Who is responsible for any harm caused by an AI system? And how is the liability allocated between the developer, the deployer, the user and the victim?
In the absence of a dedicated legal framework governing liability for artificial intelligence, responsibility is currently determined on a case-by-case basis. Courts assess the specific circumstances of each incident, including the nature of the harm, the function of the AI system involved, and the roles and conduct of the parties.
For instance, where the damage results from flaws in design or manufacturing, liability may rest with the developer or producer under product liability principles. Conversely, if harm arises due to the user operating the system in an unsafe manner, failing to comply with industry standards or the provided instructions, or neglecting regular maintenance, the user may be held accountable.
Moreover, contributory fault on the part of the injured party—such as negligent behavior or improper use of the AI system—may lead to a reduction in the liability of other parties.
As AI systems evolve toward greater operational autonomy and behavioral complexity, driven by their ability to learn from and adapt to their environments, assigning legal responsibility for resulting harm will become increasingly challenging. In many cases, identifying a specific natural or legal person whose fault directly caused the damage may prove elusive, raising profound legal and ethical questions across jurisdictions.
-
What burden of proof will have to be satisfied for the victim of the damage to obtain compensation?
Under Greek Consumer Protection Law, consumers who have suffered damage are required to prove that the damage was caused by a defective product; however, as Law 2251/1994 adopts a strict liability regime for manufacturers of defective products, consumers are not required to prove the fault of the manufacturer. Under tort law, the injured party bears the burden of proving the fault of the party liable for the damage suffered.
-
Is the use of artificial intelligence insured and/or insurable in your jurisdiction?
The use of artificial intelligence (AI) in Greece is not currently governed by a specific insurance framework, but in principle, it is both insurable and increasingly being covered through existing types of insurance, depending on the context and associated risks, such as general civil liability insurance.
-
Can artificial intelligence be named an inventor in a patent application filed in your jurisdiction?
No. According to Greek patent law No. 1733/1987, an inventor can only be a human. Under Article 6 of that law, the person applying for a patent is presumed to be the inventor. Although this issue has been the subject of debate in many countries, for the time being AI cannot be considered an inventor in Greece.
-
Do images generated by and/or with artificial intelligence benefit from copyright protection in your jurisdiction? If so, who is the authorship attributed to?
There is currently no specific legislation or case law addressing the issue of copyright protection for AI-generated images. However, the Greek copyright law No. 2121/1993 generally accepts only natural persons as authors.
-
What are the main issues to consider when using artificial intelligence systems in the workplace? Have any new regulations been introduced regarding AI-driven hiring, performance assessment, or employee monitoring?
When using artificial intelligence (AI) systems in the workplace, several key issues need to be considered to ensure ethical, legal, and effective implementation. AI raises important risks in terms of:
- Biases: AI systems can produce decisions that reproduce prohibited discrimination, biases and prejudice.
- Privacy: AI systems used by employers to make decisions about employees process personal data. Consequently, the principles established by the GDPR, such as purpose limitation, transparency, and a legitimate basis for processing, must be followed. Employees can also exercise all rights granted under the GDPR and Greek Law 4624/2019, such as the right to be informed about what AI tools are being used by the employer and how their data are being processed by these tools. Moreover, Article 22 of the GDPR applies in this context, granting employees the right not to be subject to decisions based solely on automated processing, including profiling, that produce legal effects or significantly affect them.
According to Article 9 of Law 4961/2022, any private sector business that uses an artificial intelligence system affecting any decision-making process regarding employees or job applicants, and impacting working conditions, selection, hiring, or evaluation, must provide sufficient and clear information to each employee or job applicant before its first use. This information should at least include the parameters on which the decision is based, subject to cases requiring prior notification and consultation, and must ensure compliance with the principles of equal treatment and non-discrimination in employment and work on grounds of gender, race, colour, national or ethnic origin, genetic background, religious or other beliefs, disability or chronic condition, age, family or social status, sexual orientation, gender identity, or gender characteristics.
-
What privacy issues arise from the development (including training) and use of artificial intelligence?
The development and use of Artificial Intelligence (AI) systems—particularly during the training phase, but also throughout their deployment—raise significant concerns regarding the protection of personal data. One of the primary issues is the lack of transparency: data subjects are often unaware that they are being subjected to automated processing or decision-making by algorithms. Even when such processing is disclosed, the information provided is often insufficient or too technical to offer a clear understanding of the “logic” of the system, the significance of the outcomes, or the potential consequences. This lack of clarity severely hinders the exercise of fundamental rights, such as access, objection, and rectification.
Furthermore, AI systems rely heavily on the processing of vast amounts of data, which conflicts with the principle of data minimization. The constant demand for more and more information—commonly referred to as “data bulimia”—intensifies the risk of excessive data collection and processing. In many cases, the data used are not entirely accurate, and the AI models themselves may produce conclusions or predictions that are biased, unfair, or simply incorrect. Using data for purposes beyond those initially specified represents a serious threat to the principle of purpose limitation. This challenge is further compounded when systems generate new information or profiles through algorithmic processes, often without the individual’s prior consent and without clear oversight regarding the final use of that data.
Regarding the legal basis for processing personal data through AI, this may be established on grounds such as contractual necessity, legitimate interest, or public interest—provided that meaningful human intervention in automated decisions is ensured. Special attention must be given to consent, which is frequently cited as a lawful basis but faces practical limitations: for consent to be valid, it must be freely given, explicit, specific, and informed—conditions that are rarely met adequately in the context of complex and opaque AI systems.
In conclusion, the particular nature of AI systems calls for a reassessment and strengthening of privacy safeguards, in order to ensure the effective protection of personal data in an increasingly complex and evolving technological landscape.
-
How is data scraping regulated in your jurisdiction from an IP, privacy and competition point of view? Are there any recent precedents addressing the legality of data scraping for AI training?
In Greece, data scraping is subject to regulation under several legal regimes. In detail, under the GDPR and Law 4624/2019, collecting personal data through scraping without a valid legal basis (such as consent or legitimate interest) is generally prohibited, even if the data is publicly available. Moreover, data scraping can violate copyright or sui generis database rights protected by Law 2121/1993, especially when it involves extracting substantial portions of protected content. Regarding competition issues, according to Law 3959/2011 and relevant EU competition rules, scraping may raise issues where it is used to hinder competition or where dominant firms impose unjustified restrictions on access to essential data. However, to date, there are no known Greek court rulings that directly address the legality of data scraping for AI training. Nonetheless, such practices must align with existing privacy, IP, and competition laws.
-
To what extent is the prohibition of data scraping in the terms of use of a website enforceable?
The data mining exception under Article 21B of L. 2121/1993 applies only if rightholders have not explicitly restricted such use—e.g., through machine-readable formats or metadata, website terms, or platform conditions. These restrictions must be clear, specific, and easily identifiable to be enforceable. If a website has already prohibited data scraping via its terms of use, users may only rely on Article 21A, and only for research purposes. Non-cultural heritage institutions may otherwise only invoke Article 21B. In practice, many publishers and content providers widely restrict text and data mining, often through explicit prohibitions on automated access. Widespread reservations like these can render the exception ineffective. Moreover, if scraping involves substantial copying of copyrighted material, it may constitute infringement. Directive 96/9/EC also protects databases, and unauthorized extraction of significant parts may lead to legal liability. Lastly, Decision 35/2022 of the Hellenic Data Protection Authority ruled that Clearview AI Inc. violated data protection laws by scraping online selfies for commercial facial recognition. The company was fined €20 million for breaching principles of lawfulness and transparency.
-
Have the privacy authorities of your jurisdiction issued guidelines on artificial intelligence?
No, at present, the Hellenic Data Protection Authority (HDPA) has not issued guidelines or recommendations regarding the processing of personal data in the context of using artificial intelligence systems or methods.
-
Have the privacy authorities of your jurisdiction discussed cases involving artificial intelligence? If yes, what are the key takeaways from these cases?
In its Decision No. 18/2025, the Hellenic Data Protection Authority (HDPA) examined ex officio the case of the Chinese company Hangzhou DeepSeek Artificial Intelligence Co., Ltd., which provides Artificial Intelligence (AI) services through the DeepSeek platform, accessible also in Greece. The HDPA found that, although the company has no establishment within the EU, its services clearly target data subjects within the EU (including Greece), and it therefore falls within the scope of the General Data Protection Regulation (GDPR) pursuant to Article 3(2). However, the company had not designated a representative in the EU, as explicitly required by Article 27 of the GDPR, despite its legal obligation to do so. After reviewing the company’s responses, the privacy policy of the platform, and the relevant legal framework (GDPR, Greek Law 4624/2019), the HDPA identified a violation of the obligation to designate an EU representative. Accordingly, exercising its corrective powers under Article 58(2)(d) of the GDPR, it ordered the company to designate a representative in the EU in writing and to inform the HDPA accordingly. The decision underscores that offering services to individuals within the EU, even without monetary transactions, is sufficient to trigger the application of the GDPR and the relevant compliance obligations.
-
Have your national courts already managed cases involving artificial intelligence? If yes, what are the key takeaways from these cases?
No, there is currently no Greek case law concerning AI systems or the liability of such systems or their manufacturers. Law 4961/2022 on AI systems is relatively recent and has not yet been applied by the Greek courts. However, the Hellenic Data Protection Authority (HDPA) has issued Decision No. 35/2022, in which it reviewed a complaint against Clearview AI, Inc. The HDPA found that the company, which trades in facial recognition services, violated the principles of lawfulness and transparency (Articles 5(1)(a), 6 and 9 GDPR) as well as its obligations under Articles 12, 14, 15 and 27 of the GDPR, and imposed a fine of twenty million euros (EUR 20,000,000). In addition, the HDPA ordered the company to comply with the complainant’s access request and prohibited it from collecting and processing the personal data of data subjects located in Greek territory using the methods included in its personal identification service. Finally, with the same decision, the HDPA ordered Clearview AI, Inc. to delete the personal data of data subjects located in Greek territory that it had collected and processed using those methods.
-
Does your country have a regulator or authority responsible for supervising the use and development of artificial intelligence?
Greece has not yet designated a dedicated regulator responsible for supervising the use and development of artificial intelligence. Under the AI Act, Greece must establish or designate at least one notifying authority and at least one market surveillance authority as national competent authorities for the purposes of the Regulation. In the meantime, the Ministry of Digital Governance, through the newly established Special Secretariat for Artificial Intelligence and Data Governance, coordinates national AI policy and the implementation of the National Strategy, while the Hellenic Data Protection Authority supervises the processing of personal data in the context of AI systems.
-
How would you define the use of artificial intelligence by businesses in your jurisdiction? Is it widespread or limited? Which sectors have seen the most rapid adoption of AI technologies?
AI adoption in Greece is growing, albeit gradually. While overall penetration remains relatively low—around 9.8% of businesses with more than 10 employees used AI technologies as of 2024, according to Eurostat—there is increasing interest and experimentation across sectors. The primary sectors leading AI adoption in Greece include:
- Information Technology and Communications – with approximately 59% of companies reporting full or partial AI integration, especially in automation, analytics, and customer engagement tools.
- Financial Services and Insurance – deploying AI for tasks such as risk assessment, fraud detection, and claims automation.
- Healthcare – utilizing AI for diagnostics, patient data analysis, and telemedicine solutions.
- Retail and Consumer Services – applying AI in personalization, inventory management, and demand forecasting.
According to recent surveys (e.g., National Bank of Greece, SEV), around 1 in 3 SMEs in Greece have started experimenting with AI tools, although often at a preliminary level. Overall, while Greece currently lags behind the EU average in AI adoption, the momentum is building, especially in sectors with high digital maturity or international exposure.
-
Is artificial intelligence being used in the legal sector, by lawyers and/or in-house counsels? If so, how? Are AI-driven legal tools widely adopted, and what are the main regulatory concerns surrounding them?
There are several AI tools specifically designed for lawyers, offering law professionals capabilities such as document review and analysis, document automation, predictive analytics and legal research. However, the extent to which these tools are actually used in practice cannot be confirmed from publicly available data.
-
What are the 5 key challenges and the 5 key opportunities raised by artificial intelligence for lawyers in your jurisdiction?
While AI presents clear advantages, it must be approached with caution, awareness of ethical risks, and a commitment to ongoing learning. Successful integration depends on understanding both the technology and its legal implications.
Challenges:
- Job Disruption: AI may automate routine legal tasks, potentially reducing demand for traditional roles. Legal professionals must adapt by developing new, tech-savvy skill sets.
- Complex Outputs: Interpreting AI-generated results like predictive analytics or natural language outputs requires technical understanding. Lawyers must learn how to explain these outcomes clearly to clients and courts.
- Data Privacy & Security: With AI’s reliance on large datasets, safeguarding client information becomes critical. Legal practitioners must be familiar with cybersecurity and data protection laws.
- Liability Issues: Determining responsibility for harm caused by AI is complex. Traditional tort principles struggle to assign fault when an AI acts unpredictably, beyond its original programming.
- Contractual Attribution: In AI-influenced contracts, defining fault or accountability can be unclear, especially when outcomes deviate from expected behavior without direct user involvement.
Opportunities:
- Efficiency Gains: AI tools can automate tasks like document review and legal research, freeing lawyers to focus on strategy and complex analysis.
- Smarter Decisions: Machine learning enables data-driven insights into case outcomes, risks, and trends, enhancing legal advice and planning.
- Enhanced Research: AI-powered platforms streamline legal research and precedent analysis, improving speed and accuracy.
- Faster Due Diligence: Contract review and risk flagging can be accelerated using AI, allowing lawyers to concentrate on higher-level issues.
- Innovation in Services: AI enables new legal specialties, such as advising on algorithmic accountability or autonomous systems law, helping lawyers evolve and expand their practice areas.
-
Where do you see the most significant legal developments in artificial intelligence in your jurisdiction in the next 12 months? Are there any ongoing initiatives that could reshape AI governance?
- The completion of the drafting of the National Strategy for the development of Artificial Intelligence. In particular, according to the Digital Transformation Bible 2020-2025, the National Strategy for the development of AI will:
- Define the conditions for the development of artificial intelligence, including skill and trust frameworks, data policies, and ethical principles for safe development and use.
- Describe national priorities and sectors for maximizing the benefits of artificial intelligence to address social challenges and foster economic growth.
- Analyze necessary actions related to the above priorities and propose cross-cutting interventions, along with at least one pilot application per policy area.
- The adoption of the framework for the implementation of the AI Act: Greece must establish or designate, as national competent authorities for the purposes of the Regulation, at least one notifying authority and at least one market surveillance authority. The AI Act also empowers Member States to determine the penalties and enforcement measures applicable to breaches of the Regulation. Greece must therefore, without delay and at the latest by the date of entry into application (2 August 2026), lay down the rules on penalties and notify the Commission of those rules and of any other enforcement measures.
The operation of the newly established Special Secretariat for Artificial Intelligence and Data Governance is expected to largely determine the next steps of the national strategy on the use of artificial intelligence. The Ministry of Digital Governance has already designed and implemented a series of reforms, projects and initiatives, such as:
- The development of “DAIDALOS”, a world-class supercomputer that is part of the European High Performance Computing Joint Undertaking (EuroHPC JU).
- The implementation of the AI Factory “PHAROS”, which will be among the first 13 AI Factories in Europe and will contribute to the development of AI applications in the fields of Health, Sustainable Development, Culture and Language by research centres, universities and businesses.
- The development of a Central Data Governance and Classification Framework for information systems, providing visibility into what data exists and in what form.
- The deployment of AI systems in the public sector, including applications for public administration and for the National Cadastre.
Greece: Artificial Intelligence
This country-specific Q&A provides an overview of Artificial Intelligence laws and regulations applicable in Greece.