-
What are your country's legal definitions of “artificial intelligence”?
There is no legal definition of artificial intelligence in national legislation. In an effort to take a descriptive approach, the authors of the Policy for the Development of Artificial Intelligence in Poland from 2020 (AI Policy) refer to the following definition:
Artificial intelligence (AI) is an interdisciplinary field of study that includes neural networks, robotics, and the creation of models of intelligent behaviour and computer programs that simulate this behaviour. This includes machine learning, deep learning and reinforcement learning.
The main definition that is most frequently cited is the one set out in the AI Act: “a system that is designed to operate with elements of autonomy and that, based on machine and/or human-provided data and inputs, infers how to achieve a given set of objectives using machine learning and/or logic- and knowledge-based approaches, and produces system-generated outputs such as content (generative AI systems), predictions, recommendations or decisions, influencing the environments with which the AI system interacts.” The OECD defines AI as “an automated system that, for a given set of objectives defined by humans, is capable of making predictions, formulating recommendations, or making decisions that impact real or virtual environments. AI systems are designed to operate at various levels of autonomy.”
-
Has your country developed a national strategy for artificial intelligence? If so, has there been any progress in its implementation? Are there plans for updates or revisions?
On 28 December 2020, the Council of Ministers adopted the “Policy for the Development of Artificial Intelligence in Poland from 2020”, also known as the “AI Policy”. A 2024 update extends the action framework to 2030, and a public consultation held in 2025 confirmed that the document was under review and being adapted. The AI Policy is executed by the minister responsible for computerisation. It sets out short-, medium- and long-term goals aimed at developing Polish society, the Polish economy and Polish science in the field of artificial intelligence.
The “Policy for the Development of Artificial Intelligence in Poland from 2020” is a supporting document that complements others, including the Responsible Development Strategy and the European Union’s Coordinated Plan on Artificial Intelligence, as well as the work of international organisations.
The document considers not only the legal, technical, organisational and international dimensions of artificial intelligence use, but also the ethical dimension. It implements two national and five international strategic documents, providing a Polish perspective on EU programming documents. It focuses on the following six pillars:
- AI in the public sector
- AI in business
- Science and research
- Education and competencies
- International cooperation
- Legal and ethical frameworks
Poland has made moderate progress in implementing this strategy. Key developments include:
- Creation of AI research hubs (e.g. through collaboration between universities and industry).
- Increased funding for AI-related projects via the National Centre for Research and Development (NCBR).
- Launch of AI training initiatives for the public sector and SMEs.
- Integration of AI into healthcare, defence, and education systems.
However, implementation has been fragmented, with experts noting a lack of centralised coordination and large-scale public-private AI partnerships.
In the updated study, the AI Policy is based on four pillars:
- Human capital – developing highly skilled professionals and increasing the availability of AI talent.
- Innovation – providing support for scientific research and AI applications across a range of industries.
- Investment – providing strategic financial support for AI from both the public and private sectors.
- Implementation – developing the infrastructure and legal framework to support AI in Poland.
The study also considers key areas of development, such as economic competitiveness, social welfare, the digital industry, national security and the development of the Polish AI ecosystem. This includes the creation of platforms that facilitate cooperation between science, business and the public sector.
-
Has your country implemented rules or guidelines (including voluntary standards and ethical principles) on artificial intelligence? If so, please provide a brief overview of said rules or guidelines. If no rules on artificial intelligence are in force in your jurisdiction, please (i) provide a short overview of the existing laws that potentially could be applied to artificial intelligence and the use of artificial intelligence, (ii) briefly outline the main difficulties in interpreting such existing laws to suit the peculiarities of artificial intelligence, and (iii) summarize any draft laws, or legislative initiatives, on artificial intelligence.
Poland has implemented various soft-law instruments and guidelines relating to artificial intelligence (AI). AI is indirectly regulated through existing legal frameworks, and the country is preparing for the upcoming EU Artificial Intelligence Act.
On 28 December 2020, the “Policy for the Development of Artificial Intelligence in Poland from 2020”, also known as the “AI Policy”, was adopted. It is a document that specifies the actions Poland should take and the goals it should achieve in the short term (by 2023), the medium term (by 2027) and the long term (after 2027). These are aimed at developing Polish society, the Polish economy and Polish science in the field of artificial intelligence.
In December 2024, the Working Group on AI (GRAI) produced a document entitled “Policy for the Development of Artificial Intelligence in Poland 2025-2030”, outlining a plan to make Poland a global centre for trustworthy AI, where innovation fuels economic growth, competitiveness and social well-being.
The AI Policy is based on four pillars, as set out in the study:
- Human capital – developing highly skilled professionals and increasing the availability of AI talent.
- Innovation – providing support for scientific research and AI applications across a range of industries.
- Investment – providing strategic financial support for AI from both the public and private sectors.
- Implementation – developing the infrastructure and legal framework to support AI in Poland.
The terms of use of individual artificial intelligence systems may also apply. These set out the rules for using the system and define who is responsible for the generated content; however, they do not exhaustively regulate the underlying legal questions.
In Poland, claims relating to the violations of rights by artificial intelligence may be based on different national laws, depending on the specific case and type of violation. These laws include in particular:
- Act on the Protection of Personal Data: in the event of a breach of privacy or personal data protection involving systems based on artificial intelligence, it is advisable to refer to the provisions of this Act, which regulate the collection, processing and protection of personal data.
- Civil Code: the provisions of the Civil Code regarding civil liability may be applied in the event of damage caused by artificial intelligence. The provisions on tort or contractual liability may be referred to, particularly in the event of a violation of personal rights or liability for a dangerous product.
- Copyright and Related Rights Act: if artificial intelligence infringes copyright by generating copyrighted content, it is possible to refer to the provisions of this law to protect one’s creativity.
- Act on Combating Unfair Competition: its provisions are important for protecting trade secrets.
- Consumer protection regulations: in the event of a violation of these rights, the relationship between consumers and service providers may be referenced, particularly with regard to information obligations.
- Act on Competition and Consumer Protection: if artificial intelligence acts in a way that violates competition law or the collective interests of consumers, the provisions of this Act may be referenced.
- Database Protection Act: the importance of these regulations is likely to increase over time.
Currently, however, Poland is focusing on drafting legislation to implement the Artificial Intelligence Act, as well as selecting national authorities to enforce the new regulations effectively at a national level. It will be crucial to see how these regulations work in practice. Both the European Commission and the Member States will play an important role here. Implementing the Artificial Intelligence Act requires establishing or selecting (from among the existing ones) institutions to supervise and regulate the AI market. The Ministry of Digital Affairs points out that, at a national level, regulations regarding the so-called regulatory sandboxes in the field of artificial intelligence systems will also be necessary to facilitate the development and testing of innovations before they are placed on the market or put into use.
-
Which rules apply to defective artificial intelligence systems, i.e. artificial intelligence systems that do not provide the safety that the public at large is entitled to expect?
In Poland, defective artificial intelligence (AI) systems — meaning systems that do not provide the public with the expected level of safety — are currently regulated under general product and civil liability rules, as there is no AI-specific liability law in force yet. However, EU regulations will change this significantly by the year 2026.
Under the Act on the protection of certain consumer rights and on liability for damage caused by a dangerous product, amending the Civil Code, a defective AI system embedded in a product may be treated as a product defect. These rules implement EU Directive 85/374/EEC, thereby ensuring equal protection across the EU.
Previously, liability for defective products was based on general contract and tort provisions, as well as warranty and guarantee claims. While the amendments do not exclude these regimes, they do introduce strict liability based on risk.
Under the Civil Code, those who may be held liable for damage caused by a dangerous product include the manufacturer, quasi-manufacturer, importer or seller, who are liable jointly and severally. The injured party can choose whom to pursue.
National standards on cybersecurity (including the GDPR) and consumer protection also apply. The forthcoming AI Act will regulate AI applications, ensuring safety and defining liability for AI-related damage.
-
Please describe any civil and criminal liability rules that may apply in case of damages caused by artificial intelligence systems. Have there been any court decisions or legislative developments clarifying liability frameworks applied to artificial intelligence?
In the absence of AI-specific rules, compensation for damage caused by the operation of AI-based systems must currently be claimed under the general liability provisions of individual Member States.
The Polish Civil Code contains a comprehensive catalogue of liability-triggering events other than direct human actions. For example, Article 435 of the Civil Code concerns the liability of the operator of an enterprise set in motion by the forces of nature. These provisions could therefore form the basis of tort liability for events caused by artificial intelligence, since AI may cause events that infringe the interests and rights of other entities and, in turn, result in damage to those entities.
Due to the various activities of artificial intelligence, the damage may be material or non-material. It may be caused by a human using artificial intelligence, or by artificial intelligence acting independently. The biggest problem arises when autonomous AI actions cause damage. Nevertheless, the law provides certain principles that could be applied to AI, such as liability for vehicles and for enterprises set in motion by natural forces. Another concept that can be applied by analogy is liability for animals (Article 431 of the Civil Code).
In addition, depending on the type of third-party rights infringed, provisions of intellectual property law may also apply. For example, the Act on Copyright and Related Rights and the Industrial Property Law, as well as the Criminal Code, set out standards on criminal liability.
As far as EU legislation is concerned, the current regulations do not address the issue of liability for damage caused by artificial intelligence systems. However, this situation may be significantly affected by the European Commission’s proposal of 28 September 2022 for a new EU directive on liability for artificial intelligence: the Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (the AI Liability Directive, or AILD).
It is also worth noting that Member States may adopt or maintain national provisions that offer more favourable grounds for justifying a non-contractual civil claim for compensation for damage caused by an artificial intelligence system. However, such provisions must be compatible with EU law.
To date, no Polish court has issued a final precedent-setting decision that directly addresses AI-specific civil or criminal liability. However, cases involving automated decision-making (e.g., in HR or credit scoring) are appearing in courts under anti-discrimination or consumer protection legislation. The Polish Office of Competition and Consumer Protection (UOKiK) is actively monitoring AI-related practices, such as algorithmic pricing and dark patterns.
-
Who is responsible for any harm caused by an AI system? And how is the liability allocated between the developer, the deployer, the user and the victim?
Any damage caused by AI is the responsibility of a natural or legal person. The operator and the manufacturer are primarily responsible, as, to a certain extent, they have the ability to control the operation of artificial intelligence and the associated risks.
In Poland, liability for damage caused by an artificial intelligence system can be shared among the various parties involved in its creation, implementation and use, as well as the victim of the damage. The general rules for the division of responsibilities are as follows:
Developer: the developer of the AI system may be liable for any damage resulting from a faulty design, programming errors, or negligence during the system development process. They are obliged to ensure that the system is safe and complies with applicable regulations.
Implementer: the entity that implements the AI system – i.e. the company or organisation that uses it – may be liable for any damage caused by incorrect implementation or failure to ensure proper supervision of its operation.
User: the user of the AI system may be liable for any damage caused, if they improperly use the system or do not follow the guidelines for its use.
Victim: the person affected by the AI system may be entitled to compensation for any damage suffered. They may pursue compensation claims against the developer, implementer, user, or any other party responsible for the damage.
In Poland, the liability for damage caused by the AI system may be determined on the basis of applicable civil and consumer law, as well as other relevant regulations regarding liability for damage.
The aim of the AI Act is to regulate artificial intelligence applications in order to ensure security and protect the rights of individuals and entities. Rather than laying down rules on compensation for damage, the AI Act imposes obligations on providers and deployers of AI systems, the breach of which may result in significant administrative fines; questions of civil liability for material and non-material damage remain governed by national law.
In terms of preventing damage caused by AI systems, the AI Act obliges operators to ensure that their systems are safe and compliant with the law, and that they do not cause damage to users or other entities. Operators must also ensure transparency and adequate documentation of processes relating to artificial intelligence systems.
In accordance with the provisions of the AI Act, the division of responsibilities between developers, implementers, users and victims of damage may depend on the specific circumstances and applications of artificial intelligence systems.
-
What burden of proof will have to be satisfied for the victim of the damage to obtain compensation?
In Poland, the legal basis for claiming compensation for damage caused by AI systems is general civil liability under the Polish Civil Code. However, applying this to AI is complex, as the burden of proof remains with the victim, who must show a culpable act or omission, actual damage and a causal link. The autonomous and intricate nature of AI decision-making can make it difficult to prove that the system’s design, operation or input data actually caused the damage. This often poses a significant obstacle to obtaining compensation.
However, Polish law does permit victims to apply to a civil court to preserve evidence or obtain information from the entities responsible for developing or operating the AI system. This can be crucial for proving the extent of a breach.
-
Is the use of artificial intelligence insured and/or insurable in your jurisdiction?
There are specialised insurance policies for the technology industry. Such policies may include protection against risks related to programming errors, data breaches, or other technical issues associated with AI. Currently, there is probably no insurance product on the market designed specifically around the risks of designing, implementing or using AI. However, existing insurance frameworks can cover AI-related risks under certain conditions. Key areas include:
1. General Liability Insurance (OC, i.e. civil liability insurance)
It covers damage caused by AI-driven systems or robots, if they are operated by an insured party.
However, issues of attribution of fault can still complicate claims — e.g., if the damage results from an AI system’s autonomous decision-making.
2. Professional Liability Insurance
This is particularly relevant for companies that develop or deploy AI software, such as medical diagnostics tools or automated trading platforms.
It covers the risks associated with algorithmic errors, misuse of data, or faulty deployment.
3. Cyber Insurance
A growing area, as AI often processes large volumes of sensitive data.
It covers data breaches, AI-driven malware incidents, as well as losses resulting from cyberattacks that exploit vulnerabilities in AI systems.
4. Product Liability Insurance
If AI is embedded in a product (e.g., an autonomous vehicle or a smart home device), insurers may offer product liability protection.
Complex legal questions arise regarding who is liable — the manufacturer, developer, or the user.
-
Can artificial intelligence be named an inventor in a patent application filed in your jurisdiction?
In Poland, only a natural person can be named as the inventor in a patent application. Although this is not stated explicitly in the Industrial Property Law, it follows from general legal principles that AI, which lacks legal personality, cannot be an inventor. To date, the Polish Patent Office and national courts have not ruled on any cases involving AI as an inventor.
-
Do images generated by and/or with artificial intelligence benefit from copyright protection in your jurisdiction? If so, who is the authorship attributed to?
Under the current Polish Copyright and Related Rights Act, copyright protection applies to any manifestation of creative activity of an individual nature, established in any form, regardless of its value, purpose and manner of expression. The protection granted by the Act is given to the creator. The author is presumed to be the person whose name appears in this capacity on copies of the work, or whose authorship has been made public in any other way in connection with the dissemination of the work.
In one of the judgements of the Court of Justice of the European Union, it was pointed out that “in order for an object to be considered original, it is necessary and at the same time sufficient that it reflects the author’s personality, manifested in his free and creative choices (…). On the other hand, where the performance of an object is conditioned by technical considerations, rules or constraints that leave no room for creative freedom, that object cannot be seen as displaying the originality necessary to be considered a work” (Judgement in Case C-683/17 of 12 September 2019).
Being deprived of its own personality, which enables free and creative choices, AI is incapable of creating works within the meaning of copyright law. Therefore, images generated by AI do not benefit from protection under these regulations. A separate issue is the use of a generated image as the basis for further human processing: if an appropriate level of creative modification is applied, the person making those modifications becomes the author of a new work.
One view favours treating AI as a tool in the hands of the creator: on this approach, the introduction of specific, precise commands defining the parameters of the output (i.e. “prompts”) is enough to recognize the person using AI as the creator.
Polish courts have not yet ruled on this issue, so these views have not been confirmed in national case law.
-
What are the main issues to consider when using artificial intelligence systems in the workplace? Have any new regulations been introduced regarding AI-driven hiring, performance assessment, or employee monitoring?
When using AI to build a company’s competitive advantage, careful implementation is essential to minimise IT and legal risks. AI tools support many areas, such as marketing, HR, R&D, product development and customer service, as well as increasingly strategic decisions, including the financial ones.
Implementation should consider goals, compliance, cybersecurity, resources, and user needs. Clear internal guidelines and monitoring for bias and data security are essential, as is transparency and integration with company policies. One risk is the “black box” effect.
AI is only as good as its data; poor-quality or biased data can lead to wrong decisions. Therefore, automation should include human oversight, as even the best AI cannot replace business experience or an understanding of customer needs.
Despite AI’s efficiency, companies must control outputs and prevent “hallucinations”. Quality checks and supervision can help reduce this risk.
In order to maintain trust, companies must respect users’ privacy, use data lawfully and act ethically. For example, they should inform users whenever they interact with a bot.
-
What privacy issues arise from the development (including training) and use of artificial intelligence?
The use of AI is associated with challenges in the field of privacy protection, both at the training stage and during actual use.
The AI Act, in the final stretch before its entry into force, as well as previously adopted regulations such as the Digital Services Act and the Digital Markets Act, refer to the existing provisions of the General Data Protection Regulation in order to provide a more comprehensive framework for privacy issues.
AI collects, stores and processes data, which in itself creates privacy risks. It is often questionable whether the data is collected and used with the knowledge and consent of the individuals concerned, or on another legally justified basis. The use of AI for continuous monitoring, e.g. involving biometric data or facial recognition, can cause serious privacy violations. Enforcing standards will therefore be important in this respect.
AI can perpetuate and replicate existing biases, raising the risk of discrimination. Combined with a lack of transparency and explainability, this means that decisions taken not only duplicate prejudices, but are also difficult to justify.
AI is commonly used to profile users and create recommendation systems. However, this approach raises significant legal and ethical concerns, particularly with regard to vulnerable groups, such as minors.
Further challenges include uncontrolled data disclosure and cybersecurity risks.
-
How is data scraping regulated in your jurisdiction from an IP, privacy and competition point of view? Are there any recent precedents addressing the legality of data scraping for AI training?
Web scraping, or the extraction of data from websites, is not specifically regulated in Poland or the EU, and is generally legal unless it breaches other laws, such as those relating to data protection or copyright. When scraping personal data, GDPR obligations apply, including the requirement to have a lawful basis, to fulfil information duties and to respect data subjects’ rights. Using scraped data that qualifies as a copyrighted work without consent may infringe copyright, unless a statutory exception, such as permitted use (dozwolony użytek), applies.
Recent EU copyright rules on text and data mining (TDM) have introduced exceptions and opt-outs that must be taken into account. Database protection laws and unfair competition rules may also restrict data scraping, particularly if it violates website terms and conditions, puts a strain on servers, or misuses trade secrets. There have been no significant national court rulings on web scraping for AI training yet.
-
To what extent is the prohibition of data scraping in the terms of use of a website enforceable?
Third parties must comply with a website’s terms and conditions if these explicitly prohibit web scraping. In Poland, collecting personal data without user consent may violate data protection laws and unfair practices regulations. The website’s terms and conditions, including a ban on web scraping, form part of the agreement between the user and the website and support enforcement. Violating these terms may result in financial claims, including damages.
-
Have the privacy authorities of your jurisdiction issued guidelines on artificial intelligence?
Although there are no comprehensive official guidelines on AI from the Polish privacy authority, there are various recommendations on good practices. The Personal Data Protection Office (UODO) takes initiatives on AI and data protection through its Department of New Technologies and an AI working group. The UODO supports legislative work, e.g. by commenting on draft legislation such as the AI Act and Poland’s Digitalisation Strategy 2035. It also runs webinars and publishes materials, including its official Bulletin, to address AI-related data use.
-
Have the privacy authorities of your jurisdiction discussed cases involving artificial intelligence? If yes, what are the key takeaways from these cases?
The Personal Data Protection Office (UODO) is dealing with a complaint regarding ChatGPT. OpenAI has been accused, among other things, of processing data unlawfully, unreliably and in a non-transparent manner. This constitutes a potential violation of numerous provisions on the protection of personal data, including a failure to comply with the information obligation.
The UODO will examine the proper exercise of the Complainant’s rights, and take steps to clarify doubts as to whether OpenAI’s personal data processing model complies with applicable regulations on the protection of personal data.
-
Have your national courts already managed cases involving artificial intelligence? If yes, what are the key takeaways from these cases?
To the best of our knowledge, the national courts in Poland have not issued any notable rulings on AI yet.
-
Does your country have a regulator or authority responsible for supervising the use and development of artificial intelligence?
Poland does not yet have a single AI regulator; current responsibilities are spread across existing authorities, such as those responsible for data and consumer protection. Once the EU AI Act is transposed, a central body will likely be appointed; draft plans suggest that this may be a new institution. Currently, the Minister of Digital Affairs oversees the AI Policy and reports yearly to the Council of Ministers. The Polish AI Act is expected by September 2025. Public institutions such as NASK and OPI PIB also run initiatives to develop AI expertise and monitor progress.
-
How would you define the use of artificial intelligence by businesses in your jurisdiction? Is it widespread or limited? Which sectors have seen the most rapid adoption of AI technologies?
A study conducted by KPMG in 2024 found that the use of AI by businesses in Poland is growing slowly but steadily: 28% of respondents declared that they already use AI in their business, while a further 30% said that they are planning to implement it within the next year.
With percentages of 47% and 40% respectively, the automotive and financial sectors lead the way in the implementation of AI technologies.
-
Is artificial intelligence being used in the legal sector, by lawyers and/or in-house counsels? If so, how? Are AI-driven legal tools widely adopted, and what are the main regulatory concerns surrounding them?
Although adoption remains uneven and mostly at an early stage, AI is increasingly being used in Poland’s legal sector. AI tools mainly assist with repetitive tasks, such as document review, legal research, and contract analysis, thereby boosting efficiency without replacing core legal reasoning.
AI enables faster searches and basic document analysis, saving time on manual checks. It can also accelerate the drafting and review of contracts and letters, allowing lawyers to focus on strategy and case merits rather than routine tasks.
As lawyers remain fully responsible for the content, all AI-generated outputs must be carefully verified. If used without checking, AI-generated fictitious decisions or citations may expose lawyers to liability.
Concerns include data confidentiality, particularly when sensitive client data is uploaded to cloud-based tools. There are also risks related to bias, transparency, and reliability of AI systems.
-
What are the 5 key challenges and the 5 key opportunities raised by artificial intelligence for lawyers in your jurisdiction?
Challenges
- Accuracy and Completeness of AI-Generated Outputs
Artificial intelligence technologies operate based on the data with which they have been trained. Consequently, the accuracy and reliability of the information generated cannot be guaranteed, particularly given the risks of bias, misinformation, or so-called “hallucinations.” It is therefore advisable to verify AI-generated content both substantively and procedurally. Furthermore, due to the autonomous nature of AI content generation, it is difficult to determine if such outputs are comprehensive or adequately address the legal issue at hand.
- Data Protection and Professional Secrecy
Legal professionals are bound by strict confidentiality obligations arising from the public trust nature of their profession. Therefore, using AI tools to formulate appropriate queries or share sensitive information presents a risk of breaching professional secrecy and data protection regulations, particularly when personally identifiable or privileged information is involved.
- Job Displacement and Reskilling Requirements
Artificial intelligence has the potential to replace many of the cognitive and routine legal tasks currently performed by humans. Over time, this may result in diminished demand for certain legal roles, necessitating a significant shift in skillsets and prompting legal professionals to acquire new competencies and adapt to technology-driven workflows.
- Liability for AI-Generated Content
Lawyers remain fully liable for the legal advice, actions, and documents that they submit on behalf of their clients to courts or authorities. However, a fundamental concern arises regarding the attribution of liability for the decisions based on AI-generated content. This is particularly pertinent when it comes to delineating the boundaries of responsibility between legal practitioners, AI developers, and the AI systems themselves.
- Compliance with Legal Standards and Regulatory Frameworks
The design, deployment, and operational use of AI tools in the legal sector must adhere to a complex array of national and EU-level laws and ethical guidelines. The need to comply with these obligations may significantly increase the cost of legal oversight or force firms to take calculated legal risks in relation to regulatory uncertainties or non-conformity with evolving standards.
Opportunities
- Enhanced Productivity
AI systems assist in automating repetitive legal tasks, enabling legal practitioners to prioritise high-value strategic decision-making and client representation.
- Expanded Research Capabilities
AI expedites the search for legal sources and case law, enabling more comprehensive and timely legal research and analysis.
- Advanced Data Analytics
AI technologies can rapidly process and summarise large volumes of data, which in turn generates actionable insights that are crucial to case strategy.
- Document Automation
AI-powered tools facilitate the efficient generation of standardised legal documents, such as contracts and notices, thereby minimising human error and saving valuable time.
- Improved Client Engagement
Chatbots and other AI-driven communication tools enable faster responses to initial client enquiries.
-
Where do you see the most significant legal developments in artificial intelligence in your jurisdiction in the next 12 months? Are there any ongoing initiatives that could reshape AI governance?
The most significant legal developments are expected to focus on the implementation of the EU Artificial Intelligence Act (AI Act). The AI Act will directly shape national regulatory approaches, forcing Polish authorities and businesses to adapt to a new, harmonised legal framework for AI systems.
As part of the implementation of the EU AI Act, Member States are required to designate or establish three kinds of authorities.
Market Surveillance Authority – this authority builds upon the pre-existing and well-established concept of market surveillance authorities within EU law. Its task is to ensure that only products which comply with EU law are made available on the Union market.
Notifying Authority – the national authority responsible for establishing and carrying out the procedures for assessing, designating and notifying conformity assessment bodies, as well as for monitoring them (Art. 3(19) and Art. 28(1)).
National Public Authority – responsible for ensuring the protection of fundamental rights in relation to the high-risk AI systems referred to in Annex III.
By 2 August 2025, Member States must establish or designate competent authorities (Art. 113(b)). A draft implementing act currently under consideration would establish a new body, the Committee on the Development and Security of AI, as the market surveillance authority and single point of contact, and would designate the Minister of Digitisation as the notifying authority.
The objectives to be achieved by 2027, as set out in the “Policy for the Development of Artificial Intelligence in Poland from 2020”, include the following activities:
1. Analysis and elimination of legislative barriers and administrative burdens for new enterprises dealing with artificial intelligence by:
- creating conditions to increase the flexibility of the labour market through appropriate legislative changes and consultations with employers and trade unions in this respect;
- preparing new types of licenses for algorithms and ICT solutions enabling the public sector to use AI technologies produced from public funds openly;
- updating the law to ensure access to data, including sensitive data such as medical data, and to establish the conditions for the functioning of trusted spaces for sharing this data, while taking privacy and personal data protection into account;
- preparing and updating the legal system in terms of the possibilities of practical implementations of AI, which concern algorithms and data processing in the cloud using edge computers, the use of Internet of Things (IoT) solutions in the context of industry, and public data collection;
- ensuring the security of citizens’ data and the sharing of “industrial” data;
- preparing and updating the law to enable the practical deployment of AI-powered autonomous drones, for example to inspect crops in agriculture and the infrastructure of protected facilities;
- consulting with the academic, social and business communities to develop and regularly update promotional strategies, changes in legislation and activities aimed at eliminating legislative barriers and administrative burdens in a rapidly changing environment.
2. Taking action in specific areas related to the development of artificial intelligence, particularly with regard to efficient and quick access to data and the use thereof for all participants in economic life, regardless of the size of the institution by:
- promoting solutions related to data openness, including through the development of the Digital Administration Sandbox, open data portals, digital repositories created in the cultural sector, and commercial and academic solutions based on open data. This also includes piloting sector-specific trusted data spaces;
- enabling access to high-speed infrastructure solutions, including computing centres with GPUs and broadband connections (also those based on 5G or newer networks), on which calculations can be performed.
3. Support for programmes preparing Polish society for the changes brought about by the development of the algorithmic economy by:
- creating new knowledge bases and aggregation of existing educational materials within one contact point for people retraining in the field of modern technologies;
- continuing to develop the range of courses, fields of study and interdisciplinary scientific and research programmes (including online and hybrid ones, i.e. those that combine full-time education with online learning) organised in cooperation with representatives of the business community, and combined with the elements of career counselling and networking opportunities.
4. Preventing unemployment and flexibly creating new jobs in the labour market for disadvantaged groups by:
- providing information, educational and retraining programmes aimed at counteracting unemployment;
- training and retraining courses for representatives of the most endangered professions, including encouragement to acquire qualifications and develop skills in the field of modern technologies, for example through the development of market qualifications included in the Integrated Qualifications System.
5. Defining permanent programmes to support artistic and creative activities in the field of AI by:
- co-organising continuous exhibitions of works created and co-created with the help of AI;
- regulating the issue of intellectual property of works created using AI;
- organising international competitions for works created with the help of AI and supporting Polish artists who win competitions organised abroad.
Poland: Artificial Intelligence
This country-specific Q&A provides an overview of Artificial Intelligence laws and regulations applicable in Poland.