Türkiye: Artificial Intelligence
This country-specific Q&A provides an overview of Artificial Intelligence laws and regulations applicable in Türkiye.
-
What are your country’s legal definitions of “artificial intelligence”?
As of today, Türkiye does not have a specific, standalone law regulating artificial intelligence (AI), and accordingly, there is no statutory legal definition of AI under Turkish law. References to AI are mostly found in policy-level documents rather than in binding legislative instruments.
The most prominent of these is the National Artificial Intelligence Strategy (2021–2025), jointly issued by the Ministry of Industry and Technology and the Digital Transformation Office of the Presidency of the Republic of Türkiye. This strategy provides a conceptual definition of AI as “the ability of a computer or computer-controlled robot to perform various activities in a manner similar to that of intelligent creatures.” While not legally binding, the strategy reflects the national policy orientation and terminological preferences regarding AI.
There is also a draft Artificial Intelligence Bill, submitted to the Turkish Grand National Assembly on 24 June 2024. However, this draft was introduced by an opposition party member, and its chances of enactment are currently considered low.
Although no binding definition currently exists, it is widely expected that any future legislative efforts in Türkiye concerning artificial intelligence will adopt a definition similar to that found in the EU Artificial Intelligence Act, reflecting ongoing efforts to ensure compatibility with international regulatory trends.
-
Has your country developed a national strategy for artificial intelligence? If so, has there been any progress in its implementation? Are there plans for updates or revisions?
Yes, Türkiye has developed a comprehensive national strategy for artificial intelligence (AI), titled the National Artificial Intelligence Strategy (2021–2025) (NAIS), which was jointly prepared by the Digital Transformation Office of the Presidency of the Republic of Türkiye and the Ministry of Industry and Technology.
The implementation of the NAIS is overseen by a Steering Committee chaired by the Vice President of Türkiye, supported by an AI Ecosystem Advisory Group and several technical working groups composed of relevant stakeholders. In light of recent advancements in AI and to ensure alignment with Türkiye’s 12th Development Plan, the NAIS was further operationalized through the release of the updated 2024–2025 Action Plan.
The updated Action Plan sets out key measures to strengthen AI governance, including the development of legal evaluation tools, alignment with international regulatory standards, and enhanced auditability and accountability mechanisms. It also envisions tools for monitoring trustworthy AI, audit guides on algorithmic accountability, guidance on intellectual property rights for AI-generated content, standardization for AI-related patents, and the potential introduction of a “Trusted AI” certification system.
The monitoring and evaluation of this Action Plan are carried out on a quarterly basis, incorporating feedback from responsible institutions, such as the Digital Transformation Office of the Presidency of the Republic of Türkiye, the Ministry of Transport and Infrastructure, and the Scientific and Technological Research Council of Türkiye (TÜBİTAK), to ensure that implementation remains aligned with emerging needs and technological advancements.
-
Has your country implemented rules or guidelines (including voluntary standards and ethical principles) on artificial intelligence? If so, please provide a brief overview of said rules or guidelines. If no rules on artificial intelligence are in force in your jurisdiction, please (i) provide a short overview of the existing laws that potentially could be applied to artificial intelligence and the use of artificial intelligence, (ii) briefly outline the main difficulties in interpreting such existing laws to suit the peculiarities of artificial intelligence, and (iii) summarize any draft laws, or legislative initiatives, on artificial intelligence.
Türkiye has not yet enacted a binding, comprehensive legal framework specifically regulating artificial intelligence (AI). In the absence of AI-specific legislation, several existing legal instruments may apply by analogy to certain aspects of AI technologies. These include the Law on the Protection of Personal Data No. 6698, particularly in areas such as automated decision-making and profiling; the Law on Consumer Protection No. 6502, which may apply to AI systems interacting with consumers; and the Law on Product Safety and Technical Regulations No. 7223, which can be relevant for AI-integrated products in the context of conformity and liability requirements.
In addition, general provisions under the Turkish Code of Obligations No. 6098 and the Turkish Civil Code No. 4721 may become relevant in cases involving tort liability, contractual obligations, or personal rights, depending on the use case of the AI system. However, these laws were not designed with AI in mind and may fall short in addressing the unique features of AI, such as autonomy and adaptiveness.
Applying these general laws to AI raises considerable interpretive difficulties. The evolving and probabilistic nature of AI systems complicates assessments related to causation, foreseeability, and fault—key elements in both private and public law contexts. For example, it remains unclear how legal responsibility should be allocated when a self-learning AI system changes its behavior post-deployment in a way that leads to harm or non-compliance.
As for legislative developments, a draft Artificial Intelligence Bill was submitted to the Turkish Grand National Assembly on 24 June 2024. The bill was proposed by a member of an opposition party, and its likelihood of being enacted in its current form is considered low.
On 5 October 2024, the Grand National Assembly of Türkiye established an “Artificial Intelligence Research Commission” to help develop the legal infrastructure for AI, enhance its benefits, and address associated risks.
-
Which rules apply to defective artificial intelligence systems, i.e. artificial intelligence systems that do not provide the safety that the public at large is entitled to expect?
Since Türkiye has not yet enacted a dedicated legal framework governing artificial intelligence, issues arising from defective AI systems are assessed through existing legal instruments on a case-by-case basis. The applicable rules will vary depending on how the AI system is used, the parties involved, and the type of harm caused.
The Law on Product Safety and Technical Regulations No. 7223 serves as the main framework for product safety in Türkiye and is designed to ensure that products placed on the market do not pose risks to public health, safety, or property. A defective AI system—such as one embedded in a smart device—could be classified as a “product” under this law, and manufacturers or importers may be held liable if the product, when used under normal conditions and in accordance with its instructions, poses more than minimal, foreseeable risks specific to its intended use and fails to provide an adequate level of protection for human health and safety. Additionally, if an AI system is embedded in a regulated product—such as a medical device—other sector-specific safety regulations would apply, including those that contain tailored liability provisions.
In addition to the primary product safety framework, liability for defective AI systems in Türkiye may arise under general legal rules, depending on how the system is used and the type of harm caused. The Law on Consumer Protection No. 6502 applies when AI systems are offered as goods or services to consumers. If an AI system fails to function as promised, consumers may request repair, replacement, a refund, or a price reduction, and may also claim compensation for damages caused by the defect. Similarly, the Turkish Code of Obligations No. 6098 provides liability rules in both contractual and tort contexts, holding providers responsible for improper performance or harm caused by fault-based conduct. In addition, the Law on the Protection of Personal Data No. 6698 governs the processing of personal data, which is often central to AI operations. If a defective AI system results in unauthorized access, disclosure, or a breach of personal data, data controllers and processors may be held accountable, and affected individuals may seek legal remedies under the law.
-
Please describe any civil and criminal liability rules that may apply in case of damages caused by artificial intelligence systems. Have there been any court decisions or legislative developments clarifying liability frameworks applied to artificial intelligence?
Under current Turkish law, there are no specific civil or criminal liability rules tailored exclusively to artificial intelligence (AI) systems. However, in the absence of a dedicated AI liability framework, existing general legal provisions—primarily found in the Turkish Code of Obligations No. 6098, the Turkish Penal Code No. 5237, and sector-specific laws—may apply on a case-by-case basis, depending on the circumstances of the harm caused.
Civil liability may arise under contract law when an AI system fails to perform as agreed or under tort law when harm occurs independently of a contractual relationship. Victims may seek compensation if they can prove fault, damage, and a causal link. However, assigning liability is particularly challenging for autonomous AI systems due to their evolving and unpredictable nature. Other liability regimes may also apply, such as employer liability when harm arises from an employee’s use of an AI system in the course of their employment.
Criminal liability is limited to natural or legal persons, as AI systems themselves are not recognized as legal subjects under Turkish law. Nonetheless, developers, operators, or users may be held criminally liable if harm arises from negligent or intentional acts involving the design, deployment, or oversight of AI technologies.
As of July 2025, there are no binding court decisions or enacted legislation specifically addressing civil or criminal liability arising from AI-related damages. While some court rulings have referenced AI in other contexts, such as intellectual property, the legal framework for liability remains under development. For legislative efforts, please refer to Q3.
-
Who is responsible for any harm caused by an AI system? And how is the liability allocated between the developer, the deployer, the user and the victim?
Türkiye does not yet have a dedicated legal framework assigning liability for harm caused by AI systems. Instead, liability is allocated under general principles including tort, product liability, and strict liability frameworks, depending on the nature of the harm and the role of the parties involved (see Q4 and Q5 for a general overview).
Under product liability rules (the Law on Product Safety and Technical Regulations No. 7223), if a product incorporating an AI system causes harm due to a defect, the manufacturer or importer is strictly liable, regardless of fault. “Manufacturer” includes those who produce or market products under their own brand, and importers are treated as manufacturers and held equally liable for imported products.
Contractual claims related to harm caused by an AI system are limited to the parties bound by the contract. Under general tort law, anyone who unlawfully and culpably causes harm must compensate the victim, who bears the burden of proving fault and damage. Where multiple parties contribute to the harm, liability is generally apportioned according to each party’s degree of fault.
Under Turkish law, strict liability may apply in cases such as where damage arises from inherently dangerous activities or from harm caused by employees during the course of their duties. According to the Turkish Code of Obligations No. 6098, an employer can be held liable for damage caused by their employee while performing assigned tasks, unless the employer proves that they exercised due diligence in selection, supervision, and instruction. Additionally, strict liability may arise under the ‘dangerous enterprise’ regime, which holds operators jointly liable for damages resulting from activities deemed inherently hazardous—even if all due care was taken.
If an AI system is deployed as part of such an operation—especially where the technology involves high levels of risk or the use of powerful tools or autonomous functions—it may fall within the scope of these strict liability rules. For example, the use of AI in autonomous vehicles, industrial automation, or sensitive decision-making may be classified as a dangerous activity if it poses frequent or severe risks despite best practices. In such cases, the operator or business owner may be held liable for damages caused by the AI system, regardless of fault, and may be required to compensate victims accordingly.
Although Turkish law does not contain AI-specific liability rules, existing legal frameworks—such as product liability, general tort law, and strict liability—may be applied depending on the nature of the harm and the role of the parties involved. These provisions do not explicitly address AI but may be interpreted to cover AI-related risks, particularly where the system is integrated into dangerous activities, used negligently, or caused harm due to a product defect.
-
What burden of proof will have to be satisfied for the victim of the damage to obtain compensation?
Under Turkish law, the burden of proof lies with the victim, and the requirements vary based on the type of liability.
In tort claims, the victim must prove unlawful conduct, fault, damage, and a causal link. In contractual claims, the victim must show that the AI system failed to perform as agreed, while the provider may avoid liability by proving the absence of fault.
In strict liability cases—such as product defects or autonomous vehicle accidents—fault does not need to be proven. The victim only needs to demonstrate that damage occurred and that there is a causal link to the AI system. However, proving causation in AI-related harm can be complex, particularly with autonomous or opaque systems.
-
Is the use of artificial intelligence insured and/or insurable in your jurisdiction?
In Türkiye, the use of artificial intelligence (AI) is not governed by a dedicated insurance framework, and there are currently no AI-specific insurance products mandated or widely available in the market. However, AI-related risks may be insurable under existing general insurance categories, depending on the specific context and application of the technology. For example, certain incidents involving AI—such as data breaches, algorithmic errors, system malfunctions, or losses arising from automated processes—may be covered under traditional policies like professional liability, cyber risk, product liability, or general business insurance.
-
Can artificial intelligence be named an inventor in a patent application filed in your jurisdiction?
Under Turkish patent law, as set out in the Law on Industrial Property No. 6769, inventorship is limited to natural persons. The law provides that the right to a patent belongs to “the inventor or their legal successors,” which presumes a human origin. In practice, the Turkish Patent and Trademark Office (TURKPATENT) requires that all inventors listed in a patent application be real individuals and does not recognize artificial intelligence systems as inventors.
-
Do images generated by and/or with artificial intelligence benefit from copyright protection in your jurisdiction? If so, who is the authorship attributed to?
Under Turkish law, images generated by or with the assistance of artificial intelligence (AI) may benefit from copyright protection only if they meet the conditions set out in the Law on Intellectual and Artistic Works No. 5846.
The law defines a “work” as an original intellectual or artistic product that reflects the individuality and creativity of its author and falls within specific categories such as literary, musical, or visual works. The Court of Cassation has emphasized that to qualify as a protected work, a creation must (i) be expressed in a concrete and perceptible form, and (ii) reflect the personal creative contribution of its author.
In this context, authorship is limited to natural persons. The Court has consistently held that only real individuals can be authors under the law, excluding legal entities, computer software, or AI systems from holding that status.
If a person makes a genuine creative contribution—for example, by designing prompts, curating or selecting outputs, or making expressive decisions during the process—then the resulting image may be eligible for copyright protection, depending on whether the human input demonstrates sufficient originality and personal character.
So far, there is no court decision in Türkiye that directly and solely addresses the copyright status of AI-assisted works, and any assessment would need to rely on existing general principles of authorship and originality.
-
What are the main issues to consider when using artificial intelligence systems in the workplace? Have any new regulations been introduced regarding AI-driven hiring, performance assessment, or employee monitoring?
Under Turkish law, the use of artificial intelligence (AI) systems in the workplace—especially in areas like hiring, performance evaluation, and employee monitoring—raises significant legal and ethical concerns. Key issues include potential violations of employee privacy, discriminatory outcomes, lack of transparency, and unlawful data processing. As of July 2025, there are no AI-specific regulations governing employment practices; instead, such use is primarily regulated under existing labor and data protection laws.
AI tools used in the workplace must not lead to discrimination or unfair treatment. This is especially important under the Labor Law No. 4857, which prohibits discrimination based on language, race, color, gender, disability, political opinion, philosophical belief, religion, or similar grounds. Employers are, therefore, expected to take proactive steps to ensure that AI systems promote equal treatment and comply with anti-discrimination obligations.
AI systems used in the workplace must comply with the Law on the Protection of Personal Data No. 6698 by ensuring that data processing is lawful, transparent, and respectful of employee privacy. Employers are required to obtain valid consent or rely on another legal basis, avoid excessive data collection, and clearly inform employees about how their personal data will be used. AI-based employee monitoring tools—such as surveillance systems, keystroke trackers, or behavior analysis software—must be proportionate, necessary, and transparent. Excessive or covert monitoring can violate not only data protection laws but also constitutional protections of privacy and personal liberty. The Turkish Constitutional Court has consistently affirmed employees’ rights to privacy within the workplace. In addition, AI-driven decision-making, particularly when applied without meaningful human oversight, may raise legal concerns under both data protection and labor laws.
-
What privacy issues arise from the development (including training) and use of artificial intelligence?
Under Turkish law, the development and use of artificial intelligence (AI)—particularly in training models and processing personal data—raises several important privacy concerns governed primarily by the Law on the Protection of Personal Data No. 6698 (LPPD).
AI systems often require large datasets for training, which may include sensitive or personally identifiable information. Under LPPD, any collection, use, or processing of such data must have a legal basis, such as explicit consent or another lawful ground defined by the law. Using personal data—especially special categories like health or biometric data—for AI training without meeting these conditions can result in unlawful processing.
Another concern is purpose limitation and data minimization. Personal data must be collected for specific, explicit, and legitimate purposes and not processed in ways incompatible with those purposes. AI models trained on data later used for unrelated functions—such as profiling or targeted decision-making—may breach this principle.
Automated decision-making also poses privacy risks, especially if decisions significantly affect individuals (e.g., hiring, credit scoring, or surveillance). Although LPPD does not yet have a specific provision banning such practices, it obliges data controllers to ensure fair processing, transparency, and the right to object, which becomes challenging when opaque AI systems are involved.
Another key issue involves cross-border data transfers, as many AI systems are developed, trained, or hosted using international infrastructure or third-party services. Under LPPD, such transfers are strictly regulated and allowed only if appropriate safeguards are in place—for example, by signing and submitting standard contracts published by the Turkish Data Protection Authority. Without these safeguards, international transfers may be deemed unlawful.
Moreover, data security is a key issue. Developers and deployers must take adequate technical and administrative measures to protect data against breaches, leaks, or unauthorized access during both training and deployment of AI systems.
-
How is data scraping regulated in your jurisdiction from an IP, privacy and competition point of view? Are there any recent precedents addressing the legality of data scraping for AI training?
Data scraping activities are not directly governed by a dedicated law in Türkiye; however, such practices are assessed under several existing legal frameworks, including intellectual property and data protection, each of which may entail significant risks depending on the nature and scope of the scraping involved. Although no specific judicial precedents currently address the legality of scraping for AI training purposes, Turkish law provides a basis for evaluating such activities through applicable general rules.
From an intellectual property perspective, scraping may infringe on rights protected under the Law on Intellectual and Artistic Works No. 5846. Databases that reflect creative effort may qualify as copyrighted works. Unauthorized extraction of substantial parts of such databases can amount to infringement, with rights holders entitled to seek injunctions and damages. Turkish courts have also affirmed that certain websites and layouts may be protected as databases or graphic works if they display intellectual effort (Turkish Court of Cassation, 11th Civil Chamber, Case No. 2016/6829, Decision No. 2018/768, dated 5 February 2018).
Regarding privacy, if scraping involves personal data, it qualifies as “processing” under the Law on the Protection of Personal Data No. 6698 (LPPD), requiring a valid legal basis such as explicit consent or legitimate interest. The LPPD’s “public data” exception is narrowly construed, and scraping publicly accessible data for unrelated uses like profiling or marketing could result in administrative penalties or even criminal sanctions under the Turkish Penal Code for unlawful collection or disclosure.
From a competition law angle, systematic scraping of content or commercial data may constitute unfair competition under Article 55 of the Turkish Commercial Code No. 6102. In particular, the unauthorized use of another company’s commercial data to gain economic benefit, damage its reputation, or attract its customers may result in legal consequences.
-
To what extent is the prohibition of data scraping in the terms of use of a website enforceable?
Under Turkish law, data scraping prohibitions in a website’s terms of use can be enforceable, particularly when users have explicitly accepted these terms before accessing the site. Such clauses are generally binding under the Turkish Code of Obligations No. 6098, provided they are clearly communicated and not contrary to principles of good faith or mandatory legal rules. If users provide affirmative consent—such as through a clickwrap agreement—violating a no-scraping provision may constitute a breach of contract, allowing the website owner to pursue remedies like access restrictions, account termination, or damages.
Even in the absence of explicit acceptance, unauthorized and systematic scraping may give rise to liability under other legal regimes. Many Turkish websites, including e-commerce and classified ad platforms, expressly prohibit automated tools (e.g., bots or scrapers) in their terms of service and reserve the right to take legal action against violations. While such terms may not be contractually binding on all users, their presence helps demonstrate that the website owner has not consented to scraping, thereby supporting potential claims under intellectual property, data protection, or unfair competition laws. As such, clearly worded and prominently displayed no-scraping clauses serve both as contractual safeguards and as evidence in broader enforcement efforts against unauthorized data extraction.
-
Have the privacy authorities of your jurisdiction issued guidelines on artificial intelligence?
The Turkish Personal Data Protection Authority (TDPA) has taken several steps to address the intersection of artificial intelligence (AI) and data protection, primarily through its guidelines.
In September 2021, the TDPA issued its “Recommendations on the Protection of Personal Data in the Field of Artificial Intelligence,” outlining key principles for AI developers, operators, and service providers to ensure compliance with the Law on the Protection of Personal Data No. 6698 when processing personal data through AI systems. This guidance was later updated in April 2025, although no substantial changes were made to its content. The document emphasizes core principles such as transparency and accountability in AI systems, encouraging developers to design systems that are understandable to individuals and whose decision-making processes can be explained and scrutinized.
On 8 November 2024, the TDPA published an “Information Note about Chatbots,” highlighting privacy risks associated with AI-powered conversational tools. The note emphasizes obligations such as conducting risk assessments before processing personal data and ensuring chatbot design aligns with the principle of accountability.
AI-related privacy concerns have also been addressed in industry-specific documents issued by the TDPA. For example, the “Guidelines on Good Practices in the Banking Sector” include recommendations for financial institutions on safeguarding personal data in the use of AI-driven tools and services.
-
Have the privacy authorities of your jurisdiction discussed cases involving artificial intelligence? If yes, what are the key takeaways from these cases?
While the Turkish Personal Data Protection Authority has recognized the increasing significance of AI in personal data processing through various guidelines and publications, it has not, as of July 2025, issued any publicly known binding decisions, sanctions, or enforcement actions specifically concerning artificial intelligence systems.
-
Have your national courts already managed cases involving artificial intelligence? If yes, what are the key takeaways from these cases?
As of July 2025, Turkish courts have increasingly encountered disputes involving artificial intelligence (AI), yet they have not developed a consistent or specialized jurisprudence on AI-specific legal issues. Instead, courts continue to integrate AI-related considerations into existing legal doctrines, approaching such matters cautiously.
In a decision dated 14 July 2016 (Case No. 2015/5140), the 15th Civil Chamber of the Court of Cassation examined a contractual dispute concerning an AI-powered video surveillance system. The Court emphasized the importance of involving experts with specialized knowledge in AI and computer vision to properly assess system functionality and performance obligations. It also held that developers are obliged to inform counterparties of technical limitations that may affect contractual outcomes.
In a 2022 ruling (Case No. 2020/987), the 20th Civil Chamber of the Ankara Regional Court of Appeal rejected the use of AI-generated analysis as evidence in a trademark dispute. The plaintiff argued that AI systems would confuse the marks, but the Court reaffirmed that likelihood of confusion must be assessed from the standpoint of an average consumer—not based on algorithmic judgment—underscoring the primacy of human legal reasoning.
In a 2018 case (Case No. 2015/231), the Istanbul 1st Civil Court for Intellectual and Industrial Rights addressed a claim involving the alleged unauthorized use of a person’s voice in an AI-driven virtual assistant. While the Court acknowledged the use of algorithmic processing to modify the voice, it ruled that the claimant did not meet the legal thresholds for copyright or performer protection under existing laws.
Overall, these cases demonstrate that Turkish courts are incrementally responding to the legal implications of AI while continuing to apply established principles. A coherent AI-specific body of case law has not yet emerged.
-
Does your country have a regulator or authority responsible for supervising the use and development of artificial intelligence?
As of July 2025, Türkiye does not have a single, dedicated authority exclusively responsible for supervising the use and development of artificial intelligence (AI). However, oversight is distributed among various public institutions, each playing a role within its regulatory mandate.
The Digital Transformation Office of the Presidency of the Republic of Türkiye and the Ministry of Industry and Technology are the primary bodies responsible for shaping AI policy and coordinating national strategy.
Regulatory and sectoral supervision of AI applications is currently handled through existing authorities:
- The Turkish Personal Data Protection Authority (TDPA) oversees the processing of personal data in AI systems under the Law on the Protection of Personal Data No. 6698, ensuring privacy compliance.
- The Information and Communication Technologies Authority (ICTA) monitors the telecommunications and digital services sectors, which increasingly involve AI-based applications.
- Sector-specific regulators in Türkiye—such as the Banking Regulation and Supervision Agency, the Central Bank of the Republic of Türkiye, and the Ministry of Health—may oversee the use of AI within their respective domains (e.g., banks, fintech, digital health tools).
- The Cybersecurity Directorate is responsible for coordinating artificial intelligence practices and applications across public institutions, reflecting its expanded role in overseeing AI governance within the public sector.
-
How would you define the use of artificial intelligence by businesses in your jurisdiction? Is it widespread or limited? Which sectors have seen the most rapid adoption of AI technologies?
In Türkiye, the use of artificial intelligence (AI) by businesses is growing steadily but remains uneven across sectors. While adoption is not yet widespread on a national scale, it is gaining momentum, particularly among large enterprises and technology-forward industries. The most rapid and advanced adoption of AI technologies has been observed in the finance, telecommunications, e-commerce, and manufacturing sectors. In these industries, AI is being deployed for applications such as fraud detection, predictive analytics, customer behavior analysis, process automation, demand forecasting, and supply chain optimization.
The finance sector, for example, has integrated AI into credit scoring, risk assessment, and chatbot-based customer services. Similarly, major e-commerce platforms utilize AI for personalized recommendations, dynamic pricing, and logistics optimization. The health sector is also showing increasing interest, particularly in diagnostic tools, medical imaging, and hospital management systems.
-
Is artificial intelligence being used in the legal sector, by lawyers and/or in-house counsels? If so, how? Are AI-driven legal tools widely adopted, and what are the main regulatory concerns surrounding them?
Artificial intelligence (AI) is increasingly being adopted in the Turkish legal sector, particularly among larger law firms and in-house legal departments. Although AI tools are not yet integrated into the routine operations of courts, there is growing interest in their use for lawyers’ day-to-day tasks, such as document review, bilingual contract analysis, legal research, and drafting (areas where efficiency, accuracy, and speed are increasingly prioritized). Due diligence activities, particularly in cross-border mergers and acquisitions and compliance reviews, also benefit from AI-powered solutions that enhance efficiency, reduce review times, and minimize human error. Nonetheless, a key barrier to the development of effective domestic tools remains the limited availability of Turkish-language AI training datasets and legal-domain resources.
From a regulatory standpoint, the use of AI tools by lawyers may raise important questions under the Law on Attorneyship No. 1136, particularly with regard to strict professional confidentiality obligations, the prohibition on advertising and solicitation, and the requirement that legal services be provided personally and responsibly by licensed attorneys. It is also significant that there is currently no binding ethical framework or official guidance from the Union of Turkish Bar Associations on the use of AI in legal services, which contributes to uncertainty surrounding professional liability and the interpretation of these obligations under the Attorneyship Law in the context of AI deployment.
-
What are the 5 key challenges and the 5 key opportunities raised by artificial intelligence for lawyers in your jurisdiction?
Artificial intelligence (AI) presents both significant challenges and promising opportunities for lawyers in Türkiye. One major challenge lies in the regulatory uncertainty surrounding AI. While certain national strategies, soft law instruments, and policy papers introduce general principles regarding the development and use of AI, Türkiye lacks a comprehensive and binding legal framework specifically tailored to AI technologies. This gap creates ambiguity in legal compliance, particularly in how lawyers should tackle, interpret, and apply existing laws—or develop new legal approaches—to address the complex issues arising from the development, deployment, or use of AI systems in practice. Lawyers also face growing concerns about data privacy and ethical risks, including bias in AI systems and the explainability of their outputs, which could impact client trust and professional responsibility. Limited familiarity with technology among some legal professionals can make it harder to assess or use AI tools effectively. Additionally, there are concerns about job displacement as AI threatens to automate routine tasks such as document review and research. Finally, unresolved questions around liability—particularly when legal outcomes are influenced by AI-generated insights—pose a serious challenge in determining accountability.
Despite these concerns, AI offers considerable opportunities for the legal profession in Türkiye—many of which reflect broader global trends. AI-powered tools can dramatically enhance the efficiency of legal research, document drafting, and contract analysis, reduce turnaround times, and improve accuracy. Predictive analytics can assist lawyers in evaluating litigation risks and outcomes based on historical court decisions, thus supporting more strategic client advice. Automation of administrative tasks—ranging from deadline tracking to compliance monitoring—frees up time for higher-value legal reasoning and client engagement. Globally, AI is also seen as a key driver in expanding access to justice, and in Türkiye, AI-driven platforms have the potential to offer cost-effective legal support and guidance to individuals in underserved or remote areas. Finally, as in many jurisdictions, early adoption of AI in Türkiye’s legal sector can offer a competitive edge, as innovation and technological fluency are becoming key differentiators in delivering efficient, high-quality client service in an increasingly dynamic legal market.
-
Where do you see the most significant legal developments in artificial intelligence in your jurisdiction in the next 12 months? Are there any ongoing initiatives that could reshape AI governance?
In the next 12 months, Türkiye is expected to witness important legal developments in the field of artificial intelligence (AI), largely driven by domestic regulatory initiatives and an ongoing effort to align with international—particularly European Union—standards.
While the most significant progress is likely to focus on establishing a foundational framework for AI regulation, the enactment of a comprehensive AI law within this timeframe remains unlikely due to the complexity and length of the legislative process. Notably, Türkiye’s Medium-Term Programme (2024–2026) sets completing the alignment of national data protection law with the EU acquis, particularly the GDPR, as a key objective—reflecting a broader harmonization trend that suggests the EU AI Act may likewise influence the future direction of AI regulation in Türkiye. In the meantime, regulatory efforts are likely to concentrate on articulating ethical principles, updating secondary legislation—especially in the area of data protection—and issuing institutional or sector-specific guidelines. Parallel to these developments, growing references to AI in judicial decisions are anticipated, which may help shape legal interpretation on matters such as automated decision-making, liability, and data privacy through evolving case law.
Additionally, the National Artificial Intelligence Strategy and its 2024–2025 Action Plan, prepared by the Digital Transformation Office of the Presidency of the Republic of Türkiye and the Ministry of Industry and Technology, are expected to play a central role in shaping AI governance. The plan outlines concrete goals for the responsible development of AI and includes initiatives such as the creation of legal evaluation tools for AI systems, the preparation of guidance to clarify intellectual property rights related to AI-generated content, and standardization efforts concerning the patentability of AI products, along with the potential establishment of a “Trusted AI” certification mechanism—all of which reflect Türkiye’s commitment to proactive and structured AI regulation in the near term.