Draft Law of the Republic of Kazakhstan “On Artificial Intelligence” – Principles of Regulation and Practical Aspects

This article analyzes the provisions of Kazakhstan’s Draft Law “On Artificial Intelligence” (hereinafter – “Draft Law”) and compares them with international approaches to the regulation of this technology. The Draft Law was approved by the Mazhilis (the lower chamber of the Parliament of the Republic of Kazakhstan, which exercises legislative functions) in its first reading on May 14, 2025. The growing significance of artificial intelligence (AI) systems and their active integration into the daily lives of internet users and private businesses around the world, including in Kazakhstan, need no proof. AI is rapidly becoming an integral part of digital technologies, and its widespread use raises a number of ethical, technological, and legal challenges.

All of this underscores the need for clear legal regulation to protect citizens and ensure sustainable technological development. In view of this, legislators and government bodies around the world, including those in Kazakhstan, have begun developing approaches to the regulatory governance of AI technologies.

International Practice of Legal Regulation of Artificial Intelligence

The first regulatory act of its kind aimed at governing the development and use of artificial intelligence systems was the Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (hereinafter – the “AI Act”), which was adopted by the European Parliament on 13 March 2024 and by the Council of the EU on 21 May 2024. The AI Act has, in part, served as a foundation for the Kazakh Draft Law. For instance, the AI Act introduced the first legal definitions of an artificial intelligence system (hereinafter – “AI system”), its users, developers, and other actors involved in the creation, distribution, and use of AI systems.

Furthermore, the AI Act classifies the levels of risk associated with AI applications and identifies practices deemed unacceptable due to their excessive risk level, which are completely prohibited within the European Union. Examples include manipulating human behaviour to influence decision-making, exploiting individuals’ vulnerabilities, social profiling to predict criminal behaviour, creating facial recognition databases without consent, and using AI to detect emotions in the workplace or educational settings, among others. Such systems may not be placed on the EU market. In addition, the AI Act requires pre-market conformity assessments of AI systems and post-market monitoring.

These provisions have had a notable influence on the content of the Kazakh Draft Law. It is worth noting that the EU document clearly distinguishes between personal use of AI and its professional application, distinctions that are not present in the Kazakh Draft Law, which may, in our view, complicate its practical implementation.

Key Elements of AI Regulation in the Draft Law of the Republic of Kazakhstan

The Kazakh Draft Law consists of 28 articles and includes general provisions that define key terms, goals, and principles for regulating public relations in the field of artificial intelligence. The Draft also sets out the framework for state governance, regulation, and support in the AI sphere, outlines the rights and obligations of the parties involved, and introduces a classification of AI systems by risk level along with corresponding risk management procedures.

The Draft defines artificial intelligence as an information and communication technology that imitates or exceeds human cognitive functions for the purpose of performing intellectual tasks and finding solutions. An AI system is defined as an informatization object that operates based on AI. A generative AI system is described as a system that creates synthetic content, including alteration of biometric data (such as voice, face, video, and movement) and distortion of reality. Meanwhile, the sole definition provided for an AI system user refers to a person who uses the system either to carry out a specific task or to make use of its results.

Other key participants in the AI systems market, as defined by the Draft Law, are the owners and holders of such systems. The text of the Draft Law does not distinguish between these two categories, but it does establish their rights and obligations. There is also no definition of AI system developers or any potential requirements for them, although the mechanisms provided by the Draft Law, including the creation of data libraries for AI training, imply that AI systems will be developed and tested within the territory of the Republic of Kazakhstan.

Owners and holders of AI systems have the right to set conditions for their use and to protect their rights. They are obligated to:

  • manage risks;
  • ensure the security and reliability of the systems, including protection against malfunctions and unauthorized access;
  • maintain documentation depending on the level of impact the AI system has on the safety, rights, and interests of citizens, society, and the state;
  • provide user support;
  • disclose information about the system’s operating principles and the user data it utilizes.

Users of AI systems, in turn, have the right to receive information about how the AI operates and what data is used, to protect their personal and confidential information, and to take measures to safeguard their intellectual property rights over content created with the use of AI. They are also required to use the systems within their authorized access rights and to comply with the established rules and safety measures.

The Draft Law also defines the role of the state in public relations within the field of artificial intelligence. It establishes a central executive (authorized) body responsible for leadership and cross-sectoral coordination in the AI domain. This authorized body develops and implements state policy on artificial intelligence, coordinates activities across industries, drafts and approves regulatory legal acts, and proposes measures to improve legislation. In addition, it approves the list of required documentation and the criteria for classifying objects as AI systems, identifies priority sectors of the economy for AI implementation in collaboration with other government agencies, and performs other functions as prescribed by law.

One of the key innovations of the Draft Law is the introduction of the National AI Platform and its Operator. The National AI Platform is a technological platform intended for the collection, processing, storage, and distribution of data libraries (grouped datasets, although the Draft Law does not specify which ones exactly) and the provision of services in the field of artificial intelligence. Its operator is a legal entity designated by the Government of the Republic of Kazakhstan, which ensures the development and functioning of the platform, provides technical support, offers AI services, and collects, processes, and stores data libraries in accordance with the requirements established by the Government. At the same time, the National AI Platform offers a controlled environment for the development, training, and pilot operation of AI systems. The procedure for interaction between the platform operator and users of these services is to be established by the authorized body; however, no such procedure has yet been defined.

Data owners and holders are also entitled to freely use and distribute their data, subject to the restrictions established by the legislation of the Republic of Kazakhstan. The training of AI systems in Kazakhstan will be carried out based on data libraries provided for lawful and pre-defined purposes. The creation and provision of such libraries must comply with data governance requirements approved by the Government. Data library owners and holders are obliged to ensure the quality of the data libraries they provide and to define the terms and procedures for access to them. They are also entitled to freely create, use, and distribute data libraries in compliance with the law and to monitor their use for training AI systems within the declared purposes and conditions.

Other state authorities, in addition to the authorized body, participate in the implementation of state policy in the field of artificial intelligence, provide the operator of the National AI Platform with access to data, and form data libraries in accordance with data governance requirements approved by the Government, although such requirements have not yet been developed. In addition, state authorities exercise other powers as provided by law.

As for the legal framework, artificial intelligence systems are classified by risk level:

  • minimal risk – their failure or shutdown has little impact on users;
  • medium risk – may reduce operational efficiency and cause material damage;
  • high risk – may lead to emergencies or have serious consequences for security, economy, defense, international relations, and the livelihood of citizens.

AI systems are also classified according to the degree of independence in decision-making and the extent of their impact on users into the following categories:

  • assistive systems – artificial intelligence supports the user, and final decisions are made solely by the user;
  • semi-autonomous systems – the owner or user is granted limited rights for automated decision-making within predefined parameters, while the person granting these rights may intervene in the decision-making process or alter the outcomes of the system’s operation;
  • fully autonomous systems – systems that make decisions independently of predefined parameters and cannot be controlled by the system’s owner.

Furthermore, Article 18 of the Draft Law, in alignment with the AI Act, establishes a complete ban in the Republic of Kazakhstan on the creation and placing on the market of AI systems with the following functions:

  • the use of subconscious, manipulative, or other methods that significantly distort an individual’s behavior, limit their ability to make informed decisions, or force decisions that may cause harm or pose a threat to life, health, property, or otherwise negatively affect the individual;
  • the exploitation of a person’s moral and/or physical vulnerability due to age, disability, social status, or any other circumstances, with the intent to cause or threaten harm;
  • the evaluation and classification of individuals or groups over a certain period based on their social behavior or known, presumed, or predicted personal characteristics, except in cases provided for by the laws of the Republic of Kazakhstan;
  • the creation or expansion of databases for personal data subject recognition through untargeted extraction of personal data, including facial images, from the Internet or video surveillance footage;
  • the classification of individuals based on their biometric data to draw conclusions about their race, political views, religious affiliation, or other attributes for the purpose of any form of discrimination;
  • the detection of a person’s emotions without their consent, except as permitted by the laws of the Republic of Kazakhstan;
  • remote real-time biometric identification of individuals in public places, unless otherwise provided by the laws of the Republic of Kazakhstan;
  • the creation and dissemination of outputs of AI systems that are prohibited under the laws of the Republic of Kazakhstan.

At the same time, the Draft Law does not establish specific criteria for implementing the listed prohibitions, which creates legal uncertainty. Many AI systems already in use in the Republic of Kazakhstan partially possess the listed functions, and it remains unclear whether they will be banned or allowed for use.

The Draft Law provides the following risk management mechanisms: identification and analysis of known and foreseeable risks when using an AI system for its intended purpose, risk assessment for both intended and reasonably foreseeable unintended use, and the implementation of risk management measures aimed at preventing and eliminating such risks. If there is a risk of circumstances arising as outlined in Article 18, the owners and holders of AI systems must take measures to minimize harm and protect the interests of citizens and society, including by suspending or completely ceasing the operation of the AI system.

Practical Application of the Draft Law’s Provisions and Comparison with the AI Act

The provisions of the Draft Law do not clearly explain how the risk management mechanism for AI systems and control over compliance with established requirements should function in practice, and most of its norms are largely declarative in nature. In comparison, the European Union’s AI Act provides a comprehensive set of regulatory mechanisms for high-risk AI systems. For instance, it includes an obligation to maintain technical documentation containing a description of the AI system’s purpose, its architecture, its interaction with hardware and software components, cybersecurity measures, as well as detailed information on the data used, its origin, characteristics, methods of collection and cleansing, labeling, and processing methodologies. Furthermore, the AI Act sets out requirements for managing high-risk systems, such as establishing the necessary infrastructure, hiring and managing personnel, the activities of law enforcement agencies, and more. There are also requirements regarding the data used to train and test AI systems, including the need for a data quality plan covering parameters such as accuracy, completeness, compliance with standards, integrity, and relevance.

Moreover, even within the European Union, the adoption of the AI Act raised concerns regarding the practical implementation of some of its provisions. A number of Member States and international organizations expressed concern over the pace of the law’s development, pointing to the insufficient elaboration of certain norms. After the Act was adopted, businesses began to face delays in the issuance of the necessary guidance documents, which are essential for the effective implementation of its requirements. Since 2021, the European Commission has faced increasing pressure from the industry calling for a reconsideration of the regulatory approach, including the easing of requirements or even abandoning the Act altogether. The main concerns relate to the potential obstacles to innovation, especially due to the uncertainty in applying strict requirements and the high compliance burden for smaller companies.[1]

The Kazakh Draft Law sets out a basic regulatory framework for artificial intelligence systems, including the distribution of roles and responsibilities among stakeholders. In practice, the implementation of the provisions of the Draft Law appears challenging without subsequent regulatory detailing. Many provisions require clarification, which may complicate the application of the norms and lead to discrepancies in law enforcement. Given this, it can be assumed that effective application of the norms will only be possible with active development of secondary legislation and the creation of necessary technical standards.

[1] The European Commission considers pause on AI Act’s entry into application, 4 June 2025

https://www.dlapiper.com/en/insights/publications/ai-outlook/2025/the-european-commission-considers-pause-on-ai-act-entry-into-application
