-
What are your country's legal definitions of “artificial intelligence”?
Mexico lacks a federal legal definition of Artificial Intelligence (AI). Some state-level definitions exist. The National Alliance for Artificial Intelligence (ANIA) advised the Mexican Senate and issued a Roadmap and Agenda Proposal 2024-2030, which incorporated the OECD definition.
Several bills in Congress have proposed AI definitions, but these show regulatory and technical limitations. Definitions are often overly broad or too vague for effective implementation.
A recent bill defined it as “an information, algorithmic, or physical system designed to imitate human capabilities such as learning, reasoning, perception, or decision-making, which can operate autonomously or with assistance, and whose outcomes have an impact on individuals, processes, or environments, whether physical or digital”.
Recently, Mexico’s Supreme Court of Justice (SCJN) upheld the AI definition in the Sinaloa State Criminal Code, “the applications, programs or technology that allow automatic alterations or modifications on photographs, audio or video”, adopted in the context of the crime against sexual intimacy. The Court later also accepted the definition in the Quintana Roo State Criminal Code: “the capacity of technological, computer, software, or application systems of a machine to simulate human capabilities such as reasoning, learning, creativity, the ability to plan, and process data for the performance of specific and autonomous tasks”.
In a recent case ruling AI cannot be owners or authors, the SCJN defined AI as an algorithmic system simulating human reasoning, processing data, making predictions, and performing actions based on patterns, but lacking human experience, perception, feelings, and self-awareness.
However, to prevent definitions from becoming obsolete and to allow them to evolve alongside the fast-paced growth of AI, a more accurate definition and approach are needed. The key elements that an AI definition for a regulatory framework should include are:
- Defining AI as a capability of a software-based system. Defining AI as a software system per se is too expansive and could unintentionally capture non-AI systems (e.g., simple rule-based or deterministic software).
- Clarifying that it can receive explicit or implicit objectives.
- Outlining the process of analyzing, interpreting, or inferring from input to generate outputs.
- Considering different types of outputs: listing possible results from the use of AI, including predictions, content, recommendations, and decisions, covers a broad range of AI applications, whereas most current definitional approaches focus mainly on generative AI.
- Impact: considering that AI can influence both physical and virtual environments.
- Acknowledgment of AI’s autonomy and adaptation capabilities to technological evolution.
-
Has your country developed a national strategy for artificial intelligence? If so, has there been any progress in its implementation? Are there plans for updates or revisions?
Yes, Mexico developed a national AI strategy, first announced in 2018. Titled “Towards an AI Strategy in Mexico: Harnessing the AI Revolution,” it outlined six thematic areas: governance, government and public services, research and development, capacity/skills/education, data infrastructure, and ethics/regulation.
More recently, on May 15, 2024, a group of experts (ANIA) proposed a National AI Agenda (2024-2030) to the Senate. This agenda suggests public policy and governance recommendations for ethical AI development, including a risk-based regulatory approach, data privacy, IP protection, regulatory sandboxes, and comprehensive AI/cybersecurity frameworks.
While not formally adopted, the 2025-2030 National Development Plan includes a commitment to optimize public services and improve policy design using analytical tools like AI, a priority also highlighted by President Claudia Sheinbaum, who has stated that science and technology are a priority on the Government’s agenda. The new Digital Transformation and Telecommunications Agency is also expected to issue relevant regulations and standards.
-
Has your country implemented rules or guidelines (including voluntary standards and ethical principles) on artificial intelligence? If so, please provide a brief overview of said rules or guidelines. If no rules on artificial intelligence are in force in your jurisdiction, please (i) provide a short overview of the existing laws that potentially could be applied to artificial intelligence and the use of artificial intelligence, (ii) briefly outline the main difficulties in interpreting such existing laws to suit the peculiarities of artificial intelligence, and (iii) summarize any draft laws, or legislative initiatives, on artificial intelligence.
As of July 2025, Mexico lacks centralized, comprehensive, and specific AI regulation, despite approximately 60 related bills introduced in Congress since 2020. However, this is expected to change, with key legislative proposals anticipated for discussion in the upcoming September session, partly due to increased pressure from sectors like the film industry.
The “Federal Law for the Ethical, Sovereign and Inclusive Development of AI” and the “Law for the Ethical Regulation of Artificial Intelligence and Robotics” are among the bills that may be debated. Please see question 18 for more information on these bills.
Given the absence of specific AI legislation, existing laws are being considered for their potential applicability, though their interpretation for AI presents significant challenges:
- Copyright Law: Relevant for AI-generated content and data training, but unclear on AI authorship, derivative works, and enforcement against infringing AI models. The SCJN recently ruled (decision yet to be published) that only humans can be authors, classifying AI outputs as ‘products’ rather than ‘works’; the case raised important legal and philosophical questions around authorship, creativity, and the role of human input in AI-assisted works. The status of the outputs’ ownership is yet to be clarified, and the legal status of AI-generated content in Mexico remains uncertain, particularly where human intervention is limited or non-existent.
- Privacy Law: The recently enacted Federal Law on Protection of Personal Data Held by Private Parties (LFPDPPP), though not designed for AI, applies its principles (transparency, consent, purpose) to automated profiling and decision-making, including biometric recognition. This law expands the definition of personal data, broadens legal bases for processing, and enhances individual rights over automated processing. Challenges arise from AI’s extensive data processing without clear human oversight, and the law lacks AI-specific rules for legal certainty.
- Consumer Protection: May apply where AI tools used in advertising, pricing, or automated customer interactions lead to misleading information, deceit, or discriminatory outcomes affecting consumers, or where defective products/services cause consumer harm.
- Criminal Code: Relevant for misuse of AI (e.g., identity theft, fraud, cybercrime). Its strict interpretation makes direct application to AI difficult, prompting legislative proposals to criminalize specific AI-related offenses like deepfakes and identity theft. State-level codes (Quintana Roo, Sinaloa) have already increased penalties or criminalized AI use in certain contexts.
- Commercial Code: Some of its provisions on e-commerce, consent, and contractual validity may apply to AI-generated acts.
- Federal and local Civil Code: Provide general rules on contracts, torts, civil liability, damages, and negligence, applicable on a case-by-case basis.
- Labor Law: Indirectly relevant as AI impacts employment (e.g., as an employee tool, or regarding job displacement).
- Sector-Specific Regulations (Health, Banking): AI use in these highly regulated fields may infringe existing laws that currently lack AI-specific provisions.
- USMCA Article 19.17: provides safe harbors for intermediaries; although not drafted for AI, it could be relevant.
-
Which rules apply to defective artificial intelligence systems, i.e. artificial intelligence systems that do not provide the safety that the public at large is entitled to expect?
Currently, there is no specific regulation for defective AI systems; individuals and entities harmed by a faulty AI system must therefore navigate a legal landscape built on established principles of civil responsibility and consumer rights. The Federal Civil Code and the Federal Consumer Protection Law (LFPC) are the primary applicable legal instruments. The LFPC, designed to ensure fairness and safety in the marketplace, is sufficiently broad to encompass AI not as an abstract legal concept, but as a “product” or “service” for which suppliers are strictly accountable.
The Federal Consumer Protection Agency (Profeco) can, after issuing a warning, prohibit a “product or service” (here, an AI system) if it determines that the product may endanger the life or wellbeing of any consumer. Additionally, Profeco may step in whenever a product or service “affects or could affect the life, health, safety or economy of a group of consumers.”
If an AI system, acting as a product or service, malfunctions or provides unsafe outputs (e.g., an autonomous vehicle crashing due to AI error, an AI diagnostic tool providing incorrect medical advice leading to harm, or an AI system recommending dangerous actions), it could be considered “defective” under this law. The LFPC provides mechanisms for consumers to seek redress and for authorities to take action.
Crucially, manufacturers or distributors are obliged to inform authorities immediately if they discover that their AI poses a risk. Failure to do so can attract additional penalties, and Profeco may demand detailed reports on the scope of the recall, user notifications, corrective actions and their progress.
The Civil Code framework is analyzed in the next two questions.
-
Please describe any civil and criminal liability rules that may apply in case of damages caused by artificial intelligence systems. Have there been any court decisions or legislative developments clarifying liability frameworks applied to artificial intelligence?
Mexico lacks a specific AI legal framework; general principles from the Federal Civil, Commercial, and Criminal Codes apply.
In civil matters, liability stems from unlawful acts or omissions causing direct damage. For AI, this includes negligent deployment, defective design, or lack of oversight. Furthermore, strict (objective) liability may apply under the Federal Civil Code if AI systems are deemed ‘dangerous mechanisms,’ imposing liability even without fault. Courts would treat AI as a tool and focus on human control and foreseeability. Victims must prove causation and damage using admissible evidence such as expert opinions and documents.
In commercial contexts, the Commercial Code applies to AI actions in contracts or electronic transactions, though attributing liability or intent for autonomous AI is challenging.
In criminal law, liability requires that the conduct precisely matches a defined offence. Since 2016, legal entities can also be held criminally liable when crimes are committed in their name or for their benefit. While AI cannot be prosecuted, individuals or companies using it for crimes like fraud or digital violence face charges if the elements of the offence are met. State innovations, like Quintana Roo’s increased penalties for AI involvement and Sinaloa’s criminalization of AI manipulation of intimate content, adapt criminal law to new risks.
Likewise, federal legislative proposals aim for strict liability for high-risk AI use and clarify responsibility across the value chain. General offences like cybercrime or data misuse may apply.
Two key SCJN cases are pending: one on AI copyright registration and another on platform liability for user content, potentially extending to AI providers.
Finally, USMCA Chapter 19.17 offers safe harbor for intermediaries, potentially covering some AI tools as interactive computer services and limiting liability for user-generated AI content, though its application to AI damages in Mexico needs jurisprudential clarification, as AI does not work as traditional internet content intermediaries.
-
Who is responsible for any harm caused by an AI system? And how is the liability allocated between the developer, the deployer, the user and the victim?
In Mexico, without specific AI legislation, responsibility for harm caused by an AI system is primarily determined by existing civil and criminal frameworks, drawing on traditional concepts of fault, objective liability, and consumer protection. Liability is allocated on a case-by-case basis, typically falling upon the human or legal entity with the most proximate control, oversight, or ability to foresee and prevent the harm, whether through action or omission, and who thus directly caused the harm. This could be the developer, if the harm arises from a design flaw or inadequate testing; the deployer (e.g., a company integrating AI into a product or service), due to negligent implementation, insufficient risk management, or failure to ensure safe operation; or the user, if the harm results from misuse or unauthorized modifications.
While Mexican courts currently lack AI-specific rulings, they apply existing legal distinctions for objective (contractual or extracontractual) and moral damages on a case-by-case basis. There is no statutory rule allocating responsibility across the AI value chain, so outcomes depend heavily on the specific facts and judicial interpretation. Pending legislative proposals, however, suggest a future move towards statutory allocation of responsibility, potentially including joint and several liability among actors in the AI value chain, particularly for high-risk systems. Such allocation would affect fairness, innovation incentives, and the varying levels of control and foreseeability each actor may have, and it remains one of the most sensitive and evolving aspects of AI governance in Mexico.
-
What burden of proof will have to be satisfied for the victim of the damage to obtain compensation?
To obtain compensation for damage caused by AI, the victim, as the claimant, bears the burden of proving a direct and immediate causal link between the suffered harm and the AI system’s use or malfunction, demonstrating that the damage would not have occurred otherwise and that no other event broke this connection. This requires establishing real and actual harm, its direct connection to the AI’s use, and the presence of either a legal violation or negligence (or, in cases of dangerous mechanisms, objective risk), often necessitating admissible evidence like expert opinions.
Mexican damages law distinguishes between objective damages, whether contractual or extracontractual, and moral damages. The use of AI could potentially give rise to claims under both categories, depending on the circumstances and the nature of the harm.
To succeed, the claimant must generally prove: (i) a violation of law or negligence in the use of the AI system; (ii) the existence of real, direct, and actual harm; and (iii) that such harm was directly caused by an identifiable actor (e.g., developer, deployer, or user) using or failing to use the AI tool properly.
While Mexican courts have not yet ruled on a case involving AI-related damages, they retain broad discretion in evaluating the evidence and applying general tort principles. As a result, outcomes remain uncertain and will depend on the specific facts and judicial interpretation in each case.
-
Is the use of artificial intelligence insured and/or insurable in your jurisdiction?
While Mexico lacks specific AI insurance policies or regulations, AI use is theoretically insurable under the Law on Insurance and Bonding Institutions. This law allows coverage for new risks if lawful and based on a defined insurable interest. Thus, AI risk is not inherently uninsurable due to lack of specific regulation. However, regulatory ambiguity and ‘silent AI’ make insurers cautious, particularly regarding external infrastructure or unclear liability.
A practical limit is the territorial scope of contracts, typically requiring risks within Mexico. Cross-border AI infrastructure (e.g., cloud) may challenge this. Mexican AI systems could theoretically be covered by civil liability policies if risks are lawful and defined. Yet, no known specific AI liability policies exist in Mexico; lack of precedent and AI risk uncertainty deter providers.
As AI legal treatment matures, insurers may reconsider limits, especially for high-risk sectors. Meanwhile, companies deploying AI should use traditional liability frameworks and ensure risk allocation via internal controls and contractual protections.
-
Can artificial intelligence be named an inventor in a patent application filed in your jurisdiction?
No, article 46 of the Federal Intellectual Property Protection Law (LFPPI) states that an invention is a human creation that transforms matter or energy existing in nature, for its use by humans to satisfy specific needs. In that sense, only human beings can be considered inventors.
As for the patent owner(s), while a legal entity can be named as the patent owner (e.g., through an assignment from an employee-inventor or other contractual agreements), AI systems lack legal personality and capacity to enter into such agreements or hold rights.
Moreover, in connection with this topic, the SCJN recently issued a decision (to be analyzed in more detail in question 17) confirming that only human beings can create works; hence, only humans can be authors. The outputs of AI are considered products, not works of authorship.
-
Do images generated by and/or with artificial intelligence benefit from copyright protection in your jurisdiction? If so, who is the authorship attributed to?
No, images generated entirely by or primarily with AI do not currently benefit from copyright protection in Mexico. The Mexican legal framework, specifically Article 12 of the Federal Copyright Law (LFDA), unequivocally establishes that an ‘author’ is the natural person who creates a literary and artistic work. Consequently, the authorship of content generated by AI, absent significant human creative input, is not attributed to anyone.
This precise legal interpretation was recently affirmed by the SCJN in a landmark decision (Amparo Directo 6/2025), following a challenge to the National Copyright Institute’s (INDAUTOR) denial of registration for an AI-generated avatar. The SCJN upheld INDAUTOR’s argument that the LFDA recognizes only natural persons as authors. This recent ruling clarifies that purely AI-generated outputs, lacking demonstrable human creative direction, are considered products and are therefore part of the public domain, not protected works of authorship. This case is analyzed in detail in question 17.
-
What are the main issues to consider when using artificial intelligence systems in the workplace? Have any new regulations been introduced regarding AI-driven hiring, performance assessment, or employee monitoring?
There are no new AI regulations regarding labor-related topics. However, using AI systems in the workplace presents employers with significant legal, operational, and ethical risks.
When deploying AI systems in the workplace, employers must navigate compliance with existing laws, manage the practical challenges of implementation and integration, and address the ethical implications for employees and the organizational culture. A primary concern is protecting confidential and strategic information, as many AI tools risk data leakage or unauthorized use. Employers must use vetted tools and implement policies prohibiting the upload of sensitive data to unsecured platforms.
Another critical issue is the ownership and legal status of AI-generated outputs. Mexico’s law requires human authorship, creating uncertainty over employer ownership of content created by AI tools, particularly when third-party providers or external datasets are involved.
AI systems demand extreme caution in human resources. While useful, they often lack transparency and can reinforce bias, leading to unfair outcomes without human oversight. Mexico lacks specific AI regulations for hiring or monitoring, but existing labor and privacy laws prohibit discrimination and require monitoring to be proportionate.
Legislative proposals classify AI-driven employment decisions as high-risk, following international trends, often prohibiting automated decisions without meaningful human involvement.
In the interim, employers should proactively adopt internal governance frameworks for AI use. This includes defining permitted tools and use cases, classifying AI systems by risk level, ensuring human review for sensitive decisions, safeguarding data protection, and establishing internal AI codes of conduct or ethics committees. Such measures mitigate legal exposure and build a foundation for future regulatory compliance.
-
What privacy issues arise from the development (including training) and use of artificial intelligence?
AI development and use raise significant privacy concerns, primarily due to their reliance on vast volumes of personal data for training and tuning. In Mexico, the intersection of AI with personal data falls under the scope of the new Federal Law on Protection of Personal Data Held by Private Parties (LFPDPPP), significantly updated and enacted in March 2025. The new law reinforces data subject rights, including the right to object to automated processing that produces adverse legal effects or significantly affects their rights or freedoms.
Training AI models often involves automated profiling and decision-making, including biometric recognition and algorithmic assessments. These activities, potentially sensitive, are generally regulated under the LFPDPPP, which requires transparency and explicability for decisions made without human intervention, addressing bias and discrimination risks. However, achieving this level of transparency remains challenging due to the ‘black-box’ nature of many AI systems.
Without a comprehensive AI framework, determining which AI systems trigger specific regulatory obligations remains challenging. Furthermore, inadequate safeguards around AI training data can lead to personal data breaches, risking sensitive or confidential information. Implementing robust security measures and internal controls, including comprehensive privacy policies and Data Processing Agreements (DPAs), is crucial given the rapid expansion of the digital environment and its associated risks.
DPAs are particularly relevant when sharing or receiving large datasets between data controllers and processors, such as in techniques like Retrieval Augmented Generation (RAG). These agreements should specify conditions for AI system use, security measures, data breach protocols (including insurance), data retention and deletion terms, sub-processors, data transfer clauses, periodic audits, and clearly define parties’ obligations and liabilities for non-compliance.
Companies must prioritize implementing robust data protection measures, operational protocols, and strong contractual agreements, including DPAs, to ensure compliance with data privacy laws and safeguard personal data in all AI-related operations.
-
How is data scraping regulated in your jurisdiction from an IP, privacy and competition point of view? Are there any recent precedents addressing the legality of data scraping for AI training?
Data scraping in Mexico is not subject to a specific, overarching regulation. Instead, its legality is assessed under existing legal frameworks, particularly those governing intellectual property (IP), privacy, and competition.
From an IP perspective: The LFDA grants owners exclusive control over reproduction and public communication of their works, including original databases based on creative selection or arrangement. Copying substantial portions of a site’s curated structure without a license may infringe copyright and breach the site’s ToS. Mexico lacks a broad “fair use” doctrine or specific text-and-data-mining (TDM) exceptions, so unauthorized scraping of copyrighted material for AI training is likely infringement. The LFDA (amended 2020) includes anti-circumvention prohibitions against bypassing technological protection measures (TPMs), making such actions potentially illegal (with exceptions). If scraped data includes confidential business information meeting the trade secret definition, its unauthorized acquisition and use could constitute misappropriation.
From a privacy perspective: The 2025 overhaul of the privacy laws requires scrapers collecting personal data to provide notice and transparency, observe purpose limitation, secure consent, and honor users’ rights. The prior regulator, the National Institute of Transparency, Access to Information, and Protection of Personal Data (INAI), joined international regulators in emphasizing that publicly available personal information remains subject to data protection laws. The SCJN recently clarified that website ToS prohibiting scraping are binding only when a user’s assent is explicit (e.g., via a clickwrap agreement), not through passive “browsewrap” terms, making clear permissions essential for contractual enforceability.
From a competition perspective: Preventing third parties from scraping publicly available data or content could be seen as an abuse of dominance if the information owner holds a dominant market position, since restricting access to data might threaten competition where the data is considered an “essential input.” While Mexico lacks direct precedents like the US case hiQ Labs, Inc v LinkedIn Corp (where blocking a startup from scraping LinkedIn data was challenged), similar arguments of “refusal to deal” amounting to abuse of dominance could arise, an interpretation that has apparently gained traction given the growing role of data as a competitive tool. In a broader context, the Mexican antitrust authority’s 2019 blocking of the Walmart/Cornershop transaction, partly due to the strategic use of competitors’ data, highlights how data access can be viewed as an essential competitive input.
Regarding precedents for AI training: As mentioned, the SCJN ruling on website terms is relevant. While no specific precedents directly address the legality of data scraping for AI training in Mexico, the general principles of IP, privacy, and competition law, as outlined above, would apply. The SCJN’s stance on explicit consent for terms of use is particularly important for any large-scale data collection.
-
To what extent is the prohibition of data scraping in the terms of use of a website enforceable?
While Mexico has few public cases indicating established case law or an emerging trend, the enforceability of a website’s terms of use (ToS) generally depends on how consent was obtained. Under Mexican civil law, a contractual relationship is formed when consent is given. However, recent rulings by the Supreme Court of Justice (SCJN), particularly in January and June 2025, clarify that explicit consent, such as through a “clickwrap” agreement (where users actively click an ‘Accept’ button), is generally required to bind users to the ToS. Conversely, “browsewrap” agreements (where terms are merely posted via a link, and assent is implied by continued use) are increasingly difficult to enforce, as courts emphasize the need for conspicuous notice and unambiguous user acceptance.
Therefore, the prohibition of data scraping within ToS is primarily enforceable through a breach of contract claim before civil courts, but its success hinges on proving valid consent to those specific terms. Beyond contract law, data scraping can also violate provisions of the LFDA (particularly the anti-circumvention prohibitions), the LFPDPPP, and the Criminal Code, providing multiple legal grounds for a claim.
The primary challenge lies in proving: (i) that a contract was validly formed, and (ii) that scraping actually occurred. Both points typically require specialized forensic evidence, which can be difficult and costly, especially if offshore elements are involved. Given the evolving Mexican judicial system, the level of expertise in resolving these matters is still uncertain. Consequently, most companies affected by scraping prefer to initiate extrajudicial steps, such as cease-and-desist letters, before resorting to litigation.
-
Have the privacy authorities of your jurisdiction issued guidelines on artificial intelligence?
Yes. INAI published non-binding Recommendations for processing personal data when using AI. These Recommendations offer public and private entities in Mexico guidance on responsible AI use, emphasizing integrating privacy by design in AI development and implementation, and ensuring compliance with data privacy principles like transparency, information, consent, and purpose.
However, Mexico’s data privacy environment is undergoing a significant transition. Following a 2024 reform, INAI, an autonomous body, was dissolved and its functions were assumed in May 2025 by the Anticorruption and Good Governance Ministry (within the Executive Branch). Additionally, the new LFPDPPP was enacted in March 2025, with its Regulations still pending. While this law significantly strengthens data subject rights and increases obligations for data controllers and processors, it does not introduce specific AI-directed regulations. Nevertheless, its enhanced provisions on automated decision-making processes (including the right to object to decisions made without human intervention) and increased requirements for transparency and explicability are highly relevant to AI systems and will guide their lawful operation.
-
Have the privacy authorities of your jurisdiction discussed cases involving artificial intelligence? If yes, what are the key takeaways from these cases?
No, not in an official capacity or through publicly available information. Given the ongoing transition between privacy regulators, the Ministry has not yet had time to discuss cases or issue rulings specifically on AI. Its future approach to AI privacy enforcement will depend on its internal organization, expertise development, and the pending Regulations.
-
Have your national courts already managed cases involving artificial intelligence? If yes, what are the key takeaways from these cases?
Yes. Mexico’s SCJN recently ruled a landmark case, Amparo Directo 6/2025, significantly shaping the landscape of AI and copyright. This case arose after the National Copyright Institute (INDAUTOR) denied copyright registration for an AI-generated work (from Leonardo AI), arguing that only natural persons can be recognized as authors under the LFDA.
While the final decision is pending official publication, a publicly available draft and subsequent clarifications reveal key takeaways:
- Human Authorship is exclusive. Copyright is exclusively reserved for humans, requiring creativity and autonomy. The SCJN reasoned that AI merely executes algorithms and processes information provided by humans, who define, design and decide its objectives and tasks.
- AI Outputs as ‘Products’: AI-generated content (outputs) are classified as ‘products’ rather than ‘works,’ and thus cannot be copyrighted or subject to moral rights.
- Software Copyrightable: The software that creates and improves AI systems and platforms is copyrightable.
Although a draft mentioned the potential public domain status of uncopyrightable AI outputs, the SCJN later officially clarified that this aspect was intentionally omitted from the final ruling’s discussion. The draft had stated that such outputs always fall into the public domain, and that payment of fees for using an AI system does not transfer ownership of the output to the user, as the payment is solely for platform access.
This ruling provides clarity on human authorship but leaves the legal status of AI-generated content’s ownership open for future interpretation. The SCJN’s final position will significantly influence AI-related industries, particularly those involving content creation, data-driven platforms, and software development, impacting copyright, liability, and commercial use. However, until the final version is made public, any conclusions about the legal status of AI-generated content remain premature.
-
Does your country have a regulator or authority responsible for supervising the use and development of artificial intelligence?
As of July 2025, Mexico does not yet have a specific authority directly regulating or overseeing AI. However, this is expected to change soon with potential legislative developments.
The proposed “Federal Law for the Ethical, Sovereign, and Inclusive Development of Artificial Intelligence” suggests a risk-based framework (similar to the EU’s AI Act), including authorizations for high-risk systems, strict liability for users and developers, a sanctioning regime, and the establishment of a National AI Council for oversight. While this bill is not yet enacted, it is anticipated for discussion in the next ordinary legislative period in Congress in September 2025.
Sectoral regulators continue to apply, including the Anticorruption Ministry for privacy, Profeco for consumer protection, IMPI for industrial property and INDAUTOR for copyright, the new national competition agency for antitrust, and sector-specific regulators for health, finance, education, and other sectors.
Separately, the new Telecommunications and Broadcasting Law (LFTR), issued on July 16, 2025, replaced previous legislation and dissolved the Federal Institute of Telecommunications. Its regulatory authority has been assumed by a new Digital Transformation and Telecommunications Agency, reporting to the President. This Agency now oversees telecommunications, broadcasting, and digital platforms.
Although the Agency has no express powers specifically granted for AI, it holds authority to issue regulations concerning information and communication technologies, telecommunications, and software development, which could encompass AI in the future. Furthermore, the LFTR introduced a new regulatory framework for digital platforms, broadly defined as any digital service provided by intermediaries over the Internet that offers, commercializes, or intermediates goods, services, applications, products, or content. The breadth of this definition will necessitate a case-by-case assessment to determine its applicability to various technology companies, potentially including those utilizing AI.
-
How would you define the use of artificial intelligence by businesses in your jurisdiction? Is it widespread or limited? Which sectors have seen the most rapid adoption of AI technologies?
The use of AI by businesses in Mexico is growing but remains uneven across sectors. While adoption is not yet widespread, there is a clear upward trend, particularly among large companies and multinationals seeking operational efficiency, automation, and data-driven decision-making.
The sectors showing the most rapid adoption of AI technologies include:
- Financial services, where AI is being used primarily for fraud detection, transaction monitoring, chatbots, risk assessment and process automation.
- E-commerce and logistics, leveraging AI for inventory management, personalized marketing, route optimization and enhanced customer experiences.
- Media and entertainment, particularly in content generation, recommendation algorithms, and audience analytics.
- Legal and professional services, where AI tools support document review, research, and internal knowledge management.
- Manufacturing, for supply chain optimization and predictive maintenance.
Among SMEs, adoption is more limited but growing, with 64% reported to have integrated AI solutions1. Many rely on free or open-access AI tools for basic tasks such as content creation, customer engagement, or internal productivity. However, concerns around data protection, cost, a lack of skilled personnel and challenges with data quality often limit more advanced or large-scale deployment.
Footnote(s):
1 Mexico Business News in an article from May 27, 2025, titled “AI Adoption Grows Among Mexican SMEs, but Cybersecurity Lags.”
-
Is artificial intelligence being used in the legal sector, by lawyers and/or in-house counsels? If so, how? Are AI-driven legal tools widely adopted, and what are the main regulatory concerns surrounding them?
AI tools are increasingly being used in the legal sector in Mexico. Law firms and in-house legal departments are adopting them as support tools for tasks like contract drafting, legal research, due diligence, and litigation analysis. These tools are particularly useful for efficiently processing large information volumes, extracting key document insights, and assisting in drafting standard clauses or internal reports.
Applications span areas like M&A, litigation, regulatory compliance and transactional work. While no Mexican law firm has developed its own AI solution, some collaborate with technology providers to curate databases or fine-tune legal models for local legal sources and practices.
A key regulatory concern is ensuring that the use of AI tools remains consistent with professional confidentiality obligations. As many AI tools are cloud-based, lawyers must verify that appropriate safeguards protect sensitive information and preserve attorney-client privilege. This encourages a thoughtful approach, focused on secure platforms, clear internal policies, and a sound understanding of how data is processed and stored.
As with any emerging technology, human oversight remains essential. Lawyers are ultimately responsible for the legal work, even with AI assistance. While Mexican regulators have not yet issued specific guidance, the profession is expected to apply existing ethical standards to new tools.
AI is gradually transforming legal work, offering greater efficiency and support across tasks. With informed use and proper controls, it can be integrated responsibly into day-to-day legal practice.
-
What are the 5 key challenges and the 5 key opportunities raised by artificial intelligence for lawyers in your jurisdiction?
Key challenges:
- Data confidentiality, privilege and cybersecurity: AI tools often operate on external servers, raising concerns about how client information is processed, stored, or reused. Lawyers must ensure that the platforms they use preserve confidentiality, respect attorney-client privilege, and adhere to cybersecurity best practices.
- Protection of know-how and internal knowledge: A growing concern is whether the use of AI tools may result in a firm’s internal know-how, such as templates, analyses, or legal strategies, being absorbed into external systems. Lawyers must take steps to ensure that proprietary content is not used to train tools accessible to third parties or even competitors.
- Cost efficiency and client perception: AI streamlines legal work, but clients may expect lower fees. High costs for advanced AI and integration challenges can hinder smaller firms. Law firms must demonstrate AI’s value and secure ROI while transparently showing how AI enhances, rather than replaces, expertise.
- Ethical dilemmas and professional responsibility: Lawyers remain responsible for the legal accuracy of work produced with AI assistance. There is a risk of overreliance on tools without proper review or understanding of their limitations, which can lead to issues of competence, confidentiality, and supervision when using AI-generated output or advice.
- Regulatory uncertainty and compliance: The lack of a comprehensive AI-specific legal framework in Mexico creates ambiguity regarding liability, data governance, and ethical use. Lawyers must navigate existing fragmented laws (privacy, IP, competition) while anticipating rapid legislative changes.
Key opportunities:
- Increased Productivity & Efficiency: AI can automate repetitive tasks such as due diligence, document review, contract comparison, or legal research, allowing lawyers to focus on higher-value activities.
- Cost Reduction & Increased Accessibility: By streamlining processes, AI can lower operational costs, potentially making legal services more affordable and accessible to a broader segment of the population.
- Improved Accuracy & Risk Management: AI can reduce human error in due diligence, compliance checks, and legal document generation, leading to better outcomes and reduced risks for clients.
- Internal Knowledge & Competitive Differentiation: AI improves access to internal knowledge by structuring and retrieving prior work, aiding junior lawyers and promoting consistency. It can also analyze legal data to identify trends, predict outcomes, and inform strategies, providing a competitive advantage.
- Innovation & New Service Offerings: AI enables the development of new legal tech solutions and service models (e.g., AI-powered chatbots for initial client intake or legal guidance, online dispute resolution tools), creating new revenue streams and specialized niches for law firms.
-
Where do you see the most significant legal developments in artificial intelligence in your jurisdiction in the next 12 months? Are there any ongoing initiatives that could reshape AI governance?
In the next 12 months, the most significant legal developments in Mexico are expected to come from legislative activity in Congress. Several AI-related initiatives have been presented, aiming to regulate the development, deployment, and use of AI systems across sectors. These proposals advocate for principles of transparency, accountability, non-discrimination, and human oversight, particularly for high-risk systems in employment, health, finance, and public services. The president of the Senate’s commission on AI has already stated that regulation is coming soon.
A recurring theme in these initiatives is the creation of a national AI agency and the adoption of a risk-based regulatory model, similar to the EU’s AI Act. While these proposals remain under discussion, they signal growing momentum toward a structured governance framework.
In parallel, ongoing debates around AI-generated content, data sovereignty, and algorithmic bias are gaining relevance. The SCJN is also expected to rule on several cases related to technology use, user-generated content, liability for online content, and copyright, which could influence how intellectual property and authorship are treated in future legislation.
Taken together, these developments suggest that Mexico is moving toward a more proactive stance on AI regulation. While no binding framework has been enacted yet, companies using or developing AI systems should begin aligning with international standards and strengthening internal policies to anticipate emerging obligations.
Mexico: Artificial Intelligence
This country-specific Q&A provides an overview of Artificial Intelligence laws and regulations applicable in Mexico.