Legal Landscapes: Taiwan – Artificial Intelligence

Robin Chang and Eddie Hsiung

Partners, Lee and Li, Attorneys-at-Law


1. What is the current legal landscape for Artificial Intelligence in your jurisdiction?

Taiwan has announced several guidelines for the application of artificial intelligence (AI) in both the public and private sectors.

In the public sector, under the Guidelines on the Use of Generative AI for the Executive Yuan and Its Subordinate Agencies (the "Executive Yuan" being Taiwan's cabinet), generative AI users within the Executive Yuan and its subordinate bodies are required to uphold a responsible and trustworthy attitude, maintain autonomy and control, and adhere to fundamental principles including security, privacy protection, data governance, and accountability. The guidelines emphasize, among other things, the following:

  • The information generated by generative AI must be objectively and professionally evaluated for risk by the responsible staff member/user. It must not replace the staff member’s independent thinking, creativity, or interpersonal interactions.
  • Confidential documents must be personally drafted by the responsible staff member; the use of generative AI is strictly prohibited.
  • Responsible staff members must not provide generative AI with information involving official secrets, personal data, or information the agency has not authorized for disclosure, nor may they ask generative AI questions that may involve confidential business or personal data. However, for generative AI models deployed in a closed, on-premises environment, usage may be permitted, once the security of the system environment has been confirmed, in accordance with the confidentiality classification of the documents or information concerned.
  • Agencies must not fully trust information generated by generative AI, nor may they directly use unverified output as the basis for administrative actions or as the sole basis for official decision-making.
  • When agencies use generative AI as an auxiliary tool for business execution or service provision, appropriate disclosure should be made.

In the private sector, regulations primarily focus on the financial industry. The Financial Supervisory Commission (FSC), Taiwan's financial regulator, has issued the Guidelines on the Use of Artificial Intelligence (AI) for the Financial Sector (the "Financial AI Guidelines"), which serve as an administrative guidance framework, alongside several self-regulatory rules established by the relevant financial industry associations. The Financial AI Guidelines require financial institutions to comply with principles such as clear accountability, fairness, privacy protection, system security, transparency and explainability, and sustainability, thereby ensuring the compliant and secure application of AI technologies.

Furthermore, the National Science and Technology Council (NSTC) released a draft AI Basic Act in August 2024. Meanwhile, lawmakers have also submitted multiple draft versions, which diverge on key issues such as legal penalties, risk-based regulation, and conflicts with existing laws. According to news reports, the Executive Yuan has reassigned the Ministry of Digital Affairs (MODA) as the agency primarily responsible for the draft; as of mid-2025, however, MODA had not yet submitted a full official draft to the Legislative Yuan (Taiwan's congress). When the draft will be passed by the Legislative Yuan, and what its final contents will be, remain to be seen.

2. What three essential pieces of advice would you give to clients involved in Artificial Intelligence matters?

When assisting enterprises or clients in handling matters related to artificial intelligence (AI), three recommendations are particularly important.

First, companies may wish to ensure the explainability of AI systems. While AI can significantly enhance work efficiency, its operational processes are often highly complex. In machine learning applications in particular, systems automatically collect data and make inferences, making it difficult for human users to understand how decisions are formed. Understanding where AI decision-making is opaque (a "black box") versus explainable is critical to avoid misusing AI or justifying questionable decisions as "algorithmic" outcomes. If the risks associated with this black-box effect are not properly addressed, they may undermine human oversight and the basis for judgment. Generative AI, moreover, can produce highly realistic but fabricated content. Therefore, even though current regulations may not yet mandate it, enterprises should consider establishing, as a matter of internal control and supervision, sufficient explainability mechanisms when developing or implementing AI systems. Such mechanisms help users understand the principles behind AI operations, avoid over-reliance on AI outputs, and enable timely identification of potential errors, biases, or systemic risks. Ongoing human oversight and clear guidelines for addressing AI errors or biases are also necessary to maintain ethical and effective AI deployment.

Second, the legality of, and risks associated with, AI training data must be handled with caution, particularly during the data collection phase, with strict attention to privacy rights and intellectual property rights. Regarding personal data protection, it is essential to confirm, for example, whether the collected data contains personal data, especially sensitive information, and whether such data is collected on a legal basis. In terms of intellectual property rights, attention must be paid to the legality of data sources, and an assessment must be made as to whether there is any risk of infringing others' copyright or other intangible assets. In addition, precautions should be taken to prevent internal employees from inadvertently disclosing trade secrets through the use of AI tools, thereby avoiding commercial losses or legal disputes for the enterprise.

Third, enterprises are advised to clearly define accountability and governance responsibilities when using AI systems. Even if a company's directors or senior management are not experts in the AI field, they should not rely solely on the recommendations or conclusions provided by AI systems when making major decisions involving AI technology. It is recommended that enterprises seek external expert assistance and opinions when facing high-risk decisions, and ensure that board members understand at least the basic principles and operational logic of AI technology.

In summary, for enterprises to effectively utilize AI technology, they must address technical explainability, data legality, and corporate governance. Only by doing so can they harness the efficiency and innovation brought by AI while ensuring legal compliance and risk control, thereby establishing a sustainable and responsible AI application framework.

3. What are the greatest threats and opportunities in Artificial Intelligence law in the next 12 months?

The drafting of the AI Basic Act has already been initiated, and the NSTC has emphasized its importance, stating that future efforts should simultaneously strengthen both the legal framework and foundational infrastructure. However, MODA takes a more cautious view of the legislative process, noting that, given the rapid evolution of AI technologies, premature legislation may result in rigid and outdated regulations. The rapid pace of generative AI innovation outstrips the speed at which lawmakers can enact tailored regulations, and emerging challenges, such as copyright in AI training datasets, algorithmic transparency, and misinformation, lack clear legal standards, creating grey zones in accountability and enforcement during this legislative gap. These issues highlight the current challenge of bridging the gap between legal development and technological advancement.

Also, sensitive personal data, especially in sectors such as healthcare, requires strong privacy protection. Regulators must strike a delicate balance between enabling innovation through data accessibility and safeguarding privacy rights. Failure to manage this balance properly risks either insufficient data availability for model training or potential abuse and a loss of public trust.

Instead of creating entirely new legislation, existing laws, such as competition law, can be flexibly adapted to address AI-specific issues. According to our understanding, certain competition authorities are proactively studying the impacts of AI, seeking to promote predictable and effective enforcement that balances fair competition with innovation and growth. In July 2025, the Fair Trade Commission (FTC), Taiwan's competition authority, released a briefing document on competition law issues that may arise from generative AI, discussing topics such as market concentration, monopoly, collusive behavior, and false advertising, while also soliciting public feedback. This move reflects a shift by regulatory authorities from traditional oversight toward more adaptive, technology-responsive governance, offering an opportunity to establish a more preventive regulatory framework.

According to an FSC press release, statistics as of May 2025 indicate that financial institutions are actively adopting AI for fraud prevention and large language model development. The FSC has launched collaborative mechanisms through the FinTech Industry Alliance and continues to encourage joint research and application trials. This demonstrates strong regulatory support for cross-sector collaboration and represents a significant development opportunity for the industry.

At the same time, however, financial institutions face compliance risks, including the unpredictability of generative AI outputs, concerns over data security and privacy, and issues related to transparency and accountability in AI decision-making. Under the current legal framework, it is essential for the financial sector to strengthen internal control systems and compliance design; otherwise, the long-term viability of AI applications may be compromised.

4. How do you ensure high client satisfaction levels are maintained by your practice?

With the rapid advancement of AI technologies, the legal industry is undergoing an unprecedented transformation. AI tools free lawyers from repetitive and time-consuming tasks, enhancing work efficiency and allowing us to devote greater attention and resources to complex, strategic, and highly interpersonal core legal services. This shift not only alters the way lawyers work but also progressively reshapes client expectations regarding legal services.

Accordingly, we closely monitor the application of AI in legal practice and proactively identify which tasks can be effectively delegated to AI and which require the expertise of legal professionals. We believe that only by actively revisiting and adjusting our service offerings (ensuring that clients continue to perceive the value of our professional services even when AI seamlessly handles certain tasks) can we maintain and enhance client satisfaction and trust.

Also, law firms should not only establish internal regulations for the use of generative AI tools within the firm, but also provide professional training to all employed lawyers, legal personnel, and even assistants. This training is essential to ensure that these employees have a basic understanding of these AI tools, recognize the risks and ethical concerns involved in their usage, and thereby safeguard the quality of legal services and the interests of clients.

Furthermore, we are committed to the ongoing learning and application of legal technology, combined with strategic thinking, so as to use AI tools effectively to improve service quality and efficiency. At the same time, we carefully evaluate the possibility of, and opportunities for, collaborating with emerging legal tech providers to determine the most appropriate modes and degrees of partnership, ensuring innovative and forward-looking solutions for our clients.

Lastly, we maintain vigilant attention to domestic AI-related policies, legislative developments, and government guidelines, while keeping abreast of legislative interpretations and key issues in advanced jurisdictions such as the European Union. This comprehensive approach enables us to offer clients precise, forward-thinking advice on AI-related matters. Through these multifaceted efforts, we strive to ensure that clients continue to receive high-quality legal services with significant added value amid the evolving intersection of AI and law.

5. What technological advancements are reshaping Artificial Intelligence law and how can clients benefit from them?

In recent years, the rapid development of AI has profoundly transformed both the conceptual approach to, and the applicable framework of, AI law. Several key technologies, including generative AI, large language models (LLMs), and AI agents, have attracted particular attention. Taking generative AI and LLMs as examples, these technologies offer significant convenience and application potential in areas such as natural language processing, content generation, and decision automation. However, they also raise numerous legal issues, such as the legality of data sources and collection, the determination of copyright ownership, possible intellectual property infringement caused by the use of AI, the authenticity of generated content, and legal liability arising from model bias. In response, legal practitioners are actively considering how to adjust liability attribution mechanisms, enhance the verifiability of the technology, and address the accountability demands brought about by AI.

Furthermore, the emergence of AI agents has further expanded the role of AI systems, enabling them not only to passively provide recommendations but also to actively execute tasks, issue commands, and even interact with other systems. These technological changes challenge existing legal definitions of "actor" and "liability," prompting enterprises to establish robust risk control mechanisms in advance to prevent unauthorized actions or misuse by AI.

For enterprises and clients, while these technologies present new challenges, they also offer numerous potential legal and commercial opportunities. By gaining a deep understanding of the legal implications of new technologies, organizations can implement mechanisms for explainability, risk disclosure, and liability attribution at the early stages of AI adoption, thereby reducing the risk of future legal disputes and strengthening overall data governance and compliance strategies. In addition, by continuously monitoring the evolution of AI technologies and industry trends, and proactively adjusting their strategies according to operational needs, enterprises can build resilient and competitive AI application frameworks.


