{"id":110066,"date":"2025-08-07T12:25:33","date_gmt":"2025-08-07T12:25:33","guid":{"rendered":"https:\/\/my.legal500.com\/guides\/?post_type=comparative_guide&#038;p=110066"},"modified":"2025-08-29T15:36:17","modified_gmt":"2025-08-29T15:36:17","slug":"singapore-artificial-intelligence","status":"publish","type":"comparative_guide","link":"https:\/\/my.legal500.com\/guides\/chapter\/singapore-artificial-intelligence\/","title":{"rendered":"Singapore: Artificial Intelligence"},"content":{"rendered":"","protected":false},"template":"","class_list":["post-110066","comparative_guide","type-comparative_guide","status-publish","hentry","guides-artificial-intelligence","jurisdictions-singapore"],"acf":[],"appp":{"post_list":{"below_title":"<div class=\"guide-author-details\"><span class=\"guide-author\">Drew &amp; Napier LLC<\/span><span class=\"guide-author-logo\"><img src=\"https:\/\/my.legal500.com\/guides\/wp-content\/uploads\/sites\/1\/2019\/03\/Drew-Napier.jpg\"\/><\/span><\/div>"},"post_detail":{"above_title":"<div class=\"guide-author-details\"><span class=\"guide-author\">Drew &amp; Napier LLC<\/span><span class=\"guide-author-logo\"><img src=\"https:\/\/my.legal500.com\/guides\/wp-content\/uploads\/sites\/1\/2019\/03\/Drew-Napier.jpg\"\/><\/span><\/div>","below_title":"<span class=\"guide-intro\">This country-specific Q&amp;A provides an overview of Artificial Intelligence laws and regulations applicable in Singapore<\/span><div class=\"guide-content\"><div class=\"filter\">\r\n\r\n\t\t\t\t<input type=\"text\" placeholder=\"Search questions and answers...\" class=\"filter-container__search-field\">\r\n\t\t\t<\/div>\r\n\r\n\t\t\t\r\n\r\n\r\n\t\t\t<ol class=\"custom-counter\">\r\n\r\n\t\t\t\r\n\r\n\t\t\t\t\t\t\t\t\t<li class=\"question-block filter-container__element\">\r\n\t\t\t\t\t\t<h3 class=\"filter-container__match-html\">What are your country\u2019s legal definitions of \u201cartificial intelligence\u201d?<\/h3>\r\n\t\t\t\t\t\t<button 
id=\"show-me\">+<\/button>\r\n\t\t\t\t\t\t<div class=\"question_answer filter-container__match-html\" style=\"display:none;\"><p>Singapore has defined \u201cartificial intelligence\u201d within the Model Artificial Intelligence Governance Framework (\u201c<strong>Model Framework<\/strong>\u201d), issued by the Infocomm Media Development Authority (\u201c<strong>IMDA<\/strong>\u201d) and the Personal Data Protection Commission (\u201c<strong>PDPC<\/strong>\u201d):<\/p>\n<p style=\"padding-left: 10px\">Artificial intelligence (or \u201c<strong>AI<\/strong>\u201d) refers to a set of technologies that seek to simulate human traits such as knowledge, reasoning, problem solving, perception, learning and planning, and, depending on the AI model, produce an output or decision (such as a prediction, recommendation and\/or classification).<\/p>\n<p>This Model Framework is a voluntary document, setting out ethical and governance principles for the use of AI and translating them into practical recommendations for organisations to adopt. It applies across all sectors.<\/p>\n<p>Singapore also distinguishes between traditional\/discriminative AI and generative AI, where \u201ctraditional AI\u201d refers to \u201cAI models that make predictions by leveraging insights derived from historical data\u201d, and \u201cgenerative AI\u201d refers to \u201cAI models capable of generating text, images and other media types. They learn the patterns and structure of their input training data and generate new data with similar characteristics\u201d. This distinction is set out in our Model AI Governance Framework for Generative AI.<\/p>\n<\/div>\r\n\r\n\r\n\t\t\t\t\t<\/li>\r\n\r\n\t\t\t\t\t\t\t\t\t<li class=\"question-block filter-container__element\">\r\n\t\t\t\t\t\t<h3 class=\"filter-container__match-html\">Has your country developed a national strategy for artificial intelligence? If so, has there been any progress in its implementation? 
Are there plans for updates or revisions?<\/h3>\r\n\t\t\t\t\t\t<button id=\"show-me\">+<\/button>\r\n\t\t\t\t\t\t<div class=\"question_answer filter-container__match-html\" style=\"display:none;\"><p>In December 2023, Singapore issued its second National Artificial Intelligence Strategy, \u201cNAIS 2.0\u201d. NAIS 2.0 makes 3 shifts from the first national AI strategy, which was announced in 2019. The first is that AI should be seen as a \u201cnecessity\u201d and not only an \u201copportunity\u201d \u2013 people \u201cmust know\u201d AI and not just see it as \u201cgood to have\u201d. The second is that Singapore must move from a local approach to a global approach, such that Singapore should be well connected to global innovation networks, contribute to meaningful AI breakthroughs, and develop AI products that the world values. The third is that Singapore will move beyond the flagship national AI projects in key areas such as healthcare, education and border security, and administer AI-enabled solutions at scale.<\/p>\n<p>The NAIS 2.0 also emphasises that Singapore will \u201cretain agility\u201d in the regulatory approach \u2013 the Government will take a \u201cpragmatic approach\u201d, supporting innovation but still ensuring that AI is developed and used responsibly. The Government will also take \u201cdifferentiated approaches to managing risks to and from AI, ranging from regulatory moves to voluntary guidelines\u201d, and will \u201cdo so thoughtfully and in concert with others, accounting for the global nature of AI\u201d.<\/p>\n<p>In terms of implementation, the Singapore government has continued its funding and support for companies to adopt AI solutions, ranging from grants, to initiatives partnering them with major cloud service providers to access AI tools and computing power, to sandboxes where small and medium enterprises can trial generative AI solutions from vendors. 
Most recently, Singapore launched the \u201cGlobal AI Assurance Sandbox\u201d, where builders or deployers of generative AI solutions can be matched with specialist technical testers.<\/p>\n<\/div>\r\n\r\n\r\n\t\t\t\t\t<\/li>\r\n\r\n\t\t\t\t\t\t\t\t\t<li class=\"question-block filter-container__element\">\r\n\t\t\t\t\t\t<h3 class=\"filter-container__match-html\">Has your country implemented rules or guidelines (including voluntary standards and ethical principles) on artificial intelligence? If so, please provide a brief overview of said rules or guidelines. If no rules on artificial intelligence are in force in your jurisdiction, please (i) provide a short overview of the existing laws that potentially could be applied to artificial intelligence and the use of artificial intelligence, (ii) briefly outline the main difficulties in interpreting such existing laws to suit the peculiarities of artificial intelligence, and (iii) summarize any draft laws, or legislative initiatives, on artificial intelligence.<\/h3>\r\n\t\t\t\t\t\t<button id=\"show-me\">+<\/button>\r\n\t\t\t\t\t\t<div class=\"question_answer filter-container__match-html\" style=\"display:none;\"><p>Singapore does not have legislation that specifically addresses the use of artificial intelligence across a variety of sectors (cf. the EU AI Act). The government is presently not looking to enact regulation for AI, but is focussing its efforts on promoting the responsible use of AI through instruments such as the Model Artificial Intelligence Governance Framework, and its AI governance testing framework and software toolkit called \u201cAI Verify\u201d. The government will continue to monitor the state of technology and how it is being used before deciding on a regulatory approach.<\/p>\n<p>However, Singapore has enacted laws in relation to specific applications of AI technology. 
Our Road Traffic Act 1961 was amended in 2017 to provide for the trial and use of autonomous motor vehicles (\u201c<strong>AVs<\/strong>\u201d), as our road traffic laws were previously premised on there being a human driver (previously, AVs could only be used on the roads by way of exemptions from the Act). Medical devices that incorporate AI technology (\u201c<strong>AI-MDs<\/strong>\u201d) must be registered under the Health Products Act 2007, as all medical devices must be registered regardless of whether they incorporate AI technology. However, the Health Sciences Authority\u2019s Regulatory Guidelines for Software Medical Devices specifies the additional information that must be submitted when registering an AI-MD, for example, information about the data sets used for training and validation, a description of the AI model, reports to substantiate its performance claims, and the level of human intervention in the system.<\/p>\n<p>The first express mention of \u201cartificial intelligence\u201d in our laws was in September 2024, when the Elections (Integrity of Online Advertising) (Amendment) Bill was introduced to prohibit the publication of online election advertising containing digitally generated or manipulated content about candidates in parliamentary and presidential elections. Generative AI was listed as an example of digital means by which content could be generated or manipulated. The resulting Act came into force on 22 January 2025.<\/p>\n<p>Nevertheless, in all instances where AI technology is applied, existing laws can still apply. For example, tort law and contract law can apply where the AI system does not perform as expected, and the Personal Data Protection Act 2012 applies where the AI system is used to process personal data. 
Companies that develop or utilise AI systems must also comply with existing corporate laws, intellectual property laws, employment laws and competition laws, to name a few.<\/p>\n<p>Our regulators have also issued a series of guidelines to assist the industry with utilising this new technology, such as:<\/p>\n<p>(a) IMDA\/PDPC issued (in January 2020) the 2nd Edition of the \u201cModel Artificial Intelligence Governance Framework\u201d, setting out key principles organisations should take into account when developing and deploying traditional AI systems.<\/p>\n<p>The Model Framework is based on 2 high-level guiding principles to promote trust in the use of AI, where organisations using AI in decision-making should ensure that the decision-making process is explainable, transparent and fair, and that AI solutions should be human-centric with human well-being and safety at the forefront. It is complemented by the Implementation and Self-Assessment Guide for Organisations which sets out a series of questions for organisations to self-assess how their practices align with the Model Framework.<\/p>\n<p>(b) IMDA and the AI Verify Foundation issued (in June 2024) a \u201cModel Governance Framework for Generative AI\u201d (\u201c<strong>Model Gen-AI Framework<\/strong>\u201d), which sets out 9 dimensions in which policymakers, industry, the research community and the broader public should take action to build trustworthy generative AI systems.<\/p>\n<p>(c) The Monetary Authority of Singapore (MAS) released (in November 2018) the \u201cPrinciples to Promote Fairness, Ethics, Accountability and Transparency (FEAT) in the Use of Artificial Intelligence and Data Analytics in Singapore\u2019s Financial Sector\u201d, and leads the Veritas consortium within the financial industry to promote the responsible use of AI.<\/p>\n<p>(d) The Intellectual Property Office of Singapore (IPOS) issued the \u201cIP and Artificial Intelligence Information Note\u201d to provide an overview of how 
different types of IP rights can be used to protect AI inventions.<\/p>\n<\/div>\r\n\r\n\r\n\t\t\t\t\t<\/li>\r\n\r\n\t\t\t\t\t\t\t\t\t<li class=\"question-block filter-container__element\">\r\n\t\t\t\t\t\t<h3 class=\"filter-container__match-html\">Which rules apply to defective artificial intelligence systems, i.e. artificial intelligence systems that do not provide the safety that the public at large is entitled to expect?<\/h3>\r\n\t\t\t\t\t\t<button id=\"show-me\">+<\/button>\r\n\t\t\t\t\t\t<div class=\"question_answer filter-container__match-html\" style=\"display:none;\"><p>Singapore has not enacted legislation that specifically deals with defective artificial intelligence systems. Such systems would thus fall to be governed by the existing regime for the particular product \u2013 for example, in the case of AI-MDs regulated under the Health Products Act 2007, the Authority may suspend or cancel the registration of such an AI-MD if it has reasonable grounds to believe that the safety of the AI-MD has changed so as to render it unsuitable to continue to be registered, or if it is in the public interest to do so (see section 37 of the Health Products Act).<\/p>\n<p>Ordinary principles of tort and contract will also apply. Please see S\/N 5 below for further details.<\/p>\n<\/div>\r\n\r\n\r\n\t\t\t\t\t<\/li>\r\n\r\n\t\t\t\t\t\t\t\t\t<li class=\"question-block filter-container__element\">\r\n\t\t\t\t\t\t<h3 class=\"filter-container__match-html\">Please describe any civil and criminal liability rules that may apply in case of damages caused by artificial intelligence systems. 
Have there been any court decisions or legislative developments clarifying liability frameworks applied to artificial intelligence?<\/h3>\r\n\t\t\t\t\t\t<button id=\"show-me\">+<\/button>\r\n\t\t\t\t\t\t<div class=\"question_answer filter-container__match-html\" style=\"display:none;\"><p><u>Civil liability<\/u><\/p>\n<p>Where damage is caused by the AI system, the affected person may seek a remedy in tort or contract (if there is a contract between the parties). However, AI technology has some unique features that may affect how conventional tort and contract principles of liability are applied:<\/p>\n<ol style=\"padding-left: 0\" type=\"a\">\n<li>AI is a \u201cblack box\u201d \u2013 it is not always possible to explain how or why the AI system reached a particular outcome even if the factors the system is programmed to take into account are known, and the type of model chosen affects how easily the system can be explained, as some models are more complex than others \u2013 this would increase the difficulty in proving that the damage was the result of a defect in the programming of the AI system, as opposed to some other cause;<\/li>\n<li>AI is self-learning, where it can learn from the data it has been exposed to during its training and improve the results generated without being explicitly programmed, meaning that the output of the system will not always be foreseeable;<\/li>\n<li>AI involves multiple people in its development, from procuring and selecting the datasets, to training the algorithm, to monitoring the performance of the algorithm, so it will be a complex fact-finding exercise to determine who is liable when damage is caused. AI is also heavily reliant on the data that it is trained on, as it makes predictions based on that data, so if the data is flawed, the accuracy of the output is affected, and errors could be compounded by other errors (e.g. 
in addition to flawed datasets for training, the algorithm was not a suitable one).<\/li>\n<\/ol>\n<p><u>Criminal liability<\/u><\/p>\n<p>Our criminal laws presently do not attribute liability to AI systems directly. Criminal liability attaches only to natural or legal persons, and an AI system is neither. Where an AI system causes damage, or breaches a criminal law, an inquiry into how this arose would be warranted, and it would turn on the facts whether the programmer of the system, its owner, the person who operated it, or any other person, is criminally liable. The mental state of the human in operating or overseeing the system is a key determining factor \u2013 was the consequence something they intended or knew about?<\/p>\n<p>For example, if a person uses an AI system to deliberately commit crimes (contrary to what the AI system was designed for), such as hacking, that person could potentially be found guilty of an offence under the Computer Misuse Act 1993.<\/p>\n<p>The Singapore Academy of Law\u2019s Law Reform Committee issued a \u201cReport on Criminal Liability, Robotics and AI Systems\u201d in February 2021 exploring these issues in depth, and cautioning that there is no \u201cone size fits all\u201d approach to the application of criminal liability across all uses of AI.<\/p>\n<p><u>Regulatory guidance<\/u><\/p>\n<p>In the Model Gen-AI Framework, it was suggested that \u201cresponsibility can be allocated based on the level of control that each stakeholder has in the generative AI development chain, so that the able party takes necessary action to protect end-users\u201d. 
This is an ongoing development where \u201cthe details of how responsibilities will be allocated\u2026will need to be worked out gradually\u201d, with model developers \u201cwell-placed\u201d to lead this development as they are \u201cthe most knowledgeable about their own models and how they are deployed\u201d.<\/p>\n<p>Singapore presently does not have court decisions or legislative developments specifically addressing harms caused by AI.<\/p>\n<\/div>\r\n\r\n\r\n\t\t\t\t\t<\/li>\r\n\r\n\t\t\t\t\t\t\t\t\t<li class=\"question-block filter-container__element\">\r\n\t\t\t\t\t\t<h3 class=\"filter-container__match-html\">Who is responsible for any harm caused by an AI system? And how is the liability allocated between the developer, the deployer, the user and the victim?<\/h3>\r\n\t\t\t\t\t\t<button id=\"show-me\">+<\/button>\r\n\t\t\t\t\t\t<div class=\"question_answer filter-container__match-html\" style=\"display:none;\"><p>Who is responsible for the harm caused by an AI system would turn on the facts of the case and, if there is a contract between the parties, on what is set out in the contract.<\/p>\n<p>For example, if the user did not use the AI system for its intended purpose, but for a different purpose despite clear warnings from the developer about the limitations of the AI system, then the developer may not be held responsible for any harm caused. Similarly, if a victim dashed across the road in front of an autonomous vehicle without checking for traffic, he or she may be found contributorily negligent.<\/p>\n<p>Singapore\u2019s Model Gen-AI Framework recommends that responsibility be allocated based on the level of control that each stakeholder has in the generative AI development chain, drawing on how the cloud industry has built and codified comprehensive shared responsibility models. 
It also considers \u201csafety nets\u201d where unanticipated harm occurs, such as the offering of indemnities, the amending of legal frameworks to make it simpler for end-users to prove damage caused by AI-enabled products and services, and the applicability of no-fault insurance.<\/p>\n<\/div>\r\n\r\n\r\n\t\t\t\t\t<\/li>\r\n\r\n\t\t\t\t\t\t\t\t\t<li class=\"question-block filter-container__element\">\r\n\t\t\t\t\t\t<h3 class=\"filter-container__match-html\">What burden of proof will have to be satisfied for the victim of the damage to obtain compensation?<\/h3>\r\n\t\t\t\t\t\t<button id=\"show-me\">+<\/button>\r\n\t\t\t\t\t\t<div class=\"question_answer filter-container__match-html\" style=\"display:none;\"><p>In civil cases, the burden of proof is on the balance of probabilities. Where the victim alleges negligence on the part of the defendant AI developer\/operator, the victim must establish that the defendant owed it a duty of care, that there was a breach of that duty (conduct falling below the standard of care), and that the breach caused the loss, which must not be too remote.<\/p>\n<p>In criminal cases, the case must be proven by the prosecution beyond reasonable doubt. 
What must be proven depends on the <em>actus reus<\/em> and <em>mens rea<\/em> elements of the offence set out in legislation.<\/p>\n<\/div>\r\n\r\n\r\n\t\t\t\t\t<\/li>\r\n\r\n\t\t\t\t\t\t\t\t\t<li class=\"question-block filter-container__element\">\r\n\t\t\t\t\t\t<h3 class=\"filter-container__match-html\">Is the use of artificial intelligence insured and\/or insurable in your jurisdiction?<\/h3>\r\n\t\t\t\t\t\t<button id=\"show-me\">+<\/button>\r\n\t\t\t\t\t\t<div class=\"question_answer filter-container__match-html\" style=\"display:none;\"><p>For the use of autonomous vehicles, the person authorised to undertake the trial or use of the vehicle must have in place liability insurance indemnifying the owner and any authorised driver or operator of the vehicle in relation to death, bodily injury or damage to property caused by, or arising out of, the use of the vehicle on a road. In lieu of such liability insurance, the person must deposit with the authority a security of not less than SGD 1.5 million, so that the victim will always have a remedy. 
For more details, please see section 6C of the Road Traffic Act 1961, and regulations 14 and 15 of the Road Traffic (Autonomous Motor Vehicles) Rules 2017.<\/p>\n<p>For the deployment of AI technology in other products or services, whether insurance is required is determined by the existing statutory regime for that product or service, and not whether AI is being used.<\/p>\n<p>Nonetheless, developers and users of AI systems are free to consult insurance providers and obtain their own coverage.<\/p>\n<\/div>\r\n\r\n\r\n\t\t\t\t\t<\/li>\r\n\r\n\t\t\t\t\t\t\t\t\t<li class=\"question-block filter-container__element\">\r\n\t\t\t\t\t\t<h3 class=\"filter-container__match-html\">Can artificial intelligence be named an inventor in a patent application filed in your jurisdiction?<\/h3>\r\n\t\t\t\t\t\t<button id=\"show-me\">+<\/button>\r\n\t\t\t\t\t\t<div class=\"question_answer filter-container__match-html\" style=\"display:none;\"><p>Under Singapore law, the inventor must be the \u201cactual deviser\u201d of the invention (see section 2(1) of the Patents Act 1994), and this must be a natural person (see the cases of <em>National University Hospital (Singapore) Pte Ltd v Cicada Cube Pte Ltd<\/em> [2017] SGHC 53 at para 51, and <em>Energenics Pte Ltd v Musse Singapore Pte Ltd and anor<\/em> [2013] SGHCR 21 at para 24).<\/p>\n<p>This is also the position set out (at pages 18 \u2013 19) in the joint report by the IPOS and the Singapore Management University titled \u201cWhen Code Creates: A Landscape Report on Issues at the Intersection of Artificial Intelligence and Intellectual Property Law\u201d (\u201c<strong>the IPOS-SMU Landscape Report<\/strong>\u201d).<\/p>\n<\/div>\r\n\r\n\r\n\t\t\t\t\t<\/li>\r\n\r\n\t\t\t\t\t\t\t\t\t<li class=\"question-block filter-container__element\">\r\n\t\t\t\t\t\t<h3 class=\"filter-container__match-html\">Do images generated by and\/or with artificial intelligence benefit from copyright protection in your jurisdiction? 
If so, who is the authorship attributed to?<\/h3>\r\n\t\t\t\t\t\t<button id=\"show-me\">+<\/button>\r\n\t\t\t\t\t\t<div class=\"question_answer filter-container__match-html\" style=\"display:none;\"><p>The IPOS-SMU Landscape Report discusses 3 issues relating to AI and copyright \u2013 (1) Can AI be named as an author; (2) Is there copyright protection for works generated by AI; and (3) Who is the owner of an AI-generated work.<\/p>\n<p>On the first issue, the answer is \u201cno\u201d. Singapore\u2019s Copyright Act 2021 requires the author to be a natural person, and for a work to be protected by copyright, it must be original (see <em>Asia Pacific Publishing Pte Ltd v Pioneers &amp; Leaders (Publishers) Pte Ltd<\/em>).<\/p>\n<p>On the second issue, for copyright to subsist, \u201cthere must be an authorial creation that is causally connected with the engagement of the human intellect\u201d (i.e. \u201cthe application of intellectual effort, creativity, or the exercise of mental labour, skill or judgment\u201d) \u2013 see <em>Global Yellow Pages Limited v Promedia Directories Pte Ltd<\/em> [2017] SGCA 28. The answer would thus depend on the nature and extent of the prompts entered by the human, as well as how the AI image generator operates to create images (as that impacts how much control the human has over the AI-generated output). The IPOS-SMU Landscape Report distinguishes between \u201cAI-assisted\u201d works (which are akin to a human using a tool as part of their creative process) and \u201cAI-generated\u201d works with no human intervention.<\/p>\n<p>Singapore does not have an equivalent in our Copyright Act 2021 to the UK\u2019s Copyright, Designs and Patents Act 1988 provision for the protection of computer-generated works.<\/p>\n<p>On the third issue, assuming that AI-generated works can be protected by copyright, the owner of copyright can be a non-natural person with legal personality (e.g. a company) who has been assigned the ownership. 
Sections 133 and 134 of the Copyright Act 2021 address copyright ownership, where the maker of the work is the default first owner, and where the work is created in the course of employment.<\/p>\n<\/div>\r\n\r\n\r\n\t\t\t\t\t<\/li>\r\n\r\n\t\t\t\t\t\t\t\t\t<li class=\"question-block filter-container__element\">\r\n\t\t\t\t\t\t<h3 class=\"filter-container__match-html\">What are the main issues to consider when using artificial intelligence systems in the workplace? Have any new regulations been introduced regarding AI-driven hiring, performance assessment, or employee monitoring?<\/h3>\r\n\t\t\t\t\t\t<button id=\"show-me\">+<\/button>\r\n\t\t\t\t\t\t<div class=\"question_answer filter-container__match-html\" style=\"display:none;\"><p>The Model AI Framework sets out 4 key areas where organisations should adopt measures to promote the responsible use of AI:<\/p>\n<ol style=\"padding-left: 0\" type=\"a\">\n<li>Adapt existing or set up internal governance structures and measures to have appropriate oversight over how AI technologies are used in the business, to minimise risks and allocate responsibilities relating to algorithmic decision-making;<\/li>\n<li>Determine the appropriate level of human involvement in AI-augmented decision-making based on the organisation\u2019s risk appetite for the use of AI and the nature of the decision to be made;<\/li>\n<li>Operations management, such that the organisation addresses potential issues when developing, selecting and maintaining AI models, including the management of data (e.g. ensuring it is drawn from representative sources);<\/li>\n<li>Strategies for interacting and communicating with stakeholders (e.g. to inform them that AI is being used and how it affects them).<\/li>\n<\/ol>\n<p>Separately, if an organisation is using generative AI to enhance productivity (e.g. 
employees use a ChatGPT-like AI system to generate marketing materials, summarise documents, etc.), the organisation should have in place guidelines for employees on the use of such tools, and ensure that employees are aware of the limitations of such technology. For example, the organisation should require employees to check the output of the AI system for accuracy, and warn them not to input sensitive data into the system unless the necessary security measures are in place.<\/p>\n<p>In terms of regulations, Singapore does not have legislation specifically addressing the use of AI in employment. However, Singapore passed the Workplace Fairness Act in January 2025 to strengthen protections against workplace discrimination based on characteristics such as gender, marital status and disability. Previous guidelines on anti-discriminatory practices in the workplace are now placed on a statutory footing. The Act is not yet in force as at the time of writing.<\/p>\n<\/div>\r\n\r\n\r\n\t\t\t\t\t<\/li>\r\n\r\n\t\t\t\t\t\t\t\t\t<li class=\"question-block filter-container__element\">\r\n\t\t\t\t\t\t<h3 class=\"filter-container__match-html\">What privacy issues arise from the development (including training) and use of artificial intelligence?<\/h3>\r\n\t\t\t\t\t\t<button id=\"show-me\">+<\/button>\r\n\t\t\t\t\t\t<div class=\"question_answer filter-container__match-html\" style=\"display:none;\"><p>Artificial intelligence trains and operates on a vast amount of data, which is likely to include personal data. Personal data could be obtained from a broad range of sources (e.g. CCTV cameras, GPS location data, computing devices) and may be obtained from the data subject or another individual (e.g. where applications like ChatGPT are used, the user could input another individual\u2019s personal data in the prompt). Data from multiple sources can also be combined to generate insights about a particular individual (e.g. 
their preferences, buying patterns, emotional state, health status, likelihood of repaying a loan on time). This gives rise to data privacy issues such as whether the data subjects are adequately informed of what personal data will be collected and how the AI system may use and disclose their personal data, as well as whether individuals can prevent use and disclosure or ensure that the data \/ inferences are accurate.<\/p>\n<p>The Personal Data Protection Act 2012 (\u201c<strong>PDPA<\/strong>\u201d) must be complied with where personal data is processed, whether for the development or in the deployment of the AI system.<\/p>\n<p>Organisations can consider the feasibility of using anonymised data, which will not be subject to the PDPA. However, even if a data set is initially anonymised, organisations should be mindful that the risk of reidentification could increase over time (e.g. as the AI system aggregates more data to derive correlations).<\/p>\n<\/div>\r\n\r\n\r\n\t\t\t\t\t<\/li>\r\n\r\n\t\t\t\t\t\t\t\t\t<li class=\"question-block filter-container__element\">\r\n\t\t\t\t\t\t<h3 class=\"filter-container__match-html\">How is data scraping regulated in your jurisdiction from an IP, privacy and competition point of view? Are there any recent precedents addressing the legality of data scraping for AI training?<\/h3>\r\n\t\t\t\t\t\t<button id=\"show-me\">+<\/button>\r\n\t\t\t\t\t\t<div class=\"question_answer filter-container__match-html\" style=\"display:none;\"><p>From an IP perspective, in relation to using copyrighted materials to train AI systems, Singapore\u2019s Copyright Act 2021 sets out various permitted uses of copyright works, the 2 most notable being \u201cfair use\u201d (section 190) and the \u201ccomputational data analysis\u201d exception (section 244). 
There have not been any reported judicial decisions yet in relation to the applicability of these exceptions to training AI systems.<\/p>\n<p>However, the computational data analysis exception will apply only in limited circumstances, and thus may not cover all instances of data scraping. One of the key conditions for the exception to apply is that the person who makes a copy of any copyrighted material must have lawful access to the material (called the first copy) from which the copy is made. In other words, the person must not have accessed the first copy by circumventing paywalls, or by breaching the terms of use of a database (unless that term is void under section 187 of the Copyright Act 2021, which does not allow contracts to override statutory exceptions). The copy must also be made for the purpose of using a computer program to identify, extract and analyse information or data from the work, or using the work as an example of a type of information or data to improve the functioning of a computer program in relation to that type of data \u2013 it may not be used for any other purpose.<\/p>\n<p>From a privacy perspective, if personal data about an individual is publicly available, the organisation need not obtain consent for the collection, use or disclosure of such personal data. However, it still has to comply with all other obligations in the PDPA, and use the data only for purposes that a reasonable person would consider appropriate in the circumstances (e.g. it must not use the data for illegal purposes or where it would be harmful to the individual concerned). 
The organisation may also be bound by the terms and conditions imposed by the source from which it obtained the data.<\/p>\n<p>From a competition perspective, there is no specific guidance from our competition regulator on the issue of data scraping at present.<\/p>\n<\/div>\r\n\r\n\r\n\t\t\t\t\t<\/li>\r\n\r\n\t\t\t\t\t\t\t\t\t<li class=\"question-block filter-container__element\">\r\n\t\t\t\t\t\t<h3 class=\"filter-container__match-html\">To what extent is the prohibition of data scraping in the terms of use of a website enforceable?<\/h3>\r\n\t\t\t\t\t\t<button id=\"show-me\">+<\/button>\r\n\t\t\t\t\t\t<div class=\"question_answer filter-container__match-html\" style=\"display:none;\"><p>If a website\u2019s terms of use prohibit data scraping but a person does so anyway, this could constitute a breach of contract. However, there is no settled position in Singapore that the terms of use of a website are always enforceable against the user. This would depend on whether the terms of use were sufficiently brought to the attention of the user, and whether the user actually accepted them. Many sites do not require the user to expressly agree (e.g. 
by clicking a button) before they can access the site, so as not to negatively affect the user experience.<\/p>\n<\/div>\r\n\r\n\r\n\t\t\t\t\t<\/li>\r\n\r\n\t\t\t\t\t\t\t\t\t<li class=\"question-block filter-container__element\">\r\n\t\t\t\t\t\t<h3 class=\"filter-container__match-html\">Have the privacy authorities of your jurisdiction issued guidelines on artificial intelligence?<\/h3>\r\n\t\t\t\t\t\t<button id=\"show-me\">+<\/button>\r\n\t\t\t\t\t\t<div class=\"question_answer filter-container__match-html\" style=\"display:none;\"><p>The PDPC has issued the following guidance to assist the industry with navigating AI use:<\/p>\n<ol style=\"padding-left: 0\" type=\"a\">\n<li>5 June 2018: PDPC published a Discussion Paper on AI and Personal Data \u2013 Fostering Responsible Development and Adoption of AI, which set out its preliminary analysis of issues pertinent to the commercial development and adoption of AI solutions;<\/li>\n<li>January 2019: IMDA\/PDPC published the 1st edition of the Model AI Framework;<\/li>\n<li>January 2020: IMDA\/PDPC published the 2nd edition of the Model AI Framework;<\/li>\n<li>April 2024: PDPC published the Advisory Guidelines on the Use of Personal Data in AI Recommendation and Decision Systems, to give organisations certainty on when they can use personal data to develop and deploy AI systems, and guidance on how they can be transparent to consumers about how their AI systems will use personal data to make recommendations, predictions or decisions;<\/li>\n<li>July 2024: PDPC published a Guide on Synthetic Data Generation to assist organisations in understanding synthetic data generation techniques and potential use cases for AI;<\/li>\n<li>July 2025: IMDA released a draft Privacy Enhancing Technology (\u201c<strong>PET<\/strong>\u201d) adoption guide for industry consultation, targeted at C-suite and senior decision makers considering the adoption of PETs to enable safe access to more data for AI
training.<\/li>\n<\/ol>\n<\/div>\r\n\r\n\r\n\t\t\t\t\t<\/li>\r\n\r\n\t\t\t\t\t\t\t\t\t<li class=\"question-block filter-container__element\">\r\n\t\t\t\t\t\t<h3 class=\"filter-container__match-html\">Have the privacy authorities of your jurisdiction discussed cases involving artificial intelligence? If yes, what are the key takeaways from these cases?<\/h3>\r\n\t\t\t\t\t\t<button id=\"show-me\">+<\/button>\r\n\t\t\t\t\t\t<div class=\"question_answer filter-container__match-html\" style=\"display:none;\"><p>The PDPC has not yet published any enforcement decision or released any statement on specific cases involving the processing of personal data with artificial intelligence.<\/p>\n<\/div>\r\n\r\n\r\n\t\t\t\t\t<\/li>\r\n\r\n\t\t\t\t\t\t\t\t\t<li class=\"question-block filter-container__element\">\r\n\t\t\t\t\t\t<h3 class=\"filter-container__match-html\">Have your national courts already managed cases involving artificial intelligence? If yes, what are the key takeaways from these cases?<\/h3>\r\n\t\t\t\t\t\t<button id=\"show-me\">+<\/button>\r\n\t\t\t\t\t\t<div class=\"question_answer filter-container__match-html\" style=\"display:none;\"><p>Singapore\u2019s courts have not yet issued decisions on cases involving artificial intelligence.<\/p>\n<p>However, our Court of Appeal has issued a landmark decision on the use of deterministic algorithms to conclude contracts in the case of <em>Quoine Pte Ltd v B2C2 Ltd<\/em> [2020] SGCA(I) 02. A deterministic algorithm is one that \u201cwill always produce precisely the same output given the same input [\u2026] and does not have the capacity to develop its own responses to varying conditions\u201d (see [15]).
Therefore, where an attempt is made to void contracts concluded by such algorithms for unilateral mistake, in order to determine knowledge of that mistake, the court will refer to the state of mind of the algorithm\u2019s programmers from the time of programming up to the point where the relevant contract is formed (see [97] to [99]).<\/p>\n<p>It will be interesting to see whether the same principles apply where the algorithm is non-deterministic (i.e. for the same input, it may produce different outputs), or if there are multiple programmers, as the software used by B2C2 was devised almost exclusively by one of its founders.<\/p>\n<\/div>\r\n\r\n\r\n\t\t\t\t\t<\/li>\r\n\r\n\t\t\t\t\t\t\t\t\t<li class=\"question-block filter-container__element\">\r\n\t\t\t\t\t\t<h3 class=\"filter-container__match-html\">Does your country have a regulator or authority responsible for supervising the use and development of artificial intelligence?<\/h3>\r\n\t\t\t\t\t\t<button id=\"show-me\">+<\/button>\r\n\t\t\t\t\t\t<div class=\"question_answer filter-container__match-html\" style=\"display:none;\"><p>At present, Singapore does not have a dedicated AI regulator. The IMDA plays a key role in promoting the responsible adoption of AI across the public and private sectors. It has issued Model Frameworks for traditional and generative AI that apply across all sectors, and has developed AI Verify (an AI governance testing framework and a software toolkit) in consultation with the industry, amongst other initiatives. Regulators in other sectors also issue guidelines on the use of AI for their sectors (e.g. health and finance).<\/p>\n<\/div>\r\n\r\n\r\n\t\t\t\t\t<\/li>\r\n\r\n\t\t\t\t\t\t\t\t\t<li class=\"question-block filter-container__element\">\r\n\t\t\t\t\t\t<h3 class=\"filter-container__match-html\">How would you define the use of artificial intelligence by businesses in your jurisdiction? Is it widespread or limited?
Which sectors have seen the most rapid adoption of AI technologies?<\/h3>\r\n\t\t\t\t\t\t<button id=\"show-me\">+<\/button>\r\n\t\t\t\t\t\t<div class=\"question_answer filter-container__match-html\" style=\"display:none;\"><p>The use of AI by businesses in Singapore is gaining momentum across all sectors. In addition to issuing detailed guidance and free-to-use testing toolkits, Singapore\u2019s regulators support this with funding and sandboxes, matching organisations with providers when it comes to finding the right AI solution or AI testing (see, for example, the \u201cGlobal AI Assurance Sandbox\u201d initiative matching builders\/deployers of genAI applications with specialist testing vendors).<\/p>\n<p>To aid businesses in deploying AI solutions in the workplace, our regulators have also issued \u201cA Guide to Job Redesign in the Age of AI\u201d, which recommends that \u201cjobs\u201d be broken down into \u201ctasks\u201d, as AI affects how tasks are performed. The guide also sets out considerations for deciding whether a task should be automated.<\/p>\n<\/div>\r\n\r\n\r\n\t\t\t\t\t<\/li>\r\n\r\n\t\t\t\t\t\t\t\t\t<li class=\"question-block filter-container__element\">\r\n\t\t\t\t\t\t<h3 class=\"filter-container__match-html\">Is artificial intelligence being used in the legal sector, by lawyers and\/or in-house counsels? If so, how? Are AI-driven legal tools widely adopted, and what are the main regulatory concerns surrounding them?<\/h3>\r\n\t\t\t\t\t\t<button id=\"show-me\">+<\/button>\r\n\t\t\t\t\t\t<div class=\"question_answer filter-container__match-html\" style=\"display:none;\"><p>The legal sector is already using AI in a variety of ways, such as for discovery in litigation, and for due diligence processes in M&amp;A transactions.
With the accessibility of generative AI tools, the legal sector is also starting to explore how to integrate such tools into its workflows, such as for research or document generation.<\/p>\n<p>On 1 October 2024, the Singapore Courts issued a \u201cGuide on the Use of Generative Artificial Intelligence Tools by Court Users\u201d, which applies to both lawyers and litigants in person. The use of generative AI is not prohibited in court documents, but users remain responsible for all AI-generated content used. Disclosure to the Court is not required pre-emptively; it is required only if the Court asks.<\/p>\n<p>The Ministry of Law is presently working on guidelines for lawyers on the use of generative AI in legal work.<\/p>\n<\/div>\r\n\r\n\r\n\t\t\t\t\t<\/li>\r\n\r\n\t\t\t\t\t\t\t\t\t<li class=\"question-block filter-container__element\">\r\n\t\t\t\t\t\t<h3 class=\"filter-container__match-html\">What are the 5 key challenges and the 5 key opportunities raised by artificial intelligence for lawyers in your jurisdiction?<\/h3>\r\n\t\t\t\t\t\t<button id=\"show-me\">+<\/button>\r\n\t\t\t\t\t\t<div class=\"question_answer filter-container__match-html\" style=\"display:none;\"><p>Of the 5 challenges set out, 3 relate to the developing landscape of how this new technology should be regulated, as there is no one-size-fits-all solution. The other 2 are changes to the nature of legal practice. We have chosen to address the challenges and opportunities together, since the challenges are actually opportunities to clarify the law and also ensure the legal profession keeps pace with technological developments.<\/p>\n<ol style=\"padding-left: 0\">\n<li>Because this is a developing field both in Singapore and overseas, it is important for lawyers to keep abreast of overseas developments, as technology easily flows across international borders.
The pace at which legislation and guidelines are issued across the world has increased exponentially over the years, and 2024 \u2013 2025 have seen many court decisions issued in relation to AI use (especially relating to IP issues), but the positions may change as parties appeal. Lawyers must remain up to date on these latest developments.<\/li>\n<li>AI can be deployed in so many ways, so there is no one-size-fits-all solution (or regulation), and lawyers must be keenly aware of this. The rules surrounding AI used in a music recommendation system will be different from those governing a system used by a bank to determine whether a person should be granted a loan, given the differing gravity of their impact on a person. The challenge will be in calibrating the level of governance measures\/precautions to be taken in each scenario, without exposing the organisation to unnecessary (legal and other) risk.<\/li>\n<li>Determining liability where the AI system causes damage, or does not perform as expected. Lawyers must be aware of whether there are features of AI that make it different from other technologies, and assess whether there may be limitations in applying existing legal principles and how to overcome them. AI systems learn from the data they are trained on and can improve with experience without being explicitly programmed. Aside from having an \u2018autonomous\u2019 quality (where their outcomes may not always be foreseen), the quality of the data used to train the system also matters, as well as how different the real-world data input into the system is from the training data, as that also affects the AI system\u2019s performance.<\/li>\n<li>The nature of the work performed by lawyers will change as Large Language Models are increasingly incorporated into legal practice, in tandem with other AI tools.
Lawyers must understand the technology so that they can decide how to harness it in their work (including taking precautions for client confidentiality and checking the content generated by generative AI tools), and explain its use to their clients.<\/li>\n<li>There is an increasing demand from the public for legal AI tools for laypersons to use so they can access the law on their own. Lawyers will have to address issues such as where to draw the line between generative AI giving legal information and giving legal advice, and also who is to assume liability if the advice\/information rendered is incorrect.<\/li>\n<\/ol>\n<\/div>\r\n\r\n\r\n\t\t\t\t\t<\/li>\r\n\r\n\t\t\t\t\t\t\t\t\t<li class=\"question-block filter-container__element\">\r\n\t\t\t\t\t\t<h3 class=\"filter-container__match-html\">Where do you see the most significant legal developments in artificial intelligence in your jurisdiction in the next 12 months? Are there any ongoing initiatives that could reshape AI governance?<\/h3>\r\n\t\t\t\t\t\t<button id=\"show-me\">+<\/button>\r\n\t\t\t\t\t\t<div class=\"question_answer filter-container__match-html\" style=\"display:none;\"><p>Singapore takes a practical, balanced approach towards the regulation of artificial intelligence, with the aim of promoting the safe and responsible development and use of artificial intelligence. While Singapore is not currently looking to enact any general AI legislation, we will update our regulatory frameworks where necessary, and do so in concert with other jurisdictions, to account for the global nature of AI.<\/p>\n<p>Therefore, over the next 12 months, we will likely see our regulators issuing more guidelines to the industry in their specific sectors, together with more public consultations. Relevant use cases of AI will also be analysed so that any basis for new laws and regulations is grounded in evidence.
Our regulators have indicated that no person has all the answers when it comes to regulating this space, hence they will be working closely with the industry to understand the benefits and challenges of AI across a spectrum of use cases before deciding on the regulatory approach.<\/p>\n<p>Testing and evaluation frameworks (e.g. AI Verify) will continue to be developed in partnership with the industry. Presently, our testing frameworks are not pass-fail, but having to go through a series of questions and\/or technical tests will require organisations to consider their AI systems and internal governance measures thoroughly, and work on the areas flagged for improvement. The goal is to standardise \u201cwhat to test\u201d and \u201chow to test\u201d so that an AI system tested by 2 different testers would have the same test outcome. To this end, Singapore has most recently released for public consultation (between May and June 2025) a Starter Kit for Safety Testing of LLM-Based Applications.<\/p>\n<p>In addition, Singapore has highlighted the importance of having evaluations for AI models tailored to local conditions \u2013 currently, the framing of toxicity, bias and demographic considerations in Large Language Model (LLM) evaluations tends to be Western-centric, and existing benchmark datasets and tools are primarily developed in English.
A study, the \u201cSingapore AI Safety Red Teaming Challenge\u201d, was published in February 2025 in partnership with 9 institutions across APAC, focusing on bias and stereotypes in different cultures in LLMs.<\/p>\n<\/div>\r\n\r\n\r\n\t\t\t\t\t<\/li>\r\n\r\n\t\t\t\t\r\n<div class=\"word-count-hidden\" style=\"display:none;\">Estimated word count: <span class=\"word-count\">6024<\/span><\/div>\r\n\r\n\t\t\t<\/ol>\r\n\r\n<script type=\"text\/javascript\" src=\"\/wp-content\/themes\/twentyseventeen\/src\/jquery\/components\/filter-guides.js\" async><\/script><\/div>"}},"_links":{"self":[{"href":"https:\/\/my.legal500.com\/guides\/wp-json\/wp\/v2\/comparative_guide\/110066","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/my.legal500.com\/guides\/wp-json\/wp\/v2\/comparative_guide"}],"about":[{"href":"https:\/\/my.legal500.com\/guides\/wp-json\/wp\/v2\/types\/comparative_guide"}],"wp:attachment":[{"href":"https:\/\/my.legal500.com\/guides\/wp-json\/wp\/v2\/media?parent=110066"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}