This country-specific Q&A provides an overview of Artificial Intelligence laws and regulations applicable in Belgium.
What are your country's legal definitions of “artificial intelligence”?
There is no national legal definition of AI at this stage in Belgian law.
The definition generally accepted and used in Belgium is therefore the definition of AI given by the European Commission. According to this definition, AI systems are “systems that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals. AI-based systems can be purely software-based, acting in the virtual world (e.g. voice assistants, image analysis software, search engines, speech and face recognition systems) or AI can be embedded in hardware devices (e.g. advanced robots, autonomous cars, drones or Internet of Things applications).” This definition is for instance used in the report of AI4Belgium, an initiative launched in 2019 by the Belgian government together with stakeholders of the industry.
Has your country developed a national strategy for artificial intelligence?
In March 2019, the Belgian government launched AI4Belgium in cooperation with private stakeholders of the industry.
In addition, each federated regional entity has adopted its own ambitious AI strategy:
In Brussels: the Brussels government adopted an AI policy and created FARI, an institute to boost research around AI;
In Flanders: the Flemish government adopted the Flemish AI plan in March 2019; and
In Wallonia: the Walloon government created the DigitalWallonia4.ai programme in July 2019.
More recently, at the end of 2022, the Belgian government issued a national convergence plan for the development of AI. This national strategy focuses on 9 objectives and recommends around 70 actions. Those objectives are the following:
Promote trustworthy AI;
Strengthen Belgium’s competitiveness and attractiveness through AI;
Develop a data economy and a high-performance infrastructure;
AI at the heart of healthcare;
AI for more sustainable mobility;
Preserve the environment;
Better, lifelong training; and
Provide better services and protection to the citizens.
Has your country implemented rules or guidelines (including voluntary standards and ethical principles) on artificial intelligence? If so, please provide a brief overview of said rules or guidelines. If no rules on artificial intelligence are in force in your jurisdiction, please (i) provide a short overview of the existing laws that potentially could be applied to artificial intelligence and the use of artificial intelligence, (ii) briefly outline the main difficulties in interpreting such existing laws to suit the peculiarities of artificial intelligence, and (iii) summarize any draft laws, or legislative initiatives, on artificial intelligence.
There are currently no rules, laws or guidelines specifically applicable to AI in Belgium.
The laws that could potentially apply to AI can take many forms. In general, in the absence of a specific definition of AI, when applying existing laws and qualifying AI products or services, AI will be considered software or, more generally, a digital service (see Q4 on the conformity of digital services).
No national legislative initiative has been made public yet. At the European level, the European Commission presented its proposal for an EU AI Act in April 2021. On 14 June 2023, the European Parliament adopted its negotiating position on this proposal.
The current proposal aims to establish a legal framework for AI based on a risk-based approach. The AI Act would impose obligations on providers of AI systems depending on the level of risk associated with the AI technology. The proposal categorises AI systems into different levels of risk:
Unacceptable risk: AI systems that pose a threat to individuals will be prohibited. Examples of such systems include social scoring systems and real-time and remote biometric identification systems.
High risk: AI systems that have a significant impact on safety or fundamental rights fall into this category. All high-risk AI systems will undergo a thorough assessment before they can be introduced on the market, and continuous monitoring throughout their lifespan will be required.
Generative AI: AI systems that create specific content, such as ChatGPT, must adhere to transparency requirements. This includes disclosing that the content was generated by AI, implementing measures to prevent the generation of illegal content, and publishing summaries of copyrighted data used for training.
Limited risk: AI systems with limited risk should meet minimum transparency requirements, enabling users to make informed decisions. After interacting with these applications, users can choose whether to continue using them. Users should also be aware when they are engaging with AI systems, particularly those that generate or manipulate image, audio, or video content (e.g., deepfakes).
The EU AI Act is anticipated to be finalised and adopted by the end of 2023 or early 2024.
Which rules apply to defective artificial intelligence systems, i.e. artificial intelligence systems that do not provide the safety that the public at large is entitled to expect?
Belgium has not implemented specific laws dedicated to defective AI systems. Rules governing defective products or services may however be relevant to AI systems. These may be summarised as follows:
Product liability (Law of 25 February 1991): defective AI systems can be subject to product liability laws in Belgium. In a nutshell, manufacturers may be held liable for the damage caused by a defect of their products. If an AI system qualifies as a product and is defective (which may be the case if it is incorporated in a tangible good), the manufacturer may be held liable for any harm or damage caused to individuals or property (see also Q5);
Consumer protection and legal warranty: defective AI systems may also be handled from a consumer protection perspective. In particular, Belgian law has transposed EU Directive 2019/770 on certain aspects concerning contracts for the supply of digital content and digital services and EU Directive 2019/771 on certain aspects concerning contracts for the sale of goods within Articles 1649bis to 1649octies and Book III, Title VIbis of the Belgian Civil Code. Under these provisions, the conformity of digital content or services (including AI systems) is assessed under an objective conformity criterion (i.e. compliance with what the public at large is entitled to expect from AI systems of the same nature) and a subjective conformity criterion (i.e. compliance with what has been specifically agreed upon with the consumer). If an AI system is found to be non-conforming, consumers may seek remedies (e.g. having the AI system brought into conformity, a price reduction, or contract termination).
Privacy and Data Protection: AI systems often process personal data, and their defects may result in privacy breaches or data protection violations. In Belgium, the EU General Data Protection Regulation (GDPR) applies and imposes obligations on organisations handling personal data. If a defective AI system leads to unauthorised access, data breaches, or other privacy violations, individuals may seek compensation or other remedies under the GDPR.
Please describe any civil and criminal liability rules that may apply in case of damages caused by artificial intelligence systems.
Under Belgian law, there are no specific civil or criminal liability rules governing AI systems. Only provisions of general law apply. It thus remains to be seen how the courts will handle damage caused by AI systems.
At this stage and from a civil liability standpoint, there seems to be a general understanding amongst scholars that the following sources of liability would be the most relevant (this may however not be exhaustive):
Manufacturer’s liability: the Law of 25 February 1991 on product liability transposes the EU Directive 85/374/CEE into Belgian law. Under this regime, the manufacturer of an AI system could be held liable for the harm or damage caused to a person or to goods by a defect in its product (the notion of which covers tangible goods and software);
Seller’s liability and warranty regime: under Belgian law (Article 1582 et seq. of the former Civil Code), the seller is expected to deliver goods that conform to the agreement. A seller may be found liable for hidden defects and issues of non-conformity. The seller’s duties are further increased in the case of a B2C sale, as consumer protection rules may apply and prevent the seller from limiting its liability (Article 1649bis of the former Civil Code);
User’s liability in tort: the user of a defective thing may be held liable for the damage caused by the thing’s defect (Article 1384 of the former Civil Code), even if the user did not commit any wrongdoing as such. In principle, the notion of “thing” only covers tangible goods, but this could apply in the case of damage caused by a tangible good incorporating an AI system (e.g. a robot). An AI user may also be held liable if he/she wrongfully uses a (non-defective) AI system to cause damage (Article 1382 of the former Civil Code);
User’s liability in contract: under general contract law rules (Article 5.230 of the Civil Code), the person using a thing/item to carry out a contractual duty is contractually liable for a breach caused by a defect in the thing/item used. This provision is particularly relevant in cases where contractual services are rendered with the assistance of a (defective) AI system (e.g. automated asset management services). Parties may contractually depart from this principle.
As for criminal liability, there are no rules adapted to damage caused by AI systems, making it difficult to establish liability for this type of damage. Some scholars argue that courts could use the theory of attribution, under which the person or entity liable for damage caused by an AI system is the one to whom the punishable behaviour can be objectively and subjectively attributed. However, it remains to be seen how the Belgian legislator and the courts will apply existing criminal legislation in cases involving damage caused by AI systems.
Who is responsible for any harm caused by an AI system? And how is the liability allocated between the developer, the user and the victim?
See Q5 for the various (most) relevant liability regimes.
Different (cumulative) liability regimes may be triggered depending on the circumstances. Some of these regimes target the user, whereas others target the seller or the manufacturer (which could cover the program developer).
Normally, the victim is not liable for his/her own damage, save for two key exceptions:
the victim has agreed to a liability clause (it should however be noted that liability clauses are likely to be deemed abusive and void in a B2C environment);
the victim was negligent (e.g. misused the AI system). In such a case, the victim will be solely liable or jointly liable with other parties, depending on the circumstances. In principle, the allocation of liability is decided using the criterion of each party’s contribution to the damage.
What burden of proof will have to be satisfied for the victim of the damage to obtain compensation?
As a general rule, it is for the party seeking compensation to bring evidence of the conditions supporting his/her claim (Article 8.4 of the Civil Code).
However, depending on the liability ground invoked (see Q5), the elements to prove may vary and may be more or less complex to establish.
It is also worth mentioning that in principle parties may contractually reallocate the burden of proof, with the noticeable exception of B2C contracts, where such reallocation is generally deemed abusive and void (Article VI.82 et seq. of the Code of Economic Law).
Is the use of artificial intelligence insured and/or insurable in your jurisdiction?
Yes, it is. Several players active in the Belgian insurance market already offer insurance products covering AI-related risks and potential damages.
Can artificial intelligence be named an inventor in a patent application filed in your jurisdiction?
No. The inventor named in a patent must be a human being. Currently, Belgian law is silent on AI inventions.
The legal concept of inventorship requiring a human being to be the inventor was challenged before the European Patent Office (EPO) when two applications indicating an AI system (DABUS) as the inventor were filed. In 2019, the EPO refused these applications (EP 18275163, EP 18275174) on the ground that the European Patent Convention (EPC) requires the inventor to be a natural person. The applicant filed appeals which were dismissed by the EPO Legal Board of Appeal in oral proceedings on 21 December 2021 (cases J-8/20 and J-9/20). The Legal Board confirmed that under the EPC the inventor has to be a person with legal capacity and that a statement indicating the origin of the right to the European patent must specify the inventor’s successor in title.
Inventions in the field of AI may be considered computer-implemented inventions.
AI can also be used as a tool in the inventing process, but the usual legal requirements will apply when assessing the validity of a patent, notably in terms of inventiveness and sufficiency of disclosure.
Do images generated by and/or with artificial intelligence benefit from copyright protection in your jurisdiction? If so, who is the authorship attributed to?
Insofar as the images fulfil the condition of originality, i.e. they are an intellectual creation of the author reflecting his or her personality and expressing his or her free and creative choices in the production of the image, they will benefit from copyright protection. The condition of originality cannot be fulfilled by a machine or an AI system acting alone. Authorship of the image will be attributed to the creator, who must be a natural person.
This matter and its consequences are of course debated in view of the technological (r)evolution.
What are the main issues to consider when using artificial intelligence systems in the workplace?
AI raises important risks in terms of:
Human rights (privacy, fairness, representation and dignity);
Biases: AI systems can multiply and systematise existing human biases, inequalities or discrimination by formalising rules for management processes based on those, e.g. by using insufficient representative data or outdated data. This could occur in the hiring and recruitment process, leading to unfair employment decisions;
Harassment: AI programs could make inappropriate comments about workers’ appearance, sex or race, which in certain cases could lead to (criminal) sanctions;
Autonomy and representation: systematically relying on AI-informed decision-making in the workplace can reduce workers’ autonomy and representation, especially if AI-based hiring also leads to a standardisation of worker profiles;
Supervision of workers: while the control and monitoring of workers are legitimate prerogatives of the employer, the employer will have to ensure that the AI-based control mechanisms deployed for this purpose respect the legitimate interests and fundamental rights of workers (well-being at work). The risk of discrimination also cannot be totally ruled out when using AI systems to monitor workers.
GDPR and privacy:
The collection and curation of data by AI systems can raise concerns in terms of privacy;
Under Article 22 of the GDPR, a person has the right not to be subject (except in cases where the GDPR or the national law of the Member States allows it, and subject to certain safeguards) to a decision based exclusively on automated processing, including profiling, and producing legal effects concerning him or her or significantly affecting him or her in a similar way. Therefore, in such case, human intervention should be provided.
An automated dismissal decision could be considered as manifestly unreasonable, within the meaning of the Belgian collective labour agreement (CLA) n°109;
The use of AI in the workplace could be considered “a new technology with significant collective consequences for employment, work organization or working conditions”, within the meaning of Belgian CLA n°39. In such a case, the CLA requires an employer with at least 50 workers to provide written information on the new technology and to consult with workers’ representatives on the social consequences of its introduction. Failing that, the employer may not unilaterally terminate the employment contracts of the workers concerned, except for reasons unrelated to the new technology. This sanction may be particularly relevant if, in order to measure workers’ work and performance, the employer relies on an AI system qualifying as a new technology, the results of which are subsequently used to support a decision to terminate the employment relationship;
Deciding who should be held accountable in case of system harm (e.g. misinformation based on false content of AI or physical harm caused by AI-based critical decisions in medicine) is difficult. Having a human intervening may help but it may be unclear which employment decisions require this level of oversight.
There are no specific laws, rules or recommendations yet in Belgium regarding AI in the workplace. However, certain legislative provisions, which apply to many fields, may also apply to the use of AI: the European Convention on Human rights, the Charter of Fundamental Rights of the European Union, the GDPR, the Law of 3 July 1978 on employment contracts, the Law of 30 July 1981 to suppress certain acts inspired by racism and xenophobia, the Law of 10 May 2007 to combat certain forms of discrimination and against discrimination between women and men and some CLAs. Other standards, applicable to specific business sectors, regulate the use of AI in them. But for the time being, AI-specific standards are a matter of soft law.
What privacy issues arise from the use of artificial intelligence?
Not all AI systems use personal data. For those that do, the problems that can arise are as follows:
The absence of consent for the use of the data fed to the AI system and on the basis of which it produces an output;
Big data and the possibility of re-identifying a person by cross-referencing different data sets.
What are the rules applicable to the use of personal data to train artificial intelligence systems?
Belgium currently has no specific rules on the training of AI.
However, a number of rules already present in the Belgian legal framework could potentially apply to the training of AI, such as the GDPR and the constitutional principle of non-discrimination (set out in laws specific to different sectors).
Have the privacy authorities of your jurisdiction issued guidelines on artificial intelligence?
The Belgian Data Protection Authority (DPA) has not as such published guidelines on AI. However, it occasionally responds to questions involving AI and privacy. It also issues opinions and recommendations on this type of issue.
In its 2022 annual report, the DPA recognises the growing importance of issues relating to AI and, above all, the expectations of citizens regarding the various problems that these new technologies may cause.
In 2022, the DPA intervened in a case involving the use of AI (see below). The DPA also published opinions on draft regulations involving AI that aim to regulate a given sector, such as the processing of algorithmic data by public authorities. In particular, the authority intervened in a normative project of the Federal Public Service Finance concerning probe projects aimed at detecting domicile fraud.
Have the privacy authorities of your jurisdiction discussed cases involving artificial intelligence?
The Belgian Data Protection Authority intervened in a case in collaboration with the French Data Protection Authority (CNIL), in which it was able to apply its data protection powers to the processing of data collected through the use of data extraction/collection software.
Have your national courts already managed cases involving artificial intelligence?
To the best of our knowledge, Belgian courts have not yet had the opportunity to render any notable decisions pertaining to AI.
Does your country have a regulator or authority responsible for supervising the use and development of artificial intelligence?
There is no regulator or authority directly responsible for supervising the use and development of AI in Belgium. When used to perform regulated activities, AI can be the subject of guidance or regulations from supervisors of specific sectors such as for example the financial sector. The Belgian Data Protection Authority also indirectly supervises its use through the angle of personal data.
In addition, the Belgian government has appointed the Federal Public Service for Strategy and Support (BOSA) to implement specific actions with regard to digitalisation. In that context, it can issue guidance and reports that are neither binding nor mandatory but are useful for understanding Belgium’s stance and strategy with regard to AI, such as the 2022 National Convergence Plan for the Development of Artificial Intelligence.
How would you define the use of artificial intelligence by businesses in your jurisdiction? Is it widespread or limited?
Depending on the sector, the use of AI has generally seen a steady increase in Belgium in the last few years. According to a report of the Belgian Federal Planning Bureau of March 2023, 25% of large companies (more than 250 employees) have designed their own AI tools and 42% have made use of AI technology. For SMEs, the numbers are slightly lower, in particular when it comes to designing their own AI tools. According to the report, the use of AI is notable in the ICT-heavy sectors, but is also prominent among publishers, providers of audiovisual services and other industries such as wood, energy, and chemical production.
Is artificial intelligence being used in the legal sector, by lawyers and/or in-house counsels? If so, how?
Certain AI tools are already actively used in the legal sector, most notably for categorising information and assisting with document drafting and text proofing.
What are the 5 key challenges and the 5 key opportunities raised by artificial intelligence for lawyers in your jurisdiction?
Changes in skill development: lawyers will need to develop new skills and knowledge to understand the possibilities and limitations of AI to effectively leverage its potential.
Data protection, privacy and confidentiality: the use of AI involves processing (large volumes of) data, which raises challenges for compliance with the GDPR and the lawyers’ duty to confidentiality.
Legal responsibility: AI systems may make or suggest decisions that have ethical and even legal consequences. Determining who bears the responsibility for these decisions and ensuring transparency and accountability can be challenging from a Belgian legal perspective.
Data quality: access to high quality legal data with AI tools can be a challenge. Lawyers need to ensure that the data they use from AI models is accurate, relevant, and up-to-date, which can require significant effort and resources. In addition, many processes of digitalisation are not yet finetuned in Belgium, leading to incomplete or even non-existent digital databases.
Regulatory complexities: lawyers must navigate the evolving legal and regulatory landscape surrounding AI on a national and EU level, ensuring compliance with relevant laws, regulations, and professional ethical guidelines.
Legal research and document analysis: AI-powered tools can assist lawyers with comprehensive legal research, analysing large volumes of legal documents, and extracting relevant information, enabling more efficient and accurate legal work.
Workflow automation and efficiency: AI can automate repetitive and time-consuming tasks, such as document generation, grammar reviews and legal document classification, allowing lawyers to focus on more complex work.
Streamlined processes: AI can streamline contract analysis and due diligence processes, helping lawyers identify potential risks, inconsistencies, and important clauses in a more efficient and timely manner.
Predictive analytics: AI algorithms can analyse legal data, precedents, and case outcomes, providing predictive insights to lawyers and suggesting legal sources.
Enhanced client services: AI-powered virtual assistants, online legal platforms and chatbots can improve access to legal services, provide basic legal information and guidance, or filter information so that clients can be put directly in contact with the right legal professionals.
Where do you see the most significant legal developments in artificial intelligence in your jurisdiction in the next 12 months?
There are currently no specific Belgian laws on AI. On a European level however, the EU AI Act and the Product Liability Directive will impact the Belgian regulatory landscape, although these EU initiatives will not come into effect in the coming 12 months.