-
What are your country's legal definitions of "artificial intelligence"?
To date, no Maltese legislation contains a legal definition of "artificial intelligence"; however, as a member of the European Union, Malta is required to abide by EU legislation, including the EU AI Act, which is currently being discussed in the Council of the European Union. Article 3(1) of the AI Act defines an 'artificial intelligence system' (AI system) as software that is developed with one or more of the techniques and approaches listed in Annex I of the same Act and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with. Annex I states that such techniques and approaches comprise:
- Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;
- Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;
- Statistical approaches, Bayesian estimation, search and optimization methods.
-
Has your country developed a national strategy for artificial intelligence?
In October 2018 the Malta.AI Taskforce ('the Taskforce') was set up by the Maltese Government to discuss various matters relating to AI in Malta. As a result, on 3 October 2019, Malta launched its national strategy for artificial intelligence ('AI') ('the Strategy'), through which Malta aspires to become the "Ultimate AI Launchpad": a place in which local and foreign companies and entrepreneurs can develop, prototype, test and scale AI, and showcase their innovations across an entire nation primed for adoption. The Strategy, which is essentially a project plan for the regulation, use and development of AI in Malta, sets out the Government's vision and goals in this respect and discusses investment, start-ups and innovation; public sector adoption and the creation of an AI-powered Government; the procurement of smart technologies; private sector adoption; and matters relating to education and the workforce, legal and ethical issues and ecosystem infrastructure. Additionally, the Strategy looked at the implementation of a Regulatory Sandbox for AI and a national AI certification framework.
-
Has your country implemented rules or guidelines (including voluntary standards and ethical principles) on artificial intelligence? If so, please provide a brief overview of said rules or guidelines. If no rules on artificial intelligence are in force in your jurisdiction, please (i) provide a short overview of the existing laws that potentially could be applied to artificial intelligence and the use of artificial intelligence, (ii) briefly outline the main difficulties in interpreting such existing laws to suit the peculiarities of artificial intelligence, and (iii) summarize any draft laws, or legislative initiatives, on artificial intelligence.
Together with the Strategy, the Taskforce mentioned in Question 2 also published an Ethical AI Framework, Towards Trustworthy AI, which aims to establish a set of guiding principles and trustworthy AI governance and control practices. The Maltese Government's ambition is to create a practical and workable framework that can serve as a guide and enabler for AI practitioners to create trustworthy AI in and outside Malta, supporting them in identifying and managing the potential risks of AI. A National Technology Ethics Committee will also be set up to oversee the Ethical AI Framework and its intersection across various policy initiatives. Malta's Ethical AI Framework builds on the European Commission's AI HLEG Ethics Guidelines for Trustworthy AI published on 8 April 2019 and the Recommendation of the Council on Artificial Intelligence adopted on 21 May 2019 by the OECD countries and various non-member adherents. The Framework sets out four Ethical AI Principles for establishing trustworthy AI, in alignment with the AI HLEG Ethics Guidelines for Trustworthy AI:
- Human autonomy: humans interacting with AI systems must be able to keep full and effective self-determination over themselves;
- Harm prevention: AI systems must not cause harm at any stage of their lifecycle to humans, the natural environment or other living beings;
- Fairness: the development, deployment, use and operation of AI systems must be fair;
- Explicability: end users and other members of the public should be able to understand and challenge the operation of AI systems as required for the particular use case.
The Regulatory Sandbox for AI mentioned in Question 2 above is intended to provide regulatory exemptions, enabling firms to explore and test concepts and solutions with proportionate safeguards. Malta has also focused on a national AI Certification Programme, which will be based on the Framework and its underlying control practices and which will provide applicants with valuable recognition in the marketplace that their AI systems have been developed in an ethically aligned, transparent and socially responsible manner.
Malta's existing laws have not undergone any amendments taking AI into account; however, pursuant to the Strategy, a National Technology Ethics Committee is also to be set up under the MDIA, which will oversee the Framework and its intersection across various policy initiatives. Another legislative initiative is the Technology Regulation Advisory Committee, which will be set up to assess and determine the extent to which existing laws and regulations apply to AI technologies. It will also analyse local laws that may need to change and monitor developments at the European level.
-
Which rules apply to defective artificial intelligence systems, i.e. artificial intelligence systems that do not provide the safety that the public at large is entitled to expect?
As mentioned in Question 3 above, Malta has not implemented any AI-specific rules to date. Nonetheless, there are other general laws which can address defective AI systems. One example would be Articles 56 to 71 of the Consumer Affairs Act (Chapter 378, Laws of Malta), which transpose the Product Liability Directive. These articles introduce the principle of strict (no-fault) liability into the product liability regime. Furthermore, under the Product Safety Act (Chapter 427, Laws of Malta) (which transposes the Product Safety Directive), a product is safe if it meets all statutory safety requirements under European or national law (or, in default thereof, Commission recommendations and codes of practice), and any distributor who supplies products which he should know to be unsafe (even though he does not actually know this) would be liable under the said Act. There are also provisions under general contract and tort law which may be applicable in this respect; these are discussed in Question 5 below.
-
Please describe any civil and criminal liability rules that may apply in case of damages caused by artificial intelligence systems.
Before the amendments to the Consumer Affairs Act that incorporated the Product Liability Directive into Maltese law, product liability was governed by the general tort and contract laws of the Civil Code. These laws were based on the concept of fault liability. However, the amendments introduced strict liability provisions in accordance with the EU’s Product Liability Directive.
Under the Civil Code (Chapter 16, Laws of Malta), Article 1378 states that the seller has an obligation to provide a warranty for the item sold, covering any hidden defects. This warranty obligation exists regardless of whether it was explicitly stated in the contract. Article 1424 states that the seller must ensure that the item sold is free from any hidden defects that would make it unfit for its intended use or significantly reduce its value, to the extent that the buyer would not have purchased it or would have paid a lower price had they been aware of the defects. The seller is held responsible for hidden defects, even if they were unknown to them, unless they have explicitly stated otherwise in the contract. However, the seller is not responsible for any visible defects that the buyer could have discovered on their own (Articles 1425-1426). The determination of whether a defect was apparent to the buyer is based on the standard of a reasonably diligent buyer exercising ordinary attention. Additionally, the buyer is not expected to seek technical assistance to verify that the purchased item is free from hidden defects at the time of sale.
The relevant tort law provisions (Articles 1031-1033) establish that every person is liable for damage caused by their own fault. A person is considered at fault if they do not exercise the prudence, diligence, or attention of a responsible individual. However, liability is not incurred for damage caused by a failure to exercise prudence, diligence, or attention of a higher degree than that standard. Nevertheless, any person who, intentionally or negligently, through imprudence or lack of attention, breaches a duty imposed by law, is liable for any resulting damages.
Furthermore, Article 1040 provides that "the owner of an animal, or any person using an animal during such time as such person is using it, shall be liable for any damage caused by it, whether the animal was under his charge or had strayed or escaped". This provision calls into play the concept of strict liability. Many academic writers draw a parallel between such a situation and one where, for example, a robot runs amok, arguing that such provisions should apply in that case. This comparison is made by analogy with similar articles in foreign legislation. Accordingly, it is possible that the user of an AI system could be sued under this provision by a victim who has suffered damage through the use of that system, even though the user was not at fault and did not intend to cause harm. At the time of writing there is no case law in Malta confirming the applicability of Article 1040 to the use of a defective AI system. Nonetheless, a 2019 European Commission report on 'Liability for Artificial Intelligence and other Emerging Digital Technologies' observed that, from the 19th century onwards, legislators have generally responded to the risks brought about by new technologies by introducing strict liability.
The Product Liability Directive was incorporated into Maltese law through Act XXVI of 2000, which amended the Consumer Affairs Act by introducing Articles 56 to 71. These provisions clearly introduced the principle of strict liability into the product liability framework and generally follow the Directive closely. However, the definition of "producer" was implemented incorrectly: for imported products manufactured outside Malta, the local importer is treated as the "producer", rather than the manufacturer within the EU or, for products manufactured outside the EU, the importer into the EU, as the Directive requires.
-
Who is responsible for any harm caused by an AI system? And how is the liability allocated between the developer, the user and the victim?
Article 1125 of the Civil Code establishes the general principle that failure to fulfil any contractual obligation results in liability for damages. However, Article 1132 states that the level of diligence required in fulfilling any obligation is that of a responsible person.
When the seller is responsible for a hidden defect, the buyer’s remedy is the refund of the purchase price. However, if the seller was aware of the defects in the item sold, they not only have to refund the price but also bear liability for damages towards the buyer (Article 1429). On the other hand, if the seller was unaware of the defect, they are not liable for damages, although they are still required to refund the price and reimburse the buyer for any expenses incurred in connection with the sale.
Article 1045 states that the person responsible for the damage must compensate the injured party for the actual loss directly caused by the act in question. This includes expenses incurred by the injured party as a result of the damage, as well as any loss of current or future wages or earnings due to permanent incapacity resulting from the act. The court determines the calculation of damages based on the circumstances of the case, including the nature and extent of the incapacity and the condition of the injured party. However, if the injured party’s own negligence contributed to the damage, the court may reduce the amount of damages awarded accordingly (Article 1051).
In general, under Maltese law, moral damages cannot be awarded unless expressly provided for by law. Article 62 of the Consumer Affairs Act permits the producer to raise the defense of development risks. Additionally, contributory negligence on the part of the injured party may result in a reduction of damages, as calculated according to Article 1051 of the Civil Code. However, when the damage is caused by both the product’s defect and the act or omission of a third party, the producer’s liability is not reduced (Article 66).
-
What burden of proof will have to be satisfied for the victim of the damage to obtain compensation?
Compensation for a victim of damage is obtainable through a civil, not a criminal, lawsuit. Given that Malta has not yet implemented specific AI legislation, one must look at the burden of proof under Maltese civil law generally. In civil cases, the Courts determine whether a fact is proved on the balance of probabilities.
Under Maltese civil law, the responsibility of proving a fact (the burden of proof) is regulated by the Code of Organization and Civil Procedure (Chapter 12, Laws of Malta). As a general principle, the party making an allegation is required to provide the necessary evidence to support their claim. This principle applies to both the plaintiff and the defendant in a case. In a case of strict liability, on the other hand (as described above in Question 6), the burden of proof is shifted onto the defendant; thus, the defendant has to prove why he is not liable to pay the damages. Furthermore, there is no need for the plaintiff to prove fault, negligence or intention. Article 58 of the Consumer Affairs Act, in fact, states that an injured party in a lawsuit arising from a defective product shall only have the onus of proving the damage, the defect and the causal relationship between the defect and the damage, and shall not have the onus of proving the fault of the producer.
Both parties must substantiate their claims or defenses using the most reliable evidence available. If the Court determines that the evidence presented for any claim or defense is irrelevant, unnecessary, or not the most reliable evidence, the Court has the authority to reject such evidence.
-
Is the use of artificial intelligence insured and/or insurable in your jurisdiction?
The Framework includes provisions intended to ensure that, in the future, one will be able to take out an insurance policy as a risk-mitigation measure against any potential damage caused by an AI system.
-
Can artificial intelligence be named an inventor in a patent application filed in your jurisdiction?
Article 9 of the Patents and Designs Act (Chapter 417, Laws of Malta) provides that any natural person or legal entity may file an application for a patent, either alone or jointly with another. The decision of the Legal Board of Appeal of the EPO in case J 8/20, on the other hand, confirmed that under the European Patent Convention (EPC) an inventor designated in a patent application must be a human being.
-
Do images generated by and/or with artificial intelligence benefit from copyright protection in your jurisdiction? If so, who is the authorship attributed to?
In accordance with the Maltese Copyright Act (Chapter 415, Laws of Malta), copyright can be conferred on every work eligible for copyright of which the author is either (i) an individual who is a citizen of, or is domiciled or permanently resident in, Malta or a State in which copyright is protected under an international agreement to which Malta is a party, or (ii) a body of persons or a commercial partnership constituted, established, registered and vested with legal personality under the laws of Malta or of a State in which copyright is protected under an international agreement to which Malta is a party.
Furthermore, Article 3(2) stipulates that an artistic work shall not be eligible for copyright unless the work has an original character and has been written down, recorded, fixed or otherwise reduced to material form. Thus, for a work to be protected through copyright it must constitute a concrete and original expression by an author. The standard of originality comprises two components. First, the work should not be a direct copy of something else. Second, it should reflect the unique intellectual creation of the author. While AI can meet the first requirement by avoiding direct replication, it cannot fulfil the second. Presently, AI systems are able to produce independent works that diverge significantly from their learned style, thus demonstrating novelty. However, the second aspect of originality is intrinsically connected to human individuals, emphasising the crucial role authors play in creating artistic works under existing laws.
According to CJEU case law such as Case C-683/17, Cofemel — Sociedade de Vestuário SA v G-Star Raw, a work benefits from copyright protection only if the author made free, personal and creative choices in the creation process that reflect the author's own personality. AI machines carry out mechanical, deterministic processes on the basis of the information provided and programmed, and thus do not show creative preference or self-expression, at least at the current time. On the other hand, works created through AI but which still comprise a level of human creativity may be protected by copyright under Maltese law, although the degree of human involvement required has not been established.
-
What are the main issues to consider when using artificial intelligence systems in the workplace?
In our opinion, the following are the main issues to consider when using artificial intelligence systems in the workplace:
- Data privacy and security: AI systems rely on vast amounts of data to train and make predictions. It is crucial to ensure that data collection, storage, and usage comply with privacy regulations and that adequate security measures are in place to protect sensitive information.
- Bias and fairness: AI algorithms can inherit biases from the data they are trained on, leading to unfair outcomes or discrimination. It is important to address bias in AI systems to ensure fairness in areas such as hiring, promotion, and decision-making processes.
- Transparency: Many AI systems, such as deep learning models, can be complex and opaque, making it challenging to understand how they arrive at their decisions. Ensuring transparency of AI systems is vital for building trust and understanding the reasoning behind their outputs.
- Ethical considerations: AI can raise ethical concerns, such as the potential for job displacement, invasion of privacy, and the impact on human autonomy. Organizations must carefully consider the ethical implications of AI adoption and implement safeguards to mitigate potential risks.
- Skills and training: The successful implementation of AI systems requires a workforce with the necessary skills and expertise to develop, deploy, and maintain these systems. Investing in training programs and upskilling employees is crucial for ensuring effective use of AI technologies.
- Legal and regulatory compliance: As AI technology evolves, legal and regulatory frameworks are also evolving to address potential challenges. Organizations must stay updated with the relevant laws and regulations governing AI use to ensure compliance and avoid legal issues.
- User acceptance and adoption: Introducing AI systems into the workplace can lead to resistance or concerns among employees. It is important to consider user acceptance and adoption by involving employees early in the process, providing training and support, and addressing any concerns or misconceptions.
- Accountability and responsibility: AI systems operate based on algorithms and data, but organizations and individuals must take responsibility for the outcomes. Establishing clear lines of accountability and understanding who is responsible for system performance, errors, or unintended consequences is essential.
- Integration and change management: Implementing AI systems may require integrating them into existing workflows and systems. Effective change management strategies, along with clear communication and employee involvement, are vital to ensure smooth integration and minimize disruption.
- Long-term impact and adaptability: AI technologies continue to evolve rapidly, and organizations need to consider the long-term impact and adaptability of the systems they adopt. Regular evaluation, monitoring, and updates are necessary to keep AI systems effective and aligned with evolving business needs.
-
What privacy issues arise from the use of artificial intelligence?
We would identify the following as the privacy issues arising from the use of artificial intelligence:
- One challenge arising in terms of AI and data protection is that when a company uses AI to make a significant and important decision based on an individual's data, this could pose certain risks to that data. To address this challenge, the trustworthy AI Requirements mentioned in the Framework are intended to be continuously evaluated and addressed throughout the AI system’s lifecycle.
- Data collection and usage: AI systems require large amounts of data to train and make accurate predictions. The collection, storage, and usage of this data can raise privacy concerns if individuals’ personal information is collected without their knowledge or consent. Organizations must be transparent about the types of data they collect and how it will be used to build trust with users.
- Inference and re-identification: AI systems can infer sensitive information about individuals from seemingly innocuous data. By analyzing patterns and correlations, AI algorithms may be able to identify individuals or reveal personal details that were not explicitly provided. This can lead to privacy breaches and potential misuse of personal information.
- Third-party data sharing: In some cases, organizations may share data with third parties, such as partners or vendors, for various purposes like model training or system improvement. This sharing can pose privacy risks if adequate safeguards are not in place to protect the data and ensure compliance with privacy regulations.
- Biometric data and facial recognition: AI-powered facial recognition systems can capture, analyze, and process biometric data, including facial features, expressions, and identities. The widespread use of facial recognition raises concerns about privacy, particularly when it is deployed in public spaces or used without individuals’ explicit consent.
- Profiling and discrimination: AI systems can create detailed profiles of individuals based on their data and use those profiles to make decisions or recommendations. If these profiles are based on sensitive attributes such as race, gender, or religion, it can result in discriminatory outcomes or privacy infringements.
- Lack of user control and transparency: AI algorithms can be complex and difficult to understand, making it challenging for individuals to know how their data is being used or to exercise control over it. Lack of transparency in AI systems can erode trust and undermine individuals’ privacy rights.
- Data breaches and security vulnerabilities: AI systems rely on vast amounts of data, making them attractive targets for cyberattacks. If AI systems are not adequately secured, data breaches can occur, leading to the exposure of personal information and privacy violations.
- Internet of Things (IoT) and smart devices: AI is often integrated with IoT devices, such as smart speakers, wearables, or connected appliances. The constant data collection and processing by these devices can lead to privacy concerns if individuals’ interactions and behaviors are monitored without their explicit consent.
-
What are the rules applicable to the use of personal data to train artificial intelligence systems?
As a Member State of the European Union, Malta is required to observe the principles enshrined in the GDPR. The implications of the GDPR for AI companies and applications which involve the use of personal data are quite significant. In the first place, the GDPR generally restricts access to and collection of data. Secondly, data can only be used for its original intended purpose, thus restricting the reuse of data for novel purposes and the possibility of creating new value through the combination of datasets. Under the GDPR, decisions taken solely in an automated manner must allow for human review of that decision if it significantly affects the data subject. Furthermore, the data subject has a right to an explanation as to how a decision was reached. Finally, AI companies and applications which involve the use of personal data need to implement safeguards, and data protection must be present by design and by default.
Additionally, the Framework also provides the following guidelines for the protection of personal data when AI systems are used:
- Determine the type and scope of data to be used in AI development.
- Conduct a Privacy Impact Assessment and ensure compliance with all applicable legislative requirements relating to data protection, including:
A. Transparency: notifying individuals that you are collecting their personal information, the purposes for which you will process it, to whom (if anyone) you will disclose it, how you will store the information, and other key information;
B. Lawful basis for processing: ensure that you have a lawful basis for the intended processing of that data, including obtaining valid consent from the individual where appropriate.
- Ensure individuals can exercise appropriate levels of control over their personal data (e.g. mechanisms for giving and revoking valid consent to different types of processing).
- Ensure that personal data is processed only in accordance with the organisation’s privacy or data protection policy, as well as all applicable legal requirements.
- Involve any relevant Data Protection Officers as early as possible in data collection and processing.
- Implement an internal mechanism for individuals to flag privacy issues related to data collection and processing.
- Consider ways to develop the AI system or train the model with minimal use of potentially sensitive or personal data, including use of encryption, anonymisation or aggregation.
-
Have the privacy authorities of your jurisdiction issued guidelines on artificial intelligence?
At the time of writing, the Office of the Information and Data Protection Commissioner has not issued any guidelines on artificial intelligence; however, some guidance is provided through the Framework as discussed in Question 13 above.
-
Have the privacy authorities of your jurisdiction discussed cases involving artificial intelligence?
At the time of writing, the Office of the Information and Data Protection Commissioner has not discussed any cases specifically involving artificial intelligence.
-
Have your national courts already managed cases involving artificial intelligence?
At the time of writing, the Maltese courts have not issued any judgements specifically involving artificial intelligence.
-
Does your country have a regulator or authority responsible for supervising the use and development of artificial intelligence?
The authority responsible for supervising the use and development of artificial intelligence is the Malta Digital Innovation Authority ('MDIA'), established in 2018 through the Malta Digital Innovation Authority Act, together with the Innovative Technology Arrangements and Services Act, which sets out the powers granted to the Authority. The Authority serves a dual role. On the one hand, it is a regulator specifically focused on innovative technology and forms part of the entities strategically established in the Maltese ecosystem. On the other, it is a promoter of innovative technologies, through various incentives announced from time to time. The MDIA is also responsible for the certification of Innovative Technology Arrangements, including AI systems.
-
How would you define the use of artificial intelligence by businesses in your jurisdiction? Is it widespread or limited?
The Maltese government has recognized the potential of AI and has been actively promoting its use in various sectors. The government has launched initiatives to foster AI research and development, as well as to encourage the adoption of AI technologies in industries such as healthcare, finance, and transportation. Some specific examples are provided below:
- Healthcare: AI is being explored in the healthcare sector in Malta to improve patient care, diagnostics, and medical research. AI-powered tools can help in the analysis of medical data, early detection of diseases, and personalized treatment recommendations.
- Financial Services: The financial sector in Malta has shown interest in AI technologies for applications such as fraud detection, risk assessment, and customer service automation. AI can help financial institutions analyze large volumes of data quickly and accurately, enabling them to make informed decisions.
- iGaming: AI technologies are frequently incorporated into solutions to help detect and reduce fraud, enhance marketing effectiveness, and augment customer service interactions and customer experience functions.
- Manufacturing and aviation maintenance, repair and overhaul industries: AI-driven solutions are being deployed for condition monitoring and predictive maintenance activities. The solutions draw on the vast amount of data that aircraft, ships and machines now generate.
- Transportation and Logistics: AI is being used to optimize transportation and logistics operations in Malta. Intelligent systems can help with route planning, traffic management, and predictive maintenance of vehicles and infrastructure. Additionally, AI-powered chatbots and virtual assistants are employed to enhance customer support in the transportation sector.
- Smart Cities: Malta has been exploring the concept of smart cities, where AI plays a significant role. This involves the use of AI for energy management, waste management, public safety, and improving overall efficiency in urban infrastructure.
- Public sector: One of the primary projects which the Maltese Government is proposing is that of creating an AI-powered Government. Currently, from the public sector perspective, the ‘servizz.gov’ platform is the central point of information for all government services in Malta. One of the projects which the government is focusing on is exploring how AI can be applied to the ‘servizz.gov’ customer service workflow, in order to drive performance enhancements and process clients’ requests as efficiently and accurately as possible.
-
Is artificial intelligence being used in the legal sector, by lawyers and/or in-house counsels? If so, how?
The use of AI systems is not yet widespread amongst legal practitioners in Malta; however, a few firms do make use of AI applications for document review and analysis, legal research, contract analysis and generation, and due diligence and compliance matters.
-
What are the 5 key challenges and the 5 key opportunities raised by artificial intelligence for lawyers in your jurisdiction?
In our opinion, the following are the key challenges and key opportunities raised by artificial intelligence for lawyers:
Challenges:
- Job Displacement: AI has the potential to automate certain tasks traditionally performed by lawyers, such as legal research and document review. This could lead to job displacement and require lawyers to adapt their skills and roles.
- Ethical and Legal Implications: The use of AI in legal practice raises ethical and legal implications, such as ensuring transparency, accountability, and fairness in AI decision-making. Lawyers need to navigate these complex issues to maintain professional standards.
- Data Privacy and Security: AI relies on vast amounts of data, including sensitive client information. Lawyers must address concerns about data privacy and security to ensure compliance with relevant laws and regulations, such as the General Data Protection Regulation (GDPR).
- Bias and Discrimination: AI systems can perpetuate bias and discrimination if they are trained on biased data or if their algorithms are not carefully designed. Lawyers need to be vigilant in identifying and addressing bias to ensure fair and equitable outcomes for their clients.
- Liability and Responsibility: As AI becomes more prevalent in legal practice, questions of liability and responsibility arise. Lawyers must navigate issues related to accountability for errors or failures in AI systems and determine how to allocate responsibility between human lawyers and AI technologies.
Opportunities:
- Enhanced Efficiency and Productivity: AI can automate repetitive and time-consuming tasks, allowing lawyers to focus on more complex and strategic work. This can increase efficiency and productivity, enabling lawyers to serve clients more effectively.
- Legal Research and Analysis: AI-powered tools can assist lawyers in conducting extensive legal research, analyzing case law, and identifying relevant precedents. This can significantly speed up the research process and help lawyers provide more accurate and comprehensive advice.
- Contract Review and Due Diligence: AI can streamline contract review and due diligence processes by quickly identifying relevant clauses, potential risks, and anomalies in large volumes of legal documents. Lawyers can save time and resources by leveraging AI technology in these areas.
- Predictive Analytics: AI algorithms can analyze vast amounts of legal data to identify patterns, trends, and potential outcomes. Lawyers can use predictive analytics to assess the likelihood of success in litigation, predict potential legal risks, and make more informed decisions for their clients.
- Client Interaction and Communication: AI-powered chatbots and virtual assistants can improve client interaction and communication by providing instant responses to frequently asked questions, scheduling appointments, and offering basic legal advice. This can enhance client satisfaction and engagement.
On a final note, while AI offers opportunities, it does not replace the expertise, judgment, and human touch that lawyers provide. AI should be seen as a tool to augment legal services and support lawyers in their professional practice.
-
Where do you see the most significant legal developments in artificial intelligence in your jurisdiction in the next 12 months?
Malta is set on embracing the ethical development, importation and use of AI. The following are some of the initiatives being carried out by Malta in this respect:
- As mentioned above, Malta has been focusing on the development of ethical guidelines for AI. The Malta Digital Innovation Authority (MDIA) has been working on establishing ethical guidelines for AI deployment, addressing issues such as transparency, fairness, and bias. These guidelines aim to provide a framework for responsible AI development and deployment in various sectors, and will continue to be updated.
- Malta is also focusing on a national AI Certification Programme, which will be based on the Framework and its underlying control practices, and which will provide applicants with valuable recognition in the marketplace that their AI systems have been developed in an ethically aligned, transparent and socially responsible manner.
- The development of a regulatory framework for AI will also be fundamental for Malta. The government and regulatory bodies are actively working to establish clear guidelines and regulations governing the use and deployment of AI technologies across different sectors, in line with the current versions of the AI Act and the AI Liability Directive. A gap analysis of existing laws, and of how these may help or hinder AI development, is also one of Malta’s priorities in this respect.
Malta: Artificial Intelligence
This country-specific Q&A provides an overview of Artificial Intelligence laws and regulations applicable in Malta.