Practical Challenges and Legislative Prospects
Introduction
In an era where artificial intelligence (AI) technologies are advancing at an unprecedented pace, a fundamental legal issue emerges regarding the position of AI within the legal system, specifically in the context of civil liability for damages resulting from the decisions or actions of AI systems. Should liability be attributed to the programmer? The owner of the system? Or the AI itself? These questions have become pressing in light of the widespread use of smart systems in areas such as autonomous vehicles, digital medical diagnostics, and automated banking services.
This article examines the legal basis for civil liability arising from artificial intelligence in the United Arab Emirates. It reviews the current legislative framework and judicial stances, offers a comparative look at international approaches, and proposes future legislative recommendations aligned with the state’s ambitious vision for innovation and artificial intelligence.
First: The Concept of Artificial Intelligence and Civil Liability
1. Definition of Artificial Intelligence
Artificial Intelligence is defined as “the ability of computer systems to simulate human mental processes such as learning, analysis, and decision-making,” often characterized by self-learning capabilities and autonomous operation without direct human intervention.
2. Civil Liability: The Traditional Concept
Civil liability is traditionally based on three fundamental pillars:
- Fault (or wrongful act)
- Damage
- Causal link
Under this system, the injured party is entitled to compensation for any material or moral harm suffered.
However, this traditional model faces significant challenges when applied to AI, which is neither a legal person nor capable of being attributed “fault” in the conventional legal sense.
Second: The Legislative Stance in the United Arab Emirates
1. The General Legislative Framework
As of now, there is no specific Emirati law directly regulating civil liability for damages caused by AI systems. Nevertheless, the general principles under the Civil Transactions Law (Federal Law No. 5 of 1985), the Medical Liability Law, and the Commercial Transactions Law provide a foundation from which liability may be inferred.
Article 282 of the Civil Transactions Law states:
“Any harm caused to another shall render the actor, even if not discerning, liable to make good the damage.”
This provision implies that the owner or operator of an AI system could be held liable for damages, even in the absence of direct fault, thereby allowing the application of the doctrines of tortious liability or liability for things.
2. UAE Initiatives
The United Arab Emirates is among the pioneering countries in adopting AI technologies, having appointed the world’s first Minister of State for Artificial Intelligence in 2017 and launched the UAE National Strategy for Artificial Intelligence 2031. These developments make it necessary for the legislative framework to evolve in step with this transformation.
Third: Legal Models of Liability for Artificial Intelligence
1. Tortious Liability of the Owner or Operator
In instances where an AI system causes harm — for example, an autonomous vehicle causing an accident — liability generally falls upon the owner, based on the principle of control and oversight, which requires the owner to ensure the system’s safe operation.
Real-World Example:
In 2018, an autonomous test vehicle operated by Uber was involved in a fatal accident in the United States. Although a human safety driver was present in the vehicle, Uber bore civil responsibility and settled the resulting claims, because it was the entity operating the intelligent system.
2. Contractual Liability of the Programmer or Developer
If the damage stems from a defect in the program’s design, liability may be attributed to the developer pursuant to the terms of the contract governing the relationship, particularly in contracts concluded with governmental entities or private companies.
3. Strict Liability (Presumed Liability)
Some legal scholars argue that AI systems constitute “hazardous activities,” warranting the application of the risk theory, whereby liability arises upon the occurrence of damage without the need to prove fault — similar to liability regimes applicable to hazardous materials.
Fourth: Practical and Judicial Challenges
- Difficulty in proving fault: AI systems learn from vast datasets and may make unexpected decisions.
- Overlapping responsibilities: liability may be distributed among the manufacturer, programmer, cloud provider, operator, and owner.
- Absence of legal personality for AI: Prevents direct imposition of liability upon AI systems themselves.
- Lack of specialized legislation: Despite ambitious initiatives, a regulatory gap persists.
Fifth: Future Legislative Recommendations
- Enact specialized legislation on AI, including a comprehensive chapter on civil liability, considering the peculiarities of AI systems.
- Establish a national registry for intelligent systems, recording information on programmers, operators, and software updates.
- Mandate companies to provide specialized insurance for AI systems to cover potential damages.
- Regulate joint liability between manufacturers and end-users depending on the nature of the defect.