Proposed Artificial Intelligence Liability Directive (AILD) & Medical Devices/IVDs
As reported in 2020 in the European enterprise survey on the use of technologies based on artificial intelligence (AI), authored by the EU Commission Directorate-General for Communications Networks, Content and Technology, liability ranked among the top three barriers to the use of AI by European companies, and was cited as the most relevant external obstacle (43%) by companies that plan to adopt AI but have not yet done so. Subsequently, at the end of September 2022, the Commission adopted a proposal for an AI Liability Directive (AILD), establishing non-contractual civil liability rules for AI, to address the specific challenges AI poses to existing liability rules and to complement the proposed new Product Liability Directive (PLD).
Considerations for AI-Based Devices with the AILD
While significant changes to the proposed AILD are highly likely, there are several considerations for AI-based devices placed on the market, which we discuss here:
Liability and AI Performance
Article 6 of the PLD proposal states that “a product shall be considered defective when it does not provide the safety which the public at large is entitled to expect, taking all circumstances into account”, and lists several circumstances to be taken into account. In other words, where a device is safe but does not meet the manufacturer’s performance claims, the proposal establishes no requirements.
In contrast to the proposed new PLD, the proposed AILD covers claims for compensation for damage caused by an output of an AI system, or by the failure of such a system to produce an output where one should have been produced.
Therefore, it appears that, under the proposed AILD, AI-based devices will bear a greater burden of liability than other devices that fall only within the scope of the new PLD proposal.
Causal link in the case of fault
The proposed AILD establishes three conditions, all of which must be fulfilled, for there to be a presumption of a causal link between the fault of the defendant and the AI system’s output or its failure to produce an output. However, in the case of claims involving high-risk AI systems (as defined in the AI Act), where there has been a breach of the duty of care, non-compliance with any one of several AI Act design requirements could be used to argue a causal link:
- Lack of development on the basis of training, validation and testing data sets that meet AI Act quality criteria
- Design and development not performed in a manner conforming with the AI Act transparency requirements
- Design and development not performed in a way that allows for effective oversight by natural persons during the period in which the AI system is in use
- Inappropriate level of accuracy, robustness and cybersecurity established during design and development
- Necessary corrective actions to bring the AI system into conformity with the AI Act, or necessary withdrawals or recalls, not taken immediately
These requirements highlight the criticality, for AI-based device manufacturers, of robust QMS controls, identification of the relevant design inputs established by the AI Act, and appropriate post-market controls.
Manufacturers of high-risk AI systems should also note that the aspects of the proposed AILD discussed above do not cover its requirements for the disclosure of evidence.