European Commission proposes AI liability rules: the Product Liability Directive

In a series of three blog posts, I share my first observations regarding the proposed EC “AI package”, a set of proposals that aims to create citizens’ “trust” in AI technology that is to be developed and deployed in the European Union. In the first post I sketched the background that motivated the Union legislator to propose the AI package. In the second post I analysed the proposed directive on the adaptation of extra-contractual liability rules to Artificial Intelligence (AILD). In this third and concluding post, I address the Proposed Product Liability Directive (PPLD). The PPLD imposes liability on producers of defective products and builds upon the same principles as the “old” product liability regime. It addresses, inter alia, liability for defective software, and thus provides compensation mechanisms for victims of AI-inflicted damage.

Product liability

The Proposed Product Liability Directive (PPLD) constitutes an entirely new directive, albeit one based on the same principles as the 1985 PLD. The wording of the PPLD provisions is much clearer than that of the AILD provisions, as discussed here. As this post is limited to AI applications, and the PPLD deserves a broader, integral and in-depth review on a different platform, I will focus here only on the novelties that may be relevant for AI-driven innovation.

Definitions

It must be noted that the definition of “product” in article 4(1) PPLD now includes “electricity, digital manufacturing files and software”. This eliminates the uncertainties about the scope of software that followed from its predecessor. Article 4(3) furthermore clarifies that “components” can be either tangible or intangible (thus including software components and data), and can also comprise “related services” that are integrated in, or interconnected with, a product. Related services are, according to section 4, those digital services without which a product would not be able to perform its functions. If, for instance, a self-driving vehicle could not drive correctly without a certain location-provisioning service, that service would fall within the scope of section 4. A product (or component) is deemed to be within the “manufacturer’s control” (section 5) when the manufacturer either authorises the integration, use or implementation of a component (such as software updates or upgrades) by a third party, or when he modifies the product. It must furthermore be noted that, besides death, personal injury and property damage (excluding the product or component itself and commercial property), “loss or corruption of data that is not used exclusively for professional purposes” is brought within the scope of damage to be compensated (section 6(c)).

Defectiveness for AI-products

Article 6 addresses defectiveness, for which producers can be held liable. The ‘traditional’ notion that a product is defective when it does “not provide the safety which the public at large is entitled to expect” (taking into account, inter alia, the presentation of the product and its reasonably foreseeable use or misuse) has been extended in the PPLD. It adds, inter alia, that “the effect on the product of any ability to continue to learn after deployment” (article 6(1)(c) PPLD) can also render a product defective. An autonomous vehicle, for instance, that left the factory in perfect condition can later become defective due to its capacity to update itself automatically, in which case its producer can in principle be held liable under the PPLD. Furthermore, the level of control that a manufacturer might continue to exercise over a product after it has been put on the market may play a role in the assessment of defectiveness (sub e), as may the applicable product safety requirements, including cybersecurity requirements (sub f). Amongst other things, this institutes obligations for manufacturers after market introduction to keep (especially) AI-incorporating products safe to a certain extent (for instance through updates and patches).

Economic operators

Article 7 PPLD creates more of a one-stop shop for consumers than the 1985 PLD did: almost every actor involved in the production and distribution chain can be turned to for compensation of damage. This includes, for instance, component manufacturers (article 7(1)); importers and representatives (2); fulfilment service providers (3); “any natural or legal person that [substantially] modifies a product that has already been placed on the market” (4); distributors (5); and platform providers (6).

Evidentiary rules

Article 8 stipulates that national courts may order the defendant to disclose relevant evidence at its disposal to a claimant who has “presented facts and evidence sufficient to support the plausibility of the claim for compensation”, to the extent necessary and proportionate (section 2). This is in line with the system provided in the AILD, as discussed in my previous post, although worded more clearly. It must be noted that here, too, the Trade Secrets Directive is mentioned, but the GDPR remains unmentioned. Article 9 further places the burden of proof regarding defectiveness, the damage suffered and the causal nexus on the claimant. It thus remains up to the victim to establish, for instance, that an autonomous vehicle involved in an accident was defective, and that this defect gave rise to the damage. As also observed in my previous post, this can be very challenging and expensive for victims due to the complexity of AI technology, the potential myriad of actors involved and the massive amounts of processed data.

However, the PPLD provides claimants with certain procedural aids in the form of rebuttable presumptions. These seem to me to be of significant value: while they do not fully reverse the burden of proof, once established they require the defendant, who is in a much better informational position than the claimant, to show that a certain product was in conformity with the rules. Section 2 stipulates that defectiveness shall be presumed when either a) the defendant did not comply with an obligation to disclose relevant evidence; b) the claimant establishes that applicable mandatory safety requirements were violated; or c) the claimant proves that “the damage was caused by an obvious malfunction of the product during normal use or under ordinary circumstances”. Furthermore, the causal link between defectiveness and damage shall be presumed (section 3) “where it has been established that the product is defective and the damage caused is of a kind typically consistent with the defect in question”. Also, where a court finds that “the claimant faces excessive difficulties, due to technical or scientific complexity, to prove” (section 4) defectiveness or causality (or both), these shall be presumed if the claimant can demonstrate that “(a) the product contributed to the damage; and (b) it is likely that the product was defective or that its defectiveness is a likely cause of the damage, or both”. As mentioned, these presumptions can be rebutted by the defendant (section 5).

Defences

Article 10 addresses the defences economic operators can raise against a liability claim. It must first be noted that the defences that are “most problematic” for claimants still exist in the PPLD. These include the “later-existence defence” (section 1(c)), which applies when it is probable that a defect came into being after the market introduction of the product, and the “development risks defence” (section 1(e)), which frees the manufacturer from liability if he can prove that “the objective state of scientific and technical knowledge” at the time of market introduction “was not such that the defectiveness could be discovered”. The latter entails an obligation to thoroughly research the known risks of novel technology, but at the same time provides a ground for exoneration when such knowledge is not available. In the case of AI technology, both defences can likely be invoked easily by producers. As AI is often self-learning and thus self-developing, “new” defects may be impossible to predict and investigate, and will often originate only after market introduction.

However, the later-existence defence cannot be invoked (section 2) when “the defectiveness of the product is due to any of the following, provided that it is within the manufacturer’s control: (a) a related service; (b) software, including software updates or upgrades; or (c) the lack of software updates or upgrades necessary to maintain safety”. This is obviously a major improvement for consumers seeking redress from the manufacturer of an AI application. It can be argued that a similar mechanism should be introduced for the development risks defence. This could entail that when a potential defect is discovered after market introduction, the producer must actively prevent that defect from causing damage; failing to do so would bar him from invoking the development risks defence.

Concluding observations

The fact that software is brought within the scope of the PPLD, the extension to (in)tangible components and related services, the positive obligations for manufacturers to keep products safe after they have been put into circulation, the procedural and evidentiary aids, and the limitation of the later-existence defence will significantly improve the consumer’s position. I think this reduces the risk of unjustified under-compensation, and it might contribute to consumers’ trust in AI-related technology. However, I would advocate limiting the development risks defence to the extent that a producer is able to “fix” a problem discovered after the marketing of a product, where he could not have done so at the time of putting the product into circulation.

This post was authored by Mr. dr. Roeland de Bruin. Roeland de Bruin is a practising attorney at KienhuisHoving Advocaten, specialising in intellectual property, IT law and privacy, and an assistant professor at the Molengraaff Institute for Private Law. In 2022 he successfully defended his doctoral thesis “Regulating Innovation of Autonomous Vehicles: Improving Liability & Privacy in Europe”, supervised by prof. dr. Ivo Giesen, prof. dr. Madeleine de Cock Buning and prof. dr. Elbert de Jong.