Re-framing the law to account for the responsibility of autonomous robots
The emergence of a new generation of autonomous robots, capable of learning on the fly, brings a fresh set of challenges to questions of responsibility. Examples include the robots embedded in self-driving cars, or the algorithms that conclude business deals on the stock exchange. This article examines the legal conditions under which these robots could be held liable for the consequences of their actions when they cause damage. We will also explore the consequences of a shift in how the law attributes to this new breed of robots a legal status not so distant from that of legal persons.
Responsibility from autonomy
Autonomous robots have been a focus of interest for CERNA, the French commission for reflection on the ethics of research in digital science and technology, since 2013. Some private companies have been looking into the issue as well. What has attracted the attention of legal experts is their very nature: their autonomy.
Indeed, their ability to adapt to new inputs and to act independently, without any external control or intervention, means that they could cause damage and even be considered liable for it. Legal experts are debating what happens if a robot takes a harmful decision as a consequence of a learning process that has modified its pre-programmed commands. Legislators need to decide whether such robots bear non-contractual or even contractual liability. Typically, such liability issues are covered by well-established legislation on product safety, consumer rights and liability for defective products.
Legislative gap
Existing legislation fails to address the issue of autonomous robot liability. Let's take the case of a robot that complied with all safety regulations when it was put into circulation. As a result of an autonomous learning and adaptation process, this robot could make an unforeseen decision that causes damage. Should this decision be considered wrong?
In the scenario where the robot's decision is wrong, legal experts may seek to attribute responsibility for the damage. They may therefore need to consider whether the robot, as a product, was consequently defective. They would also need to establish whether the state of scientific and technical knowledge at the time the product was released could have identified such a defect. Yet an autonomous robot would only be regarded as defective if it were unable to learn, not if it made a damaging autonomous decision.
In addition, once the robot is put into circulation, the liability of a producer or programmer could only be proved in exceptional cases. Indeed, a normally functioning autonomous robot is, by nature, unpredictable. Alternatively, the producer could enjoy a form of immunity similar to the protection granted to firearm manufacturers in the United States.
Robot liability
All these questions point to the need for new legal regulations. Such new legislation might lead to the full or partial direct liability of the robot for its own acts or omissions. It might sound far-fetched, but this is no longer the territory of sci-fi writers.
To establish a robot's liability, we need to answer a series of questions. First: does an autonomous robot have legal capacity? After all, it is able to engage in transactions, handle the business of its owner and maintain particular relationships with others. If we assume that an autonomous robot has a certain legal capacity, can it acquire rights and undertake obligations? Can it conclude contracts? Going one step further, can an independently acting robot be sued?
Furthermore, assuming that such robots possess a degree of self-determination, should we treat them as legal persons? Or do we need to create a special legal e-person status for robots? What would be the extent of such an e-person's rights and obligations? How do we distinguish an e-person's limited liability from its unlimited liability? Would an e-person be authorised to instigate court proceedings? Could it be sued if it caused damage to a third party and no natural person could be found at the end of the chain of liability?
Liability cover
A possible solution could be a compulsory insurance scheme for robots, similar to our car insurance. However, obtaining compensation for punitive damages could prove too complicated or too costly within the framework of an insurance scheme. Another solution could be the creation of a special compensation fund for people affected by robot-induced damage. The advantage of this is that it removes the need to establish the fault or liability of the robot, as well as the need for robot insurance. The fact that the robot caused damage is, in itself, a sufficient basis for indemnification.
But who will pay into this compensation fund? Those who have an economic interest in the robot's functioning would be the primary contributors. Yet this economic interest is not limited to those who manufacture, programme, sell or use robots. Indeed, everyone is likely to enjoy the benefits of robotics, on both a private and a societal level. Paying into a compensation fund is therefore in the interest of all, and the contributions could be raised as a new tax.
Yet a scenario where robots themselves pay into the fund also seems possible. Think, for example, of driverless taxis that might transfer the fare, or part of it, into the fund. Any part of the fund not used for compensation payments could be re-invested in research and development. This would encourage manufacturers to develop safer robots and help spread their use into further areas.
The European Parliament is currently working on answering these kinds of questions. EuroScientist readers are invited to join the discussion of these highly important legal issues.
Orsolya is the legal and policy advisor to an MEP at the European Parliament, Brussels. She is also a member of the Association on the Rights of Robots (ADDR), based in Paris, France.
Featured image credit: CC BY-SA 2.0 by Jiuguang Wang