Ethics and liability – Mistakes by machines

Contributed by: Liz Blythe, Michael Taylor and Zoe Sims

Published on: March 29, 2019

The increasing prevalence of AI tools designed to target and influence consumer behaviour, and to assist organisations with decision-making, has sparked widespread international debate and calls for the responsible development and integration of AI technologies, particularly in applications that can directly impact individuals. It has also raised questions about the attribution of legal responsibility when things go wrong.

Applying an ethical lens
AI technology is increasingly being used to make decisions that directly affect human behaviour and, in some cases, human rights. While you may not be concerned about why an algorithm has chosen to show you a particular shopping recommendation in a pop-up when you shop online, you might be more inclined to worry if new insurance technology uses previously unobtainable insights to increase your health or life insurance premiums, or to restrict your access to these products entirely.

One application that has sparked particularly fierce debate is the use of AI technology to make decisions that affect people's rights and freedoms. AI systems that purport to be able to determine a person's risk profile and propensity to reoffend are already being used in some jurisdictions to inform judicial decisions regarding bail, parole and sentencing.1 However, there has been significant controversy around whether these algorithms act in a biased fashion and, more fundamentally, whether this is an appropriate context in which to employ AI to assist with decision-making.2 Many have advocated for clear guidelines that restrict the use of AI tools that influence behaviour or make automated decisions affecting people's access to credit, insurance and other products, or their fundamental rights and freedoms, and that require a higher standard of transparency, accountability and reliability where such implementations are permitted.

These calls for change have not gone unnoticed. Legislators, regulators and commentators globally are giving increasingly serious consideration to these issues. In 2018, the UK House of Lords Select Committee on AI recommended a voluntary AI code of conduct and proposed five core principles emphasising the importance of human rights in the design and development of AI.3 Europe's General Data Protection Regulation restricts automated decision-making that produces legal effects concerning, or otherwise significantly affects, data subjects.4 We expect this to be just the start of regulation in this area.

What happens when things go wrong?
Even if we do set boundaries requiring the responsible design, development and operation of AI technologies, mistakes will inevitably be made. When things go wrong, people will look to recover their loss, but can we rely on traditional legal concepts of attributing liability to do so?

The concept of causation is at the heart of attributing legal responsibility. In contract and tort law, to recover compensation, a plaintiff must show that the defendant's breach caused a loss. This concept is also enshrined in legislation. Take self-driving cars, for example: in the event of an accident, who is at fault if no human was controlling the vehicles involved? The answer is not straightforward.

AI presents a new challenge when using traditional legal concepts to determine fault (and, in turn, compensation). The decisions made by AI-powered machines may not be a direct consequence of, or attributable to, whatever human involvement there has been (for example, programming the system or keeping its software up to date).

For example, if AI technology is learning from the data it has access to (which may be wholly or partly outside the programmer's control), it may be unclear, when something goes wrong, whether the programmer who initially developed the algorithm is legally responsible. In fact, once an algorithm powered by machine learning has been running for some time, even the programmer who initially developed it may not be able to explain why it is making the decisions it is making. Microsoft's AI Twitterbot "Tay" is an example of this. Tay was shut down just 16 hours after its launch because it began making offensive remarks learned from its interactions with human Twitter users.5 Who should be responsible in these circumstances? The Twitter users who trolled the algorithm? Microsoft, whose name was attached to the project? Or perhaps one of the suppliers who assisted in building it and did not employ controls to stop this sort of thing from happening?
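To see the underlying difficulty in the simplest possible terms, consider the hypothetical sketch below (written in Python purely for illustration; it does not describe Tay or any real product). A system that keeps learning from its users after deployment ends up with behaviour driven by post-deployment data, not by the developer's code alone.

    # Hypothetical sketch of an "online learning" loop: the system's behaviour
    # is shaped by whatever its users feed it after deployment, not only by the
    # code its developer shipped. Purely illustrative; not any real product.
    from collections import Counter

    class LearningBot:
        def __init__(self):
            # Counts of phrases the bot has seen, learned entirely from users.
            self.learned_phrases = Counter()

        def observe(self, user_message: str) -> None:
            # The developer does not curate this input; the public supplies it.
            self.learned_phrases[user_message] += 1

        def reply(self) -> str:
            # The bot parrots whatever it has seen most often.
            if not self.learned_phrases:
                return "Hello!"
            return self.learned_phrases.most_common(1)[0][0]

    bot = LearningBot()
    for message in ["nice weather today", "nice weather today", "something offensive"]:
        bot.observe(message)

    # The reply depends on the post-deployment data, not on any single line of
    # the developer's code, which is why attributing fault is difficult.
    print(bot.reply())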

This is a tough question to answer, and it demonstrates the difficulty of using a human-centred concept of responsibility to attribute liability in the case of a self-teaching, ever-learning machine.

It is not clear how legislators and regulators will deal with this problem, and New Zealand courts have not yet had to grapple with the issue. Possible legal solutions could include broadening statutory no-fault schemes, such as that adopted under the Accident Compensation Act 2001, or broadening the scope of vicarious liability so that one of the human actors involved (however remote their involvement in the relevant incident may have been) can be found liable for the purposes of compensation.

It is only a matter of time before concerns about the ethical use of AI and the attribution of legal liability come to a head. The law is not currently designed to deal with these issues in a clear-cut way, but changes in this area are likely to be on the horizon. Early adopters should keep this in mind when implementing new AI technology: it will be important to ensure that unwanted liability does not attach in unexpected ways, and that implementations err on the right side of the ethical debate, so that the technology is not later prohibited or left requiring a costly re-engineering effort to comply with changes in the law.

If you would like any advice regarding the issues discussed in this article, or assistance in getting the right legal protections in place for your business before implementing AI technology in your organisation, please do not hesitate to contact us.

To view the other articles in our "Implementing AI in your business" series, please visit our landing page here.

This article was first published by CIO New Zealand.  

FOOTNOTES
  1. For example, equivant's (formerly Northpointe) "COMPAS" tool.
  2. See for example, The Atlantic "A popular algorithm is no better at predicting crimes than random people" (January 2018).
  3. Government response to House of Lords Artificial Intelligence Select Committee's Report on AI in the UK "Ready, Willing and Able?" (June 2018).
  4. Article 22.
  5. The Guardian "Tay, Microsoft's AI chatbot, gets a crash course in racism from Twitter" (March 2016).

This publication is intended only to provide a summary of the subject covered. It does not purport to be comprehensive or to provide legal advice. No person should act in reliance on any statement contained in this publication without first obtaining specific professional advice. If you require any advice or further information on the subject matter of this newsletter, please contact the partner/solicitor in the firm who normally advises you, or alternatively contact one of the partners listed below.
