Rani Subassandran
Rani is an external contributor to OFLS and a University of Oxford alumnus of the Bachelor of Civil Law (2011).

Who Pays When a Robot Errs?

I. Introduction

Artificial intelligence (AI) systems have been embedded into our lives in the form of smart home products, e-payment systems, and more. However, our interactions with AI systems pose serious challenges, from police departments using ‘dirty data’ to train predictive policing systems[1] to the Cambridge Analytica scandal that allegedly influenced the 2016 US Presidential Election.[2] This post explores the difficulties faced in regulating liability arising from the use of AI systems. Section II examines the issue of whom to blame when a robot errs, and Section III discusses the desirability of Lord Bingham’s rule of law principles[3] forming the backbone of the regulation of AI systems.

II. Assessing Liability for AI Systems

When asked who should be liable when a robot errs, one’s first thought may be to blame the creators of the robot itself. Undeniably, this is true in cases where there is a software bug or a hardware malfunction, or where the creators have implemented insufficient safety standards and protocols. In these cases, the chain of causation between the negligent, or mala fide, programming of the AI system and the consequent harm is easily established.

However, the question of liability is not as straightforward when it comes to most AI systems used every day. Today’s robots are no longer merely executing pre-programmed instructions, but rather are programmed to learn from the data they collect. They are fully capable of forming their own solutions based on their analyses of patterns within that data – a process that is currently beyond human discernment. These robots become ‘black boxes’,[4] even to their creators.

The complexity surrounding liability for AI harms is highlighted in several cases, such as:

  1. When a ‘human-in-the-loop’ is added to ‘black boxes’,[5] e.g. a self-driving car designed to rely on a backup driver to intervene in the event of possible collisions.[6] Should the driver fail to intervene successfully, liability could attach to the creators, the robot itself, or the ‘human-in-the-loop’.[7]

  2. When liability arises in public-private partnerships. For example, a case in Arkansas saw hundreds suffer a drastic reduction in their free care hours due to an AI system being programmed to prioritise cost-saving measures. Legal claims were issued against the government agency instead of the private developer that was primarily responsible for the harm caused.[8] Furthermore, in these instances, courts struggle to hold private vendors accountable due to trade secret exceptions.[9]

  3. When the AI causes harm that stems from neither negligence nor criminal conduct, such as when Microsoft’s chatbot, Tay, programmed to update itself in real time by learning from users’ interactions, began to tweet hate speech following interactions with trolls on Twitter.[10]

While the doctrine of contributory negligence could be modified and broadly adopted to resolve the issue of mixed liability, one aspect of the legal philosophy underpinning the law of remedies, i.e. that the wrongdoer be punished, would be lost where the wrongdoer is a robot. Given that robots are incapable of being conscious of their wrongdoing, could they even be granted legal personhood?

The definitions provided by 17th and 18th-century philosophers may be a good starting point in understanding this issue. Immanuel Kant claimed that personhood is the capability of giving and receiving reasons when considering how to act.[11] It is arguable that robots, as purely rational beings, could be granted personhood in this sense as soon as we reach the stage of creating explainable AI - whereby robots not only receive and act with reason, but also ‘give’ reasons for their actions. John Locke, however, defined a ‘person’ as an intelligent being that can reason, reflect, and consider itself as the same thinking agent in different times and places, and is capable of accountability.[12] In the Lockean sense, a robot would fail to qualify as a person: robots are incapable of thinking of themselves as persisting over time, and they cannot reflect upon the morality of their actions, current or past.

In a 2017 resolution, the European Parliament urged the European Commission to propose what it called ‘electronic personality’ for sophisticated autonomous robots.[13] Floridi and Taddeo, however, opine that if robots could be blamed and punished instead of humans, this would provide a loophole for irresponsible people to dismiss the need to exercise the utmost care in the design, creation, training, and use of robots.[14]

This section has highlighted the complexity of attributing liability for AI harms. The next section explores why grounding the design, development, training, and use of AI systems in sound rule of law principles is a necessary mechanism for addressing these issues.

III. Rule of Law and AI Harms

According to Professor Bostrom, the very existence of humanity will be at stake once we enter the age of the AI revolution. Until now, we have only successfully designed, created, and used AI for designated tasks, e.g. a chess-playing robot cannot also act as a chatbot. However, we are currently on the cusp of creating human-level machine intelligence.[15] Thereafter, it is envisaged that we will have zero control in the race towards artificial superintelligence (ASI), i.e. an AI that will far surpass the very best form of human intelligence.[16]

Whilst it is easy to anthropomorphize robots, it is important to remember that robots are not capable of understanding the concept of evil. As Urban explains, an AI will only be unfriendly if it is specifically programmed that way, i.e. an AI system is motivated by ‘whatever we programmed its motivation to be’.[17] Programming friendly motivations, however, is challenging for the following reasons: (i) even programming a simple instruction is a complicated process, (ii) there is the issue of AI ‘black boxes’, and (iii) since humanity is flawed and could always improve, we shall have to find a way to program an AI that leaves room for humanity to continue evolving.

As daunting as this sounds, even if there is only the slightest chance of such programming being possible,[18] we owe it to humanity to take every conceivable step towards the concept of ‘Trustworthy AI’ as set out in the European Commission’s Draft Ethics Guidelines for Trustworthy AI. The Commission states that ‘Trustworthy AI’ should: (i) comply with all applicable laws and regulations, (ii) adhere to all ethical principles, and (iii) be robust, from both a technical and a social perspective, to avoid causing unintentional harm.[19]

First and foremost, we need concrete rule of law principles to lead us towards ‘Trustworthy AI’ and to strengthen our chances of ending up with a friendly ASI. Whilst the original conception of the rule of law is credited to Dicey’s three rules, modern conceptions, in particular Lord Bingham’s, have dealt with it more exhaustively,[20] thereby providing a better rule of law roadmap towards ‘Trustworthy AI’.

The first of the eight principles articulated by Lord Bingham is that the law must be accessible and, so far as possible, intelligible, clear, and predictable. Where the regulation of our use of AI is unintelligible, unclear, or unpredictable, as in ‘black box’ cases, concerted efforts must be made to tackle the problem. Responses to this issue include the adoption of measures aimed at making AI explainable, i.e. ‘providing insight into the internal state of an algorithm’.[21] Deeks states that courts will play a significant role in the development of explainable AI when they ‘seek information about the inputs, outputs, and reliability of agency algorithms or express interest in testing counterfactuals’.[22]
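To give a rough sense of what a counterfactual explanation of the kind Wachter, Mittelstadt, and Russell describe might look like in practice, the short Python sketch below uses a wholly hypothetical credit-scoring rule: it reports the smallest change to an applicant’s income that would have reversed an automated refusal, without exposing the model’s internals. The decision rule, threshold, and step size are invented for illustration only and are not drawn from any of the sources cited here.

    # A minimal, purely illustrative sketch: a hypothetical credit-scoring
    # rule and a brute-force search for the smallest income that would have
    # flipped a refusal into an approval. All values are invented; this is
    # not the method described in the cited literature.

    def credit_decision(income: float, debts: float) -> bool:
        """Toy 'black box': approve only if income comfortably exceeds debts."""
        return income >= 2.5 * debts

    def counterfactual_income(income: float, debts: float, step: float = 100.0) -> float:
        """Smallest income (at the given granularity) that would yield approval,
        holding debts constant."""
        candidate = income
        while not credit_decision(candidate, debts):
            candidate += step
        return candidate

    income, debts = 20_000.0, 10_000.0
    if not credit_decision(income, debts):
        needed = counterfactual_income(income, debts)
        print(f"Refused. Approval would have required an income of at least {needed:,.0f}.")

The value of such an explanation is that it tells the affected individual what would have had to be different for the outcome to change, which is precisely the kind of information courts may demand when ‘testing counterfactuals’.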

The role of courts is particularly significant where AI systems are used in the criminal justice system, such as states using data-driven predictions to assess the risk of recidivism in setting prison sentences[23] and police departments deploying predictive policing systems.[24] In such cases, in accordance with Lord Bingham’s second principle, any question of legal right and liability should be resolved by application of the law by the courts, and not through the exercise of discretion by the state actors involved. It is also necessary to rethink traditional defences, such as trade secret exceptions, to ensure that AI systems have been created and applied in a fair manner.

The third principle is that the law should apply equally to all. Since the rise of big data, there have been various allegations of misuse of user data by tech giants like Facebook.[25] In December 2019, it was reported that the personal data of more than three hundred million Facebook users had been left unprotected for nearly two weeks, which security experts believed could enable criminals to harvest the exposed data for identity theft, spam messages, and phishing campaigns.[26] Despite Facebook’s repeated data breaches, it has been argued that commensurate legal action has not been taken against the company,[27] raising the question of whether this is an instance of tech giants being too powerful for the reach of the law.

Closely linked is the fourth principle, which states that ministers and public officers, at all levels, must exercise their powers in good faith, fairly, and reasonably, without exceeding their limits. The repercussions of failing to observe this principle in the use of AI systems are seen in the Chinese government’s ‘Strike Hard Campaign’ in Xinjiang. The campaign used facial recognition, natural language processing, and genetic profiling to exercise state social control, resulting in more than one million Uyghurs and Kazakhs allegedly being sent to ‘political education’ camps, with many detained for activities that are not illegal under Chinese law.[28] The exercise of such Orwellian Big Brother surveillance and illegitimate state control, with the aid of AI systems, represents a catastrophic miscarriage of justice.

The above examples of Facebook’s breach of the right to privacy and the Chinese government’s violation of the fundamental rights to liberty and a fair trial could also be construed as failures to adhere strictly to the fifth rule of law principle - that the law must afford adequate protection of fundamental human rights. In almost all cases, it is necessary to ensure that AI systems are only created and deployed once adequate laws and policies to protect basic human rights have been passed. As Santow states, ‘AI-powered technology can give rise to precisely the sorts of human rights violations and other problems that it is designed to avoid.’[29]

It is clear from the above discussion that access to fair, inexpensive, and timely justice, which incidentally comprises Lord Bingham’s sixth and seventh principles, is paramount to upholding his other principles of the rule of law. The increasing use of AI systems, such as predictive policing, amplifies systemic biases, leading to more claims being instituted and further burdening an already over-burdened justice system. Interestingly, one solution to this problem, according to experts like Professor Richard Susskind, is the rise of online courts, using technology such as AI, machine learning, and virtual reality.[30]

Lord Bingham’s final principle requires compliance by the state with its obligations in international law, as in national law. The most pressing reason for states to comply with international law is the sheer necessity of doing so, since states acting alone cannot adequately provide for the needs of their citizens.[31] In the important journey towards a trustworthy ASI, the principles of the rule of law must form a clear set of international laws with mandatory application to all nations and parties involved in the development and use of AI. The European Commission’s recent proposal to lay down harmonised rules on AI, amending certain EU legislation to that effect, is certainly a welcome step in this direction.[32]

IV. Conclusion

There are significant challenges in regulating liability arising from AI harms, especially due to complications such as human-in-the-loop situations, AI ‘black boxes’, and the inherent limitations of AI systems that preclude them from being granted legal personhood. Despite these challenges, the promise of AI to create and promote colossal universal good is also significant, as is the frightening prospect of inadvertently creating an unfriendly ASI.

It is asserted that a good starting point towards the regulation of trustworthy AI systems is Lord Bingham’s principles of the rule of law. These principles should form the backbone of greater efforts to create a comprehensive suite of international laws with mandatory application to all, underpinning our effort to develop trustworthy AI systems. The 17th-century philosopher Baruch Spinoza stated that the law is the mathematics of freedom[33] - in the fight against the ‘weapons of math destruction’,[34] the only ammunition we have in our arsenal is the rule of law.

Footnotes

[1] Rashida Richardson, Jason Schultz, Kate Crawford, ‘Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice’ (2019) 94 New York University Law Review 192 < https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3333423 > accessed 6 February 2021.

[2] Nicholas Confessore, ‘Cambridge Analytica and Facebook: The Scandal and the Fallout So Far’ The New York Times (4 April 2018) < https://www.nytimes.com/2018/04/04/us/politics/cambridge-analytica-scandal-fallout.html > accessed 9 March 2021.

[3] Tom Bingham, The Rule of Law (Penguin, 2010).

[4] Frank Pasquale, The Black Box Society: The Secret Algorithms that Control Money and Information (Harvard University Press, 2015) 3-4.

[5] Lorrie Cranor, ‘A Framework for Reasoning about the Human in the Loop’ (UPSEC, 2008) < http://perma.cc/JA53-8AL8 > accessed 6 February 2021.

[6] Michael Laris, ‘Tempe Police Release Video of Moments before Autonomous Uber Hit Pedestrian’ (Washington Post Online, 21 March 2018) < http://www.washingtonpost.com/ > accessed 6 January 2021.

[7] Mark A. Lemley and Bryan Casey, ‘Remedies for Robots’ (2019) 86(5) The University of Chicago Law Review 1311, 1332.

[8] Kate Crawford and Jason Schultz, ‘AI Systems as State Actors’, (2019) 119(7) Columbia Law Review 1941, 1947.

[9] Rebecca Wexler, ‘Life, Liberty, and Trade Secrets: Intellectual Property in the Criminal Justice System’ (2018) 70 Stanford Law Review 1343, 1349-50.

[10] Rachel Metz, ‘Microsoft’s Neo-Nazi Sexbot was a Great Lesson for Makers of AI Assistants’ (MIT Technology Review, 2018) < http://perma.cc/D3DH-CLEM > accessed 31 January 2021.

[11] Immanuel Kant, The Metaphysics of Morals (Cambridge University Press, 1996) 223.

[12] John Locke, ‘Of Identity and Diversity’ in An Essay Concerning Human Understanding (Project Gutenberg) < http://www.gutenberg.org/cache/epub/10615/pg10615.html > accessed 6 February 2021.

[13] European Parliament, Report with Recommendation to the Commission on Civil Law Rules on Robotics (2015/2103(INL), 27 January 2017) < https://www.europarl.europa.eu/doceo/document/A-8-2017-0005_EN.html?redirect > accessed 6 February 2021.

[14] Luciano Floridi and Mariarosaria Taddeo, ‘Don’t Grant Robots Legal Personhood’ (2018) 557 Nature 309 < https://media.nature.com/original/magazine-assets/d41586-018-05154-5/d41586-018-05154-5.pdf > accessed 17 April 2021.

[15] Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford University Press, 2016) 19-20.

[16] ibid.

[17] Tim Urban, ‘The AI Revolution: Our Immortality or Extinction’ (Wait but Why, 27 January 2015) < https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html > accessed 6 February 2021.

[18] Luke Muehlhauser and Nick Bostrom, ‘Why We Need Friendly AI’ (2014) 13(36) Think 41, 43 < http://journals.cambridge.org/abstract_S1477175613000316 > accessed 5 February 2021.

[19] The European Commission’s High-Level Expert Group on Artificial Intelligence, ‘Draft Ethics Guidelines for Trustworthy AI’ (18 December 2018) < http://ec.europa.eu/digital-single-market/en/news/draft-ethics-guidelines-trustworthy-ai > accessed 2 February 2021.

[20] Bingham (n 3).

[21] Sandra Wachter, Brent Mittelstadt & Chris Russell, ‘Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR’ (2018) 31 Harv. J.L. & Tech. 841, 850.

[22] Ashley Deeks, ‘The Judicial Demand for Explainable Artificial Intelligence’ (2019) 119(7) Columbia Law Review 1829, 1841.

[23] Laurel Eckhouse, ‘Big Data May Be Reinforcing Racial Bias in the Criminal Justice System’ (The Washington Post, 10 February 2017) < https://www.washingtonpost.com/opinions/big-data-may-be-reinforcing-racial-bias-in-the-criminal-justice-system/2017/02/10/d63de518-ee3a-11e6-9973-c5efb7ccfb0d_story.htm > accessed 9 March 2021.

[24] Karen Hao, ‘Police across the US are Training Crime-Predicting AIs on Falsified Data’ (MIT Technology Review, 2019) < https://www.technologyreview.com/2019/02/13/137444/predictive-policing-algorithms-ai-crime-dirty-data/ > accessed 6 February 2021.

[25] Zak Doffman, ‘1.5m Users Hit by New Facebook Privacy Breach as Extent of Data Misuse Exposed’ (Forbes Online, 18 April 2019) < https://www.forbes.com/sites/zakdoffman/2019/04/18/facebook-illegally-harvested-data-from-1-5m-users-as-it-leveraged-its-data-machine/?sh=74b55ea6a2e6 > accessed 6 February 2021.

[26] Paul Bischoff, ‘Report: 267 Million Facebook Users’ IDs and Phone Numbers Exposed Online’ (Comparitech, 9 March 2020) < https://www.comparitech.com/blog/information-security/267-million-phone-numbers-exposed-online/ > accessed 9 March 2021.

[27] Doffman (n 25).

[28] Olivia Shen, ‘AI Dreams and Authoritarian Nightmares’ in Golley, Jaivin, and Hillman (eds), China Dreams (ANU Press, 2020) 144-145.

[29] Edward Santow, ‘Can Artificial Intelligence be Trusted with our Human Rights?’ (2020) 91(4) Australian Quarterly 10, 13.

[30] Richard Susskind, Online Courts and the Future of Justice (Oxford University Press, 2019).

[31] Douglas Hurd, The Search for Peace (Warner Books, 1997) 6.

[32] European Commission, Proposal for a Regulation Laying Down Harmonised Rules on AI (2021/0106(COD), 21 April 2021) < https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence-artificial-intelligence > accessed 28 April 2021.

[33] Huntington Cairns, ‘Spinoza’s Theory of Law’ (1948) 48(7) Columbia Law Review 1032.

[34] Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (Crown Publishers, 2016).
