The Jeff Cumberbatch Column – The Legal Profile of Artificial Intelligence
If a person purchases a driverless car for their own use and properly maintains it, it would be unfair to fault them for any accidents when the vehicle is supposed to independently drive itself…Madeline Roe – (2019) 60 Boston College Law Review 315, 343.
Driverless cars…appear to be the way of the future. They can create efficiency, change people’s quality of life, and foster positive impacts on the environment – Ibid, 347.
I would be among the first to admit that the content of this week’s column is unlikely to resonate with a majority of readers. But it cannot always be a matter of discussing ever-revolving parochial concerns while the rest of civilization forges ahead with developments to transform existence as we now know it.
There is no doubt that we live today in a “smart” world. I am reminded of this weekly at every tutorial when the discussion of a question is preceded not by a rustling of papers as it was in my days as a student, but rather by the un-pocketing or un-bagging of smart phones by the students to get on the University’s E-learning interface so as to access the relevant materials. The situation is further compounded by a Faculty policy of “paperlessness” so that no printed materials are distributed as before.
The smart phone itself is, however, but a small drop in the ocean of current artificial intelligence. Artificial intelligence, the ability of a machine to think, learn and perform tasks ordinarily related to human action, has expanded over the years to include robots that perform the most highly skilled tasks, driverless cars, security surveillance, and personal assistants such as Siri, Google Now, and Alexa, to name a few. Indeed, there are not many facets of life in which artificial intelligence might not be of beneficial use, be it in agriculture, banking, energy or the E-tail industry.
The increasing ubiquity of AI systems means that there is a greater likelihood of interaction with human beings. This raises issues of the legal, moral and ethical responsibility if this interaction should result in some harm being suffered by the human being. This immediately invokes a consideration of whether these systems should be treated as being endowed with a legal or other personality so as to be held responsible for such outcomes.
Two areas in which this issue comes readily to mind are those of autonomous driverless cars and surgical robots, where serious physical injury or even death may be a consequence. I should disclose that my interest in this area was piqued by the research proposal of one third-year student for her independent research paper. As supervisor of that paper, I have been keen in recent weeks to familiarize myself with the existing literature in the area.
One study that has proved to be most usefully informative in this context is an article in the 2019 Boston College Law Review by Professor Madeline Roe of the Boston College Law School, entitled Who’s Driving That Car?: An Analysis of the Regulatory and Potential Liability Frameworks for Driverless Cars. In this piece, Professor Roe explores possible frameworks of liability for driverless cars and argues for the further regulation of these vehicles. Her hypothesis is that liability for accidents will most likely shift from the driver to the manufacturer.
Of course, these are not live issues in Barbados as yet and, given our traditional unduly conservative approach to novelty, I am not certain that they will be anytime soon either. Just look at our hostile attitude to the smart phone where we were content to highlight the ways in which it might be misused rather than its patent utility in order to justify its official prohibition in classrooms. I may be deemed an eternal pessimist, but the idea of Barbadians readily embracing autonomous driverless cars (those remotely controlled only) does not come easily to my imagination.
In her article referred to above, Professor Roe uses some court decisions on assessing the liability for the use of surgical robots as analogies for determining that for accidents caused by driverless cars. In one such case, a doctor performed a robotic prostatectomy on the claimant whose body mass index vastly exceeded that recommended for this type of surgery, with the immediate consequence of serious complications for the claimant.
The surgeon then converted the procedure to open surgery and completed it without the surgical robot. However, the claimant thereafter had a poor quality of life and eventually passed away. The manufacturer of the robot was eventually determined by the Supreme Court of Washington to be responsible for the death on the basis of product liability for its failure to warn the hospital and doctor about the risks of using what was considered to be an “unavoidably unsafe” product. In product liability cases, a manufacturer will be held strictly liable for the design or creation of a product that is deemed to be inherently unsafe when used in the manner intended.
In another case, the Kentucky Court of Appeals held a surgeon and the hospital liable to the claimant for a botched surgical robot procedure. This was on the basis of negligence or a failure by the defendant to achieve the required standard of care. Here, the court emphasized the necessity for expert testimony to assist the unschooled jury as to the required standard of medical care in the circumstances.
From this jurisprudence, the article proceeds to contrast the surgeon with extensive medical knowledge using a robot with the more general public use of driverless cars, where the sole preconditions to operate them are being above a certain age and the acquisition of a licence. Yet the car is supposed to do the entirety of the driving on its own. Should a licence then be required at all for the use of a driverless car? What if an emergency should arise?
The author notes that allowing manufacturers to test-drive cars without people inside lays the foundation for not requiring a licence to be in such cars, and concludes that the legislature “will have to weigh the utility of transporting people who are able to drive by themselves with the safety concern of the driverless car malfunctioning, [thereby] forcing the passenger to take the wheel”.
Another distinction between the two scenarios is that there are potentially two clear causes of human error with injury caused by a surgical robot: either that of the doctor, who makes an error during the operation by the robot (medical negligence), or that of the engineer, where the robot malfunctions (product liability). With the driverless car, who is at fault if it faces a problem that its programming does not account for? In negligence, an actor is liable for reasonably foreseeable consequences only, so clearly there would be no liability in the event of an unforeseeable occurrence. Should a strict liability [no fault] standard be applied then? And would such a policy not lead to defensive engineering or, possibly, stasis?
Professor Roe also notes the variable of speed in the comparison. A road accident takes only a few seconds to occur, and there would be little time, if any, to correct mistakes made by the car, unlike during robot-assisted surgery. In conclusion, she notes the response of a new California regulation that permits the testing of driverless cars without passengers so long as they follow a series of stipulated rules relating to disengagement of the autonomous mode and the prompt reporting of any accidents.