Compassion and vision

Posted on Friday, January 11, 2019

Before the holidays, I talked about my atrocious habit of tearing articles out of The Economist and letting them build up in a file folder until I finally get around to reading them. The good news is that I found two articles on the ethics of autonomous vehicles (AVs), a topic I discussed briefly in my blog from a few weeks ago, Baby You Can Drive My Car. I could have used them at that time but, of course, they were unread and sitting in the file folder.

The earlier one is from ‘way back’ in the May 12th 2018 edition and is entitled Robotic rules of the road in the print edition but is now called How do you define “safe driving” in terms a machine can understand? (subscription may be required). It makes the point that “a clear set of rules would free carmakers from having to make implicit ethical choices about how vehicles would behave in a given situation”. Once the rules had been deemed ‘ethical’, an autonomous vehicle – or more importantly, its programmers – would be judged by the same criteria a human driver would: did the driver make a good or a bad ethical decision?

The article goes on to talk about a number of proposals including one by an AV technology firm, Mobileye, called Responsibility-Sensitive Safety and one by Voyage, yet another AV company, called Open Autonomous Safety. It notes that “the truth is that AVs will always be held to higher safety standards than human drivers”. That may be unfair but, as a Toronto newscaster used to say in the 1970s, “that too is reality”.
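
To make the idea of “a clear set of rules” concrete, here is a minimal Python sketch of the longitudinal safe-following-distance rule from Mobileye’s public Responsibility-Sensitive Safety paper, as I understand it. The function name and the default parameter values are my own illustrative assumptions, not Mobileye’s actual calibration.

```python
# A sketch of the longitudinal safe-distance rule from Mobileye's public
# RSS paper (Shalev-Shwartz et al., 2017). The default parameter values
# below are illustrative guesses, not Mobileye's calibration.

def rss_min_safe_gap(v_rear, v_front, rho=0.5,
                     a_max_accel=2.0, a_min_brake=4.0, a_max_brake=8.0):
    """Minimum safe following distance in metres.

    v_rear, v_front -- speeds of the rear and front cars (m/s)
    rho             -- the rear car's response time (s)
    a_max_accel     -- worst-case acceleration of the rear car during rho
    a_min_brake     -- braking the rear car is guaranteed to apply after rho
    a_max_brake     -- worst-case (hardest) braking of the front car
    """
    v_after_rho = v_rear + rho * a_max_accel  # rear car's speed once it reacts
    gap = (v_rear * rho                       # distance covered while reacting
           + 0.5 * a_max_accel * rho ** 2
           + v_after_rho ** 2 / (2 * a_min_brake)  # rear car's stopping distance
           - v_front ** 2 / (2 * a_max_brake))     # front car's stopping distance
    return max(gap, 0.0)

# Both cars doing 100 km/h (about 27.8 m/s):
print(f"minimum safe gap: {rss_min_safe_gap(27.8, 27.8):.1f} m")
```

The appeal of a rule like this is exactly the article’s point: if the rear car always keeps at least this gap and it is hit anyway, responsibility is unambiguous, and the programmers have made no hidden ethical choice.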

The second article, from the October 27th 2018 edition, was called A selection from the trolley; on the website it now has the more pedestrian title of Whom should self-driving cars protect in an accident?

The joke in the original title has to do with something called ‘the trolley problem’. A villain has tied five people to a trolley track on the main line and one person to a branch line. The trolley is hurtling out of control towards the switch where you are standing. What do you do? Someone will inevitably die, but whom do you save? The richness of the problem comes from imagining different people among the five on the main line or the one on the branch. Five versus one seems an obvious choice, but what if the five were known ‘bad guys’ and the one was a brilliant brain surgeon?

Philosophers can debate the ethical course of action, but a team from MIT led by Edmond Awad decided to approach it empirically. They set up a website to let the online world answer the question “Whom would you save?” for the case of an AV which experiences brake failure approaching a pedestrian crossing. Participants were asked to choose between, for example, a man and a woman with a baby carriage or two business executives (guess who won).

The website garnered 40M responses from 233 countries, territories or statelets.

The article gives the complete results, but the most favored was a person with a baby carriage (no surprise) and the least favored was a cat, followed closely by a criminal and then a dog. At the ‘saved’ end, children and pregnant women were only slightly behind the baby carriage in terms of preference. But apart from these clear choices, the rest of the results were fairly ‘flat’ and much lower, indicating no strong feelings one way or the other.
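
For the curious, here is a toy Python sketch of how millions of pairwise “whom would you save?” answers can be collapsed into a ranking like the one above. This is not the MIT team’s actual statistical method (their analysis was a form of conjoint analysis), and the response data below are invented purely for illustration.

```python
# A toy illustration of turning pairwise "whom would you save?" answers
# into a preference ranking. (Not the MIT team's actual method, which was
# a form of conjoint analysis; the responses below are invented.)
from collections import Counter

# Each tuple is one answer: (character saved, character sacrificed).
responses = [
    ("baby carriage", "cat"), ("child", "elderly man"),
    ("pregnant woman", "criminal"), ("baby carriage", "dog"),
    ("dog", "criminal"), ("child", "cat"),
]

wins = Counter()
appearances = Counter()
for saved, sacrificed in responses:
    wins[saved] += 1
    appearances[saved] += 1
    appearances[sacrificed] += 1

# Rank each character by the fraction of its matchups it 'won'.
for character in sorted(appearances, key=lambda c: wins[c] / appearances[c],
                        reverse=True):
    print(f"{character}: saved in {wins[character]}/{appearances[character]} matchups")
```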

(‘Fat man’ was in the least favored side of the graph and older people were ranked even lower. Presumably, ‘old fat guy’ is somewhere down around dogs, criminals and cats. I take that personally.)

The authors highlight cultural and even country-to-country differences, identifying three clusters: “Western”, “Eastern” and “Southern”. Countries with high levels of gender equality were more likely to rank women higher for saving, and Southern countries were less marked in their downgrading of dogs and cats. The Economist asked whether AVs must be programmed differently in different countries to reflect distinct moral points of view.

Also ‘challenging’ is that people’s ‘moral choices’ can conflict with legal or societal norms. For example, Germany is unique in the world in already having proposed ethical rules for AVs, rules which explicitly exclude discrimination based on age. But the MIT experiment showed a clear preference for saving children (with, as the expression goes, their whole lives in front of them) over older people (who presumably have less time left to live) – except in the Eastern cluster, where the preference was present but less dramatic.

Finally, while researching the website links, I came across this article in the website’s The World in 2019 section. Entitled There are no killer robots yet—but regulators must respond to AI in 2019, it discusses the general issue of legal and moral responsibility in Artificial Intelligence. On the specific topic of AVs it says:

“The right response is to require makers of autonomous vehicles to publish regular safety reports, put safety drivers in their cars to oversee them during testing and install ‘black box’ data recorders so that investigators can work out what happened if something goes wrong.”

I stick by my point from Baby You Can Drive My Car: this is difficult – not impossible, but extremely complicated. MIT’s moral-choice experiment shows the diversity with which ‘the average Joe’ assesses these decisions.

So far there has been only one unfortunate death of a pedestrian and one non-AV death of a driver who allegedly thought ‘assisted driving’ allowed them to ignore what was happening on the road. (The recent news about a lawsuit for a death in a Tesla accident had nothing to do with assisted or autonomous driving.) The statistical evidence is overwhelming that AVs – even with their limitations – are far safer than human drivers, but human voters will not see the issue in such a utilitarian fashion, nor will they easily admit that they are less safe than ‘some dumb machine’. AVs are most advanced in the US, which is precisely the most litigious country on the planet: lawyers will have a ‘field day’ with AV-related accidents and they will be sure to argue their cases on the nightly news as well as in the courtroom.

As AVs become more prevalent, more accidents are inevitable – accidents which may not even be the AV’s fault. As this occurs, the debate will move from the comfortable confines of academic and scientific research to the front lines of national legislatures, grandstanding politicians and Fox News (or its local equivalents). Premature launches (like Waymo’s, maybe) could provoke negative reactions that set the whole field back for many years if they produce unfortunate events which the public and politicians interpret as evidence of unmanageable risks.

These Economist articles show that smart people are working on the right problems. But will they come up with politically acceptable solutions within a reasonable time frame? Or will my prediction – that I will not ride in an AV family car on an open road – prove to be the case?

A just machine to make big decisions / Programmed by fellows with compassion and vision

Title Reference: Another Steely Dan! Ok, half a Steely Dan. Donald Fagen’s I.G.Y. (aka What a Beautiful World or, on the original disk, International Geophysical Year) from 1982’s The Nightfly. The song got to number 2 in Canada but only number 8 in the US. An ironic look at the promise of science and engineering in the late 1950s, it frequently runs through my mind when I read a story that seems like ‘To Infinity… and Beyond!’ hubris. The lines I close the article with seem to sum up much of the over-reach of Artificial Intelligence. I believe Artificial Intelligence has great promise – and is doing great things today – but sometimes we get too far ahead of ourselves. Facial recognition is one thing; “a just machine to make big decisions” is quite another. And to repeat the date reference: Fagen wrote those lines nearly 40 years ago.

 
