Writing about the car of the future for the past two years has given me some insight into the trends in reporting on driverless cars.
The tide has turned, and writers appear to be collectively scratching their heads over these fantastical claims of imminent convenience. A year ago, the world seemed a bit rosier about autonomous vehicles, or AVs. Now the articles I read feel more like the end of a party, when the few guests left are watching the air slowly escape from a balloon.
I’m glad the hype has calmed down.
Society is not yet ready for driverless cars. Why, you ask?
- We can’t fund our current infrastructure needs, let alone the infrastructure driverless cars would need to run.
- No one can agree at the national level on how to regulate them.
- Poll after poll shows that Americans don’t even want to ride in them.
- Automakers and tech companies keep plugging away, but they are also realizing that creating a safe driverless car is really difficult. Yeah…
Jalopnik posted a telling article in December called “2018 Was a Hard Reality Check for Autonomous Cars.” It’s an excellent read that details how companies pulled back their vaunted driverless vehicle projections.
Even though there have been a number of fatalities involving motorists using Tesla’s Autopilot, none could compare with the death of pedestrian Elaine Herzberg last March in Tempe, Arizona. She was the woman, walking her bicycle across a dark street, who was struck and killed by a self-driving Uber test vehicle whose human safety driver was distracted. This changed Uber’s trajectory: the company is now deep into micromobility projects instead of driverless cars. This tragedy also made us all realize that AVs are not science fiction fantasy, but devices that can have life-or-death consequences.
The hardware and mechanics of AVs are one thing; the “thinking” of the machine is a totally different order of complexity.
In October 2018, a well-publicized study entitled The Moral Machine Experiment came out. Two million people worldwide participated by playing an online game designed to probe human moral decisions in scenarios like the famous Trolley Problem. Essentially, you are faced with a moral dilemma of whom to save when a runaway trolley is barreling toward people. The Trolley Problem evokes one of the most famous lines in Star Trek, spoken by Spock on several occasions: “The needs of the many outweigh the needs of the few.”
The Moral Machine Experiment applied the trolley problem to driverless cars, and the global study revealed how ethics diverge depending on where you live.
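To see why critics find this framing contrived, it helps to notice how little is left once the trolley rule is reduced to its logic. Here is a minimal, purely illustrative sketch (the function name and inputs are my own invention, not anything from the study): the utilitarian rule is just a comparison of casualty counts, which assumes the car already knows, with certainty, exactly who is on each path.

```python
# Purely illustrative: the naive utilitarian "trolley rule" reduced to code.
# Real AV decision-making involves perception uncertainty, reaction time,
# and braking physics -- none of which this toy comparison captures.

def choose_path(path_a_casualties: int, path_b_casualties: int) -> str:
    """Pick the path that harms fewer people ('the needs of the many')."""
    return "A" if path_a_casualties <= path_b_casualties else "B"

print(choose_path(1, 5))  # the rule always picks the smaller number
```

The gap between this two-line rule and what a real vehicle can actually perceive is, in essence, the objection raised by the critique discussed below.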
This week, three experts in the field came out with a counterargument called Doubting Driverless Dilemmas. They claim that the ideas on driverless car ethics presented in the Moral Machine Experiment “are too contrived to be of practical use, represent an incorrect model of proper safe decision making, and should not be used to inform public policy.”
The authors were Julian De Freitas and George Alvarez from Harvard University’s Department of Psychology and Sam Anthony, CTO of Perceptive Automata, a company that is developing perception software for AVs.
In an interview with Robotics Business Review, Julian De Freitas said the following:
As someone who has published research involving trolley dilemmas, the whole point is to simplify real-world complexities so that you’re only looking at one or two factors in order to evaluate whether they’re influencing people’s moral intuitions. So the trolley dilemma makes a lot of sense as a psychology tool, but what struck me as strange was that they wanted to reapply that back onto the real world in order to try to inform policy and make a statement about the state of AVs right now.
So I always had this feeling, and one day Sam came to our labs and presented and expressed a similar sort of frustration. At that point, I thought ‘I’m not crazy for thinking this.’ I really did doubt myself at first, because their work had been covered in every single popular media that you could think of.
Sam Anthony also stated for the record:
From my perspective, we’re working on perception for autonomous vehicles, and there are big, real questions about what it means if an AV gets into an incident because it sees the world differently from how a human sees the world. There’s real meat there.
There are two problems with the trolley dilemma – first of all, it’s a distraction to the work that is being done on making AVs safer, and second, it has this built-in assumption that AVs can see the world perfectly. People shouldn’t think that’s true, because there’s a lot of work to do to characterize and understand how perception works for these vehicles, and how it differs from human perception.
Despite all the hype around driverless cars, many people behind the scenes are thinking hard about these issues: moral dilemmas that are complex even for humans, and that somehow have to be translated into a computer or AI environment. Perhaps this is the most difficult issue of all with AVs: figuring out all the ethical scenarios.
Let’s keep this in mind as we continue to discuss AVs and the ethics surrounding them.