We see many news stories touting the potential benefits of driverless cars: increased mobility for the elderly, congestion mitigation and safer roads to name a few. But amid the enthusiasm, we ran across two stories this week that illustrate the conflicts and contradictions that arise when trying to reconcile the role driverless cars will play in society.
The first comes from California, where Google has been road-testing autonomous vehicles for more than a year. The California Department of Motor Vehicles recently unveiled draft regulations governing the rollout of autonomous vehicles on the state’s highways. Clearly California is worried about the safety ramifications when humans interact with machines that have minds of their own.
The regulations require manufacturers to comply with specific safety requirements and to conduct third-party vehicle performance testing. Manufacturers will also have to provide the state with regular reports and comply with privacy and cyber-security requirements.
Now here’s where it gets interesting. The regulations also require the “operator” of a driverless car to be a licensed driver and to possess an “autonomous vehicle operator certificate issued by the DMV.” (Read that through a couple of times and let the inherent contradiction sink in.)
The operator must also be trained to take control of the vehicle in the event of a systems failure or emergency. To that end, autonomous vehicles must have steering wheels and control pedals, things the original Google car did not have.
Also note that fewer young people are bothering to get driver’s licenses these days for a variety of reasons. Under the California regime, which may provide a template for other states, they could not “operate” a driverless vehicle.
Google said it is “gravely disappointed” by the proposed rules, but we credit California for addressing issues we’ve been raising for years: What happens if a driverless car can’t respond appropriately to changing road conditions or if its systems fail? Will the operator be able to assume control? Who’s responsible in an accident?
These are not inconsequential questions considering our next story. Seems that driverless cars have been racking up accidents at twice the rate of human-driven cars. The reason? They’re getting hit from behind because they’re programmed to never exceed the speed limit. GM researchers admit this becomes particularly problematic (and dangerous?) when driverless cars try to merge into faster traffic or cross multiple highway lanes to exit. As a result, their human operators have to step in to complete the maneuver safely.
The article argues that the driverless cars are “not at fault” since they’re typically hit by “inattentive or aggressive humans unaccustomed to machine motorists that always follow the rules and proceed with caution.” Maybe autonomous vehicles aren’t at fault in the legal sense, but don’t they bear some responsibility since they’re unable to adjust to road conditions as a human driver would? Speeding up to merge or to avoid an accident is the responsible, and predictable, thing to do.
The GM researchers are debating the wisdom of always sticking to the speed limit, but so far, strict adherence is the rule. In contrast, a competent, situationally aware human driver will technically break the law for safety’s sake, crossing the double yellow line to avoid a bicyclist, for example.
If driverless cars can’t learn to bend the rules, to adapt, they will always be in conflict with human-driven vehicles. Despite California’s regulatory efforts, it’s unreasonable to assume there will always be a properly licensed and trained human ready to take control. After all, isn’t the ultimate goal of driverless technology to remove humans from the driving equation completely?
If human-driven cars are removed from the roads entirely, vehicle conflicts may become a thing of the past, but what happens when four driverless cars pull up to a four-way intersection simultaneously? Sounds like the beginning of a bad joke.