My Charter

The social, political and technological trends that affect how we live may interact unpredictably, but that doesn't mean logic and imagination can't guide us to better outcomes. Blaugury observes the strange goings-on and raises a red flag when needed.

Thursday, February 20, 2014

There Ought To Be a Law


(Must. Not. Use. “Auto.” Pun.)
A fourth consecutive post anticipating the arrival of self-driving (SD) vehicles means I’m either in a groove, or a rut. (Hard to say, interrupted as I’ve been by a month-long, family-wide battle with a nasty flu. (Oh, yeah, and some football games.)) This time the focus is on legal and ethical issues. Next time I’ll try to wrap everything up, in what I hope will be recognized as a Blaugury idiom, by commenting on the auspices.
Previously we learned that by the end of 2013 licensing laws for autonomous vehicles were already on the books in Nevada, Florida, California and Michigan. Not a groundswell, but nonetheless a clear indication of the direction we’re heading. As other states get on board they’re either going to establish a rational, easily extended pattern, or a confusing, potentially dangerous patchwork. Smart money is on the latter.


Image: DonkeyHotey


That America’s deep ideological rifts are widening is well-documented. Open cultural and political conflicts are destroying public civility and further diminishing our government’s modest talents for enacting meaningful, collaborative change. If we also consider how many of our fellow citizens and legal representatives are willfully, cheerfully ignorant of science and technology, rather than rational vehicle codes we should expect only legislated disasters. 
What, too alarmist?
Automation: Coming to a Vehicle Near You 
Last time, I raised the specter of disruptive innovation, a term coined by Harvard professor Clayton Christensen to describe a new technology that displaces one or more established technologies -- creating in the process new product and service markets. Matter-of-fact as the definition sounds in the abstract, applying it to the coming automation of America’s transportation grids is like describing a hydrogen bomb as “fireworks.” 
Robot vehicles most certainly will cause -- disruption. Equally understated, the scale of the shakeup will be -- enormous. Deep and wide, the aftershocks could last a generation or more, affecting everyone who drives or rides in a car, truck or bus. Or who knows anyone who does. (Or who knows anyone who knows anyone who . . . well, you get the picture.)
With no trace of hyperbole, a NYT article asserts this technology will transform society as profoundly as the Internet has. One could argue they’re badly underestimating the potential impacts; but even if they’re not, odds are it’s going to be a rough ride.
As good as the Internet has been to us, during the twenty-odd years it’s been around, few would argue that its “Wild-West” environment has been all good. Fewer still would be content to let SD-vehicles develop the way Internet technologies have done -- in whatever directions thousands of competing innovators and entrepreneurs have wanted to take them, and with few regulatory constraints. 
Automated vehicles, on the other hand, are going to be -- must be -- entirely different animals. Unlike the relatively ephemeral threats posed by the Internet, robot cars are going to present serious visceral concerns. There will be skin in the game. 
However, while the potential physical dangers may draw the most attention, they represent only a fraction of the risks in such a large-scale project. Recognizing that the broad scope demands a systematic, multidisciplinary approach, DARPA, Google, MIT, and a growing list of automobile-related industries are already thinking -- and testing -- around the problem set* of a not-too-distant future that includes fully automated transportation networks.
(* Actually, instead of problem set, more like a myriad of complexly intertwined problem matrices -- virtually all of which must be resolved, before autonomous vehicles are loosed on the public.) 
Just when is that going to happen? Difficult as it is to predict what adoption rates will be, some carmakers have pledged to begin selling autonomous vehicles by 2020. That’s but six years away. Even more significantly, the really big pieces are in motion, or already in place. As revealed in this Department of Transportation presentation, government, industry and academia have been meeting to assign preliminary roles and responsibilities. They’re preparing to divvy up what may be the biggest pie, ever -- the Internet of Vehicles, autonomous and otherwise. 
(What happens when that pie hits the fan . . . er, road?)
Reaching Critical Mass 
While we can applaud their early start on the planning, we should be concerned they’ve yet to recognize the roles ordinary people should be playing in their overall calculus. If we are not prepared -- that is, we citizens who live, work and drive in the United States of America (to say nothing of the dozens of other industrialized nations that will be affected in a similar timeframe) -- the long transition to our automated future is going to be a nightmare.
The general public needs a seat at the table for these meetings, because what’s on the horizon is not just another restructuring of a few isolated product and service markets, with some minor ripples in a few supplemental industries and labor markets. What’s coming is much bigger than the Internet. 
Most would agree the Internet has transformed the way every level of society operates, legally, commercially and privately. In the past two decades, advanced computing technologies merged with high-speed wireless and global satellite communications -- to the point of inseparability -- and revolutionized almost everything on the planet. 
Now, by grafting robot cars and trucks onto that well-established, increasingly potent mix of computers and information, we’re going to make that “almost everything” a lot, lot bigger -- exponentially so. We all should consider carefully what that means.
Trade Offs
It is true, automation does mean safer roads. A study by the Eno Center for Transportation estimates, “if 90 percent of vehicles were self-driving, as many as 21,700 lives per year could be saved, and economic and other benefits could reach a staggering $447 billion.” 
The enhanced communication technologies -- vehicle-to-vehicle, as well as vehicle-to-grid -- also will help lessen congestion in major population centers. Efficiencies of scheduling and delivery promise to increase productivity and reduce carbon emissions across the country. In the end, everyone will benefit. 
Sadly, the benefits may not alleviate the pain of transition, at least not for everyone. Automated vehicles are going to change the way we live, even more dramatically than the robots already replacing skilled humans in manufacturing and service sector jobs, in the USA and across the globe. (And that’s only the tip of the iceberg: an estimated 47% of all US jobs will be lost to the eventual wide growth and deep sector penetration of automation technologies, of which SD-vehicles are but a part.)
Automation means fewer jobs. Many fewer jobs -- which means more hard feelings and more social unrest. It’s an unavoidable dynamic of the new relationship. Even absent any other potential negatives, the unfortunate correlation makes it highly unlikely that, when vying for the pole position with SD vehicles, freedom-of-the-American-road advocates will simply stand down, exiting quietly into the pages of our nation’s history. (If one is looking for analogues, consider Civil War reenactors, on both sides.) 
Releasing robots to the open road will be an automation milestone -- the next step toward a future where machines will provide the labor for almost every imaginable product and service. But not everyone will be happy about it. Emotions will run high, and demagogues of every stripe will assail the public, no matter the cost. (In other words, like now, only much worse.)
In other words, there will be mayhem.
Image: DonkeyHotey

A Nation of Laws
The changing situation on our roads and highways will challenge our already fraying social contract. That self-driving vehicles also will raise new and complex legal questions -- e.g., Who will set the standards for the hardware and software? What will the manufacturer’s liability be? Who’s responsible when an SD-car has an accident, or breaks the law? -- can only make things worse.
It will be up to our governments and judiciary to impose order on the chaos. One would suppose (despite ample evidence to the contrary) these new laws will protect the public and punish bad actors. But laws have limits, and sometimes laws may expose us to danger, too.
To wit: Stanford fellow Bryant Walker Smith predicts SD-cars are going to be legal in the USA because of a principle of freedom that posits “everything is permitted unless prohibited”. Somewhat perversely, by this reasoning, it’s all but certain that legal issues -- more so than technical or social ones -- will be throttling the schedules for at least commercial and private market versions of robot vehicles. (Of course, the military’s schedule is its own.)
There are legitimate questions about what laws (and whose) will apply when the robot rollouts begin in earnest. As this NY Times article points out, vehicle codes haven’t changed all that much since the horse-and-buggy days. Do we simply tack on Asimov’s Three Laws of Robotics and call it a wrap? That won’t work. 
(FYI, a bit of a tangent, but science fiction writer Warren Ellis has formulated an alternative set of three laws. Harshly amusing and instructive, but not for the profanity-averse.)
There are many hazy areas in any system of laws, no matter how well-written and well-intentioned they may be. Clarifying them will demand litigation. Lots of it, and for a long time. (On the plus side, however, law school enrollments are certain to swell; and we can expect similarly robust job growth for the automobile, property and personal injury insurance industries.) 
Robot Ambulances and Ethical Burdens
Because SD-cars will be safer and more efficient, analysts predict insurance premiums will be higher for those who don’t have a robot driver. (At least one forward-thinking insurance company has started an ad campaign.) However, determining who’s liable in an accident won’t be a simple matter, even when video and black-box evidence may be available. 
New laws and codes will have to define the responsibilities of a vehicle's human operator/owner in at least two different domains, inside as well as outside their vehicles. When one adds in expressed or implied manufacturer warranties, recommended vs. mandatory maintenance schedules, computer performance and security concerns, and the operational vagaries of any exceedingly complex electronic system, one begins to appreciate the ambiguities of these new legal grounds. Let’s hope planners are giving more than a passing thought to protecting society from the consequences, when their plans become actions.
Since it’s impossible to write laws that cover every situation, ethics must guide us when laws cannot. Ethics, however, are a reflection of external social norms -- assisted or, in individual cases, negated by a person’s internal moral principles. If you think solving ethical dilemmas is tricky, now, just wait. As this article in The Atlantic points out, autonomous vehicles are likely to come bundled with an entirely new set of puzzles.
For example, when faced with unexpected obstacles human drivers routinely break or bend the rules of the road -- crossing a double-yellow line to avoid a fallen tree; exceeding the speed limit in an emergency; etc. Faced with the same circumstances, robot drivers -- no matter how well-programmed -- might be incapable of behaving in the safe and responsible manner most of us take for granted, from ourselves and from our neighbors.
The why of this is easy to see: computer programmers are no more able to code in appropriate responses for every possible driving situation than lawmakers are to write laws to anticipate every legal question. An SD-car’s GPS link will never include an ethical/moral compass. The artificial intelligence will be programmed with every conceivable scenario, but it won’t be enough. It also must be prepared to respond to the inconceivable.
A robot vehicle’s AI software will need parameters flexible enough to guide it in unforeseen circumstances. Obviously, when a playing child suddenly runs out into the street, an SD-car will be programmed to stop. But will it do the same for the child’s pet? And what if stopping (in either scenario) endangers the vehicle’s occupants? We have a right to expect humans to behave ethically and responsibly. Can we expect the same from robots?
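To make the dilemma concrete, here is a minimal sketch of what such "flexible parameters" might look like in code. Everything in it -- the obstacle categories, the stopping-distance formula, the risk threshold -- is purely hypothetical and illustrative, not drawn from any real SD-vehicle system; it exists only to show how a hard-coded priority rule (always brake for a person) can coexist with a tunable one (brake for lesser obstacles only when the estimated risk to occupants stays below a configurable limit).

```python
# Hypothetical sketch: a rule-based obstacle response with a tunable
# occupant-risk parameter. All names, formulas and thresholds here are
# invented for illustration only.

from dataclasses import dataclass


@dataclass
class Obstacle:
    kind: str          # "child", "animal", "debris", ...
    distance_m: float  # distance ahead of the vehicle, in meters


def choose_action(obstacle: Obstacle, speed_mps: float,
                  occupant_risk_limit: float = 0.3) -> str:
    """Return "brake", "swerve", or "continue".

    A crude priority scheme: people always get a full stop; for
    lesser obstacles, brake only if the (toy) estimate of risk to
    the occupants stays under a configurable limit.
    """
    # Rough stopping distance: reaction time plus braking distance.
    # Purely illustrative numbers, not real vehicle dynamics.
    stopping_m = speed_mps * 1.5 + speed_mps ** 2 / 14.0
    hard_stop_needed = stopping_m > obstacle.distance_m

    if obstacle.kind == "child":
        return "brake"  # a person takes absolute priority

    # Toy estimate: a hard stop at higher speed means higher risk
    # to the vehicle's occupants; a comfortable stop means none.
    occupant_risk = min(1.0, speed_mps / 40.0) if hard_stop_needed else 0.0

    if occupant_risk <= occupant_risk_limit:
        return "brake"
    # Over the risk limit: evade an animal, roll over mere debris.
    return "swerve" if obstacle.kind == "animal" else "continue"
```

The interesting part is not the arithmetic but the `occupant_risk_limit` knob: someone -- a manufacturer, a regulator, perhaps the owner -- has to choose its value, and that choice is an ethical judgment frozen into software.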
Before they open our public roads to autonomous vehicles, planners in government, industry and academia have questions to answer. The stakes are enormous, and they need to get the answers right. Getting there will be difficult, but the end result must include ethical robot drivers and a logical, just code of laws to guide them. 
Then, step back, make popcorn, and cue the lawyers . . .
photo credits: DonkeyHotey via Flickr cc