SXSWorld February 2016

For Driverless Cars, Solving for "X" Means Explaining "Why?"

by Rich Malley

A few years ago, when discussing the work he and his students did building driverless cars, Stanford University mechanical engineering professor Chris Gerdes tended to focus on speed, as illustrated by his 2012 TEDx Talk, "The Future Race Car—150 MPH and No Driver." These days, Gerdes and his students still test their driverless race cars at the track and consult with professional racing drivers, but he also consults with philosophers.

Gerdes is at the forefront of an effort to incorporate human ethics into the programming that controls driverless cars. The push reflects how thoroughly the nuts-and-bolts challenges of developing practical, road-ready autonomous vehicles have been solved. "When we look back," Gerdes says, "there were so many technological questions, so many issues of what sensors do you need, how do we do this, what sort of computing power is necessary, will this work at all? Trying to cooperate with philosophers at that point really wasn't on anybody's mind. You needed [the technology] to get to a certain level of maturity."

Now that autonomous vehicles are close to being ready for prime time, Gerdes is concerned that a key roadblock to their rollout and acceptance could be the lack of an ethical framework guiding the decisions the cars make when sharing the road with vehicles controlled by human beings. Such a delay could mean unnecessary traffic injuries and fatalities. "The safety benefit is certainly my personal motivation for working on this," he explains. "I think that technology has the potential of eliminating the vast majority of the more than 30,000 fatalities that we get each year in the U.S." The current, if scant, accident data support his contention: in virtually all collisions on public roads involving driverless cars, human drivers in other vehicles were at fault.

"We've been pushing to give cars the ability to avoid any accident, but some accidents may be unavoidable," Gerdes says. "The vehicle has no clear trajectory; it will have to collide with something." One scenario he suggests: a driverless car is traveling behind a truck when a heavy box falls from the truck, and the lanes to the left and right are occupied by cars with human drivers. Should the driverless car hit the box, risking injury to its passengers, or swerve to miss it, risking injury to a driver in an adjacent lane?

"We had to begin to confront these situations where there was no good outcome," Gerdes says. "And when we got to that point, it happened I'd also been contacted by some folks in philosophy, including professor Patrick Lin at Cal Poly (California Polytechnic State University), who raised this idea that automated vehicle designers should be thinking about ethics."

The effort isn't so much about refining a car's ability to avoid accidents altogether. Banning human drivers and turning the roads over to always attentive, rigorously law-abiding autonomous vehicles could arguably achieve that almost overnight, but no one predicts that will happen soon. What Gerdes and others in the field are trying to do instead is bake into the programming of autonomous vehicles a rational framework that will help explain the choices the cars make, particularly after the inevitable collisions involving injury, loss of life or property damage.
"One of the things that became very obvious to us," Gerdes says of his early exposure to such ethical exercises as Philippa Foot's "trolley car problem," "is that philosophers like to ask questions and aren't as concerned about getting the answers. Whereas engi- neers are obsessed with answers and solutions, and sometimes don't even ask a question. And the reason that philosophers like to ask the trolley car question is there is no real answer that everybody is going to find acceptable. It provokes a lot of thought." Still, the exercise can only take engineers so far. "I can create all manner of ethical dilemmas, and I can keep making them harder by adding more details and adding more difficulty to the choices," Gerdes says. "But at the end of the day, I have to have a certain amount of software in my car. So I have to find some way to resolve that. We think that it's important to look to philosophy and ethics to say, 'Okay, these are the sort of situations that can arise ... what should the car do, and will that be good enough to be accepted by society?' " "That's really the point of ethically programming," he continues. "For instance, the car may not be able to avoid an obstacle; it may have to collide with something, and we may make a decision to have it collide with another vehicle as opposed to a pedestrian. So that's a design choice, but that's one that can be explained to people in terms of ethics." "There may be situations where that wasn't the best outcome," Gerdes says. "But at least we've been clear that this is the way that the cars think and process that. And that's an important thing to communicate, because people tend to look at these cars and say, 'well, they see everything and know everything, and they're going to be making some optimal decisions which will be best for everybody.' But in reality that's not true." T Chris Gerdes will speak about cars of the future as part of the Intelligent Future track during SXSW Interactive. See sxsw.com/interactive for more details. For Driverless Cars, Solving for "X" Means Explaining "Why?" by Rich Malley A Ch ri s G e rd es
