Most auto manufacturers and technology companies test self-driving or driverless vehicles on a relatively small scale. Engineers and software developers at major automakers have spent more than a decade working to improve these systems before bringing them to market in phases.
Tesla has taken a different approach. For years, the electric vehicle (EV) company has given customers access to its “Full Self-Driving” beta software, effectively making Tesla owners guinea pigs for the company’s technology. Over the past year, videos posted on social media have shown the technology failing in a variety of situations.
The company is now under investigation by the National Highway Traffic Safety Administration (NHTSA) over several crashes in which its Autopilot system was engaged.
Bryan Reimer, a research scientist at the MIT Center for Transportation & Logistics, leads a team that explores the intersection of human behavior and automated driving features in production and future vehicles.
In a recent interview, he told Granthshala that the push toward autonomous driving is a balancing act between what an autonomous vehicle can do and what a driver is able to do in combination with that technology. The goal is to take some of the more routine driving tasks out of the driver’s hands.
“Humans in their nature, without proper support, become more dependent on automation and often use it beyond the perception of the designer of the system,” he explained.
To counter that aspect of human nature, Reimer says autonomous systems should be viewed as a collaborative part of driving, helping drivers make in-the-moment decisions.
He points to Tesla’s Autopilot and Full Self-Driving products: recent studies show that drivers using those technologies become, on average, less attentive.
Reimer admits that even the best engineering teams in the world cannot account for every variable, such as stationary emergency vehicles, but he argues that repeated incidents like these should raise ethical questions.
“Once we understand a situation it becomes an abuse,” argues Reimer. “We can, at this point, infer the inability of Autopilot to detect stationary emergency vehicles on the side of the road.”
To prevent these kinds of failures from becoming normalized, Reimer favors a more careful approach to autonomous-vehicle testing.
“The standard, to me, requires continuous process improvement,” he said. “It needs to be scientific data validated by third parties.”
Dr. Nicholas Evans agrees. A professor of philosophy at the University of Massachusetts Lowell, he studies the ethical questions surrounding emerging technologies and has been involved in research on autonomous driving.
He and Reimer agree that the uneven nature of autonomous vehicle testing is partly due to the absence of a relevant regulatory environment.
“If an autonomous vehicle was a drug, we would know how to test it,” Evans said in an interview.
Both researchers favor a regulatory body, similar to the Food and Drug Administration (FDA), that could oversee the development of these technologies and ensure they are being tested and used safely. Such an FDA-style body would not only analyze hard data but also set ethical standards for testing.
Injuries or deaths resulting from uncontrolled technology development are a blow to companies and to industries as a whole, Evans said, and the benefits of greater oversight outweigh any slowdown in test cycles.
“One thing the automotive industry knows really well is what happens when you don’t respect consumer safety,” argued Evans. “The people I talk to in the automotive industry remember the Ford Pinto. They remember the Takata airbags. When these things happen, the automotive industry … ends up with many billions of dollars in damage.”
When it comes to the ethical questions surrounding testing, Evans said that this kind of regulation is needed for anything marketed as having social value.
“They are marketed as interventions,” he insisted. “Tesla doesn’t just say ‘Having an autonomous vehicle would be good for you.’ Tesla says having autonomous vehicles on the road is good for everyone because it will make us safer and more efficient.”
One of the most prevalent questions is whether the public has a right to know when autonomous vehicles are being tested in their area.
According to Dr. Heidi Furey, a philosophy professor at Manhattan College who has researched the ethical implications of autonomous vehicles, whether automakers should tell the public about ongoing tests is a gray area.
“It’s really hard for new and emerging technologies because it’s not always clear that the public can fully understand what the technologies are,” she said. “They are really subject to what philosophers would call risk aversion, where people tend to overestimate risks that are emotionally dominant and underestimate risks that are more mundane.”
This makes it difficult to obtain informed consent for testing in a geographic area. Evans says community engagement and education are the best ways to address the issue, but warns that warning stickers slapped on test cars could have adverse consequences, such as an observer effect that disrupts testing.
“I think the public has a right to know these things are being tested,” he said. “But they don’t need to know which cars are test cars on any given day.”
During development and testing, Furey says engineers would do well to consider the trolley problem, a classic thought experiment designed to probe ethical dilemmas: if a runaway trolley were headed toward five people on the track, would you let it kill them, or divert it to a side track where it would kill just one person?
Sooner or later, autonomous vehicles will have to choose between imperfect options, Furey contends.
“We have to decide how we’re going to make the best of a bad situation,” she said. “Do we only care about the number of lives? Do we care about age or are they following the law or not?”
She says the trolley dilemma is a good starting point for uncovering the “morally sticky” parts of a situation, but that it needs to be paired with a broader conversation about moral beliefs and about the mundane parts of everyday driving decisions.
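To make those questions concrete, the value judgments above can be sketched as a toy cost function. This is purely illustrative and reflects no real vehicle’s planner: the `Outcome` fields, the `law_weight` parameter, and the `choose` helper are all hypothetical devices for showing how a single tunable weight (here, discounting harm to people violating traffic law) can flip a decision.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible result of an unavoidable-collision maneuver (hypothetical)."""
    lives_lost: int
    victims_breaking_law: bool  # e.g., pedestrians crossing illegally

def outcome_cost(o: Outcome, law_weight: float = 0.0) -> float:
    """Score an outcome; lower is preferred.

    With law_weight = 0, only the body count matters (pure
    lives-saved accounting). A positive law_weight discounts harm
    to people who were breaking traffic law -- one of the contested
    value judgments the trolley problem surfaces.
    """
    cost = float(o.lives_lost)
    if o.victims_breaking_law:
        cost *= 1.0 - law_weight
    return cost

def choose(outcomes: list[Outcome], law_weight: float = 0.0) -> Outcome:
    """Pick the lowest-cost outcome under the chosen moral weights."""
    return min(outcomes, key=lambda o: outcome_cost(o, law_weight))

# Classic trolley setup: five lives versus one.
stay = Outcome(lives_lost=5, victims_breaking_law=False)
swerve = Outcome(lives_lost=1, victims_breaking_law=False)
print(choose([stay, swerve]).lives_lost)  # counting lives alone favors swerving

# The same situation with legality weighed in: a nonzero law_weight
# can reverse the verdict, which is exactly why the weights themselves
# demand public ethical debate rather than quiet engineering defaults.
jaywalkers = Outcome(lives_lost=2, victims_breaking_law=True)
bystander = Outcome(lives_lost=1, victims_breaking_law=False)
print(choose([jaywalkers, bystander], law_weight=0.6).lives_lost)
```

The point of the sketch is not that a planner should work this way, but that any such system embeds weights someone had to pick; the trolley problem asks who picks them and on what grounds.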
For Reimer, these conversations between public and private entities about testing and ethics are lacking. The absence of policy guidance from the federal government leaves a void in which substandard testing can flourish.
“We need to have a much more difficult conversation than anyone is comfortable with,” he said.
He believes that the lack of government involvement, coupled with a similar lack of transparency from industry, could eventually provoke a wave of regulations that hinder the development of autonomous technology.
“Demonstrating that you can operate in smaller environments before you’re allowed on public roads is huge,” Reimer said.
But those conversations, according to Evans, can obscure the bigger moral picture. While the discussion currently focuses on getting the testing right, there should be more dialogue about the ethical implications of putting more self-driving vehicles on the road and about who bears the unintended consequences.
Furey, for her part, is most concerned that ethical questions will go unexamined until autonomous-vehicle innovation takes off, when it is too late for the public to care. There is emotional investment now because the technology is new, but it fades once autonomous vehicles become an everyday fact of life.
“This is going to be the biggest challenge,” she said. “When it stops sounding new and interesting, how do we still do the work we need to do ethically?”