A car with a (point of) view

By Andrew Burt


Autonomous cars. Few other technology topics these days stimulate such heated reactions (although the new Apple Pencil is certainly getting its share of attention). Reactions to driverless cars run the gamut from excited anticipation to dystopian alarm. It’s not so much fear of the machines taking over that tends to raise our antennae as the fact that we just like to be in charge. We all like to feel that we’re in control of our cars, rather than being shuttled about by an unseen chauffeur.

But how much are we really in control?

Chris Gerdes, director of the Center for Automotive Research at Stanford (CARS), told TU-Automotive Conference attendees that 94% of the automotive accidents that occur annually (over 5 million in the U.S. alone) are preventable because they’re caused by human error – whether from sleep deprivation, intoxication, texting, or simply poor reflexes. Intrinsically, we know that “someone” is at fault when an accident occurs (not us, of course!), so this statistic makes self-driving cars seem like the ideal solution. Are they? The answer isn’t neat and tidy.

Cars that can drive themselves bring to the table a number of challenges that have nothing to do with engineering or technical specifications. There are legal and ethical implications that need to be considered. For example, from a legal standpoint, if an accident does happen (because nothing is 100% foolproof), who is responsible? The owner? The carmaker? Component and/or software providers? In cases where bad roads or hazards were involved, does the state or local government bear the burden? Government policies need to be developed that take autonomous vehicles into account, and funding of future infrastructure projects will no doubt be affected. And soon: many automakers and suppliers have publicly stated plans to put self-driving cars on the road by 2020.

CMOS image sensors – the eyes of a camera – play an important and growing role in enabling autonomous driving. Vehicles may contain as many as 10 cameras when the age of self-driving cars arrives, and Strategy Analytics estimates that global shipments of automotive cameras will more than triple, reaching 102 million units by 2020 (see chart below). With all these sensors comes an increased need for security, and solutions are being developed to minimize the potential for, and impact of, another critical factor: hacking.

Chart: 2020 Automotive Camera Demand (PRNewsFoto/Strategy Analytics)

For most of us to make the decision to invest in a driverless car, we’ll need to be confident it’s been made as impervious to cyber-penetration as possible. In a recent IEEE Spectrum article, a principal scientist with Security Innovation claimed to have found an inexpensive and fairly simple way to spoof the costly light detection and ranging (lidar) systems used by most self-driving cars to identify obstacles. And in a widely reported experiment, hackers working with Wired magazine took remote control of a connected SUV, which ultimately ended up in a ditch. The driver was not injured, but what if some electronics DIY types, tired of building drones, were to try their hand at hacking projects?
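The industry hasn’t published a single recipe for locking all of this down, but one basic building block is making sure the vehicle’s computers only act on sensor data that provably came from the real sensor. As a minimal sketch (assuming a shared key provisioned at manufacture, a made-up frame format, and Python purely for illustration, not any vendor’s actual protocol), a camera module could tag each frame with a keyed hash that the receiving computer verifies before trusting the image:

```python
import hmac
import hashlib
import os

# Hypothetical shared secret provisioned to the camera module and the
# vehicle's central computer at manufacture (key management is hand-waved here).
SENSOR_KEY = os.urandom(32)

def sign_frame(frame_bytes: bytes, counter: int) -> bytes:
    """Camera side: tag each frame with an HMAC over the pixel data plus a
    monotonically increasing counter, so replayed frames can be rejected."""
    msg = counter.to_bytes(8, "big") + frame_bytes
    return hmac.new(SENSOR_KEY, msg, hashlib.sha256).digest()

def verify_frame(frame_bytes: bytes, counter: int, tag: bytes,
                 last_counter: int) -> bool:
    """Receiver side: accept a frame only if the tag checks out and the
    counter is newer than anything seen before (anti-replay)."""
    if counter <= last_counter:
        return False  # stale or replayed frame
    expected = hmac.new(SENSOR_KEY,
                        counter.to_bytes(8, "big") + frame_bytes,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

# A genuine frame passes; a forged frame injected on the network does not.
frame = b"\x00" * 1024  # stand-in for raw image data
tag = sign_frame(frame, counter=42)
assert verify_frame(frame, 42, tag, last_counter=41)
assert not verify_frame(b"\xff" * 1024, 43, tag, last_counter=42)
```

Real automotive security stacks go much further (hardware key storage, secure boot, protected in-vehicle networks), but the principle is the same: raise the cost of injecting fake data between the sensor and the software that acts on it.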

This brings us to the question of ethics. Google “ethics” and “self-driving car,” and you get a host of results – with good reason. While an image sensor-driven camera and CPU are basically analogous to the coupling of the human eye and the brain, they don’t possess the same decision-making skills as humans. Debate therefore abounds as to how to program cars to make split-second decisions.

A great deal of research is being done in this area. As detailed recently in MIT Technology Review, Stanford’s CARS is working on programming vehicles to make real-world types of ethical decisions. Chris Gerdes affirms that carmakers are highly aware of this issue, and that they have programmers actively working on the design of control algorithms that take these challenges into account.

For example, can a machine that’s not supposed to break laws do so to save a life – e.g., cross over a double-yellow line to avoid hitting a bicyclist or pedestrian? What if the scenario is more complex, and no matter the option, someone will be injured, or worse? Do the “needs of the many outweigh the needs of the few,” as Mr. Spock would say?
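One way to make that trade-off concrete, roughly in the spirit of the cost- and constraint-based control approaches researchers like the CARS group describe, is to treat traffic laws as weighted penalties rather than absolute prohibitions, so that breaking a minor rule can “win” when it prevents a collision. The following toy Python sketch is purely illustrative: the maneuvers and cost weights are invented, and a real planner would weigh physics, probabilities, and far more options.

```python
# Toy decision rule: traffic laws are soft costs, hitting someone is a
# near-infinite cost, so the "illegal" swerve is chosen when it is the
# only maneuver that avoids injury. All numbers are invented.
COLLISION_COST = 1_000_000     # striking the cyclist
LANE_VIOLATION_COST = 100      # crossing the double-yellow line

# (maneuver, hits_cyclist, crosses_double_yellow)
candidates = [
    ("stay_in_lane",        True,  False),
    ("hard_brake",          True,  False),  # assume no room to stop in time
    ("cross_double_yellow", False, True),
]

def cost(hits_someone: bool, breaks_lane_law: bool) -> int:
    """Sum the weighted penalties for a candidate maneuver."""
    total = 0
    if hits_someone:
        total += COLLISION_COST
    if breaks_lane_law:
        total += LANE_VIOLATION_COST
    return total

best = min(candidates, key=lambda m: cost(m[1], m[2]))
print(best[0])  # -> cross_double_yellow
```

Of course, the hard ethical questions begin exactly where this toy example stops: who chooses the weights, and what should the car do when every option carries a catastrophic cost?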

We’d like to know what you think. Please feel free to share in the comments section your views on driverless cars and their legal, ethical and other implications for the automotive industry and our world at large.
