The headlines all read: “Self-driving cars are here! People want autonomous cars!”
Or, more accurately: “Self-driving cars are coming sooner than we think – whether we like it or not!”
What those headlines, and the accompanying stories, rarely mention are the unintended consequences of vehicle autonomy. Those consequences include a greater chance of someone hacking into your car, and the possibility that software intended to make your car safer may actually be making it more dangerous.
Here’s where it all starts. Autonomy is not a feature you add to the option list, like an alpaca interior or candy-flake paint. Creating an automated car means taking existing driver-assist features like adaptive cruise control, blind-spot monitoring, and lane centering and adding a significant step up in vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V) technology. At the core of the new tech are sensors and communications software that seek to replicate what used to be the driver’s job: keeping track of what other cars are doing, along with traffic lights and signs.
That connective technology opens the modern car up to a threat that barely existed only a few years ago: hacking. Just like your computer or smartphone, the car, with its dozens of onboard computers, becomes more vulnerable the more it communicates with the outside world.
Two major remote hacks by professionals (the first on a Jeep last year, the most recent on a European-market Mitsubishi) have accentuated the risks. The Jeep’s hackers went through the car’s Uconnect infotainment system to take control of its accelerator, brakes, radio, and other features. FCA (Jeep’s parent company) recalled all of the Grand Cherokees and other vehicles with the vulnerability and even sent customers a USB drive with software to patch it.
More recently, a pair of British hackers took control of a Mitsubishi. What’s significant is that it was a plug-in hybrid model, so it featured more sophisticated hardware and software than some other models on the market. That sophistication served the car’s owners, but it didn’t slow the hackers down. They found the car’s Wi-Fi access point was secured with a key in a format they called “too simple and too short.” Once in, they were able to change the charging settings for this plug-in car and also turn off the alarm system, among other things.
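Why does “too simple and too short” matter so much? Because the effort needed to guess a key is plain arithmetic: every extra character and every larger character set multiplies the number of possibilities an attacker must try. The sketch below is purely illustrative; the key lengths, character sets, and guess rate are assumptions for the sake of the example, not figures from the Mitsubishi disclosure.

```python
# Back-of-the-envelope brute-force estimate for a Wi-Fi pre-shared key.
# All parameters here are illustrative assumptions, not details from the
# actual Mitsubishi hack.

def brute_force_days(key_length: int, charset_size: int, guesses_per_second: float) -> float:
    """Worst-case days needed to try every possible key."""
    keyspace = charset_size ** key_length
    return keyspace / guesses_per_second / 86_400  # 86,400 seconds per day

# A short key from a small character set vs. a longer key from a larger one,
# assuming an attacker can test one million candidate keys per second.
print(f"8 lowercase letters:  {brute_force_days(8, 26, 1e6):,.1f} days")
print(f"12 mixed characters:  {brute_force_days(12, 62, 1e6):,.0f} days")
```

Under those assumed numbers, the short, simple key falls in a couple of days while the longer, mixed key would take tens of billions of days; the point is the gap, not the exact figures.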
Here’s where the story gets interesting. The hackers tried to contact Mitsubishi to let the company know about the security breach and what they thought was needed to fix it. They were rebuffed, as has happened with some earlier hacks. But as cars have gotten more complicated and hacks more common, some car companies are taking a different approach to security breaches. Borrowing from Silicon Valley software companies’ approach to software security, FCA is offering hackers a reward for pointing out shortcomings in the millions of lines of code found in new cars.
This is the start of a new era: crowd-sourcing solutions for the increasingly complex innards of the modern automobile. But it also points to another potential problem for the modern car owner. In the same way that car companies (led by Tesla, of course, given its Silicon Valley roots) are looking to owners for technical advice, they appear to be moving toward more beta testing of new software on customers.
Using customers to beta test software is old hat (how long did it take Google’s Gmail to drop the “beta” from its logo?), but software and apps on your smartphone are one thing. I’d argue that safety- or security-related software in a car is not something to put out there before it has been thoroughly vetted. That becomes even more critical as cars move toward autonomy and the stakes rise. The fatal crash of a Tesla running in Autopilot mode is one proof point. It’s not a question of whether these semi-autonomous cars are safer than their manually driven counterparts; I’ll concede that in a heartbeat. But I also don’t think there is any reason for drivers and passengers to be subjected to software that is anything less than as secure and as safe as possible.
Some auto companies appear to be aware of this, pushing back release dates for their autonomous technology or stressing that full autonomy is not the intent. General Motors said as much when it recently discussed the upcoming introduction of Cadillac’s Super Cruise.
We need software standards worthy of the ideals we expect our automotive engineers to uphold throughout the rest of the car. We need to aim higher than simply expecting customers to point out the failings of our slick infotainment and autonomy systems. Maybe more hackers need to be in the employ of auto companies, not just working as freelance bug finders.