Hackers Can Control Your Car’s Brakes, Doors, Steering—Car Makers Can't Stop Them | Kathleen Fisher
We're hearing a lot about the Internet of Things, how your many, many devices are becoming networked computers. Many of these devices are $10 things that you buy, put on a shelf, keep for a year, and throw away. I think not a lot of attention is being paid to the security of those kinds of devices.
In some sense, the companies that are making them can't afford to invest in security, so these devices can carry long-standing vulnerabilities. The automotive industry is another interesting example. A typical modern American automobile has somewhere between 30 and 100 of what are called electronic control units, or ECUs. An ECU is just a computer. Some of them are very, very small and run very simple code directly on the hardware; some of them are full-blown Linux or Windows computers, and they're networked. A modern car has four or five network connections through which the computers on the car talk to computers outside the car.
For example, there's a Bluetooth connection so that your cell phone can talk to the car: you can play music from your phone through the car and talk on the phone without having to use your hands. There's also a telematics unit, the component that, if you get in an accident, will arrange to call 911 or have the paramedics come. That service, which is really useful and a great safety feature, means that your car has a cell phone number and that it's possible to communicate with your car over that cellular connection.
Hackers can use those network connections to remotely break into the computer system on your car, and white-hat hackers have shown they can do exactly that. They can then rewrite any of the software on the car, replacing the code that was legitimately put there by the manufacturer with whatever code they want. In a typical modern car, pretty much all of the car's functionality is now controlled by software.
Braking is controlled by software because you really want anti-lock braking. Acceleration is controlled by software because of cruise control, and if you want a car that can parallel park for you, steering has to be under software control as well. The locks are under software control so that you can push the key fob button and have the doors unlock. Essentially, all of the functionality in cars is under software control, and for the most part that's a really good thing.
Having it under software control means you can get increased functionality and improved safety features, and you get upgrades as the car companies figure out how to do things better. All of that is really good. The downside is that, because it's controlled by software, if an attacker can come in and replace that software, then they control the braking, the acceleration, the locks, and everything else that was under software control.
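To make that concrete, here is a toy sketch, in C, of the kind of logic a door-lock controller might run. Everything in it is invented for illustration: the frame layout, the command ID, and the actuator function are assumptions, not any manufacturer's real interface. The point is simply that the controller acts on whatever well-formed commands reach it, so whoever supplies the code controls the doors.

```c
/*
 * Toy illustration (not real automotive code): a door-lock controller that
 * acts on whatever frames arrive on its in-vehicle network. The frame ID,
 * payload layout, and actuator call are made up for this sketch.
 */
#include <stdint.h>
#include <stdio.h>

#define LOCK_CMD_ID 0x2F0u           /* hypothetical command identifier */

struct frame {                        /* simplified stand-in for a CAN frame */
    uint32_t id;
    uint8_t  len;
    uint8_t  data[8];
};

static void actuate_locks(int lock)   /* stand-in for driving the lock motor */
{
    printf("locks %s\n", lock ? "engaged" : "released");
}

static void handle_frame(const struct frame *f)
{
    /* The controller trusts any well-formed frame: there is no notion of
     * who sent it. Whoever controls the software controls the doors. */
    if (f->id == LOCK_CMD_ID && f->len >= 1)
        actuate_locks(f->data[0] != 0);
}

int main(void)
{
    /* Simulated traffic: a legitimate "lock" command followed by an
     * "unlock" command that could just as well come from injected code. */
    struct frame traffic[] = {
        { LOCK_CMD_ID, 1, {1} },
        { LOCK_CMD_ID, 1, {0} },
    };
    for (unsigned i = 0; i < sizeof traffic / sizeof traffic[0]; i++)
        handle_frame(&traffic[i]);
    return 0;
}
```

A real ECU talks to a CAN bus and drives physical hardware, but the trust structure is the same: the software decides, and replaced software decides differently.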
We're starting to see theft rings, for example, that are using electronic hacking to steal cars more easily. Lloyd's of London recently stopped insuring Land Rovers in England unless they were garaged in a locked facility, because they were being stolen too frequently. So that's the current state of the automotive industry.
The question is: why isn't it better? One starting point is that good security is really hard to get; you have to do a ton of things right, and it costs money. The car industry could improve the security of their cars, and hopefully they eventually will. But that improvement will cost them money, and the car industry doesn't have huge profit margins; they can't really afford to invest in security unless they can recoup the cost of that investment by passing it on to the consumer.
That means the price of the car is going to be higher. Then why is the consumer going to buy the car that costs more than the equivalent car from a different manufacturer? Typically the answer is advertising: you explain to the consumer why they're getting more value for the extra cost. The problem is that if you imagine a car company starting an ad campaign to explain that its cars are more cyber-secure, most consumers these days probably think their car is already cyber-secure.
They didn't realize that their car could be hacked into at all. So the result of such an advertising campaign could, in fact, be to make people afraid to buy any new car whatsoever, rather than persuading them to buy a particular one. I think an advertising approach to motivating consumers to pay slightly more for a particular car is not really viable. That means all of the companies have to do it at the same time.
They all have to make this extra investment, so the price of all cars goes up by a little; then there's no longer that differentiation between manufacturers, and consumers would be choosing among cars that all have roughly the same level of security. Going back a bit, another reason the car companies can't advertise on security: suppose one car company actually did invest a ton in making its cars more secure, and then advertised that its cars were more secure.
That's painting a big target on your back as far as the hacker community is concerned. Certain individuals would take it as a challenge. You get some amount of credibility for hacking into any car, but you get way more credibility if you find a vulnerability in the car from the company that's advertising its cars as secure, and then publish it. All of a sudden, that car is less secure than other cars that might actually have more vulnerabilities, because its vulnerability is public; people know it exists and can therefore exploit it.
So it's this weird situation where, although the car is more secure in some absolute sense, in a practical sense it's less secure because there's a publicly known vulnerability. Advertising is bad both because it could scare consumers away entirely and because it could direct hackers to your specific car, decreasing that car's security in practice. As a result, we're left with the question of how you get the car industry as a whole to produce more secure cars, and that requires some kind of external motivation.
They might decide to do it because it's the right thing to do, but typically companies are motivated by financial considerations; they usually can't afford to do things just because they're the right thing to do. In terms of outside forces, one would be government regulation.
Cars many years ago were very unsafe; Ralph Nader famously wrote "Unsafe at Any Speed," and that prompted federal regulation. The five-star crash safety rating system, for example, created a regime where cars were tested for safety and given scores, and consumers could then use those scores in making their decisions. I think creating such a regime for cyber security in the current political climate is unlikely.
Another possibility is that the insurance companies might start to impose financial pressure on the car industry. We saw that with Lloyd's of London refusing to insure Land Rovers that weren't garaged in locked facilities because of electronic theft. If the insurance companies start to notice that certain kinds of cars are getting hacked into, with negative consequences in the form of theft or accidents, they might raise the insurance premiums on that particular brand.
Those premiums might then drive consumers toward cars that don't have those problems, and that might in turn motivate the car industry to improve security. It's a long chain, but at the moment that's my best guess as to how we'll get better cyber security in our cars.
I think the car industry is, in some sense, representative of many other industries. Medical devices are another domain: things like pacemakers and insulin pumps are relatively simple computer systems that are networked to other computers. It makes sense for a pacemaker to have a Wi-Fi connection so a doctor can monitor how the heart is doing.
But once you have that Wi-Fi connection, a hacker can use it to go in and modify the code in the pacemaker. There are lots of examples of industries where things that previously weren't computers at all are now not only computers but networked computers. Over time, those industries will need to accept that their products are networked computers and start to apply security techniques so that the systems are more secure.
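As one concrete example of the kind of security technique involved, a device can refuse to install a firmware image that isn't cryptographically signed by its manufacturer. The sketch below is only an illustration under assumptions: it relies on the libsodium library's Ed25519 signature check, and the function and parameter names are made up rather than taken from any real product.

```c
/*
 * Sketch: accept a firmware image only if it carries a valid detached
 * Ed25519 signature from the vendor's key. Assumes libsodium; names are
 * hypothetical, not any vendor's actual update path.
 */
#include <sodium.h>
#include <stddef.h>
#include <stdio.h>

/* Returns 0 if the image may be installed, -1 otherwise. */
int firmware_ok(const unsigned char *image, size_t image_len,
                const unsigned char sig[crypto_sign_BYTES],
                const unsigned char vendor_pk[crypto_sign_PUBLICKEYBYTES])
{
    if (sodium_init() < 0)
        return -1;                 /* crypto library failed to initialize */

    /* Reject any code that wasn't signed with the vendor's private key. */
    if (crypto_sign_verify_detached(sig, image, image_len, vendor_pk) != 0)
        return -1;

    return 0;
}

int main(void)
{
    unsigned char pk[crypto_sign_PUBLICKEYBYTES], sk[crypto_sign_SECRETKEYBYTES];
    unsigned char sig[crypto_sign_BYTES];
    const unsigned char image[] = "pretend firmware image";

    if (sodium_init() < 0)
        return 1;
    crypto_sign_keypair(pk, sk);                        /* stand-in vendor key */
    crypto_sign_detached(sig, NULL, image, sizeof image, sk);

    printf("signed image accepted: %s\n",
           firmware_ok(image, sizeof image, sig, pk) == 0 ? "yes" : "no");

    sig[0] ^= 0xFF;                                     /* corrupt the signature */
    printf("tampered image accepted: %s\n",
           firmware_ok(image, sizeof image, sig, pk) == 0 ? "yes" : "no");
    return 0;
}
```

Signed updates alone don't make a device secure, but they close off the specific path of an attacker silently replacing the code that controls it.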