The Cybersecurity Risks of Self-Driving Cars

By guest writer David Lukić

Defining what a self-driving car is provides the context needed to discuss the looming cybersecurity issues. A “self-driving car” is a type of “autonomous vehicle”: a vehicle that can navigate roads and interpret traffic-control devices without a human driver actively involved in its control systems.

Self-driving cars could reduce road accidents, commuting time, and the environmental impact of road travel. The idea continues to astonish even the brightest of us. A 2017 Business Insider report projected that ten million self-driving cars would be on roads worldwide by 2020. While many aspects of these revolutionary vehicles are still being tested, many agree that autonomous vehicle technology can deliver immense benefits to society, including but not limited to economic productivity and reduced urban congestion.

But there are naysayers, some of whom are corporate risk managers and potential consumers. Their primary concern is not the ability of these vehicles to provide a comfortable ride. Instead, it’s about their cyber safety or cybersecurity. So yes, the fleet of the future hasn’t quite addressed all of its security problems.

Cybercriminals could hijack the electronics of an autonomous car with the intent of causing a crash. There’s also the dire possibility of these cars becoming remotely controlled weapons or exposing too much personal data to attackers.

Why Are Self-Driving Cars Vulnerable to Hackers?

Engin Kirda is a systems, software, and network security professor who holds joint appointments in the College of Computer and Information Science and the College of Engineering at Northeastern University. In assessing the cybersecurity risk of self-driving cars, he focuses mainly on how carmakers are approaching the cybersecurity question for self-driving vehicles. It goes beyond simple spyware and Trojan attacks to more sophisticated schemes built on robust malware.

Professor Kirda submits that a self-driving car’s vulnerability to hacks depends on the type of car and how it interfaces with the world around it. For example, a car that leverages the cloud for computations, requires internet connectivity, or relies solely on external sensors to make decisions is high-risk in terms of cybersecurity.

One would expect any computerized system with a significant connection to the outside world to have a high “hackability” index. A highly hackable system is a good candidate for ransomware attacks. The more complex software is, the easier it is for bugs to creep in, and those bugs may include security vulnerabilities that attackers can exploit. Hackers can also trick a car’s sensors if they find enough bugs to do so.

We can illustrate this with a road sign that looks like a stop sign to humans but appears to the car to be something different. There’s empirical evidence demonstrating such sleight of hand against machine learning systems, especially when attackers can manipulate their inputs.
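The core idea behind such an “adversarial example” attack can be sketched on a toy model. The following is an illustrative sketch only (no vendor’s actual code; the weights and features are made up): a tiny, targeted nudge to each input feature, imperceptible on its own, flips a linear classifier’s decision, which is the same mechanism that lets a doctored sign fool a far larger neural network.

```python
def classify(features, weights, bias):
    """Toy linear classifier: returns 'stop' if the score is positive."""
    score = sum(f * w for f, w in zip(features, weights)) + bias
    return "stop" if score > 0 else "speed-limit"

def adversarial_nudge(features, weights, epsilon):
    """Fast-gradient-sign-style perturbation: move every feature a small
    step against the weight that supports the current classification."""
    return [f - epsilon * (1 if w > 0 else -1)
            for f, w in zip(features, weights)]

weights = [0.9, -0.4, 0.7]     # hypothetical learned weights
bias = -0.5
sign_pixels = [0.8, 0.2, 0.6]  # hypothetical features of a stop sign

print(classify(sign_pixels, weights, bias))   # prints "stop"
perturbed = adversarial_nudge(sign_pixels, weights, epsilon=0.3)
print(classify(perturbed, weights, bias))     # prints "speed-limit"
```

Each feature moved by at most 0.3, yet the label flipped; against an image classifier the analogous perturbation can be a few stickers on a physical sign.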

What Are the Risks Associated with the Vulnerability?

Vulnerability to hackers is a potential landmine for motorists if not promptly and adequately dealt with.

Self-driving cars rely on artificial intelligence systems that use machine learning techniques to collect, analyse, and transmit data. That data is needed to make the decisions a human driver would make in a regular car. But the IT systems involved are prone to risky vulnerabilities.

The AI systems of a self-driving car are always on: they recognize traffic signs and road markings, identify vehicles, determine their speed, and plan the journey ahead. As expected, there are unintentional threats such as unexpected malfunctions. But the vulnerability of these systems to intentional, malicious attacks is well known. These attacks aim to disrupt the AI system and interfere with safety-critical functions.

Some risks include painting markings on the road to misguide navigation or overlaying stickers on a stop sign to obscure its meaning. Once the AI system begins to misidentify objects, it will classify them wrongly and cause the self-driving car to behave in potentially dangerous ways.

What Are the Major Concerns Regarding the Cybersecurity of Self-Driving Cars?

When researchers demonstrated in 2015 that a Jeep could be hacked over the internet, most stakeholders were wise enough to treat the issue as more than a one-off. Regular cars have well-established security issues, but security for self-driving cars is a whole other world.

Hackers who access and control the sensors of a self-driving car could cause massive incidents on roads. Knowledgeable attackers can employ malware or breach network security to trick a car into going in an unintended direction, accelerating, or disabling its anti-collision systems. Such security breaches can hand control of the vehicle to anyone.

Since the internet is available on any mobile phone, it’s necessary to view every rider in a self-driving car as a potential threat. If a rider simply plugs an internet-connected device into the car’s OBD-II port, they can gain the control they need.

Computer security and information security are inseparable from such technology, so automakers are wary of cybercriminals that could potentially stall the progress of self-driving cars.

If motorists don’t feel safe taking self-driving cars because of the potential cybersecurity downsides, there’ll be little incentive for manufacturers to invest in their innovation and production.

Many will echo the position of Professor Kirda, who feels the technology will mature around 2027. Until then, he would prefer a hybrid car that is self-driving but allows a human to take over if the situation becomes tricky. It’s an interesting scenario, so when you decide to buy a car, make sure you understand the pros and cons of your decision.

What Are Companies Doing to Tackle This Problem?

General Motors CEO Mary Barra says a cyber incident doesn’t have isolated consequences for one automaker; it is a problem affecting every automaker in the world. Research supports her position with several effective hacks against current vehicles. More than five years ago, a team of University of California, San Diego researchers published a series of papers demonstrating hacks that activated the brakes while a car travelled. Charlie Miller repeated the feat at Black Hat USA 2015.

In response, automakers are actively pursuing defensive techniques to prevent attacks against their systems. Keeping security vulnerabilities to a minimum makes software harder to hack, so carmakers wisely channel their efforts into designing reliable and secure systems. In a move reminiscent of passenger airplanes, modern self-driving cars would run on multiple distinct computer networks, so that compromising one network does not affect the car’s other, safety-sensitive networks.
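The segmentation idea above can be sketched as a gateway between networks. This is a minimal illustrative sketch, not any automaker’s design, and every message-type name in it is hypothetical: a gateway sits between the low-trust infotainment network and the safety-critical drive network, and forwards only an explicit allow-list of message types.

```python
# Hypothetical allow-list: the only message types the infotainment
# network may pass to the safety-critical drive network.
ALLOWED_TO_DRIVE_NET = {"climate_request", "nav_destination"}

def gateway_forward(message):
    """Forward a message across the segment boundary only if its type
    is on the allow-list; everything else is dropped."""
    if message["type"] in ALLOWED_TO_DRIVE_NET:
        return ("forwarded", message)
    return ("dropped", message)

# A benign request crosses the boundary...
print(gateway_forward({"type": "nav_destination", "payload": "home"}))
# ...but a compromised infotainment unit cannot inject brake commands.
print(gateway_forward({"type": "brake_command", "payload": "engage"}))
```

The design choice here is default-deny: a compromised entertainment system can ask for a destination change but has no path by which to reach the brakes, which is exactly the property the airplane analogy is after.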

The automotive industry is proactive in this regard and has created a series of Automotive Cybersecurity Best Practices. The Automotive Information Sharing and Analysis Center (Auto-ISAC) issued the Automotive Best Practices to guide the implementation of a previous “Enhance Automotive Cybersecurity” Principle by independent auto manufacturers.

The Automotive Best Practices address the organizational and technical components of automotive cybersecurity. They include collaboration with appropriate third parties, governance, incident response training, risk management, security by design, and, quite importantly, threat detection.

In addition, the Automotive Best Practices help participating members improve the security of self-driving cars and similar vehicles by managing cybersecurity at the product level.

Government regulation is an essential piece in the jigsaw of hacker prevention. However, even though more than two-thirds of US states have some type of legislation in place for autonomous vehicles, cybersecurity remains on the back burner.

The Department of Transportation recently released guidelines for the development of self-driving cars. Professor Kirda argues that regulation alone might not be enough to solve the security problems of self-driving vehicles. He advocates government involvement, but urges the government to ensure that manufacturers adhere to predefined secure coding practices and implement specific security precautions.

What Can You Do To Prevent Hacking?

Cybersecurity experts and malicious hackers have been at war since the early days of connected computing. The activities of hackers have plagued the digital landscape for years, and they have doubtless slowed what should have been explosive adoption in the self-driving car industry.

Various automakers are trying to prevent hacking in unique ways. Still, we need to be clear that it’s impossible to prevent everyone from probing a self-driving car system for exploitable vulnerabilities.

Some automakers have opted, through incentive programs, to pay big money to anyone who can hack their car’s AV system. Tesla has done this by putting its cars up as targets in the Pwn2Own hacking competition, and Fiat Chrysler’s similar Bug Bounty Program pays $1,500 per hack. The money can motivate many would-be hackers, and it allows automakers to identify, analyse, and fix vulnerabilities in their systems.

Beyond testing and analysis, carmakers are still in the infancy of assessing how secure their systems are. Moreover, the small number of self-driving cars on the road doesn’t provide a pool robust enough to determine how a typical threat actor might approach a hack job.

Final Thoughts

Self-driving car technology is growing by leaps and bounds. While the cybersecurity issues appear daunting, this ENISA (European Union Agency for Cybersecurity) report recommends regular assessment of AI components throughout their lifecycle. Such systematic validation of artificial intelligence models and data is necessary to build cars that make the right decisions even in the face of unexpected situations or malicious attacks.

David Lukić is an information privacy, security, and compliance consultant at IDstrong.com. His passion for making cybersecurity accessible and interesting has led him to share all the knowledge he has.
