Newswise — In acts of terrorism, vehicles have been deployed as killing machines. Those incidents involved human operators, but another sinister possibility looms: a vehicle cyber hack intended to cause human harm. While this kind of terrorist attack has not yet occurred, security researchers have demonstrated how hackers could gain control over car systems such as the brakes, steering and engine.

Using machine learning techniques, American University Computer Science Professor Nathalie Japkowicz and her co-authors, Adrian Taylor of Defence R&D Canada and Sylvain Leblanc of the Royal Military College of Canada, designed a way to detect unusual activity in a car’s computer system. Because unusual activity can signal a cyberattack, the findings have implications for the search for tools to respond to cybersecurity threats in vehicles.

“We are catching abnormal activity on the network which needs to be analyzed further. We just know that it is different from usual and may lead to a dangerous situation,” said Japkowicz. At AU, the computer science department’s current focus in cybersecurity research is “vulnerability management,” or detecting attacks during normal system operations. Computer science research into cybersecurity threats to vehicles plays an important role in informing how the automotive industry and national governments will address cyberattacks.

Car computer network systems have many vulnerabilities. They are made up of many small, linked computer units (electronic control units) that communicate with each other. Newer car models can be hacked through wireless and cellular connections, and even USB and iPod ports provide access points for a potential hack. Newer vehicles are especially vulnerable because they are constantly connected to the internet.

A cyberattack will disrupt the normal patterns of a car. Japkowicz and her co-authors focused on detecting anomalies in the computer network system, the point at which a hacked car is made to do things differently from what it is programmed to do. One example, Japkowicz explains, is a message on the network changed from "steer right" to "steer left."

To detect unusual activity, the researchers experimented with two machine learning techniques, called Long Short-Term Memory and Gated Recurrent Units, to learn a car’s normal data patterns, Japkowicz explains. In particular, they used these techniques to analyze network data from a 2012 Subaru Impreza driven in traffic and under a variety of conditions.
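To make the idea concrete, here is a minimal sketch of a prediction-based detector of this kind, written in Python with PyTorch. The window length, network size, training details and data layout are illustrative assumptions, not figures from the published experiments.

```python
# Hedged sketch: a recurrent network learns to predict the next CAN data field
# from a window of recent frames; frames with unusually large prediction error
# are treated as suspicious. All sizes and names below are assumed for illustration.
import torch
import torch.nn as nn

WINDOW = 20      # number of past frames the model sees (assumed)
N_BYTES = 8      # a CAN data field carries up to 8 bytes, scaled here to [0, 1]

class CanPredictor(nn.Module):
    """LSTM that reads a window of CAN data fields and predicts the next one."""
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=N_BYTES, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, N_BYTES)

    def forward(self, x):                 # x: (batch, WINDOW, N_BYTES)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # prediction for the next frame

def train(model, windows, targets, epochs=10):
    """Fit the predictor on attack-free traffic only, so it learns 'normal'."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(windows), targets)
        loss.backward()
        opt.step()
    return model

def anomaly_scores(model, windows, targets):
    """Per-frame anomaly score = prediction error; high error suggests an attack."""
    with torch.no_grad():
        return ((model(windows) - targets) ** 2).mean(dim=1)
```

Swapping nn.LSTM for nn.GRU in the sketch gives the Gated Recurrent Unit variant the researchers also tested.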

Both techniques involved a “neural network” trained on normal traffic patterns so that it could recognize anomalies. Neural networks, a core component of machine learning, are computer algorithms that learn from data inputs in a way analogous to how humans learn through neural processes. Japkowicz explains how an artificial neural network works: information gets transmitted from artificial neuron to artificial neuron through highly parallel connections that get stronger and stronger as similar patterns are observed.
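In practice, “recognizing anomalies” comes down to a decision rule applied to the trained network’s prediction errors. The sketch below shows one simple, assumed rule, a percentile cutoff learned from attack-free traffic; the authors’ actual alert criterion may differ.

```python
# Hedged sketch: once the network has learned normal traffic, a simple cutoff
# turns its prediction errors into alerts. The percentile used here is an
# illustrative choice, not the authors' published criterion.
import numpy as np

def fit_threshold(normal_scores, percentile=99.9):
    """Pick a cutoff so that nearly all attack-free frames fall below it."""
    return np.percentile(normal_scores, percentile)

def flag_anomalies(scores, threshold):
    """Return indices of frames whose error exceeds the normal-traffic cutoff."""
    return np.where(np.asarray(scores) > threshold)[0]
```

Because the cutoff is set from normal traffic alone, the detector never needs examples of attacks during training.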

The researchers created an attack framework that allowed them to test a wide range of cyberattacks representative of real ones. They did so by reviewing every published example of vehicle cyberattacks they could find and integrating these examples and their generalizations into the framework. In doing so, they not only created a robust framework within which to test their own work; they believe other researchers working on vehicle security can benefit from it as well, Japkowicz said.
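As an illustration of what such a framework does, the sketch below injects a simple spoofed-message attack into a recorded traffic log so a detector can be evaluated against it. The message ID, frame window and payload are hypothetical, and the published framework covers a broader catalogue of attack types.

```python
# Hedged sketch: emulate one attack style by overwriting the data field of a
# chosen message ID within a window of a recorded CAN log. Frame layout
# (dicts with "id" and "data" keys) and all parameters are illustrative.
import copy

def inject_field_overwrite(frames, target_id, start, end, payload):
    """Replace the data field of `target_id` frames between indices start and end."""
    attacked = copy.deepcopy(frames)
    for i in range(start, min(end, len(attacked))):
        if attacked[i]["id"] == target_id:
            attacked[i]["data"] = payload          # e.g. a spoofed steering value
    return attacked

# Example (hypothetical ID and window): spoof message 0x0D1 during frames 5000-5200.
# attacked_log = inject_field_overwrite(can_log, 0x0D1, 5000, 5200, b"\xff" * 8)
```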

Although every car model is different, the findings should apply to other vehicles because cyberattacks on different models follow similar patterns, Japkowicz said. In the future, she will test the detection system in other cars and refine the technique, making the neural network smarter with more varied data inputs.

The research was published in a special issue on data mining and cybersecurity of IEEE Intelligent Systems.