A group of hackers has managed to trick Tesla’s first-generation Autopilot into accelerating from 35 to 85 mph with a modified speed limit sign that humans would be able to read correctly.
Hackers at McAfee Advanced Threat Research conducted the experiment.
They explain what they set out to do in a blog post:
McAfee Advanced Threat Research (ATR) has a specific goal: identify and illuminate a broad spectrum of threats in today’s complex landscape. With model hacking, the study of how adversaries could target and evade artificial intelligence, we have an incredible opportunity to influence the awareness, understanding and development of more secure technologies before they are implemented in a way that has real value to the adversary.
With that in mind, they decided to target MobilEye’s camera system since it’s deployed in over 40 million vehicles, including Tesla’s first-generation Autopilot vehicles, which were used for this specific experiment.
They set out to modify speed limit signs in ways that a human would still read correctly but that could confuse the automated system:
Ultimately, they were able to make a Tesla vehicle on Autopilot accelerate to 50 mph over the posted limit:
The ultimate finding here is that we were able to achieve the original goal. By making a tiny sticker-based modification to our speed limit sign, we were able to cause a targeted misclassification of the MobilEye camera on a Tesla and use it to cause the vehicle to autonomously speed up to 85 mph when reading a 35-mph sign. For safety reasons, the video demonstration shows the speed start to spike and TACC accelerate on its way to 85, but given our test conditions, we apply the brakes well before it reaches target speed. It is worth noting that this is seemingly only possible on the first implementation of TACC when the driver double taps the lever, engaging TACC. If the misclassification is successful, the autopilot engages 100% of the time. This quick demo video shows all these concepts coming together.
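To give a sense of how a tiny sticker can cause a targeted misclassification, here is a toy sketch of the underlying idea: an adversarial perturbation that nudges each input feature in the direction that most increases the model’s score for the attacker’s target class. This is not McAfee’s actual method, and the classifier, weights, and feature values below are entirely made up for illustration — real sign readers are deep networks, not four-feature linear models.

```python
# Toy illustration (NOT McAfee's technique): a targeted adversarial
# perturbation against a tiny hand-rolled linear classifier. All weights,
# features, and thresholds here are invented for the example.

def score(w, b, x):
    """Linear decision score: positive -> reads '85', negative -> reads '35'."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def classify(w, b, x):
    return "85" if score(w, b, x) > 0 else "35"

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def adversarial_perturbation(w, eps):
    """FGSM-style step: for a linear model the score's gradient w.r.t. the
    input is just w, so moving each feature by eps in the direction of
    sign(w) pushes the score toward the target class while keeping every
    individual change bounded by eps (the 'small sticker' constraint)."""
    return [eps * sign(wi) for wi in w]

# A clean "35 mph sign" feature vector the model reads correctly.
w = [0.8, -0.5, 0.3, -0.9]
b = -0.1
x_clean = [-0.2, 0.4, -0.1, 0.3]
assert classify(w, b, x_clean) == "35"

# A small, bounded tweak flips the prediction to the attacker's target.
eps = 0.4
delta = adversarial_perturbation(w, eps)
x_patched = [xi + di for xi, di in zip(x_clean, delta)]

print(classify(w, b, x_clean))    # prints "35"
print(classify(w, b, x_patched))  # prints "85"
```

The key property, which carries over to the real attack, is that the change is small and localized enough that a human still reads "35," while the model’s internal score is pushed across the decision boundary.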
They released a quick video of one of the experiments:
McAfee confirmed that it disclosed its findings to both Tesla and MobilEye before making them public:
McAfee disclosed the findings to Tesla on September 27th, 2019 and MobilEye on October 3rd, 2019. Both vendors indicated interest and were grateful for the research but have not expressed any current plans to address the issue on the existing platform. MobilEye did indicate that the more recent version(s) of the camera system address these use cases.
In previous instances of vulnerabilities being exposed by white-hat hackers, Tesla has been fairly quick to fix them.
In 2016, we reported on a Chinese white-hat hacker group, the Keen Security Lab at Tencent, managing to remotely hack the Tesla Model S through a malicious Wi-Fi hotspot. It is believed to be the first remote hack of a Tesla vehicle.
The hackers reported the vulnerability to Tesla before going public and the automaker pushed an update fairly quickly.
While this is technically a vulnerability, it’s a harder one to fix, since addressing it basically requires a better image recognition system.
I bet Tesla is not in a hurry to fix it, since it only affects first-generation Autopilot, but I think it’s important for owners to at least be aware that it could happen.
What do you think? Let us know in the comment section below.