
Here’s what Tesla Autopilot can see with the new v9 neural net

Some of our favorite Tesla hackers are back at it with new footage to help us understand what Autopilot can see with the new neural net that Tesla deployed in its version 9 software update.

Earlier this summer, Tesla hackers ‘verygreen’ and DamianXVI gave us a rare look at what Tesla Autopilot can see and interpret.

They followed up last month with footage of a drive through Paris seen through the eyes of Autopilot.

Now they’ve done it again, but this time using Tesla’s latest version of Autopilot in the new version 9 software.

As we previously reported, version 9 includes a massive new Autopilot neural net with impressive new capabilities.

One of the main new capabilities is that the neural net is camera-agnostic, which enabled Tesla to deploy it on all the cameras around the car.

Of course, capturing all those feeds can be difficult. Verygreen explained his solution in a Reddit post:

“The solution I chose was to just limit framerate on all the cameras but the main one (the storage driver on ape can handle only at most 80MB/sec). 9fps is used for the side cameras and 6 fps for fisheye (the car actually gets 36fps feeds from all cameras but backup where it gets 30fps) and even that tends to overwhelm the drive from time to time and this is visible in still frames here and there. One of the problems is the backup camera is actually quite a bit unreliable and in many runs there’s no output captured from it. As such I decided not to collect it at all for now. (you know it’s not working on your cam when on a trip the cars come behind your car real close at a traffic light and nothing shows on the IC, surprisingly the CID backup cam display still works, so Tesla decided to just paper over the old “freeze frame” issue but not the autopilot problem of the same).”
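The throttling he describes is simple bandwidth arithmetic: per-camera data rate scales with framerate, and the sum of all feeds has to fit under the roughly 80 MB/sec the storage driver can sustain. The sketch below illustrates that budget check; the per-frame size and the count of four side cameras are pure assumptions for illustration (the post doesn't state either), so only the shape of the calculation, not the numbers, should be taken from it.

```python
def feed_rate_mb(fps, frame_mb):
    """MB/s produced by one camera at a given framerate and frame size."""
    return fps * frame_mb

FRAME_MB = 1.2  # hypothetical per-frame size in MB; not stated in the post
budget = 80.0   # MB/s storage write limit quoted by verygreen

# Framerate mix described in the post: main camera left at full 36 fps,
# an assumed four side cameras throttled to 9 fps, fisheye to 6 fps,
# and the backup camera dropped entirely.
total = (feed_rate_mb(36, FRAME_MB)
         + 4 * feed_rate_mb(9, FRAME_MB)
         + feed_rate_mb(6, FRAME_MB))
print(f"{total:.1f} MB/s against an {budget:.0f} MB/s budget")
```

With these assumed numbers the total lands just above the budget, which would be consistent with his note that the drive still gets overwhelmed "from time to time" — but again, the actual frame sizes on the car could be quite different.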

It’s also difficult to display the feeds in a way that is easily viewable.
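One common way around that is to composite all the feeds into a single grid image. This is only a minimal NumPy sketch of that general technique, not the tool the hackers actually used; the frame dimensions and grid layout here are made up for illustration.

```python
import numpy as np

def tile_frames(frames, cols=3):
    """Tile equally sized (H, W, 3) frames into one grid image.

    Empty cells (when the frame count doesn't fill the grid) stay black.
    """
    h, w, c = frames[0].shape
    rows = -(-len(frames) // cols)  # ceiling division
    grid = np.zeros((rows * h, cols * w, c), dtype=frames[0].dtype)
    for i, frame in enumerate(frames):
        r, k = divmod(i, cols)
        grid[r * h:(r + 1) * h, k * w:(k + 1) * w] = frame
    return grid

# Six dummy 96x160 "camera" frames arranged 2 rows x 3 columns:
frames = [np.full((96, 160, 3), 40 * i, dtype=np.uint8) for i in range(6)]
print(tile_frames(frames).shape)  # -> (192, 480, 3)
```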

Due to resolution limitations, they decided to show six camera angles at once. Here’s the result:

Electrek’s Take

This is really exciting stuff. We can clearly see all the cameras contributing to the vehicle’s understanding of its environment.

You can see some pretty smooth tracking of roadside objects and Autopilot appears to be keeping a close eye on pedestrians all around the vehicle.

While I can understand the frustration of Autopilot 2.0 owners about the lack of new features, I think it’s clear that this new neural net is going to enable more features soon.

If the Summon feature takes advantage of this computer vision on top of the ultrasonic sensors, it should clearly be able to navigate more complex situations, as promised under Smart Summon.

The same goes for other promised features like On Ramp/Off Ramp driving, a version of which is expected to be released any day now in the form of Navigate on Autopilot.

It’s truly an exciting time for the Autopilot program. But despite all the improvements, please keep your hands on the steering wheel and always be ready to take control.






Fred is the Editor in Chief and Main Writer at Electrek.
