Tesla just delivered a software update to my car that is so dramatically different it is like having a new car. This post explains why and extrapolates some lessons relevant to the future of robotics across multiple sectors.
The bottom line up front: The best robots give physical form to AI.
Background:
Tesla cars come with pretty good cruise control and driver-assist capabilities right out of the box. But they also offer an upgrade called FSD, short for "Full Self Driving." As it turns out, FSD is still a long way from true full self-driving; consider the term aspirational.
There are two flavors of FSD: the standard version, which is very capable, and a beta version that gives owners the opportunity to help test even more advanced features.
The Upgrade:
This version of the software, FSD 12.3, is a dramatic upgrade. The architecture is based entirely on a single neural network, an AI modeled loosely on how a human brain functions. It is just math, but the math is done in processing units called "nodes" that pass data to one another much as neurons do. Software built on this kind of architecture can learn without hard-coded instructions. It enables "deep learning," in which the system draws conclusions from unlabeled data without human intervention. You no doubt recall the early famous deep learning successes, like telling whether a picture contains a dog or a cat. Today the approach is nearly ubiquitous in enterprise AI.
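To make the idea concrete, here is a toy sketch in Python of what a "node" and a layer of nodes actually compute. It is purely illustrative, not Tesla's implementation; the inputs, weights, and layer sizes are made up for the example.

```python
import math
import random

def node(inputs, weights, bias):
    # A "node" computes a weighted sum of its inputs and squashes
    # the result through a nonlinearity (here, tanh). That is the math.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return math.tanh(total)

def layer(inputs, weight_matrix, biases):
    # A layer is many nodes reading the same inputs in parallel;
    # its outputs become the inputs of the next layer.
    return [node(inputs, w, b) for w, b in zip(weight_matrix, biases)]

# Two tiny layers chained together: data flows forward, node to node.
# "Learning" means adjusting the weights so the final outputs match
# examples in the training data -- no rules are hand-written.
random.seed(0)
inputs = [0.2, -0.5, 0.9]  # stand-in for a few pixel or sensor values
w1 = [[random.uniform(-1, 1) for _ in inputs] for _ in range(4)]
hidden = layer(inputs, w1, [0.0] * 4)
w2 = [[random.uniform(-1, 1) for _ in hidden] for _ in range(2)]
output = layer(hidden, w2, [0.0] * 2)
print(output)  # e.g. scores the system might learn to read as "steer left" vs. "steer right"
```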
Previous versions of the FSD software were full of hard-coded routines that engineers used to spell out exact car behavior in a given situation. Imagine the Tesla approaching a stop sign: the car might be hard-coded to stop a certain distance from the sign, look around, and then make decisions based on which other cars it sees and which way the driver wants to go. In the new version that hard coding is gone. Instead, an end-to-end neural network has been trained on millions of video clips and real driving data. The car learned how to behave at intersections with stop signs by watching how good drivers behave at them.
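Here is a minimal sketch of that contrast, again in Python and again purely illustrative: the function names and the toy_policy stand-in are hypothetical, not Tesla's code. The old style encodes stop-sign behavior in explicit branches; the new style hands raw camera frames to a single trained network and acts on whatever controls it outputs.

```python
# Old style: engineers hand-code the behavior for each situation.
def stop_sign_rule(distance_to_sign_m, cross_traffic_detected):
    if distance_to_sign_m < 30:
        if distance_to_sign_m > 0.5:
            return "brake"    # stop a fixed distance from the sign
        if cross_traffic_detected:
            return "hold"     # wait for the intersection to clear
        return "proceed"
    return "cruise"

# New style: a single trained network maps camera input to controls.
def drive(camera_frames, policy_network):
    steering, acceleration = policy_network(camera_frames)
    return steering, acceleration

def toy_policy(frames):
    # Stand-in for the trained network: in reality the weights encode
    # behavior learned from video of good drivers, not written rules.
    return 0.0, -0.2  # (steering angle, acceleration command)

print(stop_sign_rule(12.0, cross_traffic_detected=False))  # hand-written branch decides
print(drive(["frame0", "frame1"], toy_policy))              # learned mapping, no branches
```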
The new FSD software replaces over 300,000 lines of explicit C++ code with AI-driven decision-making. The result is a better user experience: the car drives with more confidence, is more polite to other drivers, and can take on more challenging drives on its own with little or no human intervention.
Indications of What Comes Next:
This is still a beta version, but early feedback has been positive. I certainly enjoyed my first drive: the car took me around the city smoothly with no need for me to take over control. Really an enjoyable experience.
The “so what?” for me personally is that I aim to be more productive on my drives. Even with the previous version of FSD I was able to pay more attention to podcasts and audio books while still monitoring the safety of my car and standing ready to take over control if needed. This new software will make for even more relaxing commutes.
The "so what?" for the overall economy will be something bigger. This approach of building a software stack on an end-to-end neural network and training it by watching video can clearly be applied to many other use cases. Here are a few:
This is a good time for all of us to start thinking about the very near future and how it will change our lives and the economy.