
Tesla just delivered a software update to my car that is so dramatically different it is like having a new car. This post explains why and extrapolates some lessons relevant to the future of robotics across multiple sectors.

The bottom line up front: The best robots give physical form to AI. 

Background:

Tesla cars come with some pretty good cruise control and driver-assist capabilities right out of the box. But they also have an upgrade called FSD, short for “Full Self-Driving.” It turns out FSD is a long way from really being full self-driving; consider the term aspirational.

There are two flavors of FSD: the standard version, which is really good, and a beta version that gives drivers an opportunity to help test even more advanced features.

The Upgrade:

This version of the software, FSD 12.3, is a dramatic upgrade. The architecture is based entirely on a single neural network, an AI modeled loosely on how a human brain functions. It is just math, but the math is done in processing units called “nodes” that pass data to one another much the way neurons do. Software built on this architecture can learn without hard-coded instructions. It enables “deep learning,” in which the system draws conclusions from unlabeled data without human intervention. You no doubt recall early famous deep learning successes, like the ability to tell whether a picture contains a dog or a cat. Now the technique is almost ubiquitous in enterprise AI.
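To make the idea of nodes passing data concrete, here is a toy sketch in C++ of a single layer of such nodes. This is purely illustrative, not Tesla's code: each node computes a weighted sum of its inputs plus a bias, then squashes the result with an activation function before passing it on. Training a network means adjusting the weights; nothing about the behavior is spelled out by hand.

#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

// One "node" (neuron): a weighted sum of its inputs plus a bias,
// passed through an activation function. Learning adjusts the
// weights; no behavior is hard coded.
double node_output(const std::vector<double>& inputs,
                   const std::vector<double>& weights,
                   double bias) {
    double sum = bias;
    for (std::size_t i = 0; i < inputs.size(); ++i)
        sum += inputs[i] * weights[i];
    return std::tanh(sum);  // squashes the sum into (-1, 1)
}

int main() {
    // Hypothetical sensor readings feeding a layer of three nodes.
    std::vector<double> inputs = {0.8, -0.2, 0.5};
    std::vector<std::vector<double>> layer = {
        {0.4, 0.1, -0.3},   // weights for node 1
        {-0.2, 0.9, 0.5},   // weights for node 2
        {0.7, -0.6, 0.2},   // weights for node 3
    };
    for (const auto& weights : layer)
        std::printf("node output: %f\n", node_output(inputs, weights, 0.1));
    return 0;
}

In a real network, many layers of these nodes are stacked so the outputs of one layer become the inputs of the next; that stacking is what the “deep” in deep learning refers to.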

Previous versions of the FSD software were full of hard-coded routines that engineers used to spell out exact car behavior in a given situation. Imagine the Tesla approaching a stop sign. The car might be hard coded to stop a certain distance from the sign, look around, then make decisions based on what other cars are seen and which way the driver wants to go. With the new version, that hard coding is gone. Instead, the end-to-end neural network has been trained on millions of video clips and real car data. The car learned how to behave at intersections with stop signs by watching how good drivers behave at intersections with stop signs.
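To see what is being replaced, here is a hypothetical hand-written routine of the kind described above, sketched in C++. The structure, thresholds, and names are all invented for illustration; this is not actual FSD code. The point is that every branch and every constant here was chosen by an engineer rather than learned from driving data.

#include <cstdio>

// A hypothetical hard-coded stop-sign rule. Every threshold and
// branch is hand-written; an end-to-end network replaces all of
// this with behavior learned from video of good drivers.
struct CarState {
    double distance_to_sign_m;  // meters remaining to the stop sign
    double speed_mps;           // current speed in meters per second
    bool   cross_traffic;       // another car detected at the intersection
};

enum class Action { Brake, HoldStop, Proceed };

Action stop_sign_rule(const CarState& s) {
    const double kStopLineOffsetM = 0.5;  // hand-tuned stopping distance
    if (s.distance_to_sign_m > kStopLineOffsetM)
        return Action::Brake;     // not at the line yet: keep braking
    if (s.speed_mps > 0.0 || s.cross_traffic)
        return Action::HoldStop;  // wait until fully stopped and clear
    return Action::Proceed;       // stopped and clear: go
}

int main() {
    CarState approaching{12.0, 8.0, false};
    CarState at_line{0.3, 0.0, true};
    std::printf("approaching -> %d, at line -> %d\n",
                static_cast<int>(stop_sign_rule(approaching)),
                static_cast<int>(stop_sign_rule(at_line)));
    return 0;
}

Multiply this by every sign, signal, merge, and edge case on the road, and you can see how the old approach grew to hundreds of thousands of lines, and why replacing those rules with learned behavior is such a significant architectural change.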

The new FSD software has replaced over 300,000 lines of explicit C++ code with AI-driven decision-making. The result is a better user experience: the car drives with more confidence, is more polite to other drivers, and can take on more challenging drives with little or no human intervention.

Indications of What Comes Next:

This is still a beta version, but early feedback has been positive. I sure enjoyed my first drive. My car took me around the city smoothly with no need for me to take over control. It was a really enjoyable experience.

The “so what?” for me personally is that I aim to be more productive on my drives. Even with the previous version of FSD, I was able to pay more attention to podcasts and audiobooks while still monitoring the safety of my car and standing ready to take over if needed. This new software will make for even more relaxing commutes.

The “so what?” for the overall economy is going to be something bigger. This approach of building a software stack on an end-to-end neural network and training it on video can clearly be applied to many other use cases. Here are a few:

  • This architecture can optimize transportation, not just in cars but across commercial trucking, aviation, and shipping, in ways that reduce environmental impact while being more economical and efficient for businesses. 
  • The humanoid robots being fielded today can benefit from this end-to-end neural network approach. Soon any humanoid robot will be able to learn from watching YouTube videos or watching humans in the work environment.  
  • The approach may also soon have national security ramifications. Will the robot fighter pilot of the future be trained by watching video clips of air-to-air combat training? Will automated tanks learn from watching human-driven tanks?
  • This approach and improved robotics can have an impact on urban planning and infrastructure. For example, when cars with these capabilities become fully autonomous, there will be less need for garages and parking. 
  • Imagine a world of fewer accidents, less loss of life, less injury, fewer hospitalizations due to accidents. 
  • These types of approaches will provide new levels of mobility for those unable to drive, dramatically improving independence and quality of life. 
  • For those of us who must commute, we may find more productive time on our hands: maybe an extra two hours a day to learn, retrain for new tasks, dictate content, or be productive in other ways. 

This is a good time for all of us to start thinking of the very near future and how it will change our lives and the economy. 

Bob Gourley

About the Author

Bob Gourley

Bob Gourley is an experienced Chief Technology Officer (CTO), Board Qualified Technical Executive (QTE), author, and entrepreneur with extensive past performance in enterprise IT, corporate cybersecurity, and data analytics. He is the CTO of OODA LLC, a unique team of international experts who provide board advisory and cybersecurity consulting services. OODA publishes OODALoop.com. Bob has been an advisor to dozens of successful high-tech startups and has conducted enterprise cybersecurity assessments for businesses in multiple sectors of the economy. He was a career Naval Intelligence Officer and is the former CTO of the Defense Intelligence Agency.