A new study by Tencent’s Keen Security Lab underscores that recent warnings by artificial intelligence (AI) experts about the risks of adversarial machine learning are more than justified.
After studying how the Enhanced Autopilot driver-assistance system in Tesla vehicles reads and processes environmental data to decide when to change lanes, the researchers were able to trick the Autosteer feature merely by painting small interference patches on the road surface. In other words, a threat actor could steer a Tesla into oncoming traffic without ever hacking into the vehicle's software.
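The underlying idea of such an adversarial attack can be illustrated with a toy model. The sketch below is a minimal, hypothetical example in the spirit of gradient-sign (FGSM-style) perturbations: a small, carefully chosen change to the input flips a classifier's decision. The linear "classifier," its weights, and the inputs are all invented for demonstration and bear no relation to Tesla's actual Autopilot models.

```python
import numpy as np

# Toy linear "lane detector": score(v) > 0 means class A, score(v) < 0 means
# class B. All values are synthetic; this is only a demonstration of how a
# small perturbation can flip a model's decision.
rng = np.random.default_rng(0)
w = rng.normal(size=100)      # fixed weights of the toy classifier
x = rng.normal(size=100)      # a "clean" flattened input

def score(v):
    return float(w @ v)

clean = score(x)

# For a linear model the gradient of the score w.r.t. the input is just w.
# Moving each input element a tiny step eps against sign(w) (in the direction
# that reduces |score|) is enough to cross the decision boundary.
margin = 1.0
eps = (abs(clean) + margin) / np.sum(np.abs(w))   # per-element step size
x_adv = x - eps * np.sign(w) * np.sign(clean)

print(f"clean score: {clean:+.3f}, adversarial score: {score(x_adv):+.3f}")
print(f"max per-element change: {eps:.4f}")
```

By construction the adversarial score lands exactly on the opposite side of the decision boundary, even though each individual input element changes by only a small amount, which is why physical perturbations like painted road patches can be nearly invisible to humans yet decisive for the model.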
Read more: Researchers Trick Tesla to Drive into Oncoming Traffic