AI vision models have improved dramatically over the past decade. Yet these gains have produced neural networks that, though effective, don’t share many characteristics with human vision. For example, convolutional neural networks (CNNs) often respond more strongly to texture, while humans respond more strongly to shape.

A paper recently published in Nature Human Behaviour partially addresses that gap. It describes a novel All-Topographic Neural Network (All-TNN) that, when trained on natural images, developed an organized, specialized structure more like that of human vision. The All-TNN better mimicked human spatial biases, such as expecting to see an airplane near the top of an image rather than the bottom, and operated on a significantly lower energy budget than other neural networks used for machine vision.

“One of the things you notice when you look at the way knowledge is ordered in the brain is that it’s fundamentally different to how it is ordered in deep neural networks, such as convolutional neural nets,” said Tim C. Kietzmann, professor at the Institute of Cognitive Science in Osnabrück, Germany, and co-supervisor of the paper.
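To make the architectural difference concrete: a topographic network of this kind replaces the weight-shared convolutions of a CNN with locally connected layers, where every spatial location learns its own filter, and adds a penalty that nudges neighbouring units toward similar weights so that an organized, brain-like map can emerge during training. The sketch below is a minimal, hypothetical PyTorch illustration of that general idea; the class and function names (`LocallyConnected2d`, `smoothness_loss`) are invented here, and the published All-TNN uses its own loss formulation and training setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocallyConnected2d(nn.Module):
    """A convolution-like layer *without* weight sharing: every output
    location learns its own filter, as in topographic networks."""
    def __init__(self, in_ch, out_ch, in_size, kernel_size, stride=1):
        super().__init__()
        self.kernel_size, self.stride = kernel_size, stride
        self.out_size = (in_size - kernel_size) // stride + 1
        n_loc = self.out_size ** 2
        # One weight vector per output location: (out_ch, n_loc, in_ch*k*k)
        self.weight = nn.Parameter(
            0.01 * torch.randn(out_ch, n_loc, in_ch * kernel_size ** 2)
        )

    def forward(self, x):
        # Sliding patches: (batch, in_ch*k*k, n_loc)
        patches = F.unfold(x, self.kernel_size, stride=self.stride)
        # Independent dot product at every location: (batch, out_ch, n_loc)
        out = torch.einsum("bkl,olk->bol", patches, self.weight)
        return out.reshape(x.shape[0], -1, self.out_size, self.out_size)

def smoothness_loss(layer: LocallyConnected2d) -> torch.Tensor:
    """Topographic constraint: penalize weight differences between
    spatially neighbouring units (simplified L2 version, for illustration)."""
    o, n_loc, k = layer.weight.shape
    s = layer.out_size
    w = layer.weight.reshape(o, s, s, k)
    d_rows = (w[:, 1:, :, :] - w[:, :-1, :, :]).pow(2).mean()
    d_cols = (w[:, :, 1:, :] - w[:, :, :-1, :]).pow(2).mean()
    return d_rows + d_cols

# Usage: total loss = task loss + lambda * topographic smoothness penalty
layer = LocallyConnected2d(in_ch=3, out_ch=8, in_size=32, kernel_size=5)
x = torch.randn(2, 3, 32, 32)
y = layer(x)                                       # (2, 8, 28, 28)
loss = y.pow(2).mean() + 0.1 * smoothness_loss(layer)
loss.backward()
```

Without weight sharing, the smoothness penalty is what keeps neighbouring filters from drifting apart arbitrarily; it is this tension between task training and local similarity that lets specialized, spatially clustered features form, rather than the identical filters a CNN enforces everywhere by construction.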