While ChatGPT created a new way to use our devices, technology is more than what we see on our screens. Research in spatial artificial intelligence and robotics is advancing rapidly, and it is bringing the technological focus back into the physical world.

"When we're talking about AI, people are mostly talking about chatbots and generating images," said Rev Lebaredian, vice president of omniverse and simulation technology at Nvidia. "All of it is very important, but the things we take for granted around us in the physical world are actually far more important."

With Nvidia's Omniverse platform, Lebaredian spends his days building physically accurate worlds in digital space, otherwise known as digital twins. "Creating robot brains is not going to be possible unless we can first take the world around us and represent it inside a computer, such that we can train these robot brains in the place where they're born, which is inside a computer," he said.

Spatial AI allows models to understand and interact with the physical world in ways previously limited to human cognition, and Nvidia is not the only company building on it. Stanford researcher and professor Fei-Fei Li recently brought her company World Labs out of stealth mode. It is a spatial intelligence company building "large world models" to understand, interact with, and build on the three-dimensional world around us. Backed by Andreessen Horowitz, World Labs says its initial use cases are geared toward professional artists, designers, developers, and engineers.
Full story: For Nvidia, spatial AI and the 'omniverse' entering the physical world may be the next big thing.