
Google DeepMind develops a ‘solidly amateur’ table tennis robot

Sports have long served as an important test for robots. The best-known example may be the annual RoboCup soccer competition, which dates back to the mid-1990s; table tennis has played a key role in benchmarking robot arms since roughly a decade before that. The sport demands speed, responsiveness and strategy, among other things.

In a newly published paper titled “Achieving Human Level Competitive Robot Table Tennis,” Google’s DeepMind Robotics team is showcasing its own work on the game. The researchers have effectively developed a “solidly amateur human-level player” when pitted against a human opponent. During testing, the table tennis bot beat all of the beginner-level players it faced and won 55% of its matches against intermediate players. It’s not ready to take on pros, however: the robot lost every time it faced an advanced player. All told, the system won 45% of the 29 games it played.

“This is the first robot agent capable of playing a sport with humans at human level and represents a milestone in robot learning and control,” the paper claims. “However, it is also only a small step towards a long-standing goal in robotics of achieving human level performance on many useful real world skills. A lot of work remains in order to consistently achieve human-level performance on single tasks, and then beyond, in building generalist robots that are capable of performing many useful tasks, skillfully and safely interacting with humans in the real world.”

The system’s biggest shortcoming is its limited ability to react to fast balls. DeepMind suggests the key reasons for this are system latency, mandatory resets between shots and a lack of useful data.

Full story: Google DeepMind develops a ‘solidly amateur’ table tennis robot.