Google DeepMind Introduces Method for Improving Machine Learning Speed
Researchers at Google's DeepMind wrote in a paper published online Thursday that they had achieved a leap in the speed and performance of a machine learning system. DeepMind's paper "Reinforcement Learning with Unsupervised Auxiliary Tasks" introduces a method for greatly improving the learning speed and final performance of agents. DeepMind's new system -- named the Unsupervised Reinforcement and Auxiliary Learning agent, or Unreal -- learned to master a three-dimensional maze game called Labyrinth 10 times faster than the existing best AI software. It can now play the game at 87 percent of the performance of expert human players, the DeepMind researchers said.
"Our agent is far quicker to train, and requires a lot less experience from the world to train, making it much more data efficient," DeepMind researchers Max Jaderberg and Volodymyr Mnih said. They added that Unreal would allow DeepMind?s researchers to experiment with new ideas much faster because of the reduced time it takes to train the system. DeepMind has already seen its AI products achieve highly respected results teaching itself to play video games, notably the retro Atari title Breakout.
Labyrinth is a game environment that DeepMind developed, loosely based on the design style of the popular video game series Quake. The agent must navigate routes through a maze, scoring points by collecting apples.
This style of game is an important area for artificial intelligence research because the chance to score points in the game, and thus reinforce "positive" behaviors, occurs less frequently than in some other games. Additionally, the software has only partial knowledge of the maze's layout at any one time.
One way the researchers achieved their results was by having Unreal replay its own past attempts at the game, focusing especially on situations in which it had scored points before. The researchers equated this in their paper to the way "animals dream about positively or negatively rewarding events more frequently."
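The replay idea can be sketched in code. The class below is a minimal, hypothetical illustration of reward-biased experience replay -- past transitions are stored, and ones that carried a nonzero score are sampled more often for relearning. The class name, the oversampling weight, and the buffer design are assumptions for illustration, not DeepMind's actual implementation.

```python
import random

class BiasedReplayBuffer:
    """Stores past transitions and oversamples the rewarding ones (illustrative sketch)."""

    def __init__(self, capacity=10000, reward_weight=3.0):
        self.capacity = capacity
        # how much more often a rewarding transition is replayed (assumed value)
        self.reward_weight = reward_weight
        self.buffer = []

    def add(self, state, action, reward, next_state):
        # drop the oldest transition once the buffer is full
        if len(self.buffer) >= self.capacity:
            self.buffer.pop(0)
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # transitions with nonzero reward get a higher sampling weight,
        # so the agent revisits scoring moments more frequently
        weights = [self.reward_weight if reward != 0 else 1.0
                   for (_, _, reward, _) in self.buffer]
        return random.choices(self.buffer, weights=weights, k=batch_size)
```

In training, the agent would interleave fresh gameplay with batches drawn from such a buffer, so scoring situations are rehearsed more often than their natural frequency in play.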
The researchers also helped the system learn faster by asking it to maximize several different criteria at once, not simply its overall score in the game.
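Training against several criteria at once amounts to summing a main objective with weighted auxiliary objectives. The function below is a minimal sketch of that idea; the auxiliary task names ("pixel_control", "reward_prediction") and the weights are illustrative assumptions, not the paper's exact loss terms.

```python
def combined_loss(main_loss, aux_losses, aux_weights):
    """Total training signal: the main game-score objective plus weighted
    auxiliary objectives (names and weights here are hypothetical)."""
    total = main_loss
    for name, loss in aux_losses.items():
        # each auxiliary task contributes in proportion to its weight;
        # tasks with no listed weight contribute nothing
        total += aux_weights.get(name, 0.0) * loss
    return total

# Example: a main loss of 1.0 plus two assumed auxiliary tasks
total = combined_loss(
    main_loss=1.0,
    aux_losses={"pixel_control": 0.5, "reward_prediction": 0.2},
    aux_weights={"pixel_control": 0.1, "reward_prediction": 1.0},
)
```

Minimizing the combined quantity pushes the network to get better at the side tasks as well as the game itself, which gives it a richer learning signal between the rare scoring events.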
DeepMind achieved what is considered a major breakthrough in the field earlier this year when its AlphaGo software beat one of the world's reigning champions in the ancient strategy game Go.
DeepMind's Unreal system also mastered 57 vintage Atari games, such as Breakout, much faster -- and achieved higher scores -- than the company's existing software. The researchers said Unreal played these games on average at 880 percent of the performance of top human players, compared with 853 percent for DeepMind's older AI agent.
But on the most complex Atari games, such as Montezuma's Revenge, the new system made bigger leaps in performance. The prior AI system scored zero points, while Unreal achieved 3,000 -- more than 50 percent of an expert human's best effort.