To date, Go has thwarted AI researchers; computers still only play Go as well as amateurs. Traditional AI methods don't stand a chance in Go.
Google DeepMind, the London research group behind the project, built a system, AlphaGo, that combines an advanced tree search with deep neural networks. These neural networks take a description of the Go board as an input and process it through 12 different network layers containing millions of neuron-like connections. One neural network, the "policy network," selects the next move to play. The other neural network, the "value network," predicts the winner of the game. Google's researchers trained the neural networks on 30 million moves from games played by human experts, until they could predict the human move 57 percent of the time. But the goal was to beat the best human players, not just mimic them. To do this, AlphaGo learned to discover new strategies for itself, playing thousands of games between its neural networks and adjusting the connections using a trial-and-error process known as reinforcement learning.
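The division of labor between the two networks can be sketched in a few lines of Python. This is a toy illustration, not AlphaGo's actual architecture: the real networks are deep convolutional networks with millions of trained weights, while here the weights are random placeholders and each "network" is a single linear layer, just to show the shape of the interfaces (board in, move distribution or win probability out).

```python
import numpy as np

BOARD_SIZE = 19
NUM_POINTS = BOARD_SIZE * BOARD_SIZE  # 361 intersections

rng = np.random.default_rng(0)

# Random placeholder weights standing in for the trained deep networks.
policy_weights = rng.normal(scale=0.01, size=(NUM_POINTS, NUM_POINTS))
value_weights = rng.normal(scale=0.01, size=NUM_POINTS)

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def policy_network(board, legal_mask):
    """Return a probability distribution over moves, masked to legal points."""
    logits = policy_weights @ board.ravel()
    logits[~legal_mask] = -np.inf  # illegal moves get zero probability
    return softmax(logits)

def value_network(board):
    """Return an estimated probability that the current player wins."""
    return 1.0 / (1.0 + np.exp(-(value_weights @ board.ravel())))

# Board encoding: +1 for black stones, -1 for white, 0 for empty.
board = np.zeros((BOARD_SIZE, BOARD_SIZE))
legal = board.ravel() == 0  # on an empty board, every point is legal

move_probs = policy_network(board, legal)   # "which move looks best?"
move = int(np.argmax(move_probs))           # greedy move selection
win_prob = value_network(board)             # "who is likely to win?"
```

In the full system, the tree search uses the policy network to narrow the moves worth exploring and the value network to evaluate positions without playing every game to the end; reinforcement learning via self-play is what adjusts `policy_weights` and `value_weights` from their initial, human-imitating values.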
Google held a tournament between AlphaGo and the other top programs at the forefront of computer Go. AlphaGo won all but one of its 500 games against these programs. So the next step was to invite the reigning three-time European Go champion Fan Hui - an elite professional player who has devoted his life to Go since the age of 12 - to Google's London office for a challenge match. In a closed-door match last October, AlphaGo won by 5 games to 0. It was the first time a computer program had ever beaten a professional Go player. You can find out more in this paper, which was published in Nature today.
In March, AlphaGo will face its ultimate challenge: a five-game challenge match in Seoul against the legendary Lee Sedol - the top Go player in the world over the past decade.