Despite its very simple rules, the 3,000-year-old game of Go long withstood all attempts to develop a computer algorithm that could beat a professional human player. Since mastering the emergent patterns in Go requires inherently human skills such as creativity and judgement, this was seen as one of the great challenges of Artificial Intelligence. Using a combination of neural networks and Monte Carlo Tree Search (MCTS), the AlphaGo program beat one of the world's strongest players 4:1 in March 2016. An updated version of the algorithm has since achieved unrivalled playing strength using tabula-rasa learning. Based on two very accessible Nature papers [1,2], I will explain the basic rules of Go, the training of the key neural networks with Stochastic Gradient Descent, and the application of MCTS to focus the search on the most promising moves.

[1] D. Silver et al., Nature 529, 484–489 (2016), doi:10.1038/nature16961
[2] D. Silver et al., Nature 550, 354–359 (2017), doi:10.1038/nature24270
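The MCTS selection step mentioned above can be illustrated with a minimal sketch of the classic UCT rule, which trades off a move's average value against how rarely it has been tried. This is an assumption-laden simplification: AlphaGo itself uses a variant of this rule in which exploration is additionally weighted by the policy network's prior probability, and all function and variable names below are illustrative.

```python
import math

def uct_score(total_value, visits, parent_visits, c=1.4):
    # Classic UCT: mean value (exploitation) plus an exploration bonus
    # that grows for moves visited rarely relative to the parent node.
    # (AlphaGo's actual rule also multiplies the bonus by a policy prior.)
    if visits == 0:
        return float("inf")  # unvisited moves are tried first
    return total_value / visits + c * math.sqrt(math.log(parent_visits) / visits)

def select_move(children, parent_visits):
    # children: list of (move, total_value, visits) tuples for one tree node.
    # Returns the move with the highest UCT score.
    return max(children, key=lambda ch: uct_score(ch[1], ch[2], parent_visits))[0]
```

Repeatedly descending the tree with this rule, expanding a leaf, evaluating it, and backing the value up is what concentrates the search on the most promising moves.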