
The Artificial Intelligence of Google’s AlphaGo

By Meredith McFadden
30 May 2017

Last week, Google’s AlphaGo program beat Ke Jie, the world’s top-ranked Go player. The victory is a significant one, due to the special difficulties of developing an algorithm that can tackle the ancient Chinese game. It differs markedly from the feat of IBM’s Deep Blue, the computer that beat then-chess world champion Garry Kasparov in 1997, largely by brute-force calculation of the possible moves on the 8×8 board. The possible moves in Go far eclipse those of chess, and for decades most researchers didn’t consider it possible for a computer to defeat a champion-level Go player, because handling that complexity seemed to require something like creative intuition on the computer’s part.

In chess, a player typically has about 35 legal moves available at a given point, whereas in Go that number is typically about 200. Because of this dramatic difference in the number of options to consider, Go remained elusive, while chess and other games with more manageable move spaces – like Scrabble, Othello, and checkers – were long ago conquered by computers. With such a vast number of options, pattern recognition – intuiting which strategy to adopt or how a particular advance will play out – becomes crucial. Humans cannot calculate ahead through Go’s possibilities as explicitly as they can in chess, so human champions excel at recognizing patterns and making qualitative, intuitive judgments about what is best to do in new situations.
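
To make the difference in scale concrete, here is a rough back-of-the-envelope calculation in Python. The branching factors are the approximate figures cited above, and the ten-move lookahead is an arbitrary illustration, not a claim about how far any real engine searches:

```python
# Rough comparison of game-tree sizes using the approximate branching
# factors cited above (35 for chess, 200 for Go). A depth of 10 moves
# is an arbitrary illustration, not how far any real engine searches.
CHESS_BRANCHING = 35
GO_BRANCHING = 200
DEPTH = 10

chess_positions = CHESS_BRANCHING ** DEPTH
go_positions = GO_BRANCHING ** DEPTH

print(f"Chess, {DEPTH} moves ahead: ~{chess_positions:.1e} positions")
print(f"Go,    {DEPTH} moves ahead: ~{go_positions:.1e} positions")
print(f"The Go tree is ~{go_positions / chess_positions:.1e} times larger")
```

Even at this modest depth, the Go tree is tens of millions of times larger – which is why the brute force that worked for chess could not simply be carried over.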

The complexity of determining what play to execute in Go made it difficult to devise a strategy for designing a computer that could equal a champion’s strategizing. Algorithms are superior to human reasoning at a variety of tasks – they can run through calculations at a much faster rate, and are free from the distractions our pesky emotions and complex priorities make us prone to. When designed well, they have far fewer ways to err in reaching their conclusions, while we can make mistakes in seemingly endless ways – even in our specialties.

The reason the game of Go has been such a locus of attention for those with an interest in AI is that truly conquering the game seems to demand something humans had always beaten computers at: to be a Go champion, you need to recognize patterns. You need to sift through the sea of possible moves and counter-moves and use informed intuition.

As far back as 1965, mathematician I.J. Good framed the particularly difficult choice facing designers: either explicitly program all the possible strategies for playing Go into a computer, or build a program that teaches itself. AlphaGo took the latter route, going from novice a year and a half ago to de facto world champion this week – and the training involved in AlphaGo’s rise will have lasting repercussions for our understanding of natural and artificial cognition.

The way that people improve at Go has seemed similar to other distinctively human strengths that have been assumed to be safe from artificial encroachment: we can sift through a mass of information, tell what is relevant, notice the salient, and pick out patterns. For example, when a doctor sees a patient, performs an exam, and hears symptoms, she is often able to sift through the myriad possibilities and make a judgment about what’s going on with remarkably little articulable information. We often reach fairly accurate conclusions on little explicit information; for instance, we can judge that the reason a friend isn’t at a party is that she’s stuck in traffic, when of course there are a vast number of possibilities. Programming a computer to diagnose a patient, or to predict the reason for a friend’s absence from a party, is a task beyond programming it to calculate over a vast number of possibilities.

AlphaGo’s success this week marks a leap forward in our ability to train computers. The deep machine-learning techniques that brought AlphaGo from beginner to expert included reinforcement learning – it played games against itself and saw which moves paid off. Rather than producing brute positive and negative outputs, as Deep Blue did when cranking out its calculations against Kasparov, the algorithm assigned weights to moves and adjusted them as it learned.
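
To give a feel for the self-play idea, here is a toy sketch in Python. It is a deliberate simplification – AlphaGo used deep neural networks, not a lookup table of weights, and the “game” below is invented for illustration – but the reinforcement pattern is the same: moves from won games get nudged up, moves from lost games get nudged down.

```python
import random
from collections import defaultdict

# Toy self-play reinforcement loop. Moves are just labels with a hidden
# value; a game is "won" when the chosen moves' hidden values are high.
# This stands in for AlphaGo's networks only in spirit, not in design.
HIDDEN_VALUE = {"a": 0.1, "b": 0.2, "c": 0.3, "d": 0.9}
weights = defaultdict(float)  # move -> learned preference
LEARNING_RATE = 0.1

def choose_move(legal_moves, explore=0.2):
    """Mostly pick the highest-weight move, sometimes explore at random."""
    if random.random() < explore:
        return random.choice(legal_moves)
    return max(legal_moves, key=lambda m: weights[m])

def update_from_game(moves_played, won):
    """Reinforce every move from a won game; penalize moves from a loss."""
    outcome = 1.0 if won else -1.0
    for move in moves_played:
        weights[move] += LEARNING_RATE * outcome

for _ in range(2000):  # thousands of games against itself
    played = [choose_move(list(HIDDEN_VALUE)) for _ in range(5)]
    win_chance = sum(HIDDEN_VALUE[m] for m in played) / 5
    update_from_game(played, won=random.random() < win_chance)

print(max(weights, key=weights.get))  # usually "d", the best hidden move
```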

The training that AlphaGo underwent produced results. In the second game of a best-of-five series last year, AlphaGo made a move that flummoxed commentators and its opponent, Go expert Lee Sedol. The algorithm made a play that it recognized no expert would make (it calculated that a human would play the move in only about 1 in 10,000 instances), and it was able to override that recognition. In taking such a novel path, AlphaGo went beyond the information that had been programmed into it. David Silver of Google’s DeepMind analyzed the processing surrounding this novel move and said, “It really discovered this for itself, through its own process of introspection and analysis.”
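
How can a program both recognize that a move is humanly improbable and still choose it? One hypothetical way to picture it, loosely in the spirit of the search rules used in this family of programs (the formula and every number below are invented for illustration), is a score that blends a “prior” – how likely a human is to play the move – with a learned value estimate of how good the move looks. As the move is examined more, the value term comes to dominate the prior:

```python
import math

# Sketch of a selection score blending a policy prior with a value
# estimate, loosely inspired by PUCT-style tree search. All numbers
# here are invented to illustrate the trade-off, nothing more.
def selection_score(value, prior, parent_visits, visits, c=1.5):
    """Higher is better; the prior-driven term fades as visits grow."""
    return value + c * prior * math.sqrt(parent_visits) / (1 + visits)

# An "expected" move: strong human prior, decent value, well explored.
ordinary = selection_score(value=0.52, prior=0.35,
                           parent_visits=10_000, visits=4_000)

# A move-37-like candidate: ~1-in-10,000 prior, but evaluation likes it.
surprise = selection_score(value=0.61, prior=0.0001,
                           parent_visits=10_000, visits=200)

print(f"ordinary: {ordinary:.4f}, surprise: {surprise:.4f}")
# Once the value estimate is strong enough, the improbable move wins out.
```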

The sort of programming that enabled AlphaGo to play Go well enough to beat the best has already been put to use in analyzing images (identifying faces on your Facebook wall, noting possible obstructions in your car’s rearview camera) and speech (your phone knows what words you say). This “neural networking” gets better at performing a task the more input it receives – say a number of words to such a program, and it will get better at distinguishing among the words and identifying them; show it a number of different pictures, and it will learn to tell when a picture includes a particular person.
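
The “more input, better recognition” pattern shows up even in a miniature model. The sketch below trains a single-neuron stand-in for a real network on a made-up task – deciding which side of a hidden line a point falls on – and its accuracy climbs as it is fed more labeled examples. The task and all the numbers are invented for illustration:

```python
import math
import random

# A single "neuron" learning a hidden rule (which side of a line a
# point falls on). Real image and speech networks are vastly larger,
# but the pattern is the same: more examples, better recognition.
random.seed(0)

def make_example():
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    return (x, y), 1 if x + 2 * y > 0 else 0  # the hidden rule

def train(n_examples, epochs=20, lr=0.5):
    data = [make_example() for _ in range(n_examples)]
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x, y), label in data:
            pred = 1 / (1 + math.exp(-(w1 * x + w2 * y + b)))
            err = label - pred
            w1, w2, b = w1 + lr * err * x, w2 + lr * err * y, b + lr * err
    return w1, w2, b

def accuracy(model, trials=2000):
    w1, w2, b = model
    hits = sum((w1 * x + w2 * y + b > 0) == (label == 1)
               for (x, y), label in (make_example() for _ in range(trials)))
    return hits / trials

for n in (5, 50, 500):  # accuracy should climb with training-set size
    print(f"{n:>3} examples -> {accuracy(train(n)):.1%} accuracy")
```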

AlphaGo went further than accurately identifying some input, however, and this is what is so exciting, or worrying. Fed enough Go moves, an algorithm can learn to play – just as algorithms can learn to identify faces, or sounds, or objects. But AlphaGo doesn’t merely differentiate between legal and illegal moves: it strategizes and wins at a game that has a vast number of possible routes to the finish.

The ability to play moves that were never shown to the algorithm, like the 37th move in the second game against Lee Sedol, suggests that we may be farther along the road to creative processing than some would like. Going down the path of developing processors that can create novel solutions to problems, and model intuition better and better, brings closer the time when we will have to address thorny ethical questions.

When will such processors’ abilities be so close to the abilities we value in humans that we can’t avoid granting them some sort of moral standing? What sort of novel solutions will they need to produce before we stop calling them “it”, for instance?

Who can own the patent to an algorithm that, at a certain point in its training, may become creative, intuitive, or at some point, conscious? What are the norms involved in creating such entities? Is it immoral to bring into existence, for instance, artificial intelligences that have the sole purpose of making human lives better?

A practical set of concerns will arise as well: as technological advances press on, they will inevitably displace workers in occupations that no longer require human effort.

How will these technological developments influence our understanding of ourselves and our own consciousness? As we learn more about how we can artificially model learning and creativity, we will be challenged by algorithms like AlphaGo. Already the game has been changed by this novel perspective – the play at move 37 and the release of some of AlphaGo’s training games will no doubt improve human play and mark a kind of progress. Other developments in artificial processes may press on our perspectives and broaden our approaches in parallel ways. If machines can come up with answers that no human would have thought to give, it’s hard to know where that could lead.

Meredith is an Assistant Professor at the University of Wisconsin, Whitewater. She earned her PhD at the University of California, Riverside, with a research focus in Philosophy of Action and Practical Reasoning and continues to explore the relationship between reason and value. Her current research consists of investigating modes of agential endorsement: how an agent's understanding of what is good, what is reasonable, what she desires, and who she is, informs what she does. Meredith is also committed to public philosophy and applied ethics; in particular, she is invested in illuminating debates in biomedical ethics, ethics of technology, and philosophy of law. Her website can be found at: https://mermcfadden.wixsite.com/philosopher.