Checkmate, Yahtzee, 11010101 – I Resign! Game-playing machines

Author:  Charley Wang

Institution:  University of Toronto
Date:  February 2008

This is the story of the underdog.

This is the story of beating unfathomable odds, and of playing the perfect game.

This is the story of the machine, and how we taught it to challenge and defeat the best of us.




Very recently, a University of Alberta team led by Jonathan Schaeffer announced that they had ‘weakly’ solved the game of checkers. But don’t let the ‘weakly’ fool you – their program, called Chinook, can play every game of checkers to a win or a draw.

In other words, it plays a perfect game, every time.

But this is not where the story starts.

The story starts in 1770, with the invention of the world’s first game-playing machine. They called it “The Turk.”

The Turk was essentially a wooden man mounted on a wooden box. It looked no more complex than other devices of the time, except that it played chess – and played it well. For over 80 years the Turk toured all of Europe, beating chess masters and chess enthusiasts alike.

The Turk captivated audiences wherever it went, and drew the attention of great men like Benjamin Franklin and Napoleon Bonaparte.

But the Turk also drew many critics during its long life. They reasoned that no machine could possibly possess the imagination required to beat a human player at chess. Therefore, they said, the Turk must be a hoax.

They were right. After many years of service the Turk was finally retired and revealed by its owners to have been an elaborate hoax. (Standage, 2002)

Machines do not have imagination – you can disassemble the biggest, most sophisticated supercomputers on Earth and you won’t find a single trace of it. Yet, thanks to research into Artificial Intelligence (AI), even a cell phone can play chess.

According to Merriam-Webster’s dictionary, artificial intelligence is “the capability of a machine to imitate intelligent human behaviour.” This definition, though, is outdated.

“In the early days of AI research, the easiest path to achieving high performance was seen to be emulating the human approach,” says Jonathan Schaeffer, creator of Chinook. “This was fraught with difficulty, especially the problems of capturing and encoding human knowledge. Human-like strategies are not necessarily the best computational strategies.”


Strangely enough, it was this change from a human to a mechanical approach that allowed machines to compete with humans.

Computers have a natural talent for numbers, and so a computer’s idea of playing is to calculate all the possible moves and positions.

But hang on, if a computer can just calculate every possible move, isn’t it obvious that it will win every time?

Not exactly…

The Antagonists

The idea of “every possible move” is nowhere near as simple as it sounds.

In Tic-Tac-Toe, one of the world’s simplest ‘board’ games, there are 435 legal combinations of pieces, or ‘positions’. Compare that to checkers, where there are 5×10^20 positions. Or to chess with 10^50, or even to Go with over 10^170. (Weisstein, 2006)

It is difficult to convey just how large those numbers are.

With 5×10^20 dollars you could give every person on the planet 60 billion dollars and you’d still have enough left over to buy every single thing in North America. The number 10^50 is more than the number of atoms in the entire galaxy. And 10^170 is simply beyond big. You could square the number of subatomic particles in the known universe and you still wouldn’t even come close to 10^170.
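The hopelessness of brute force can be made concrete with a quick calculation. The sketch below uses the article’s position counts and a deliberately generous, assumed evaluation rate of one billion positions per second:

```python
# Back-of-envelope estimate: how long exhaustively visiting every position
# would take, at an (assumed, very generous) billion positions per second.
POSITIONS = {
    "Tic-Tac-Toe": 435,
    "Checkers": 5e20,
    "Chess": 1e50,
    "Go": 1e170,
}

EVALS_PER_SECOND = 1e9       # one position per nanosecond (assumption)
SECONDS_PER_YEAR = 3.15e7

for game, n in POSITIONS.items():
    years = n / EVALS_PER_SECOND / SECONDS_PER_YEAR
    print(f"{game:12s}: {n:.0e} positions, about {years:.1e} years")
```

Even at that rate, Tic-Tac-Toe finishes in under a microsecond while checkers alone takes on the order of ten thousand years – and chess and Go are absurdly far beyond that.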

Those are the astronomical odds standing against a simple computer. They make the computational power of even the world’s most powerful supercomputers seem paltry in comparison. Take, for example, Deep Blue, IBM’s high-profile chess machine. It ‘thinks’ about 14 moves ahead, which translates to over 4×10^21 positions.

In contrast, the world’s fastest computer, IBM’s Blue Gene/P, has a top speed of 3×10^15 floating-point operations per second (FLOPS) – the equivalent of 100,000 high-end laptops. (IBM, 2007)

Now, even if you could find and evaluate an entire position in just one floating-point operation, it would take the supercomputer more than two weeks to make a single move. Yet Deep Blue, in its series against grandmaster Garry Kasparov, was allowed an average of only 3 minutes per move.
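The two-week figure follows directly from the numbers above, assuming one position per floating-point operation:

```python
# Reproducing the article's estimate: Deep Blue's 14-move lookahead
# (~4e21 positions) evaluated on a 3e15 FLOPS machine, at the optimistic
# assumed rate of one position per floating-point operation.
positions = 4e21
flops_per_second = 3e15

seconds = positions / flops_per_second
days = seconds / 86400
print(f"{seconds:.2e} seconds, or about {days:.1f} days per move")
```

The result is roughly 15 days per move – hence “more than two weeks.”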

This is a tremendous victory over the odds, and there are many, many innovations behind it.

Acting like a machine

One early innovation in game-playing AIs was the Alpha-Beta search, which is based on the minimax principle.

Here’s how it works:

Assume we have two perfect players: Min and Max. Each position can then be assigned a value, with positive numbers representing good positions for Max, and negative numbers representing good positions for Min. Max’s goal is therefore to maximize the score, and Min’s to minimize it.

Now, pick a move and perform a full search of the leaves in that branch of our search tree, and use the smallest value as the ‘result’ for that branch. We use the smallest value because a perfect player will, by definition, always make the best possible counter-move.

Then we move on to the next branch. Like the first one, it splits into thousands of possibilities. But we pick out a single leaf. By chance, it’s lower than the result of the first branch.

Because Min is perfect, the result of this branch can be no higher than this leaf – in effect, the leaf sets a ceiling on the branch. And since that ceiling already sits below the result of the first branch, Max will never choose this branch. So now we can discard it, unsearched, and move on.

With this algorithm, an impossibly large search tree can be pruned into a scraggly search bush.
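The procedure above can be sketched in a few lines. This is a generic minimax with Alpha-Beta pruning; the `moves` and `evaluate` functions are hypothetical placeholders to be supplied by a particular game:

```python
import math

def alphabeta(state, depth, alpha, beta, maximizing, moves, evaluate):
    """Minimax search with Alpha-Beta pruning.

    moves(state) yields the successor states; evaluate(state) scores a
    leaf, positive for Max and negative for Min, as in the text.
    """
    children = list(moves(state))
    if depth == 0 or not children:
        return evaluate(state)
    if maximizing:
        value = -math.inf
        for child in children:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, moves, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:   # ceiling breached: Min would never let us
                break           # reach a better value here, so prune
        return value
    else:
        value = math.inf
        for child in children:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, moves, evaluate))
            beta = min(beta, value)
            if beta <= alpha:   # symmetric cutoff for Min
                break
        return value
```

On a toy tree of nested tuples – say `((3, 5), (2, 9))` with Max to move – the search returns 3: Min would answer the first branch with 3 and the second with 2, and on seeing the 2 it abandons the second branch without looking at the 9.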

A second, important innovation is the use of memory, particularly in the form of hash tables. A hash table is a list of positions that have already been evaluated, each indexed by a number computed from the position, called a hash key. Then, when a position comes up during search, we can very quickly check whether it has already been encountered.
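A minimal sketch of the idea follows. Real engines typically compute hash keys with schemes such as Zobrist hashing; here a plain tuple of the position stands in for the key, and `evaluate_with_memory` is a hypothetical name:

```python
# A minimal transposition table: a dict mapping position keys to scores,
# so a position already searched once is never evaluated again.
transposition = {}

def evaluate_with_memory(position, evaluate):
    key = tuple(position)        # hashable stand-in for a real hash key
    if key in transposition:     # seen before: reuse the stored score
        return transposition[key]
    score = evaluate(position)   # the expensive search would happen here
    transposition[key] = score
    return score
```

The payoff is large in games, because the same position is often reached through many different move orders.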

Memory also marks the difference between a ‘weak’ and a ‘strong’ solution to a game. A ‘weak’ solution guarantees perfect play only from the standard starting position, and it typically leans on memory – on enormous databases built by working backwards from winning positions. It is, in the language of mathematics, something like a ‘numerical’ solution to the game. Chinook, the virtual, undefeatable god of checkers, is therefore a ‘weak’ program. (Schaeffer et al., 2007)

A ‘strong’ solution, on the other hand, guarantees perfect play from any legal position. If a memory-based solution can be considered numerical, a strong solution is like a mathematical proof for the game itself – an unbreakable link from any position to the best possible outcome.
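Working backwards from the end of a game can be illustrated on a toy game (an illustration only – checkers databases are vastly larger): two players alternately take 1, 2 or 3 stones from a pile, and whoever takes the last stone wins. Labelling positions from the terminal one outwards solves the whole game:

```python
# Retrograde-style solve of a toy subtraction game: take 1-3 stones per
# turn, taking the last stone wins. Starting from the terminal position
# (0 stones), label every position a win or loss for the player to move.
def solve(max_stones, takes=(1, 2, 3)):
    win = {0: False}             # no stones left: player to move has lost
    for n in range(1, max_stones + 1):
        # n is a win if some move leaves the opponent in a losing position
        win[n] = any(t <= n and not win[n - t] for t in takes)
    return win

table = solve(10)
# Known result: piles divisible by 4 are losses for the player to move.
```

Every position in the table comes with a guaranteed outcome – a miniature ‘strong’ solution, built by exactly the backward extrapolation described above.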

That is, in some ways, the ‘holy grail’ of game AI, the end product of research into a game.

The End

But it’s not the end of the story. There are countless more search algorithms, memory access functions and time-saving innovations, and there are countless more to come. Machines have overcome great odds to get to where they are today, and will overcome many more.

But tomorrow, there will be even more innovation, more new ways to tackle old problems, more attempts to beat ever greater odds. However, one question persists: ‘Why?’

To some, the resources spent on cracking a board game are… ridiculous. They argue that energy devoted to game-specific innovations is better used elsewhere.

But we are only human, and we excel only at the things that inspire us. At the end of the day, watching pieces flow across a virtual board is more captivating than watching numbers dribble across the screen.

Beyond that, game-playing AIs also provide a test platform for any program that works with large data sets. In a game, it’s easy to tell whether an algorithm is working correctly or has gone horribly wrong.


In the centuries since the Turk’s first performance, game-playing machines have evolved greatly. But, more than that, these machines, once considered soulless automatons, have sparked some creation of their own.

The Alpha-Beta algorithm led to the development of heuristic search, which quickly analyzes complicated data and, in some respects, is a good simulation of human thought. (Nau, 1999)

Memory storage and access algorithms from Chinook have found use in DNA search programs, allowing researchers to traverse incredibly large sequences of codons. (Nelson, 2007)

Beyond that, game playing machines have drawn popular attention to developments in computer technology.

For though machines have no imagination of their own, they incite great imagination in our minds, in the minds of their makers.

Recommended Readings:

Standage, T. “The Turk: The Life and Times of the Famous Eighteenth-Century Chess-Playing Machine.” (2003) Published by: Berkley (TRD).

Weisstein, Eric W. "Chess." From MathWorld--A Wolfram Web Resource.

“IBM System Blue Gene Solution.” IBM.

Chan, P.Y., Choi, H.Y., Xiao, Z. “Game Trees, Alpha-Beta Search.” University of Alberta. March 6, 1997.

Schaeffer, J., Burch, N., Björnsson, Y., Müller, M., Lake, R., Lu, P., Sutphen, S. “Checkers is Solved.” Sciencexpress. Published July 19, 2007.

Nau, D. S. “AI Game-Playing Techniques: Are They Useful for Anything Other Than Games? A Synopsis of the Panel Discussion at IAAI-98.” American Association for Artificial Intelligence, Spring 1999. Pages 116-117.

Nelson, B. “Checkers Computer Becomes Invincible.” MSNBC Science and Technology. July 19, 2007.

Written by Charley Wang

Reviewed by Justin Chakma, Pooja Ghatalia

Published by Pooja Ghatalia