Wednesday, March 11, 2009

Humans no match for Go bots

Humans No Match for Go Bot Overlords

By Brandon Keim

Wired Science, March 10, 2009 7:10:18 PM

Categories: Artificial Intelligence, Brain and Behavior, Cognition, Games

For the last two decades, human cognitive superiority had a distinctive sound: the soft click of stones placed on a wooden Go board.

But once again, artificial intelligence is asserting its domination over gray matter. Just a few years ago, the best Go programs were routinely beaten by skilled children, even when given a head start. Artificial intelligence researchers routinely said that computers capable of beating our best were literally unthinkable. And so it was. Until now.

"It's a silly human conceit that such a domain would exist, that there's something only we can figure out with our wetware brains," said David Doshay, a University of California at Santa Cruz computer scientist. "Because at the same time, another set of humans is just as busily saying, 'Yes, but we can knock this problem into another domain, and solve it using these machines.'"

In February, at the Taiwan Open — Go's popularity in East Asia roughly compares to America's enthusiasm for golf — a program called MoGo beat two professionals. At an exhibition in Chicago, the Many Faces program beat another pro. The programs still had a head start, but the trend is clear. Arrayed by opposing players trying to capture space on its lined 19x19 grid, the black and white Go stones can end a game in 10^171 possible ways — about 10^81 times more configurations than there are elementary particles in the known universe.

Faced with such extraordinary complexity, our brains somehow find a path, navigating the possibilities using mechanisms only dimly understood by science. Both of the programs that have recently defeated humans used variations on mathematical techniques originally developed by Manhattan Project physicists to coax order from pure randomness.

Called the Monte Carlo method, it has driven computer programs to defeat ranking human players six times in the last year. That's a far cry from chess, the previous benchmark of human cognitive prowess, in which Deep Blue played Garry Kasparov to a panicked defeat in 1997, and Deep Fritz trounced Vladimir Kramnik in 2006. To continue the golf analogy, computer Go programs beat the equivalents of Chris Couch rather than Tiger Woods, and had a multi-stroke handicap. But even six victories was inconceivable not too long ago, and programmers say it won't be long before computer domination is complete.

There is, however, an asterisk to the programs' triumphs. Compared to the probabilistic foresight of our own efficiently configured biological processor — sporting 10^15 neural connections, capable of 10^16 calculations per second, times two — computer Go programs are inelegant. They rely on what Deep Blue designer Feng-Hsiung Hsu called the "substitution of search for judgment." They crunch numbers.

"People hoped that if we had a strong Go program, it would teach us how our minds work. But that's not the case," said Bob Hearn, a Dartmouth College artificial intelligence programmer. "We just threw brute force at a problem we thought required intellect."

If only we knew what our own brains were doing. Inasmuch as human Go prowess is understood, it's explained in terms of pattern recognition and intuition. "When there are groups of stones arranged in certain ways, you can build visual analogies that work very well. You can think, 'This configuration radiates influence to that part of the board' — and it turns out it's a useful concept," said Hearn. "The revolutionary people in the field have an intuitive sense, and can look at things completely differently from other people."

Image-based neuroscience supports this explanation, albeit vaguely. When researchers led by University of Minnesota cognitive neuroscientist Michael Atherton scanned the brains of people playing chess and compared them to Go-playing brains, they found heightened activation in the Go players' parietal lobes, a region responsible for processing spatial relationships. But these observations, said Atherton, were rudimentary. "The higher-level stuff, we didn't figure out," he said.
In a more recent brain-scanning study, Japanese researchers compared professional and amateur Go players as they contemplated opening- and end-stage moves. Both displayed parietal lobe activity. During the end stages, however, professionals had extremely high activity in their precuneus and cerebellum regions, where the brain integrates a sense of space with our bodies and motions.

Put another way, professionals fuse their consciousness into the decision tree of the game.
Go players have an ability "to think creatively and prune the search tree in an aesthetic sense," said Atherton. "They have a feel for the game."

Artificial intelligence researchers historically tried to harness this pattern-based approach, however poorly understood, to their Go programs. It wasn't easy. "When I've talked to Go professionals about how they come to their decisions, it's been difficult for them to describe why a move is right," said Doshay at UCSC, who designed a Go computer program called SlugGo. "Go is a game of living things, and you talk about it that way, as if the patterns might be alive."

But if turning cryptic statements from Go masters into working algorithms for determining the statistical health of game patterns was impossible, there didn't seem to be any other way of doing it. "It was possible to sidestep the cognitive issues by throwing brute force at chess," said Hearn, "but not at Go."

Compared to the challenge posed to a Go program, Deep Blue's computations — possible moves in response to a move, carried 12 cycles into the future — are back-of-the-napkin scribblings. "If you look at the game trees, there's about 30 possible moves you can make from a typical position. In Go, it's about 300. Right away, you get exponential scaling," said Hearn.

With every anticipated move, the possibilities continue to scale exponentially — and unlike chess, where captured pieces are counted immediately, Go territory can switch hands until the game's end. Exploring just a few branches of the tree is useless: every step taken must be pursued, exponential scale by scale, all the way to the game's end.
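The scale of that gap is easy to make concrete. A short sketch, using the rough branching factors quoted above (about 30 legal moves per chess position, about 300 per Go position):

```python
# Illustrating the exponential gap between chess and Go game trees,
# using the approximate branching factors quoted in the article.

def tree_size(branching_factor: int, depth: int) -> int:
    """Number of distinct move sequences of a given depth."""
    return branching_factor ** depth

# Looking just four moves ahead:
print(f"chess: {tree_size(30, 4):,}")   # 810,000 sequences
print(f"go:    {tree_size(300, 4):,}")  # 8,100,000,000 sequences
print(f"ratio: {tree_size(300, 4) // tree_size(30, 4):,}x")  # 10,000x
```

Each extra move of lookahead multiplies the gap by another factor of ten, which is why lookahead that is routine in chess is hopeless in Go.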

According to Doshay, the number of Go's end-states — 10^171 — is almost inconceivably smaller than the 10^1,100 different ways of getting there. Without patterns to eliminate whole swaths of choices from the outset, computers simply can't cope with it, at least not within time frames contained by the universe's remaining existence.

But to Doshay, guiding computers with human-rules patterns was wrong from the beginning. "If you want computers to do something well, you concentrate on the ways computers do things well," he said. "Computers can generate enormous quantities of random numbers very rapidly."

Enter the Monte Carlo method, named by its Manhattan Project pioneers for the casinos where they gambled. It consists of random simulations repeated again and again until patterns and probabilities emerge: the characteristics of an atomic bomb explosion, phase states in quantum fields, the outcome of a Go game. Programs like MoGo and Many Faces simulate random games from start to finish, over and over and over again, with no concern for figuring out which move is best.
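The essence of the method — repeat a random experiment until stable probabilities emerge — fits in a few lines. A textbook illustration (not from the article) estimates pi by random sampling:

```python
import random

# The Monte Carlo method in miniature: throw random darts at the unit
# square and count how many land inside the quarter circle of radius 1.
# The fraction of hits converges to pi/4.
random.seed(42)  # fixed seed so the run is repeatable

trials = 1_000_000
hits = sum(1 for _ in range(trials)
           if random.random() ** 2 + random.random() ** 2 <= 1.0)

estimate = 4 * hits / trials
print(f"pi is approximately {estimate}")  # within ~0.005 of 3.14159
```

No geometry is ever computed directly; the answer simply emerges from the statistics of many random trials, which is exactly the bet the Go programs make.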

"At first, I was dismissive," said Hearn. "I didn't think there was anything to be gained from random playouts." But the programmers had one extra trick: they crunched the accumulated statistics, too. Once a few million random games are modeled, probabilities take form. Thus informed, the programs devote extra processing power to promising branches, and less power to less-promising alternatives.
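That loop — simulate random games per candidate move, then trust the accumulated win rates — can be sketched in a few dozen lines. The sketch below is purely illustrative: it plays a toy game (Nim: players alternately remove one to three stones, and whoever takes the last stone wins) rather than Go, and it shows only the "flat" form of the idea; MoGo and Many Faces layer tree search and heuristics on top.

```python
import random

# A minimal, hypothetical sketch of flat Monte Carlo move selection,
# demonstrated on Nim instead of Go. The skeleton is the one described
# in the article: play random games to the end, keep the statistics,
# and favor the move with the best observed win rate.

def random_playout(stones: int, to_move: int) -> int:
    """Finish the game with uniformly random moves; return the winner (0 or 1)."""
    while True:
        take = random.randint(1, min(3, stones))
        stones -= take
        if stones == 0:
            return to_move          # taking the last stone wins
        to_move = 1 - to_move

def monte_carlo_move(stones: int, player: int, playouts: int = 2000) -> int:
    """Score each legal move by random playouts; return the best-scoring one."""
    best_move, best_rate = 1, -1.0
    for take in range(1, min(3, stones) + 1):
        if stones == take:
            return take             # taking everything wins outright
        wins = sum(random_playout(stones - take, 1 - player) == player
                   for _ in range(playouts))
        rate = wins / playouts      # observed win probability for this move
        if rate > best_rate:
            best_move, best_rate = take, rate
    return best_move

random.seed(1)
# From 5 stones, taking 1 leaves the opponent on 4 — a lost position
# under perfect play — and the playout statistics single it out.
print(monte_carlo_move(5, player=0))  # prints 1
```

Nothing in the program "knows" Nim strategy; the correct move surfaces because random playouts from the strong position simply win more often.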

The resulting game style looks human, but aside from a few rough human heuristics, the patterns articulated by our intuitions are unnecessary.

"The surprising, mysterious thing to me is that these algorithms work at all," said Hearn. "It's very puzzling."

Puzzling it might be, but the game is almost over. Hearn and others say that, having started to beat human professionals, Monte Carlo-based programs will only get better. They'll incorporate the results of earlier games into their heuristic arsenal, and within a few years — a couple decades at the most — be able to beat our best.

What is the larger significance of this? When computers finally triumphed at chess, the world was shocked. To some, it seemed that human cognition was less special than before. But to others, the competition is an illusion. After all, behind every machine is the hand that made it.

"There's a strong tendency in humans to have a conceit about how far we've advanced," said Doshay. "But we've only really started programming computers."


Images: 1) Flickr/Sigurdga. 2) David Doshay, with a 24-CPU Go-playing cluster. He's since expanded it to 72 CPUs running multiple Go modules. One module, still under development, is patterned after his Go teacher.

See Also:
Supercomputers Break Petaflop Barrier, Transforming Science
I See Your Petaflop and Raise You 19 More
Mouse Versus Supercomputer: No Contest
