What computer science can teach economics

(PhysOrg.com) -- Computer scientists have spent decades developing techniques for answering a single question: How long does a given calculation take to perform? Constantinos Daskalakis, an assistant professor in MIT's Computer Science and Artificial Intelligence Laboratory, has exported those techniques to game theory, a branch of mathematics with applications in economics, traffic management -- on both the Internet and the interstate -- and biology, among other things. By showing that some common game-theoretical problems are so hard that they'd take the lifetime of the universe to solve, Daskalakis is suggesting that they can't accurately represent what happens in the real world.
Game theory is a way to mathematically describe strategic reasoning -- of competitors in a market, drivers on a highway, or predators in a habitat. In the last five years alone, the Nobel Prize in economics has twice been awarded to game theorists for their analyses of multilateral treaty negotiations, price wars, public auctions and taxation strategies, among other topics.
In game theory, a "game" is any mathematical model that correlates different player strategies with different outcomes. One of the simplest examples is the penalty-kick game: In soccer, a penalty kick gives the offensive player a shot on goal with only the goalie defending. The goalie has so little reaction time that she has to guess which half of the goal to protect just as the ball is struck; the kicker tries to go the opposite way. In the game-theory version, the goalie always wins if both players pick the same half of the goal, and the kicker wins if they pick different halves. So each player has two strategies -- go left or go right -- and there are two outcomes -- kicker wins or goalie wins.
It's probably obvious that the best strategy for both players is to randomly go left or right with equal probability; that way, both will win about half the time. And indeed, that pair of strategies is what's called the "Nash equilibrium" for the game. Named for John Nash -- who taught at MIT and whose life was the basis for the movie A Beautiful Mind -- the Nash equilibrium is the point in a game where the players have found strategies that none has the incentive to change unilaterally. In this case, for instance, neither player can improve her outcome by going one direction more often than the other.
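For readers who want to see the arithmetic, the sketch below encodes the penalty-kick game as a 2-by-2 zero-sum game and checks that the 50/50 mix really is an equilibrium; the payoff values and variable names are illustrative choices, not anything taken from Daskalakis's work.

    # Kicker's payoff: 1 if the two players choose different directions, 0 otherwise.
    # Rows are the kicker's choice (left, right); columns are the goalie's choice.
    KICKER_PAYOFF = [[0, 1],
                     [1, 0]]

    def expected_payoff(kicker_mix, goalie_mix, payoff=KICKER_PAYOFF):
        """Kicker's expected payoff when both players randomize over left/right."""
        return sum(kicker_mix[i] * goalie_mix[j] * payoff[i][j]
                   for i in range(2) for j in range(2))

    # The candidate Nash equilibrium: both players go left or right with probability 0.5.
    kicker, goalie = [0.5, 0.5], [0.5, 0.5]

    for pure in ([1.0, 0.0], [0.0, 1.0]):
        # The kicker cannot raise her payoff by switching to either pure strategy ...
        assert expected_payoff(pure, goalie) <= expected_payoff(kicker, goalie)
        # ... and the goalie (who wants the kicker's payoff low) cannot lower it.
        assert expected_payoff(kicker, pure) >= expected_payoff(kicker, goalie)

    print(expected_payoff(kicker, goalie))  # 0.5: each side wins about half the time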
Of course, most games are more complicated than the penalty-kick game, and their Nash equilibria are more difficult to calculate. But the reason the Nash equilibrium is associated with Nash's name -- and not the names of other mathematicians who, over the preceding century, had described Nash equilibria for particular games -- is that Nash was the first to prove that every game must have a Nash equilibrium. Many economists assume that, while the Nash equilibrium for a particular market may be hard to find, once found, it will accurately describe the market's behavior.
Daskalakis's doctoral thesis -- which won the Association for Computing Machinery's 2008 dissertation prize -- casts doubt on that assumption. Daskalakis, working with Christos Papadimitriou of the University of California, Berkeley, and the University of Liverpool's Paul Goldberg, has shown that for some games, the Nash equilibrium is so hard to calculate that all the computers in the world couldn't find it in the lifetime of the universe. And in those cases, Daskalakis believes, human beings playing the game probably haven't found it either.
In the real world, competitors in a market or drivers on a highway don't (usually) calculate the Nash equilibria for their particular games and then adopt the resulting strategies. Rather, they tend to calculate the strategies that will maximize their own outcomes given the current state of play. But if one player shifts strategies, the other players will shift strategies in response, which will drive the first player to shift strategies again, and so on. This kind of feedback will eventually converge toward equilibrium: in the penalty-kick game, for example, if the goalie tries going in one direction more than half the time, the kicker can punish her by always going the opposite direction. But, Daskalakis argues, feedback won't find the equilibrium more rapidly than computers could calculate it.
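The sketch below makes that feedback loop concrete with one standard adjustment process, known as fictitious play, run on the penalty-kick game; the starting history, update rule and iteration count are illustrative assumptions rather than the specific dynamics Daskalakis analyzes.

    KICKER_PAYOFF = [[0, 1],   # kicker's payoff: 1 when the chosen directions differ
                     [1, 0]]

    kicker_counts, goalie_counts = [1, 0], [0, 1]  # arbitrary opening "history"

    for _ in range(10000):
        total_k, total_g = sum(kicker_counts), sum(goalie_counts)
        # The kicker picks the direction that does best against the goalie's observed mix.
        kicker_move = max(range(2), key=lambda i: sum(
            KICKER_PAYOFF[i][j] * goalie_counts[j] / total_g for j in range(2)))
        # The goalie picks the direction that holds the kicker's expected payoff lowest.
        goalie_move = min(range(2), key=lambda j: sum(
            KICKER_PAYOFF[i][j] * kicker_counts[i] / total_k for i in range(2)))
        kicker_counts[kicker_move] += 1
        goalie_counts[goalie_move] += 1

    # The empirical frequencies drift toward the 50/50 equilibrium mix.
    print([round(c / sum(kicker_counts), 3) for c in kicker_counts])
    print([round(c / sum(goalie_counts), 3) for c in goalie_counts])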
The argument has some empirical support. Approximations of the Nash equilibrium for two-player poker have been calculated, and professional poker players tend to adhere to it -- particularly if they've read any of the many books or articles on game theory's implications for poker. The Nash equilibrium for three-player poker, however, is intractably hard to calculate, and professional poker players don't seem to have found it.
How can we tell? Daskalakis's thesis showed that the Nash equilibrium belongs to a set of problems that is well studied in computer science: those whose solutions may be hard to find but are always relatively easy to verify. The canonical example of such a problem is the factoring of a large number: The solution seems to require trying out lots of different possibilities, but verifying an answer just requires multiplying a few numbers together. In the case of Nash equilibria, however, the solutions are much more complicated than a list of prime factors. The Nash equilibrium for three-person Texas hold 'em, for instance, would consist of a huge set of strategies for any possible combination of players' cards, dealers' cards, and players' bets. Exhaustively characterizing a given player's set of strategies is complicated enough in itself, but to the extent that professional poker players' strategies in three-player games can be characterized, they don't appear to be in equilibrium.
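The find-versus-verify asymmetry behind the factoring analogy can be shown in a few lines; the number used below is an illustrative toy, small enough to search instantly rather than anything of cryptographic scale.

    def find_factor(n):
        """Finding: scan candidate divisors one by one (slow as n grows)."""
        for d in range(2, int(n ** 0.5) + 1):
            if n % d == 0:
                return d, n // d
        return None  # n is prime

    def verify_factorization(n, p, q):
        """Verifying: a single multiplication (plus sanity checks) settles it."""
        return p > 1 and q > 1 and p * q == n

    n = 2021
    p, q = find_factor(n)                       # (43, 47), found by searching
    print(p, q, verify_factorization(n, p, q))  # the check itself is trivial: True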
Anyone who's into computer science -- or who read "Explained: P vs. NP" on the MIT News web site last week -- will recognize the set of problems whose solutions can be verified efficiently: It's the set that computer scientists call NP. Daskalakis proved that the Nash equilibrium belongs to a subset of NP consisting of hard problems with the property that a solution to one can be adapted to solve all the others. (The cognoscenti might guess that this is the set called NP-complete; but the fact that a Nash equilibrium always exists disqualifies the problem from NP-completeness. In fact, it is complete for a different class, called PPAD.)
That result "is one of the biggest yet in the roughly 10-year-old field of algorithmic game theory," says Tim Roughgarden, an assistant professor of computer science at Stanford University. It "formalizes the suspicion that the Nash equilibrium is not likely to be an accurate predictor of rational behavior in all strategic environments."
Given the Nash equilibrium's unreliability, says Daskalakis, "there are three routes that one can go. One is to say, We know that there exist games that are hard, but maybe most of them are not hard." In that case, Daskalakis says, "you can seek to identify classes of games that are easy, that are tractable."
The second route, Daskalakis says, is to find mathematical models other than Nash equilibria to characterize markets -- models that describe transition states on the way to equilibrium, for example, or other types of equilibria that aren't so hard to calculate. Finally, he says, it may be that where the Nash equilibrium is hard to calculate, some approximation of it -- where the players' strategies are almost the best responses to their opponents' strategies -- might not be. In those cases, the approximate equilibrium could turn out to describe the behavior of real-world systems.
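As a rough illustration of that third route, the sketch below checks whether a pair of strategies in the penalty-kick game is an approximate, or epsilon-Nash, equilibrium, meaning no player could gain more than epsilon by deviating. For brevity it checks only the kicker's side, and the game, the epsilon value and the function name are illustrative assumptions, not definitions from Daskalakis's papers.

    KICKER_PAYOFF = [[0, 1], [1, 0]]  # the penalty-kick game from earlier

    def kicker_gain_from_deviating(kicker_mix, goalie_mix, payoff=KICKER_PAYOFF):
        """How much the kicker could gain by switching to her best pure response."""
        current = sum(kicker_mix[i] * goalie_mix[j] * payoff[i][j]
                      for i in range(2) for j in range(2))
        best = max(sum(goalie_mix[j] * payoff[i][j] for j in range(2))
                   for i in range(2))
        return best - current

    # Against a slightly lopsided goalie, the 50/50 kicker is not an exact best
    # response, but she is within epsilon = 0.1 of one.
    print(kicker_gain_from_deviating([0.5, 0.5], [0.55, 0.45]) <= 0.1)  # True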
As for which of these three routes Daskalakis has chosen, "I'm pursuing all three," he says.
Provided by Massachusetts Institute of Technology