The Evolution of Cooperation*
Professor of Political Science and Public Policy, University of Michigan, Ann Arbor. Dr. Axelrod is a member of the American National Academy of Sciences and the American Academy of Arts and Sciences. His honors include a MacArthur Foundation Fellowship for the period 1987 through 1992.
Under what conditions will cooperation emerge in a world of egoists without central authority? This question has intrigued people for a long time. We all know that people are not angels, and that they tend to look after themselves and their own first. Yet we also know that cooperation does occur and that our civilization is based upon it. A good example of the fundamental problem of cooperation is the case where two industrial nations have erected trade barriers to each other’s exports. Because of the mutual advantages of free trade, both countries would be better off if these barriers were eliminated. But if either country were to eliminate its barriers unilaterally, it would find itself facing terms of trade that hurt its own economy. In fact, whatever one country does, the other country is better off retaining its own trade barriers. Therefore, the problem is that each country has an incentive to retain trade barriers, leading to a worse outcome than would have been possible had both countries cooperated with each other.
Adapted from Robert Axelrod, The Evolution of Cooperation. New York: Basic Books, 1984. Reprinted by permission.
2 / Process of Change
The Computer Tournament

This basic problem occurs when the pursuit of self-interest by each leads to a poor outcome for all. To understand the vast array of specific situations like this, we need a way to represent what is common to them without becoming bogged down in the details unique to each. Fortunately, there is such a representation available: the famous Prisoner’s Dilemma game, invented about 1950 by two Rand Corporation scientists. In this game there are two players. Each has two choices, namely “cooperate” or “defect.” The game is called the Prisoner’s Dilemma because in its original form two prisoners face the choice of informing on each other (defecting) or remaining silent (cooperating). Each must make the choice without knowing what the other will do. One form of the game pays off as follows:
Players’ Choices                                        Payoff
If both players defect:                                 Both players get $1.
If both players cooperate:                              Both players get $3.
If one player defects while the other cooperates:       The defector gets $5 and the cooperator gets zero.
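The payoff table above can be sketched as a small lookup function. This is an illustrative sketch, not code from the tournament; the names are my own.

```python
# One round of the Prisoner's Dilemma, with the dollar payoffs from the text.
# 'C' = cooperate, 'D' = defect.
PAYOFFS = {
    ('C', 'C'): (3, 3),  # both cooperate: $3 each
    ('D', 'D'): (1, 1),  # both defect: $1 each
    ('D', 'C'): (5, 0),  # defector gets $5, cooperator gets $0
    ('C', 'D'): (0, 5),
}

def payoff(my_move, their_move):
    """Return (my payoff, their payoff) for a single round."""
    return PAYOFFS[(my_move, their_move)]
```

Note that for either choice the other player might make, defecting pays strictly more: `payoff('D', x)[0] > payoff('C', x)[0]` for both `x = 'C'` and `x = 'D'`, which is exactly the dominance argument in the next paragraph.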
One can see that no matter what the other player does, defection yields a higher payoff than cooperation. If you think the other player will cooperate, it pays for you to defect (getting $5 rather than $3). On the other hand, if you think the other player will defect, it still pays for you to defect (getting $1 rather than zero). Therefore the temptation is to defect. But the dilemma is that if both defect, both do worse than if both had cooperated. To find a good strategy to use in such situations, I invited experts in game theory to submit programs for a computer Prisoner’s Dilemma tournament – much like a computer chess tournament. Each of these strategies was paired off with each of the others to see which would do best overall in repeated interactions. Amazingly enough, the winner was the simplest of all candidates submitted. This was a strategy of simple reciprocity which cooperates on the first move and then does whatever the other player did on the previous move. Using an American colloquial phrase, this strategy was named Tit for Tat. A second round of the tournament was conducted in which many more entries were submitted by amateurs and professionals alike, all of whom were aware of the results of the first round. The result was another victory for simple reciprocity. The analysis of the data from these tournaments reveals four properties which tend to make a strategy successful:...
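The reciprocity strategy and the repeated pairing it competed in can be sketched in a few lines. This is a minimal illustration of the idea, not the tournament code itself; the function names, the opponent strategy, and the round count are my own assumptions.

```python
def tit_for_tat(opponent_history):
    """Cooperate on the first move; thereafter copy the opponent's last move."""
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """A hypothetical rival entry that defects unconditionally."""
    return 'D'

def play_match(strategy_a, strategy_b, rounds=10):
    """Play a repeated Prisoner's Dilemma and return the two total scores,
    using the dollar payoffs from the text."""
    payoffs = {('C', 'C'): (3, 3), ('D', 'D'): (1, 1),
               ('D', 'C'): (5, 0), ('C', 'D'): (0, 5)}
    history_a, history_b = [], []  # each player's own past moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each strategy sees the other's history
        move_b = strategy_b(history_a)
        gain_a, gain_b = payoffs[(move_a, move_b)]
        score_a += gain_a
        score_b += gain_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b
```

Over ten rounds, two Tit for Tat players each earn $30 through sustained mutual cooperation, while Tit for Tat against a pure defector loses only the first round before retaliating ($9 versus $14) – illustrating why reciprocity does well overall even though it never beats its individual opponent.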