The Prisoner’s Dilemma
By James Tobin
“The Prisoner’s Dilemma captured the essence of the tension between doing what is good for the individual and what is good for everyone.” – Robert Axelrod
-
Chapter 1 Two Rats in a Jail
Two bank robbers happen to meet. They decide to pull a job together.
The cops nab them, but without enough evidence to convict. They need a confession. And they know both robbers are unlikely to talk, since if neither implicates the other, the cops can keep them in jail for only 30 days.
So they put the two in separate cells. They go to the first prisoner and say:
“If you rat on your partner and he stays mum, we’ll let you go and he’ll do ten years.”
“If you both rat on each other, you’ll both do eight years.”
Then they go to the second prisoner and say the same thing.
The first prisoner thinks it over.
“If he rats on me and I don’t rat on him, then I lose big-time. If I rat on him and he doesn’t rat on me, then I win big-time. Either way, the smart move is to rat on him. I’ll just hope he’s a sucker and doesn’t rat on me.”
The second prisoner reasons the same way.
So they rat on each other, and the cops get their two convictions.
If the prisoners had cooperated, both would have gotten off easy. Instead, the rational pursuit of self-interest has put them both in a world of pain.
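The trap can be checked with a little arithmetic. Here is a minimal sketch in Python (the move names are invented for illustration), using the sentences the cops offered as the payoffs:

    # Years in jail for one prisoner, indexed by (my move, partner's move).
    SENTENCE = {
        ("mum", "mum"): 1 / 12,  # both stay silent: 30 days each
        ("mum", "rat"): 10,      # I stay mum, he rats: I do ten years
        ("rat", "mum"): 0,       # I rat, he stays mum: I go free
        ("rat", "rat"): 8,       # we both rat: eight years each
    }

    # Whatever the partner does, ratting always means less jail time.
    # That is what makes betrayal the "smart" move for each prisoner alone.
    for partner in ("mum", "rat"):
        assert SENTENCE[("rat", partner)] < SENTENCE[("mum", partner)]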
That is the Prisoner’s Dilemma. It is the most famous puzzle in the scientific field called game theory, the mathematical analysis of strategic interactions between rivals. The puzzle was invented in 1950 by two scientists at the RAND Corporation, Merrill Flood and Melvin Dresher.
Robert Axelrod, professor emeritus in U-M’s Ford School of Public Policy and the Department of Political Science, encountered it as a young man in the 1960s.
He could scarcely have imagined how far his exploration of the Prisoner’s Dilemma would go.
-
Chapter 2 A Global Gameboard
Axelrod had been a very brainy youngster — so brainy that he was one of 40 students selected as the nation’s most promising future scientists in the Westinghouse Science Talent Search. (The winners met with President John F. Kennedy in 1961. Another winner that year was Axelrod’s future colleague at U-M, President Mary Sue Coleman.)
As an undergraduate at the University of Chicago, he majored in mathematics. But in the fall of 1962, the Cuban Missile Crisis turned his attention to international relations.
It was crazy, he thought, that millions of lives hung on the calculations of two players on a global gameboard. Maybe he could use math to make some small contribution to a solution.
So he earned a Ph.D. in political science at Yale, and in 1974, after several years at the University of California at Berkeley, he joined the Michigan faculty.
The problem of conflict between powerful actors, whether in the cloakrooms of Capitol Hill or in the global Cold War, remained very much on his mind.
Were there conditions, he wondered, in which powerful rivals could move past mutual assured destruction toward mutual cooperation?
Could you crystallize those conditions in a mathematical model? Could you somehow weed out all the noisy factors of history and emotion to find insights on the choices that lead to peace or war?
Axelrod’s father had made his living as a commercial artist. In his spare time he painted watercolors of outdoor scenes. He told his son that his paintings were not like photographs. He wasn’t recording every visible detail. He was painting only what mattered to him in the scene, or how the scene affected him.
As he thinks back on it now, Axelrod said recently, he realizes that his own mathematical modeling of human behavior drew something from what his father said about painting. He isn’t trying to reproduce every factor in a complex interaction. He’s trying to model the things that matter most.
“To me,” he would write later, “the Prisoner’s Dilemma captured the essence of the tension between doing what is good for the individual and what is good for everyone.”
Yet as he read the writings of experts on the Prisoner’s Dilemma in graduate school, he had grown frustrated. “There was no clear answer to the question of how to avoid conflict, or even how an individual (or country) should play the game.”
After he got settled in Ann Arbor, as he was planning his next research projects, his mind returned to the problem. He remembered how, as a kid, he had fooled around with versions of checkers played on a computer. You could write programs to test strategies, then see which strategy was the best.
In 1977, the two ideas came together. He imagined a Prisoner’s Dilemma tournament waged by lines of computer code. Then he invited experts in game theory to submit their strategies.
Just maybe, he thought, some contestant would play the Prisoner’s Dilemma in a way that showed a path out of the trap of self-interest.
-
Chapter 3 How to Win?
The scenario of the bank robbers is the Prisoner’s Dilemma in its pure form. It isn’t a game. It’s just a brain-teaser.
Game theorists wanted to do more with it, and they wanted something closer to the real world. In everyday life, from business to politics to diplomacy, selfish actors — “egoists,” in game-theory lingo — compete not just once but often. They get to know each other and act on that knowledge.
So the gamers began to play what they called the iterated (meaning repeated) prisoner’s dilemma. Here the match between the prisoners is staged again and again, like kids playing rock-paper-scissors.
A contestant gets points for each match-up: the lowest if you cooperate and your opponent betrays you; the highest if you betray them and they cooperate; a solid middle score if you both cooperate; a meager one if you both betray. Your scores are tallied over many match-ups.
You can’t communicate with the other guy. But you do learn their moves. So game by game, if you pay attention, you can try to figure out how they act and use that knowledge to your advantage.
So how do you win?
It’s not like chess, where it’s winner-take-all. In the iterated Prisoner’s Dilemma, as Axelrod writes, “the interests of the players are not in total conflict. Both players can do well by getting the reward for mutual cooperation or both can do poorly by getting the punishment for mutual defection… So unlike in chess, in the Prisoner’s Dilemma it is not safe to assume that the other player is always out to get you.
“In fact…the strategy that works best depends directly on what strategy the other player is using and, in particular, on whether this strategy leaves room for the development of mutual cooperation.”
-
Chapter 4 The First Tournament
He reached out to experts in game theory and the Prisoner’s Dilemma. He invited them to submit entries and explained the rules:
— Each contestant would submit lines of code in either FORTRAN or BASIC, the era’s main computer languages. Each submission was a “decision rule” — a program that chooses cooperation or betrayal on each move, based on knowledge of all the opponent’s moves in the game so far.
— It would be a round robin, with every contestant matched against every other contestant in a game of 200 successive moves of the Prisoner’s Dilemma. Each “decision rule” would also be matched against its own twin and against a program called “Random,” which betrayed or cooperated at random.
— Contestants could not communicate with each other.
— Points would be awarded in each match-up as follows: three for mutual cooperation; one for mutual betrayal; five to a player who betrays when the other cooperates; and zero to a player who cooperates when the other betrays.
When it was over, each player’s total points would be tallied and a winner declared.
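The mechanics of such a tournament can be sketched in a few lines of modern Python. This is a simplified assumption of the setup, not Axelrod’s actual code (the real entries were FORTRAN and BASIC programs): each strategy is a function that looks at the opponent’s moves so far and returns "C" (cooperate) or "D" (betray).

    import itertools
    import random

    # Points per move for (me, them): reward, punishment, temptation, sucker.
    PAYOFF = {("C", "C"): (3, 3), ("D", "D"): (1, 1),
              ("D", "C"): (5, 0), ("C", "D"): (0, 5)}

    def play_match(rule_a, rule_b, rounds=200):
        """One 200-move match; each rule sees the opponent's moves so far."""
        hist_a, hist_b = [], []
        score_a = score_b = 0
        for _ in range(rounds):
            a, b = rule_a(hist_b), rule_b(hist_a)
            pa, pb = PAYOFF[(a, b)]
            score_a += pa
            score_b += pb
            hist_a.append(a)
            hist_b.append(b)
        return score_a, score_b

    def random_rule(opponent_history):
        """The 'Random' entry: cooperate or betray by coin flip."""
        return random.choice("CD")

    def round_robin(entries):
        """Every rule meets every other rule, its own twin, and Random."""
        field = list(entries) + [random_rule]
        totals = [0] * len(field)
        for i, j in itertools.combinations_with_replacement(range(len(field)), 2):
            si, sj = play_match(field[i], field[j])
            totals[i] += si
            if i != j:
                totals[j] += sj
        return totals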
One by one, 14 entries arrived in Axelrod’s campus mailbox. They came from game-theory practitioners in psychology, economics, political science, mathematics and sociology.
He bundled them up and walked them over to the University Computer Center on North University.
He had to wait overnight while the entries were processed and the University’s mainframe chugged through its work.
The next day he went over to pick up the results. They came on a thick set of printouts — columns of figures representing all the points won and lost in all the various match-ups.
He did the tallies, stared at them, checked them. Then he sat back in some astonishment.
-
Chapter 5 Tit for Tat
He had expected the winning strategy to be at least as complex as the super-sophisticated skeins of code that typically won in computer chess tournaments.
But the strategy with the highest overall score in his Prisoner’s Dilemma tournament was the simplest one submitted.
The winning “decision rule” contained just two instructions:
- In the first match-up, cooperate.
- In every match-up after that, do what the opponent did in the preceding match-up.
Cooperate immediately after they cooperate. Betray immediately after they betray.
Tit for tat.
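As a decision rule in the tournament’s sense, the whole strategy fits in a few lines. A sketch, in the same shape as the tournament code above:

    def tit_for_tat(opponent_history):
        """Cooperate first; after that, echo the opponent's last move."""
        if not opponent_history:       # first match-up: cooperate
            return "C"
        return opponent_history[-1]    # then do what they just did

Fed to play_match above, it cooperates forever with a cooperator and settles into mutual betrayal against a chronic betrayer.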
It had been submitted by a mathematician and social scientist named Anatol Rapoport. His victory in a game of cold strategic calculation was remarkable in part because he was best known as a pioneer in the field of peace research.
Rapoport’s interests were astonishingly broad. Originally trained as a classical musician, he had shifted to mathematics, then applied math to the study of biology, psychology and human interaction. He was deeply engaged in game theory and made important contributions to the field. But he also challenged game theorists to heed the role of conscience in strategic decisions.
From 1955 to 1970 he had been a professor at Michigan. In disillusionment over U.S. policy in Vietnam, he had moved to Canada, where he joined the faculty of the University of Toronto.
Later, Rapoport told Axelrod he didn’t much care for the Tit for Tat strategy. It was a little harsh for a peace researcher. But he had wanted to see how it would do.
It had done very well. But Axelrod figured some bright game theorist was bound to beat it with a more sophisticated decision rule.
So he organized a second tournament.
-
Chapter 6 Testers and Tranquilizers
He placed ads in magazines for computer enthusiasts, and this time he offered more information: he told prospective competitors that Tit for Tat had won the first tournament, and he shared his analysis of why Tit for Tat and other high-scoring strategies had done well while others had done poorly.
He was setting up Tit for Tat with a big bull’s-eye on its chest — a goad to challengers. He wanted to see what they would come up with.
The second tournament drew 62 entries from the United States, Canada, Great Britain, Norway, Switzerland and New Zealand, including a 10-year-old computer whiz and professors in computer science, physics, economics, psychology, mathematics, sociology, political science and evolutionary biology.
The winner: The peacenik Anatol Rapoport, playing Tit for Tat.
“What…again?” Axelrod asked himself.
Tit for Tat had fought off clever and certainly more complex competitors. To name just a few, there were:
— “Tester,” a sneaky decision rule that scouts the territory for the soft-hearted, exploits perceived weaknesses, but retreats to Tit for Tat when it encounters a tough response.
— “Downing” (named for its creator, Leslie L. Downing, an authority on the psychology of extremism). The strategy is meant to figure out the behavior of its opponent, then adapt to that behavior to achieve the best long-term score. Sounds super-smart — but it came in 40th.
— “Tranquilizer,” which tries to lull competitors into repeated cooperation, then slams them with betrayals.
— “Joss,” another sneaky strategy, the work of a Swiss mathematician. It acts mostly like Tit for Tat, but betrays in response to cooperation in 10 percent of its turns. It came in 29th.
— “Friedman,” brainchild of an economist, a tough decision rule that is never the first to betray, but once it’s betrayed, betrays in turn on every remaining move. It came in 52nd.
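Two of those descriptions are precise enough to sketch in the style of the earlier code. The details here are illustrative guesses at the published rules, not the originals:

    import random

    def friedman(opponent_history):
        """Never the first to betray; after any betrayal, betrays forever."""
        return "D" if "D" in opponent_history else "C"

    def joss(opponent_history):
        """Mostly Tit for Tat, but betrays after roughly 10 percent of the
        opponent's cooperations."""
        if not opponent_history:
            return "C"
        if opponent_history[-1] == "D":
            return "D"
        return "D" if random.random() < 0.10 else "C"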
Why had Tit for Tat done so well? And what did it mean?
-
Chapter 7 Be Nice
Tit for Tat won because it was nice.
That was Axelrod’s term of art for Tit for Tat’s behavior. To be nice in the Prisoner’s Dilemma means never to be the first to betray a rival.
And niceness was good not only for Tit for Tat. It was the defining characteristic of the eight top entries, which departed from Tit for Tat in various ways.
But Tit for Tat was not a wimp. It did not turn the other cheek. If struck, it struck back.
Then it forgave, or was ready to.
Be nice. Be ready to forgive. But don’t be a pushover.
That’s advice that many a parent has given many a child.
In fact, the victories of Tit for Tat might be taken as scientific confirmation of those traditional nostrums.
No, not quite, Axelrod says.
“I wouldn’t use the word ‘confirmation,’” he said. “That’s a little strong. That would be used for confirmation of Einstein’s theory of this or that. I’d use the word ‘support.’ It does support that. You can also observe that turning the other cheek has its problems, because it actually encourages the other guy to take advantage of you if you do too much of that.”
One strategy would have beaten Tit for Tat in the second tournament. Axelrod had served it up to all the competitors as an example of how to do their submissions. But no one had used it.
It was called Tit for Two Tats. Where Tit for Tat retaliates immediately against a betrayal, Tit for Two Tats lets the opponent get away with two betrayals in a row — then strikes back.
That’s even nicer.
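In code, the difference from Tit for Tat is one condition, as in this sketch:

    def tit_for_two_tats(opponent_history):
        """Strike back only after two betrayals in a row."""
        if opponent_history[-2:] == ["D", "D"]:
            return "D"
        return "C"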
-
Chapter 8 Four Rules
After careful analysis and more testing, Axelrod moved toward a conclusion that turned on its head an age-old suspicion of philosophers and kings — that selfish human beings, whether acting alone in a bar or in great aggregations, are likely to wind up in a state of war.
In the measured, undramatic language of a political scientist, Axelrod puts it this way in the first of his books to follow the tournaments, The Evolution of Cooperation (1984): “The analysis of the tournaments indicates that there is a lot to be learned about coping in an environment of mutual power. Even expert strategists from political science, sociology, economics, psychology, and mathematics made the systematic errors of being too competitive for their own good, not being forgiving enough, and being too pessimistic about the responsiveness of the other side.”
In fact, Axelrod argues, under the right conditions it is entirely possible for independent actors, each acting in pursuit of their own interests, to evolve toward a state of cooperation in which all can thrive.
The implications reach into practically every sphere where humans operate.
Axelrod’s analysis suggests that cooperation is possible when people follow four rules:
- Avoid unnecessary conflict by cooperating as long as your opponent does;
- If your opponent betrays you without provocation, respond in kind — once;
- Then forgive the betrayal and cooperate again;
- Be clear and predictable. That is, always follow steps 1, 2 and 3, so your opponent comes to know how you act and can plan on that basis.
That behavior can foster human cooperation.
“The most fascinating point,” he would write, “was that Tit for Tat won the tournaments even though it could never do better than the player it was interacting with. Instead it won by its success at eliciting cooperation.”
In other words, the encouraging thing about Tit for Tat was not that it was good for itself. It was that it was good for everybody.
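The claim is easy to check with the sketches above: because Tit for Tat is never the first to betray, it can only tie or narrowly trail whichever rule it happens to be playing. Its victories come from the totals across the whole field. A hypothetical check against an always-betray rule:

    def always_defect(opponent_history):
        """A chronic betrayer."""
        return "D"

    # Tit for Tat loses this match narrowly (one sucker payoff on move one)...
    print(play_match(tit_for_tat, always_defect))   # (199, 204)
    # ...but it banks full mutual-cooperation scores with every nice rule,
    # which is where tournaments are won.
    print(play_match(tit_for_tat, tit_for_tat))     # (600, 600)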
-
Chapter 9 Just the Beginning
All that was really just the beginning of Axelrod’s work.
He and others began to explore the extended implications of what he had found in the computer tournaments.
Signs of Tit for Tat and other “nice” strategies of competition were discovered in unexpected places. For example, game theorists saw Tit for Tat in the behavior of British and German soldiers facing each other in the trenches of World War I, when informal cease-fires, even gift exchanges, developed between opposing units who tried the simple tactic of not shooting at each other.
Axelrod went beyond the Prisoner’s Dilemma to analyze the evolution of cooperative strategies in a variety of circumstances, with special attention to strategies for coping with error, misunderstanding and “noise.”
In a collaboration that became famous in scientific circles, Axelrod and the U-M evolutionary biologist William D. Hamilton developed insights on the evolution of cooperation in the relations of fish, birds, and apes. Evidence of cooperation was discovered among tumor cells.
Studies of how cooperative behavior might evolve among selfish actors have been conducted in economics, marketing, networked computers, international relations and the conduct of warfare, among other fields.
And in academe, Axelrod’s papers became some of the most frequently cited works in any discipline.
He was asked for advice by corporations and non-profits. He consulted with the U.S. Defense Department, the U.S. Navy and U.S. Cyber Command, as well as the World Bank and the United Nations.
And once, at a conference, a woman approached Axelrod to say Tit for Tat was doing wonders for her as she negotiated her divorce settlement.
More than half a century after he accepted an award for science from President Kennedy, Axelrod was invited back to the White House.
In 2014, President Barack Obama awarded him the President’s National Medal of Science “for interdisciplinary work on the evolution of cooperation, complexity theory, and international security, and for the exploration of how social science models can be used to explain biological phenomena.”
In the spring of 2019, the Board of Regents awarded professor emeritus status to Axelrod to cap his 45-year career at Michigan.
Sources included: Robert Axelrod, The Evolution of Cooperation (1984); Robert Axelrod, The Complexity of Cooperation (1997); Robert Axelrod, “Launching ‘The Evolution of Cooperation,’” Journal of Theoretical Biology 299 (2012); “Cooperation Without Genes,” U-M Research News (1980); Robert Hoffmann, “Twenty Years On: The Evolution of Cooperation Revisited,” Journal of Artificial Societies and Social Simulation (March 2000); James Gleick, “Prisoner’s Dilemma Has Unexpected Applications,” New York Times, June 17, 1986.