Prisoner's Dilemma Game


A Prisoner's Dilemma Game is a limited-information competitive non-sequential adversarial non-zero-sum game.



References

2013

  • http://en.wikipedia.org/wiki/Prisoner%27s_dilemma
    • The prisoner's dilemma is a canonical example of a game analyzed in game theory that shows why two individuals might not cooperate, even if it appears that it is in their best interests to do so. It was originally framed by Merrill Flood and Melvin Dresher working at RAND in 1950. Albert W. Tucker formalized the game with prison sentence rewards and gave it the name "prisoner's dilemma" (Poundstone, 1992), presenting it as follows:
      • Two members of a criminal gang are arrested and imprisoned. Each prisoner is in solitary confinement with no means of speaking to or exchanging messages with the other. The police admit they don't have enough evidence to convict the pair on the principal charge. They plan to sentence both to a year in prison on a lesser charge. Simultaneously, the police offer each prisoner a Faustian bargain. If he testifies against his partner, he will go free while the partner will get three years in prison on the main charge. Oh, yes, there is a catch … If both prisoners testify against each other, both will be sentenced to two years in jail.
    • In this classic version of the game, collaboration is dominated by betrayal; if the other prisoner chooses to stay silent, then betraying them gives a better reward (no sentence instead of one year), and if the other prisoner chooses to betray then betraying them also gives a better reward (two years instead of three). Because betrayal always rewards more than cooperation, all purely rational self-interested prisoners would betray the other, and so the only possible outcome for two purely rational prisoners is for them both to betray each other. The interesting part of this result is that pursuing individual reward logically leads the prisoners to both betray, but they would get a better reward if they both cooperated. In reality, humans display a systematic bias towards cooperative behavior in this and similar games, much more so than predicted by simple models of "rational" self-interested action.[1][2][3][4] (A short sketch of this dominance check, using the quoted sentence lengths, follows this excerpt.)

      There is also an extended "iterative" version of the game, where the classic game is played over and over between the same prisoners, and consequently, both prisoners continuously have an opportunity to penalize the other for previous decisions. If the number of times the game will be played is known to the players, then (by backward induction) two purely rational prisoners will betray each other repeatedly, for the same reasons as the classic version. In an infinite or unknown length game there is no fixed optimum strategy, and Prisoner's Dilemma tournaments have been held to compete and test algorithms. (A sketch of such an iterated match appears after the reference list below.)

      In casual usage, the label "prisoner's dilemma" may be applied to situations not strictly matching the formal criteria of the classic or iterative games: for instance, those in which two entities could gain important benefits from cooperating or suffer from the failure to do so, but find it merely difficult or expensive, not necessarily impossible, to coordinate their activities to achieve cooperation.
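
The dominance argument in the excerpt above comes down to comparing four sentence lengths. The following Python sketch encodes the quoted payoffs (0, 1, 2, or 3 years, where fewer is better) and checks that testifying is the better reply to either choice the partner can make; the names YEARS and best_response are illustrative, not taken from any library or standard formulation.

SILENT, TESTIFY = "stay silent", "testify"

# YEARS[(my_move, partner_move)] -> years I serve (fewer is better).
YEARS = {
    (SILENT, SILENT): 1,    # both convicted only on the lesser charge
    (SILENT, TESTIFY): 3,   # I stay silent, my partner testifies against me
    (TESTIFY, SILENT): 0,   # I testify, my partner stays silent
    (TESTIFY, TESTIFY): 2,  # we testify against each other
}

def best_response(partner_move):
    # The move that minimizes my sentence, given my partner's move.
    return min((SILENT, TESTIFY), key=lambda my_move: YEARS[(my_move, partner_move)])

# Testifying is the better reply to either choice (a dominant strategy),
# so two purely rational prisoners serve 2 years each, even though
# mutual silence would cost only 1 year each.
for partner_move in (SILENT, TESTIFY):
    print(f"If my partner chooses to {partner_move}, I should {best_response(partner_move)}.")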

  1. Fehr, Ernst; Fischbacher, Urs (October 23, 2003). "The nature of human altruism". Nature (Nature Publishing Group) 425 (6960): 785–791. doi:10.1038/nature02043. PMID 14574401. http://www.iwp.jku.at/born/mpwfst/04/nature02043_f_born.pdf. Retrieved February 27, 2013.
  2. Tversky, Amos; Shafir, Eldar (2004). Preference, Belief, and Similarity: Selected Writings. Massachusetts Institute of Technology Press. ISBN 9780262700931. http://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Preference,%20Belief,%20and%20Similarity%20Selected%20Writings%20(Bradford%20Books).pdf. Retrieved February 27, 2013.
  3. Ahn, Toh-Kyeong; Ostrom, Elinor; Walker, James (September 5, 2002). "Incorporating Motivational Heterogeneity into Game-Theoretic Models of Collective Action". Public Choice 117 (3–4). http://www.indiana.edu/~workshop/seminars/papers/ahnostromwalker_092402.pdf. Retrieved February 27, 2013.
  4. Oosterbeek, Hessel; Sloof, Randolph; van de Kuilen, Gijs (December 3, 2003). "Cultural Differences in Ultimatum Game Experiments: Evidence from a Meta-Analysis". Experimental Economics (Springer Science and Business Media B.V.) 7 (2): 171–188. doi:10.1023/B:EXEC.0000026978.14316.74. http://www.econ.nagoya-cu.ac.jp/~yhamagu/ultimatum.pdf. Retrieved February 27, 2013.
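
The iterated version described in the excerpt can be sketched the same way. This is only an illustration under assumed conventions: the round payoffs reuse the sentence lengths from the classic version, the two strategies shown (always-testify and the well-known tit-for-tat) are common tournament entries rather than a claimed optimum, and the 10-round match length is arbitrary; with a known, finite number of rounds, backward induction still favors constant betrayal between purely rational players.

SILENT, TESTIFY = "stay silent", "testify"

# Round payoffs reuse the sentence lengths from the classic version above.
YEARS = {
    (SILENT, SILENT): 1,
    (SILENT, TESTIFY): 3,
    (TESTIFY, SILENT): 0,
    (TESTIFY, TESTIFY): 2,
}

def always_testify(opponent_history):
    # Betray in every round, as backward induction prescribes when the
    # number of rounds is known and finite.
    return TESTIFY

def tit_for_tat(opponent_history):
    # Stay silent first, then copy the opponent's previous move.
    return opponent_history[-1] if opponent_history else SILENT

def play(strategy_a, strategy_b, rounds=10):
    # Play `rounds` repetitions; return the total years served by each player.
    history_a, history_b = [], []
    total_a = total_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)   # each strategy sees only the opponent's past moves
        move_b = strategy_b(history_a)
        total_a += YEARS[(move_a, move_b)]
        total_b += YEARS[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return total_a, total_b

print(play(tit_for_tat, tit_for_tat))        # (10, 10): sustained cooperation
print(play(always_testify, always_testify))  # (20, 20): mutual betrayal every round
print(play(tit_for_tat, always_testify))     # (21, 18): the cooperator is exploited once, then retaliates

Tournaments of the kind mentioned in the excerpt pit many such strategy functions against one another and rank them by total payoff.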
