Behavioral game theory
Behavioral game theory analyzes interactive strategic decisions and behavior using the methods of game theory,[1] experimental economics, and experimental psychology. Experiments include testing deviations from typical simplifications of economic theory such as the independence axiom[2] and the neglect of altruism,[3] fairness,[4] and framing effects.[5] As a research program, the subject is a development of the last three decades.[6] Traditional game theory focuses on mathematical equilibria, utility maximization, and rational choice; behavioral game theory, in contrast, focuses on the choices participants actually make in studies and is, in effect, game theory applied to experiments.[7] The choices studied in behavioral game theory are not always rational and do not always represent the utility-maximizing choice.[8]
History
Behavioral game theory began with the work of Allais in 1953 and Ellsberg in 1961, who discovered the Allais paradox and the Ellsberg paradox, respectively.[7] Both paradoxes show that choices made by participants in a game do not reflect the benefit they expect to receive from making those choices. In the 1970s the work of Vernon Smith showed that economic markets could be examined experimentally rather than only theoretically.[7] At the same time, several economists conducted experiments that motivated variations on traditional decision-making models, such as regret theory, prospect theory, and hyperbolic discounting.[7] These discoveries showed that actual decision makers consider many factors when making choices. For example, a person may seek to minimize the regret they will feel after making a decision and weigh their options based on the amount of regret they anticipate from each. Because traditional economic theory had not previously examined them, factors such as regret, along with many others, fueled further research.
Beginning in the 1980s, experimenters started examining the conditions that cause divergence from rational choice. Ultimatum and bargaining games examined the effect of emotions on predictions of opponent behavior. One of the best known examples of an ultimatum game is the television show Deal or No Deal, in which participants must decide whether to sell or continue playing based on monetary ultimatums given to them by “the banker.” These games also explored the effect of trust on decision-making outcomes and utility-maximizing behavior.[9] Common resource games were used to test experimentally how cooperation and social desirability affect subjects' choices. A real-life example of a common resource game is a party guest's decision about how much to take from a food platter. The guest's decision is affected not only by how hungry they are but also by how much of the shared resource (the food) is left and by whether the guest believes others would judge them for taking more. Experimenters during this period regarded behavior that did not maximize utility as the result of participants' flawed reasoning.[7] By the turn of the century, economists and psychologists expanded this research. Models based on rational choice theory were adapted to reflect decision-maker preferences and to attempt to rationalize choices that did not maximize utility.[7]
Comparison to traditional game theory
Traditional game theory uses theoretical models to determine the most beneficial choice for every player in a game.[10] It uses rational choice theory along with assumptions of players' common knowledge in order to predict utility-maximizing decisions.[10] It also allows players to predict their opponents' strategies.[11] Traditional game theory is primarily a normative theory, as it seeks to pinpoint the decision rational players should make but does not attempt to explain why that decision was made.[11] Rationality is a primary assumption of game theory, so it offers no explanation for different forms of rational decisions or for irrational decisions.[11]
Behavioral game theory is primarily a positive theory rather than a normative one.[11] A positive theory is objective and based on facts; it must be testable and can be shown to be true or false. A normative theory is subjective and based on opinions, and so cannot be proven true or false. Behavioral game theory attempts to explain decision making using experimental data.[11] The theory allows for both rational and irrational decisions because both are examined in real-life experiments. Specifically, behavioral game theory attempts to explain the factors that influence real-world decisions.[11] These factors are not explored in traditional game theory, but they can be postulated and observed using empirical data.[11] Findings from behavioral game theory therefore tend to have higher external validity and can be better applied to real-world decision-making behavior.[11]
Examples of games used in behavioral game theory research
- Signaling game
- Dictator game
- Ultimatum game
- Keynesian beauty contest
- Normal form game
- Cooperative game
Factors that affect rationality in games
Beliefs
Beliefs about other people in a decision-making game are expected to influence one's ability to make rational choices. However, beliefs about others can also cause experimental results to deviate from equilibrium, utility-maximizing decisions. In an experiment by Costa-Gomes and Weizsäcker (2008), participants were asked about their first-order beliefs about their opponents' actions before completing a series of normal-form games with other participants.[12] Participants complied with Nash equilibrium only 35% of the time, and they stated beliefs that their opponents would comply with traditional game theory equilibrium only 15% of the time.[12] In other words, participants believed their opponents would be less rational than they really were. The results of this study show that participants did not choose the utility-maximizing action and did not expect their opponents to do so either.[12] Moreover, participants did not choose the utility-maximizing action that corresponded to their own stated beliefs about their opponent's action:[12] even when they believed their opponent was more likely to make a certain choice, they still played as if their opponent were choosing randomly.[12] Another study, which examined participants on the TV show Deal or No Deal, found similar divergence from rational choice.[13] Participants were more likely to base their decisions on previous outcomes as the game progressed.[13] Risk aversion decreased when participants' expectations were not met within the game. For example, a contestant who experienced a string of positive outcomes was less likely to accept the banker's offer and end the game, and the same was true for a contestant who experienced primarily negative outcomes early in the game.[13]
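To make the notion of best-responding to one's own beliefs concrete, the sketch below uses a hypothetical 3×3 payoff matrix (not one of Costa-Gomes and Weizsäcker's actual games) and computes which action maximizes expected payoff given a stated first-order belief; in the study, participants' choices frequently failed to match the best response implied by their own stated beliefs.

```python
import numpy as np

# Hypothetical payoff matrix for the row player (illustrative, not from the
# study): entry [i, j] is the row player's payoff when the row player picks
# action i and the opponent picks action j.
row_payoffs = np.array([
    [8, 2, 0],
    [5, 5, 5],
    [0, 2, 9],
])

# A stated first-order belief: the probability the participant assigns to
# each of the opponent's three actions.
belief = np.array([0.6, 0.3, 0.1])

# Expected payoff of each of the row player's actions under that belief.
expected = row_payoffs @ belief          # [5.4, 5.0, 1.5]
best_response = int(np.argmax(expected)) # the utility-maximizing action (0 here)

print(expected, best_response)
```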
Social cooperation
Social behavior and cooperation with other participants are two factors that are not modeled in traditional game theory, but are often seen in an experimental setting. The evolution of social norms has been neglected in decision-making models, but these norms influence the ways in which real people interact with one another and make choices.[9] One tendency is for a person to be a strong reciprocator.[9] This type of person enters a game with the predisposition to cooperate with other players. They will increase their cooperation levels in response to cooperation from other players and decrease their cooperation levels, even at their own expense, to punish players who do not cooperate.[9] This is not utility-maximizing behavior, as a strong reciprocator is willing to reduce their payoff in order to encourage cooperation from others.
Dufwenberg and Kirchsteiger (2004) developed a model based on reciprocity, the sequential reciprocity equilibrium. This model adapts traditional game theory logic to the idea that players reciprocate actions in order to cooperate.[14] The model has been used to predict more accurately the experimental outcomes of classic games such as the prisoner's dilemma and the centipede game. Rabin (1993) also created a fairness equilibrium that measures the effect of altruism on choices.[15] He found that when one player is altruistic toward another, the second player is more likely to reciprocate that altruism.[15] This is due to the idea of fairness.[15] Fairness equilibria take the form of a mutual maximum, where both players choose the outcome that benefits them both the most, or a mutual minimum, where both players choose the outcome that hurts them both the most.[15] These equilibria are also Nash equilibria, but they incorporate participants' willingness to cooperate and play fair.
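The flavor of Rabin's fairness equilibrium can be sketched with a simplified version of its utility function (notation paraphrased here rather than quoted from the paper): player i's utility combines the material payoff with a reciprocity term built from i's kindness toward the opponent and i's belief about the opponent's kindness toward i.

```latex
% Simplified sketch of a Rabin-style fairness utility (paraphrased notation):
%   a_i : player i's strategy
%   b_j : i's belief about player j's strategy
%   c_i : i's belief about j's belief about i's strategy
%   f_i : i's kindness toward j;  \tilde{f}_j : i's belief about j's kindness toward i
U_i(a_i, b_j, c_i) \;=\; \pi_i(a_i, b_j)
  \;+\; \tilde{f}_j(b_j, c_i)\,\bigl[1 + f_i(a_i, b_j)\bigr]
```

In this sketch, when the opponent is believed to be kind the reciprocity term rewards one's own kindness, supporting the mutual-maximum outcome; when the opponent is believed to be unkind it rewards unkindness, supporting the mutual-minimum outcome.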
Incentives, consequences, and deception
The role of incentives and consequences in decision-making is interesting to behavioral game theorists because it affects rational behavior. Post et al. (2008) analyzed Deal or No Deal contestant behavior in order to reach conclusions about decision-making when stakes are high.[13] Studying the contestants' choices led to the conclusion that, in a sequential game with high stakes, decisions were based on previous outcomes rather than on rationality.[13] Players who face a succession of good outcomes, in this case eliminating the low-value cases from play, or a succession of poor outcomes become less risk averse.[13] This means that players who are having exceptionally good or exceptionally bad outcomes are more likely to gamble and continue playing than average players. These lucky or unlucky players were willing to reject offers exceeding the expected value of their case in order to continue playing,[13] showing a shift from risk-avoiding to risk-seeking behavior. The study highlights behavioral biases that are not accounted for by traditional game theory. Riskier behavior in unlucky contestants can be attributed to the break-even effect, which states that gamblers will continue to make risky decisions in order to win back money.[13] On the other hand, riskier behavior in lucky contestants can be explained by the house-money effect, which states that winning gamblers are more likely to make risky decisions because they perceive that they are not gambling with their own money.[13] This analysis shows that incentives influence rational choice, especially when players make a series of decisions.
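The expected-value benchmark that contestants were violating can be illustrated with a small sketch (hypothetical prize amounts and bank offer, not figures from Post et al.): with each remaining case equally likely to be the contestant's, the expected value of their case is the average of the remaining prizes, and a risk-neutral or risk-averse player would accept any bank offer at or above that average.

```python
# Illustrative only: hypothetical remaining prize amounts and bank offer,
# not figures from Post et al. (2008).
remaining_prizes = [0.01, 1_000, 25_000, 400_000, 750_000]

# With each remaining case equally likely to be the contestant's, the
# expected value of the contestant's own case is the simple average.
expected_value = sum(remaining_prizes) / len(remaining_prizes)  # about 235,200

bank_offer = 260_000  # an offer above the expected value

# A risk-neutral (or risk-averse) expected-utility maximizer would accept any
# offer at or above the expected value; the "lucky" and "unlucky" contestants
# described above sometimes rejected such offers and kept playing.
accept = bank_offer >= expected_value
print(expected_value, accept)
```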
Incentives and consequences also play a large role in deception in games. Gneezy (2005) studied deception using a cheap-talk sender-receiver game.[16] In this type of game, player one receives information about the payouts of option A and option B and then gives a recommendation to player two about which option to take. Player one can choose to deceive player two, and player two can choose to reject player one's advice. Gneezy found that participants were more sensitive to their own gain from lying than to their opponent's loss.[16] He also found that participants were not wholly selfish and cared about how much their opponents lost from the deception, but this effect diminished as their own payout increased.[16] These findings show that decision makers examine both the incentives to lie and the consequences of lying when deciding whether or not to lie. In general people are averse to lying, but given the right incentives they tend to ignore the consequences.[16] Wang et al. (2009) also used a cheap-talk game to study deception among participants with an incentive to deceive.[17] Using eye tracking, they found that participants who received information about payoffs focused on their own payoff twice as often as on their opponent's,[17] suggesting minimal strategic thinking. Further, participants' pupils dilated when they sent a deceptive message, and dilated more when they told a bigger lie.[17] Through these physical cues Wang et al. concluded that deception is cognitively difficult.[17] These findings show that factors such as incentives, consequences, and deception can produce irrational decisions and affect the way games unfold.
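A minimal sketch of the structure of such a sender-receiver game is given below, with hypothetical payoffs rather than Gneezy's actual treatments: the sender knows both players' payoffs under each option and chooses between a truthful message (pointing to the option that is better for the receiver) and a deceptive one (pointing to the option that is better for the sender).

```python
# Structure of a cheap-talk sender-receiver game (hypothetical payoffs,
# not Gneezy's exact treatments). Payoffs are (sender, receiver).
payoffs = {"A": (5, 6), "B": (6, 5)}  # A is better for the receiver, B for the sender

truthful_message = max(payoffs, key=lambda o: payoffs[o][1])   # "A": best for receiver
deceptive_message = max(payoffs, key=lambda o: payoffs[o][0])  # "B": best for sender

# Gneezy's finding, informally: senders weigh their own gain from lying more
# heavily than the receiver's loss, and the receiver's loss matters less as
# the sender's own payout grows.
gain_from_lying = payoffs[deceptive_message][0] - payoffs[truthful_message][0]   # 1
loss_to_receiver = payoffs[truthful_message][1] - payoffs[deceptive_message][1]  # 1

print(truthful_message, deceptive_message, gain_from_lying, loss_to_receiver)
```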
Group decisions
Behavioral game theory considers the effects of groups on rationality. In the real world many decisions are made by teams, yet traditional game theory models an individual decision maker. This created a need to model group decision-making behavior. Bornstein and Yaniv (1998) examined the difference in rationality between groups and individuals in an ultimatum game.[18] In this game, player one (or group one) decides what percentage of a payout to offer to player two (or group two), and player two then decides whether to accept or reject the offer. Participants in the group condition were placed in groups of three and allowed to deliberate on their decisions.[18] Perfect rationality in this game would have player one offering player two little or none of the payout, but observed offers are almost never that low. Bornstein and Yaniv found that groups were less generous (willing to give up a smaller portion of the payoff) in the player one condition, and more accepting of low offers in the player two condition, than individuals.[18] These results suggest that groups behave more rationally than individuals.[18]
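The "perfect rationality" benchmark mentioned above is the standard backward-induction prediction when both players care only about their own monetary payoff; the sketch below (a textbook illustration, not Bornstein and Yaniv's procedure) discretizes the offers and recovers the near-zero offer that real proposers almost never make.

```python
# Backward induction in a discretized ultimatum game where only monetary
# payoffs matter (a textbook benchmark, not Bornstein & Yaniv's procedure).
PIE = 100   # total payout to divide
STEP = 1    # smallest unit the proposer can offer

def responder_accepts(offer: int) -> bool:
    # A pure payoff maximizer accepts any positive offer (rejecting yields 0).
    return offer > 0

best_offer, best_payoff = None, -1
for offer in range(0, PIE + 1, STEP):
    proposer_payoff = PIE - offer if responder_accepts(offer) else 0
    if proposer_payoff > best_payoff:
        best_offer, best_payoff = offer, proposer_payoff

print(best_offer, best_payoff)  # offer 1, keep 99: the near-zero offer predicted above
```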
Kocher and Sutter (2005) used a beauty contest game to study and compare individual and group behavior.[19] In a beauty contest game all participants choose a number between zero and one hundred, and the winner is the participant whose number is closest to two thirds of the average. If everyone else chose at random, the average would be fifty and the best response would be thirty-three, two thirds of fifty; iterating this reasoning leads to the game-theoretic prediction that all participants choose zero. Kocher and Sutter found that groups did not perform more rationally than individuals in the first round of the game.[19] However, groups performed more rationally than individuals in subsequent rounds,[19] showing that groups are able to learn the game and adapt their strategy faster than individuals.
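The iterated reasoning behind the zero prediction can be sketched as follows (an illustration of the logic, not Kocher and Sutter's data): starting from the average of uniformly random guesses, repeatedly best-responding by taking two thirds of the previous guess drives the guess toward zero.

```python
# Iterated best-response reasoning in the two-thirds-of-the-average game
# (an illustration of the logic described above, not Kocher & Sutter's data).
target_fraction = 2 / 3
guess = 50.0  # the average if everyone chose uniformly at random

for step in range(1, 11):
    guess = target_fraction * guess  # best response if everyone reasons one step less
    print(step, round(guess, 2))

# step 1 -> 33.33, step 2 -> 22.22, ... : the guesses shrink toward zero,
# the game's Nash equilibrium.
```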
References
1. Aumann, R. J. (2008). "Game theory". In The New Palgrave Dictionary of Economics, 2nd ed.
2. Camerer, Colin; Ho, Teck-Hua (1994). "Violations of the betweenness axiom and nonlinearity in probability". Journal of Risk and Uncertainty, 8(2), 167–196. doi:10.1007/bf01065371.
3. Andreoni, James, et al. (2008). "Altruism in experiments". In The New Palgrave Dictionary of Economics, 2nd ed.
4. Young, H. Peyton (2008). "Social norms". In The New Palgrave Dictionary of Economics, 2nd ed.
5. Camerer, Colin (1997). "Progress in behavioral game theory". Journal of Economic Perspectives, 11(4): 172. doi:10.1257/jep.11.4.167.
6. Camerer, Colin (2003). Behavioral Game Theory: Experiments in Strategic Interaction. New York: Russell Sage Foundation; Princeton, NJ: Princeton University Press. ISBN 9780691090399. See also:
   - Camerer, Colin; Loewenstein, George; Rabin, Matthew, eds. (2003). Advances in Behavioral Economics. Princeton University Press (papers from 1986–2003).
   - Fudenberg, Drew (2006). "Advancing Beyond Advances in Behavioral Economics". Journal of Economic Literature, 44(3), 694–711.
   - Crawford, Vincent P. (1997). "Theory and Experiment in the Analysis of Strategic Interaction". In Advances in Economics and Econometrics: Theory and Applications, pp. 206–242. Cambridge University Press. Reprinted in Camerer, Loewenstein, and Rabin, eds. (2003), Advances in Behavioral Economics, ch. 12.
   - Shubik, Martin (2002). "Game Theory and Experimental Gaming". In R. Aumann and S. Hart, eds., Handbook of Game Theory with Economic Applications, vol. 3, pp. 2327–2351. Elsevier.
   - Plott, Charles R.; Smith, Vernon L., eds. (2008). Handbook of Experimental Economics Results, vol. 1, Part 4: Games. Elsevier.
   - Games and Economic Behavior. Elsevier.
7. Gintis, H. (2005). "Behavioral game theory and contemporary economic theory". Analyse & Kritik, 27(1), 48–72.
8. Camerer, C. (2003). Behavioral Game Theory: Experiments in Strategic Interaction. Princeton University Press.
9. Gintis, H. (2009). The Bounds of Reason: Game Theory and the Unification of the Behavioral Sciences. Princeton University Press.
10. Osborne, M. J.; Rubinstein, A. (1994). A Course in Game Theory. MIT Press.
11. Colman, A. M. (2003). "Cooperation, psychological game theory, and limitations of rationality in social interaction". Behavioral and Brain Sciences, 26(2), 139–153.
12. Costa-Gomes, M. A.; Weizsäcker, G. (2008). "Stated beliefs and play in normal-form games". The Review of Economic Studies, 75(3), 729–762.
13. Post, T.; Van den Assem, M. J.; Baltussen, G.; Thaler, R. H. (2008). "Deal or No Deal? Decision making under risk in a large-payoff game show". American Economic Review, 38–71.
14. Dufwenberg, M.; Kirchsteiger, G. (2004). "A theory of sequential reciprocity". Games and Economic Behavior, 47(2), 268–298.
15. Rabin, M. (1993). "Incorporating fairness into game theory and economics". American Economic Review, 1281–1302.
16. Gneezy, U. (2005). "Deception: The role of consequences". American Economic Review, 384–394.
17. Wang, J. T.-Y.; Spezio, M.; Camerer, C. (2009). "Pinocchio's pupil: Using eyetracking and pupil dilation to understand truth-telling and deception in sender-receiver games". American Economic Review, forthcoming.
18. Bornstein, G.; Yaniv, I. (1998). "Individual and group behavior in the ultimatum game: Are groups more 'rational' players?". Experimental Economics, 1(1), 101–108.
19. Kocher, M. G.; Sutter, M. (2005). "The decision maker matters: Individual versus group behaviour in experimental beauty-contest games". The Economic Journal, 115(500), 200–223.