
Journal of Economic Behavior & Organization 106 (2014) 352–366


Strategic complexity and cooperation: An experimental study

Matthew T. Jones
Federal Trade Commission

Article info

Article history:
Received 5 November 2013
Received in revised form 8 July 2014
Accepted 14 July 2014
Available online 22 July 2014
JEL classification:
C93
C73
D03
Keywords:
Complexity
Prisoner's dilemma
Repeated game
Bounded rationality
Finite automata

Abstract
This study investigates whether cooperation in an indefinitely repeated prisoner's dilemma is sensitive to the complexity of cooperative strategies. An experimental design is used which allows manipulations of the complexity of these strategies by making either the cooperate action or the defect action state-dependent. Subjects are found to be less likely to use a cooperative strategy and more likely to use a simpler selfish strategy when the complexity of cooperative strategies is increased. The robustness of this effect is supported by the finding that cooperation falls even when the defect action is made state-dependent, which increases the complexity of punishment-enforced cooperative strategies. A link between subjects' standardized test scores and the likelihood of cooperating is found, indicating that greater cognitive ability makes subjects more likely to use complex strategies.
Published by Elsevier B.V.

1. Introduction
To implement a strategy in a repeated game, a player must process and respond to information she receives from her environment, such as the behavior of opponents, the state of nature, etc. Intuitively, one can say that the complexity of a repeated game strategy depends on the amount of information that must be processed to implement it. For example, consider a repeated oligopoly pricing game in which firms set a price in each stage after receiving information about demand conditions and the prices set by rivals. To use a competitive pricing strategy, a firm sets its price equal to a constant marginal cost in each stage. To use a collusive pricing strategy, a firm sets its price conditional on the demand state as well as the prices set by rival firms. Hence, the collusive pricing strategy can be called more complex because implementing it involves processing more information. If there are costs associated with this information processing in the form of management compensation, operating costs, etc., they can affect the firm's pricing strategy choice and make a relatively complex collusive strategy less likely to be used. Similarly, cognitive costs associated with information processing may influence repeated game strategy choice on the individual level, yielding important consequences for cooperation and efficiency.

This work is supported by the NSF under Grant No. SES-1121085. Any opinions, findings and conclusions or recommendations expressed are those of the author and do not necessarily reflect the views of the Federal Trade Commission or the NSF.
Correspondence to: 600 Pennsylvania Avenue, NW, Mail Drop HQ-238, Washington, DC, United States. Tel.: +1 202 326 3539.
E-mail address: mjones1@ftc.gov
http://dx.doi.org/10.1016/j.jebo.2014.07.005
0167-2681/Published by Elsevier B.V.


The theoretical literature suggests that strategic complexity is a practical equilibrium selection criterion in repeated games with both cooperative and selfish equilibria. Rubinstein (1986) shows that incorporating strategic complexity costs into the preferences of players in the infinitely repeated prisoner's dilemma causes the efficient cooperative equilibrium to unravel. Hence, in repeated games where efficiency depends on players adopting relatively complex cooperative strategies rather than simple selfish strategies, cognitive costs associated with implementing complex strategies may discourage cooperation and harm efficiency. Accounting for strategic complexity can also have important implications in the study of market games. Fershtman and Kalai (1993) show that collusion in a multi-market duopoly may be unsustainable when strategic complexity is bounded. Gale and Sabourian (2005) consider a market game with a finite number of sellers, which normally has both competitive and non-competitive equilibria, and show that only the competitive equilibria remain with strategic complexity costs. These results demonstrate that limitations on strategic complexity can have important consequences, but to my knowledge a theoretical model accounting for strategic complexity has not heretofore been tested empirically or experimentally.
In this paper, I present the results of an experiment designed to test how behavior in an indefinitely repeated prisoner's dilemma depends on the complexity of available strategies. Cooperation in the indefinitely repeated prisoner's dilemma has been the subject of many experimental studies,1 but to my knowledge none has studied how strategy choice in this game might be affected by limitations on strategic complexity. I investigate this question using a design which allows manipulations of the implementation complexity of strategies by making either the cooperate or defect action state-dependent. Both of these manipulations increase the complexity of cooperative equilibrium strategies, and both make subjects less likely to use a cooperative strategy and more likely to use a simpler selfish strategy. These results provide evidence that cognitive costs associated with strategic complexity can have an impact on cooperation and efficiency.
In this experiment, the complexity of strategic implementation is increased through random switching between permutations of a three-by-three version of the prisoner's dilemma within each repeated game. Each treatment employs two payoff tables with a strictly dominated action choice added to the cooperate and defect actions, with the position of the dominated action varied between tables. Before each stage of a repeated game, one of the two payoff tables is drawn randomly and publicly announced to apply in that stage. This feature of the design can be viewed as increasing complexity by requiring subjects to condition their action choices on observable changes in the state of nature in order to use certain types of strategies.
In one treatment, the positions of the cooperate and dominated actions are permuted between the two payoff tables. Because cooperating requires subjects to account for random switching between tables in order to choose the correct action, cooperative strategies are more complex in this treatment than in a baseline treatment in which the positions of the cooperate, defect and dominated actions are the same in both tables. I find that increasing strategic complexity in this way reduces cooperation, as subjects have a greater tendency to adopt a simple selfish strategy in this treatment than in the baseline. The robustness of this effect is supported by the results of another treatment which increases the complexity of cooperative strategies in a different way. In this treatment, the dominated action is permuted with the defect action instead of the cooperate action so that defecting requires subjects to account for random switching between payoff tables. Relative to the baseline, this treatment increases the implementation complexity of strategies that support cooperation through the threat of punishment. Though the manipulation affects cooperation in a less obvious way, this treatment also reduces cooperation compared to the baseline.
The idea that cognitive costs of strategic complexity affect cooperation is further supported by data on subjects' American College Test (ACT) and Scholastic Aptitude Test (SAT) scores, which indicate a positive relationship between cognitive ability and cooperation. A correlation between average SAT scores in the subject pool and aggregate cooperation was found by Jones (2008) in a metastudy of prisoner's dilemma experiments, but to my knowledge this is the first study to find such a relationship at the individual level in a repeated prisoner's dilemma. This relationship is consistent with the idea that cognitive costs of strategic complexity affect strategy choice, because cooperative strategies are generally more complex than playing selfishly, and subjects with greater cognitive ability should be better able to bear the cognitive cost associated with this complexity.
The paper proceeds as follows. Section 2 describes the experimental design, Section 3 defines the research questions, and Section 4 reports the experimental results. Section 5 concludes.
2. Experimental design
The experiment includes three main treatments. Each session of these treatments is broken into two phases. Subjects are
paid their cumulative earnings from both phases at a conversion rate of $0.004 per Experimental Currency Unit (ECU), plus a

1 Roth and Murnighan (1978), Murnighan and Roth (1983), and Blonski et al. (2011) find that cooperation in this game depends on the payoffs and continuation probability, while Dal Bo and Frechette (2011a) find that subgame perfection and risk dominance are necessary but not sufficient conditions for cooperation. Dal Bo (2005), Camera and Casari (2009), and Duffy and Ochs (2009) provide evidence that the indefinitely repeated prisoner's dilemma fosters cooperation because it allows players to use punishment-enforced cooperative strategies. Others have studied cooperation in related environments, such as indefinitely repeated oligopoly games (Holt, 1985; Feinberg and Husted, 1993) and public goods games (Palfrey and Rosenthal, 1994), as well as prisoner's dilemmas with costly punishment (Dreber et al., 2008), imperfect monitoring (Aoyagi and Frechette, 2009) and noisy implementation of intended actions (Fudenberg et al., 2012).


Fig. 1. Payoff tables.

participation fee of $6. The focus of the paper is on the first phase of these sessions, in which subjects play one repeated game at a time.2 Subjects participate in a series of seven indefinitely repeated prisoner's dilemma games, with an 80% continuation probability in each stage of a repeated game. Subjects are matched with the same opponent for the duration of each repeated game, and matches are determined randomly and independently of matches in previous games. The number of stages in each repeated game was drawn randomly prior to the first session, and the same round lengths are used in each session.3
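As a point of reference not stated explicitly in the text, writing δ = 0.8 for the continuation probability, game lengths are geometrically distributed with expected value

$$ \mathbb{E}[T] \;=\; \sum_{t=1}^{\infty} t\,(1-\delta)\,\delta^{\,t-1} \;=\; \frac{1}{1-\delta} \;=\; \frac{1}{1-0.8} \;=\; 5 \text{ stages}, $$

which is close to the average of 32/7 ≈ 4.6 stages in the realized sequence reported in footnote 3.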
Each treatment uses two of the four payoff tables shown in Fig. 1, which are symmetric, 3-by-3 versions of the prisoner's dilemma. For reasons explained in Section 3, to the standard 2-by-2 prisoner's dilemma I add a third action choice (1 in table Z, 3 in table Z′, and 2 in tables X and Y), which is always strictly dominated by the defect action and at least weakly dominated by the cooperate action.4 Treatment BASE uses tables X and Y, so cooperate is action 1, defect is action 3, and the dominated action is action 2 in both tables of this treatment. Treatment C-SWITCH uses Y and Z, so defect is 3 in both tables but the positions of the cooperate and dominated actions are permuted between 1 and 2 between the two tables in this treatment. Treatment D-SWITCH uses Y and Z′, so cooperate is 1 in both tables but the positions of the defect and dominated actions are permuted between 2 and 3 between the two tables in this treatment.
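For the reader's convenience, the role of each action number in each treatment and payoff table, as just described, can be collected in a small mapping. This is only a restatement of the design; it is not part of the experimental software.

```python
# Action roles by treatment and payoff table, as described in the text:
# "C" = cooperate, "D" = defect, "dom" = the added dominated action.
ACTION_ROLES = {
    "BASE":     {"X":  {"C": 1, "dom": 2, "D": 3},
                 "Y":  {"C": 1, "dom": 2, "D": 3}},
    "C-SWITCH": {"Y":  {"C": 1, "dom": 2, "D": 3},
                 "Z":  {"C": 2, "dom": 1, "D": 3}},   # cooperate and dominated permuted
    "D-SWITCH": {"Y":  {"C": 1, "dom": 2, "D": 3},
                 "Z'": {"C": 1, "dom": 3, "D": 2}},   # defect and dominated permuted
}
```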
Of the two payoff tables in use in a particular treatment, one is randomly chosen to apply in each stage of a repeated game, and the chosen table is publicly announced before the stage begins.5 Before the beginning of each repeated game, subjects are told which of the two payoff tables applies in the first stage. Each stage of a repeated game is broken into three parts, which correspond to three screens that appear on subjects' computer monitors.6
Suppose the current stage is stage t. The first screen asks for an action choice (1, 2 or 3) in stage t. The payoff table that applies in stage t has been announced previously, but it is not shown at this time. Subjects can see both possible payoff tables for that treatment on their instructions at all times, but which of the two payoff tables applies in stage t is not visible on their computer screens after it is announced. The second screen in stage t announces which of the two payoff tables will apply in stage t + 1 if the game continues beyond stage t (whether it will continue or not is not yet revealed). The third screen in stage t reports the action selected by the opponent in stage t along with the player's payoff in that stage.
Each of these screens is viewable for 20 seconds, for a total time limit of 60 seconds per stage. A default choice is automatically entered if a subject fails to enter a choice before time expires. In the first stage of a repeated game the default choice is the defect action, while in all stages after the first the default is the action choice entered in the previous stage. This feature is included so that time constraints in the experiment are binding, as in practice complexity costs are

2 In the second phase, subjects play multiple repeated games simultaneously. This part of the experiment is discussed in Section A of the Online Appendix. Sessions of a fourth treatment were also conducted to control for order effects on behavior in the first and second phases. Discussion of this treatment is also relegated to Section A of the Online Appendix.
3 In the experimental instructions (see Section D of the Online Appendix), subjects are told that the number of stages (or "periods" as they are called in the instructions) in each round is random, with an 80% continuation probability in each stage. The sequence of repeated game lengths, which was randomly determined as described to subjects, is 4, 2, 6, 1, 8, 7 and 4 in each session. In addition, each session begins with a 5-stage trial game which does not count towards payment.
4 Dominance is weak in table X and strict in tables Y, Z and Z′.
5 The random sequence of payoff tables is not the same in all sessions, but is drawn independently in each session.
6 The experimental software is programmed in z-Tree (Fischbacher, 2007). See Figs. D.1–D.3 in Section D of the Online Appendix for screenshots of the three stage screens.


always linked in one way or another to time constraints. Its effect is to reinforce the relative ease of implementing simple constant-action strategies, which require no information processing during a repeated game, compared to strategies that condition action choices on the opponent's action or the payoff table announcement, which require information processing in each stage.7
The above cycle repeats for each stage of a repeated game. Because the payoff table that applies in stage t + 1 is announced before that stage begins and not shown when stage t + 1 choices are entered, strategies that condition action choices on the payoff table announcement require additional information processing. Subjects are allowed to take notes on paper to help them recall information, but they are not instructed to do so.8 The payoff table that applies in stage t + 1 is revealed before the opponent's action in stage t so that strategies which condition on opponent behavior require additional contingent planning, further increasing information processing requirements.
3. Research questions
This experiment is designed to determine whether the level of cooperation in an indefinitely repeated prisoner's dilemma is sensitive to the complexity of strategies that sustain cooperation. In this section, I explain the theory behind the hypothesized effects of treatments C-SWITCH and D-SWITCH, which increase the complexity of cooperative strategies compared to the control treatment, BASE. I also motivate my analysis of how subjects' ACT and SAT scores are related to their propensity to use cooperative strategies.
The concept of strategic complexity I adopt in designing and analyzing this experiment is the theory of games played by finite state automata (see Chatterjee and Sabourian, 2009 for a recent survey).9 This approach measures the implementation complexity of a repeated game strategy by the number of states in the minimal finite automaton which implements the strategy, or the fineness of the partition of the game history required to implement a strategy which conditions on that history. Another interpretation of this definition is due to Kalai and Stanford (1988), who show that the number of states in the minimal finite automaton implementing a strategy is equal to the number of different subgame strategies the original strategy induces. This concept of complexity does not capture the complexity associated with computing optimal actions and strategies, nor does it address the complexity associated with decisions under uncertainty, because it is defined in a complete information environment. Instead, it measures the complexity of the information processing or monitoring required to implement a strategy. Increasing the amount of information that must be processed means that the strategy-implementing automaton must contain more states to keep track of its environment and follow a given plan of how to react to incoming information.
A repeated game strategy is modeled in terms of finite automata as follows. A strategy consists of a finite set of machine states, $Q$, an initial state, $q_0 \in Q$, a behavior function, $\lambda : Q \to A$, mapping states into the set of possible actions, $A$, and a state transition function, $\mu : Q \times S \to Q$, mapping states and the opponent's last observed action, $s \in S$, into states.10 The simple measure of strategic complexity provided by this model is the minimal number of states that can be contained in $Q$ such that the automaton can implement the strategy. For example, consider a strategy of Tit-for-Tat played in a standard prisoner's dilemma in which the set of possible actions is $A = S = \{C, D\}$, where $C$ is cooperate and $D$ is defect. The automaton implementing Tit-for-Tat in this game is represented by the set of states $Q = \{1, 2\}$, where 1 is the initial state, the behavior function defined by $\lambda(1) = C$, $\lambda(2) = D$, and the state transition function defined by $\mu(1, C) = \mu(2, C) = 1$, $\mu(1, D) = \mu(2, D) = 2$. The automaton implementing Always Defect in this game is represented by the set of states $Q = \{1\}$, where 1 is the initial state, the behavior function $\lambda(1) = D$, and the state transition function $\mu(1, D) = \mu(1, C) = 1$. Since Tit-for-Tat has two states and Always Defect only one, Tit-for-Tat is a more complex strategy.11
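To make the representation concrete, the sketch below writes out the two automata just described, assuming the standard two-action game with A = S = {"C", "D"}; the class and identifier names are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch of the Moore-machine representation of a repeated game
# strategy; the complexity measure discussed in the text is simply len(states).

class MooreMachine:
    def __init__(self, states, initial, behavior, transition):
        self.states = states            # Q
        self.state = initial            # q0
        self.behavior = behavior        # lambda: Q -> A
        self.transition = transition    # mu: Q x S -> Q

    def act(self):
        """Action output in the current state."""
        return self.behavior[self.state]

    def observe(self, opponent_action):
        """Update the state after observing the opponent's action."""
        self.state = self.transition[(self.state, opponent_action)]

# Tit-for-Tat: two states; cooperate unless the opponent defected last stage.
tit_for_tat = MooreMachine(
    states={1, 2}, initial=1,
    behavior={1: "C", 2: "D"},
    transition={(1, "C"): 1, (2, "C"): 1, (1, "D"): 2, (2, "D"): 2},
)

# Always Defect: a single state, hence the less complex strategy.
always_defect = MooreMachine(
    states={1}, initial=1,
    behavior={1: "D"},
    transition={(1, "C"): 1, (1, "D"): 1},
)
```

Comparing len(tit_for_tat.states) with len(always_defect.states) reproduces the two-state versus one-state comparison made above.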
The finite automata definition of strategic complexity has been used by Rubinstein (1986) and Abreu and Rubinstein (1988) to show that the set of equilibria of the infinitely repeated prisoner's dilemma is drastically reduced when complexity costs are added to players' preferences. An automaton implementing a cooperative equilibrium strategy is more complex than one implementing a strategy of Always Defect because more states are needed for an automaton to enforce cooperation by monitoring the opponent and punishing defection. The efficient cooperative equilibrium unravels when lexicographic complexity costs are added, and the distance between this outcome and the most efficient achievable outcome under this concept increases as the discount factor falls. These studies demonstrate that the standard folk theorem results may not be robust to strategic complexity costs, which can affect the strategies players adopt.

7 Overall, the default action was triggered because time expired for only a small number of action choices: 1.6% of choices in BASE, 3.2% of choices in C-SWITCH, and 3.4% of choices in D-SWITCH.
8 Roughly 50–60% of subjects in each treatment took notes, but these are not incorporated into the data.
9 This approach was first suggested by Aumann (1981). See Hopcroft and Ullman (1979) for a standard reference on automata theory. See Johnson (2006a) and Salant (2011) for applications of finite automata in models of boundedly rational individual choice.
10 This type of finite automaton, in which the transition function takes as its domain the cross-product of states and only the opponent's observed action rather than the actions of both players, is known as a Moore machine.
11 In terms of the finite automaton model, the default action choice feature of the experimental design (see Section 2) reflects the finite automaton remaining in the same state when the strategy it represents calls for no change in the action choice between stages. After the first stage of a repeated game, subjects may passively continue to take the same action, and must enter a new action choice only when the automaton representing their strategy transitions between states.


Fig. 2. Directed graphs of selected automaton strategies in BASE. Nodes represent the states of the automaton, which are labeled by the action choice (1, 2, or 3) that is the output of the behavior function when taking that state as the input. The circled node is the initial state. State transitions can occur when the payoff table for the next stage is announced or when the opponent's action in the current stage is revealed. Edges represent the state transition function taking the current state and either the opponent's action (1, 2, or 3) or the payoff table announcement (X or Y) as inputs and mapping to the output state.

Others have used the finite automata model to refine equilibrium concepts or to provide a more nuanced measure of strategic complexity. Rather than showing which equilibria are robust to complexity costs and which unravel, Baron and Kalai (1993) use a finite automata approach to provide an equilibrium selection criterion in repeated games: when multiple equilibria exist, the one with the simplest equilibrium strategies is selected. They show that in a majority-rule division game, where all possible divisions can be subgame perfect equilibria, the equilibrium selected by this criterion is efficient, symmetric and intuitive. Regarding complexity measures, Banks and Sundaram (1990) propose that the number of state transitions in an automaton should be considered as well as the number of states, which would reflect more precisely the amount of monitoring necessary to implement a strategy. Lipman and Srivastava (1990) measure complexity by a strategy's sensitivity to small perturbations in the history of play. Johnson (2006b) defines the complexity of a strategy in terms of the algebraic properties of its minimal automaton representation. The evolutionary fitness of repeated game strategies played by finite automata has also been studied analytically and computationally with mixed results,12 highlighting the need for empirical evidence to inform further research on the implications of strategic complexity costs.13
Though the finite automata model is a popular approach to strategic complexity around which my experimental design is built, it should not be taken too literally. Finite automata can be thought of as a metaphor for the number of different states of mind that are possible when implementing a particular repeated game strategy. The treatments described below increase the complexity of cooperative strategies in terms of finite automata, but their effect on subjects may not be captured exactly by this model. There is a more general sense in which these treatments increase the cognitive load or amount of information processing required of subjects, which the number of states in an automaton may not fully represent. Though it may not be completely accurate, an objective measure of complexity is needed to formalize comparisons between treatments and provide testable predictions, and the finite automata model is the most appropriate model available.
The design of this experiment and the timing with which information is given to subjects make it possible to manipulate the complexity of cooperative equilibrium strategies, as measured by the number of states in their minimal finite automata representations. To allow a meaningful comparison between treatment and control, the parameters of the game are set to encourage a moderate level of baseline cooperation. Cooperation through the use of Grim Trigger (GT) and Tit-for-Tat (TFT) strategies is supportable as an equilibrium, and these strategies are risk-dominant compared to a strategy of Always Defect (AD).14
To maintain the overall structure of the experiment across treatments and provide a valid control, BASE includes two possible payoff tables and an announcement of which applies before each stage of a repeated game. In this treatment the payoff table manipulation is designed to be innocuous, such that it is not expected to affect strategy choice. In both of its payoff tables (X and Y), the cooperate action is 1, the dominated action is 2 and the defect action is 3, so the action to take in each stage for a given strategy is the same for both tables. The tables differ only in the payoff if the action profile (2,2) is selected, but action 2 is strictly dominated by defect and at least weakly dominated by cooperate in both tables. Hence, in terms of finite automata, simple strategies commonly observed in repeated prisoner's dilemmas have the same complexity in this treatment as in the two-by-two prisoner's dilemma. Fig. 2 shows the directed graph representations of the minimal finite automata that implement selected strategies in BASE.15

12 Binmore and Samuelson (1992) consider an evolutionary model of the indefinitely repeated prisoner's dilemma played by finite automata with lexicographic complexity costs and find that the equilibrium automata reach the efficient cooperative equilibrium. Cooper (1996) finds that a folk theorem result is restored with a different definition of evolutionary stability and finite complexity costs. In contrast with these results, Volij (2002) shows that with lexicographic complexity costs in an evolutionary setting, the only stochastically stable automaton is the one-state automaton which always defects. A simulation by Ho (1996) finds that convergence towards cooperation depends critically on the specification of complexity costs. In particular, he finds that a cost associated with the number of states in the automaton harms cooperation, but a cost associated with the number of state transitions does not. In contrast, a simulation by Linster (1992) using a different algorithm and a smaller strategy space shows convergence towards cooperation, with Grim Trigger as the most successful automaton strategy.
13 A related experimental work is Embrey et al.'s (2012) study of strategic commitment in a repeated duopoly game. Players in this experiment are given the opportunity to construct Moore machines to implement strategies for them. Though the complexity of players' strategies is not the focus of that study, the strategies players choose to be implemented by machines are relatively simple and frequently unconditional on the opponent's actions.
14 The concept of risk dominance applies to this game after elimination of the dominated action, which collapses the payoff matrix to 2 × 2.
15 The four strategies selected include three common equilibrium strategies, Always Defect (AD), Grim Trigger (GT) and Tit-for-Tat (TFT), as well as Selfish Tit-for-Tat (STFT), which is estimated to explain a significant proportion of the experimental data. This estimation is explained in detail in Section 4.2.


Fig. 3. Directed graphs of selected automaton strategies in C-SWITCH. Nodes represent the states of the automaton, which are labeled by the action choice (1, 2, or 3) that is the output of the behavior function when taking that state as the input. The circled node is the initial state. State transitions can occur when the payoff table for the next stage is announced or when the opponent's action in the current stage is revealed. Edges represent the state transition function taking the current state and either the opponent's action (1, 2, or 3) or the payoff table announcement (Y or Z) as inputs and mapping to the output state.

Question 1. Compared to the baseline, are subjects less likely to adopt cooperative strategies when the cooperate action is
state-dependent?
In C-SWITCH, the cooperate action is action 1 in table Y and action 2 in table Z. Hence, choosing the correct cooperate action in a given stage requires players to account for which payoff table is announced to apply in that stage. If one thinks of payoff table switching as random changes in the state of nature, one can say that the cooperate action is state-dependent in this treatment, while the defect action is not because it is action 3 in both tables. Because the payoff table that applies in a given stage does not appear on the screen at the time when the action choice is entered, the additional information processing required to cooperate in this setting is non-trivial.
Fig. 3 shows the minimal finite automata that implement selected strategies in C-SWITCH. Because the cooperate action is conditional on the payoff table announced in a given stage, which can be thought of as the action of a third player (nature), additional automaton states are needed to implement strategies that sustain cooperation. These strategies require a finite automaton with at least 3 states in C-SWITCH, whereas they can be implemented by a 2-state automaton in BASE. The simple selfish strategy, AD, requires only a 1-state automaton in both C-SWITCH and BASE. Hence, any model in which the likelihood that a strategy is adopted is negatively related to its complexity (as measured by finite automata) would predict less cooperation in C-SWITCH than in BASE.
Though the finite automata model provides crisp comparisons between strategies by the number of states, the model should be viewed metaphorically rather than literally; the number of states is not as important as the idea that more information processing is required to play cooperatively in C-SWITCH than to play cooperatively in BASE or selfishly in either treatment. The finite automata model formalizes this idea. In addition to the direct effect on a subject's strategy choice due to cognitive costs, this treatment may also reduce cooperation because subjects expect that implementing cooperative strategies will be prohibitively costly to opponents, or that opponents will be more likely to make mistakes due to the additional information processing. These indirect effects would result from the same increase in implementation complexity measured by the finite automata model.
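To make the state-counting argument concrete, the sketch below gives one possible three-state Grim Trigger for C-SWITCH, extending the Moore-machine illustration in Section 3. It is not a reproduction of Fig. 3: the assumption that only the defect action (action 3) triggers punishment, and the treatment of the dominated action, are simplifications made for this example.

```python
# One possible 3-state Grim Trigger in C-SWITCH (an illustration, not Fig. 3).
# Inputs are either the announced payoff table for the next stage ("Y" or "Z")
# or the opponent's action in the current stage (1, 2, or 3).  Cooperate is
# action 1 under table Y and action 2 under table Z; defect is action 3 in both.
STATES = {"coop_Y", "coop_Z", "punish"}
INITIAL = "coop_Y"                     # assuming table Y applies in the first stage
BEHAVIOR = {"coop_Y": 1, "coop_Z": 2, "punish": 3}

def transition(state, signal):
    if state == "punish":
        return "punish"                # punishment is absorbing
    if signal in ("Y", "Z"):           # payoff table announcement for the next stage
        return "coop_Y" if signal == "Y" else "coop_Z"
    # opponent's action: in this sketch only the defect action (3) triggers punishment
    return "punish" if signal == 3 else state
```

In BASE the two cooperating states collapse into a single state, which is exactly the extra state the treatment adds; AD needs only one state in either treatment.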
Question 2. Compared to the baseline, are subjects less likely to adopt cooperative strategies when the defect action is state-dependent?

In D-SWITCH, the defect action is action 3 in table Y and action 2 in table Z′. Hence, choosing the correct defect action requires players to account for which payoff table is announced to apply in that stage. One can say that the defect action is state-dependent in this treatment, while the cooperate action is not because it is action 1 in both tables. This manipulation may affect the level of cooperation because it increases the implementation complexity of punishment-enforced cooperative strategies, which are the strategies that support cooperation in equilibrium. Using such a strategy in D-SWITCH requires a player to account for which payoff table will apply in the next stage in case the opponent defects in the current stage, thus triggering punishment in the next stage. Because the next stage's payoff table is announced before the opponent's action in the current stage is revealed, enforcing cooperation through punishment requires contingent planning and, compared to the control, a greater amount of information processing.
Fig. 4 shows the minimal finite automata that implement selected strategies in D-SWITCH. Because defecting involves choosing conditional on the payoff table announcement in this treatment, the simplest selfish strategy of AD is more complex in D-SWITCH (2 states) than in BASE (1 state), but the complexity of the cooperative strategies increases by a greater margin (4 states vs. 2 states). Hence, a model in which the likelihood of choosing a strategy is negatively related to its complexity would predict less cooperation in D-SWITCH than in BASE, unless only the relative complexity of available strategies matters. In other words, a model of convex complexity costs would predict less cooperation in this treatment than in the control. Again, the exact predictions of the finite automata model are a formalization of the more important main idea: that this


Fig. 4. Directed graphs of selected automaton strategies in D-SWITCH. Nodes represent the states of the automaton, which are labeled by the action choice (1, 2, or 3) that is the output of the behavior function when taking that state as the input. The circled node is the initial state. State transitions can occur when the payoff table for the next stage is announced or when the opponent's action in the current stage is revealed. Edges represent the state transition function taking the current state and either the opponent's action (1, 2, or 3) or the payoff table announcement (Y or Z′) as inputs and mapping to the output state.

treatment increases the information processing required to play cooperatively, which may reduce cooperation relative to the baseline due to the associated cognitive costs.
The three treatments in the experiment can be ranked by the complexity of the GT strategy, which has a minimal finite automaton representation with 2 states in BASE, 3 states in C-SWITCH, and 4 states in D-SWITCH.16 Therefore, to the extent that subjects are inclined to use GT subject to complexity costs, we would expect a consistent ranking of treatments by the prevalence of this strategy, with the highest prevalence in BASE and the lowest in D-SWITCH. These differences in the prevalence of GT may in turn lead to differences in cooperation rates between treatments, with the most cooperation in BASE and the least in D-SWITCH.
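For reference, the minimal state counts stated in the text and in footnote 16 can be collected in one place; this is only a summary of numbers already given, not an additional result.

```python
# Minimal-automaton state counts for AD and GT, as stated in the text.
MIN_STATES = {
    "AD": {"BASE": 1, "C-SWITCH": 1, "D-SWITCH": 2},
    "GT": {"BASE": 2, "C-SWITCH": 3, "D-SWITCH": 4},
}
```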
Question 3. Is there a relationship between subjects' cognitive ability and their propensity to adopt a cooperative strategy?

Extant experimental evidence suggests that individuals with higher cognitive ability are more likely to cooperate in a prisoner's dilemma. In a meta-analysis of prisoner's dilemma experiments, Jones (2008) finds a positive relationship between aggregate cooperation and the average SAT score of the institution from which subjects are drawn. Burks et al. (2009) find that experimental subjects with higher cognitive ability are more patient and also more willing to take calculated risks, two tendencies that support cooperation. They also find that such subjects are more cooperative in a trust game similar to a one-shot prisoner's dilemma and that they have more accurate beliefs about how others behave. However, to my knowledge a relationship between individual-level cognitive ability and cooperation in repeated games has yet to be established in the literature.17
I test for a relationship between cognitive ability and cooperation in this experiment by obtaining subjects' consent to request their ACT and SAT scores from the Ohio State University registrar's office. These test scores have been shown by Frey and Detterman (2004) and Koenig et al. (2008) to be strongly correlated with general intelligence. This analysis contributes to the broader literature on individual characteristics that are associated with cooperation in repeated games,18 and it also allows me to test a corollary to the idea that complexity costs influence strategy choice.
Introducing additional cognitively costly tasks into prisoner's dilemma experiments has been found to reduce cooperation. Specifically, cooperation has been shown to fall when subjects are required to complete an unrelated memory task while playing (Milinski and Wedekind, 1998; Duffy and Smith, 2011), when they must rely on memory to track the actions of multiple opponents in a random sequence (Winkler et al., 2008; Stevens et al., 2011), and when they must play a different repeated game simultaneously with the prisoner's dilemma (Bednar et al., 2012). Though the mechanism for these results is not precisely identified, they are consistent with the idea that artificially increasing the cognitive burden on subjects reduces their propensity to adopt the relatively complex strategies that support cooperation in a repeated game.

16 Grim Trigger requires 4 states in D-SWITCH because the payoff table announcement for the next stage occurs before the opponent's action in the current stage is revealed. When the opponent is observed to have defected in stage t, the player must remember whether to choose action 2 or 3 in stage t + 1. Hence, two states that output cooperate are needed to remember the payoff table announcement and respond optimally in case the opponent defects.
17 This experiment is relevant to the literature documenting a broader relationship between cognitive ability and strategic behavior in games. Burnham et al. (2009), Agranov et al. (2011), Branas-Garza et al. (2011b), and Gill and Prowse (2012) find a negative relationship between cognitive ability and guesses in a p-beauty contest, where lower guesses reflect greater strategic sophistication. In contrast, Georganas et al. (2012) find little evidence of a relationship between strategic sophistication in other types of games and scores in several tests of cognitive ability. Ivanov et al. (2009) and Ivanov et al. (2013) find that subjects with high SAT scores in their endogenous-timing investment experiment are more likely to respond as predicted to informational externalities. Cognitive ability is also found to affect strategies in public good games (Millet and Dewitte, 2007) and the Traveler's Dilemma (Branas-Garza et al., 2011a).
18 This is one of only a few studies in the experimental economics literature to use verified ACT or SAT scores (as opposed to self-reported scores) as a measure of cognitive ability. See Benjamin and Shapiro (2005), Casari et al. (2007), and Jones (2013) for other examples.


Fig. 5. Stage outcomes and aggregate cooperation by treatment. Difference from BASE significant at: ***.01 level, **.05 level, *.1 level.

One would expect more cognitively able individuals to have a greater capacity to bear additional cognitive costs, making them more likely to use complex strategies, other things equal. Evidence of such a relationship is found in an experiment by Hawes et al. (2012) in which subjects are rewarded for recognizing patterns in computer-generated signals and correctly guessing the next signal. They find that the guesses of more intelligent subjects are less influenced by the immediately previous signal and more influenced by earlier signals than those of their less intelligent counterparts, reducing their prediction error. These results suggest a correlation between intelligence and the ability to use complex strategies in this repeated game against a computer.
In a repeated prisoner's dilemma, the correlation between cognitive ability and the ability to handle strategic complexity would lead to a higher frequency of cooperation among more intelligent subjects, other things equal. Strategies that sustain cooperation in equilibrium are more complex to implement than a simple selfish strategy, and cognitive costs associated with this complexity may discourage some individuals from cooperating. Though I cannot rule out other possible reasons for a positive relationship between cognitive ability and cooperation, this result would be consistent with the idea that strategic complexity is an important factor in strategy choice.
4. Results
Sessions were conducted at the Ohio State University Experimental Economics Lab in the winter and spring of 2011. A total of 102 subjects participated in the three main treatments of the experiment over six sessions, with two sessions and 34 subjects per treatment.19 Subjects were recruited via email invitations sent randomly to students in a large database of Ohio State undergraduates of all majors. Sessions lasted between 60 and 90 minutes, with average earnings of $18.85.
4.1. Aggregate cooperation
I first study aggregate cooperation, as measured by the frequency of choices to cooperate among all action choices. I also study the cooperation rate in the first stages of repeated games only, which indicates players' intentions at the start of a repeated game before they observe and react to the actions of their opponents, and thus indicates what type of strategy they adopt.20 The distributions of stage-game outcomes on the payoff table are shown in Fig. 5 for all three treatments, both for all stages and for first stage outcomes only.21 Fig. 5 also reports the overall cooperation and first stage cooperation rates in each treatment. I assess the statistical significance of differences in aggregate cooperation between treatments using a probit regression with an indicator variable for one of the treatments and standard errors clustered at the session level to account for arbitrary correlations between action choices in a session.22
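A minimal sketch of such a test, assuming a long-format data set with one row per action choice, might look as follows; the file and column names (cooperate, c_switch, treatment, session) are assumptions for illustration, not the author's code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Probit of the cooperation decision on a treatment indicator, with standard
# errors clustered at the session level (illustrative sketch only).
df = pd.read_csv("choices_long.csv")
pair = df[df["treatment"].isin(["BASE", "C-SWITCH"])]

fit = smf.probit("cooperate ~ c_switch", data=pair).fit(
    cov_type="cluster", cov_kwds={"groups": pair["session"]})
print(fit.summary())
```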
Result 1. Compared to the baseline, less cooperation is observed in the first stage of a repeated game when the cooperate action is state-dependent.

19 Two sessions of a fourth treatment, discussed in Section A of the Online Appendix, were also conducted with a total of 34 subjects.
20 A simple example illustrates this point. Suppose that in a repeated game that lasts 5 stages, one player plays TFT and the other plays AD. This would lead to a 10% overall cooperation rate, but a 50% first stage cooperation rate. Hence, the first stage cooperation rate more accurately reflects the fact that 50% of players adopted a cooperative strategy at the start of the game.
21 The frequency of choosing the dominated action is 1.3% in BASE and 3.2% in both C-SWITCH and D-SWITCH, indicating that subjects quite accurately implement repeated game strategies that choose actions conditional on the payoff table announcement in C-SWITCH and D-SWITCH.
22 I also conduct probit regressions with random effects at the subject level to check whether observed treatment effects are robust to a more structured correlation between observations of the same subject. The results of these regressions are reported in Table C.1 in Section C of the Online Appendix.


Fig. 6. Cooperation by round.

Compared to the rate of cooperation observed in the first stages of BASE repeated games (55.5%),23 I observe significantly less first stage cooperation in C-SWITCH (41.6%), where the cooperate action is state-dependent. Because first stage actions signal players' intentions before they begin reacting to their opponents, this result reveals a significant treatment effect on the type of strategies players adopt. It indicates that increasing the complexity of cooperative strategies through a state-dependent cooperate action makes subjects less likely to use cooperative strategies. I also observe less overall cooperation in C-SWITCH (35.3%) than in BASE (40.6%), but this difference is not statistically significant with standard errors clustered at the session level.24
Fig. 6 shows levels of overall and first stage cooperation over the seven repeated games in a session in each of the three treatments. These figures demonstrate that the complexity treatment effects on first stage actions are more pronounced and persistent over the series of games than the effects on all action choices. There is no clear trend in aggregate cooperation rates over this series of repeated games, so learning does not appear to have a marked effect on behavior in these games. This point is supported by differences between treatments in cooperation rates observed in the last repeated game, after subjects have experienced six other repeated games against randomly matched opponents. Treatment differences in the last repeated game alone are consistent with the results across all repeated games.25
Result 2. Compared to the baseline, less cooperation is observed when the defect action is state-dependent.

Compared to BASE, I observe significantly less overall cooperation (29.5% vs. 40.6%) and first stage cooperation (42.4% vs. 55.5%) in D-SWITCH.26 This result indicates that increasing the complexity of punishment-enforced cooperative strategies through a state-dependent defect action makes subjects less likely to adopt these strategies. Both D-SWITCH and C-SWITCH increase the implementation complexity of cooperative strategies, but using different manipulations. Nevertheless, both manipulations significantly reduce the frequency with which subjects adopt cooperative strategies, as indicated by first stage actions. Hence, the results of the experiment support the hypothesis that the complexity of the available strategies in a repeated game has a significant impact on which strategies players adopt.
Compared to D-SWITCH, I observe slightly less first stage cooperation but more overall cooperation in C-SWITCH, indicating that cooperation is more likely to be sustained over the course of the average repeated game in this treatment.27 This difference in overall cooperation rates is consistent with the difference between treatments in the complexity of the GT strategy, which requires 3 automaton states in C-SWITCH and 4 in D-SWITCH. To the extent that subjects are inclined to

23 The first stage cooperation rate is similar to the 61.1% rate observed by Dal Bo and Frechette (2011a) in their treatment with parameters most similar to mine according to the cooperation indices of Murnighan and Roth (1983).
24 According to probit regressions with random effects at the subject level (see Table C.1 in Section C of the Online Appendix), this treatment effect is significant for both first stage cooperation and overall cooperation. Hence, significance of the effect is robust to within-subject correlation of choices, but the result is not significant for all rounds when standard errors are clustered to account for arbitrary correlations at the session level.
25 The overall rate of cooperation in the last repeated game is 49.3% in BASE, which is greater than the 43.4% and 40.4% rates in C-SWITCH and D-SWITCH, respectively (significant at the 0.05 level for D-SWITCH only). The first stage cooperation rate in the last repeated game is 55.9% in BASE, which is significantly greater (at the 0.1 level) than both the 47.1% and 44.1% rates in C-SWITCH and D-SWITCH, respectively. See Section A of the Online Appendix for a discussion of learning in phase II of the experiment, where multiple games are played simultaneously.
26 Probit regressions with random effects at the subject level (see Table C.1 in Section C of the Online Appendix) also reveal significant treatment effects.
27 Neither of these differences is statistically significant according to probit regressions with standard errors clustered at the session level. According to a subject-level random effects regression, the difference between overall cooperation rates is statistically significant at the 0.01 level, but the difference in first stage cooperation is not significant.


Table 1
Maximum likelihood estimates of strategy prevalence: 4 candidate strategies.

Treatment                    BASE                 C-SWITCH             D-SWITCH
                             Estimate   S.E.      Estimate   S.E.      Estimate   S.E.
Tit-for-Tat (TFT)            0.16*      (0.09)    0.23**     (0.09)    0.29***    (0.10)
Grim Trigger (GT)            0.41***    (0.11)    0.16*      (0.09)    0.11       (0.09)
Always Defect (AD)           0.26***    (0.08)    0.41**     (0.16)    0.48***    (0.09)
Selfish Tit-for-Tat (STFT)   0.18***    (0.06)    0.20       (0.14)    0.12*      (0.07)
Gamma                        0.62***    (0.08)    0.62***    (0.08)    0.48***    (0.04)
Log-likelihood               527.27               524.16               413.36

Significance level of Wald test for difference from zero: * .1 level; ** .05 level; *** .01 level.

implement a GT strategy subject to complexity costs, we would expect to see greater overall cooperation in C-SWITCH. In Section 4.2, I estimate how the prevalence of this and other strategies varies between treatments.
Aside from differences in strategic complexity, there is another explanation which at least partially accounts for the aggregate differences in behavior between C-SWITCH and D-SWITCH. In one of the two C-SWITCH sessions, average cooperation does not show the tendency to decline between the first stage and subsequent stages of repeated games as it does in all other sessions of the experiment. In this session, the cooperation rate is 47.6% in the first stage of repeated games and increases to 48.7% in stages after the first. Hence, subjects in this session of C-SWITCH are much more likely to stabilize on cooperation over the course of a repeated game than in other sessions. However, the first stage cooperation rate in this session is less than the 55.5% rate in the BASE treatment, indicating that subjects in this session remain less likely to initially adopt a cooperative strategy than those in the control setting.
4.2. Strategy inference
This study is concerned with the importance of complexity in strategy choice, so aggregate results do not tell the whole story. Subjects' underlying strategies can be estimated from their observed actions by a maximum likelihood technique developed by El-Gamal and Grether (1995) and extended to repeated game applications by Engle-Warnick and Slonim (2006) and Engle-Warnick et al. (2007). This technique measures the proportion of each subject's observed actions that can be explained by candidate repeated game strategies and estimates the prevalence of each candidate strategy by maximizing a log-likelihood function summing across all subjects and strategies. A similar technique has been used by Aoyagi and Frechette (2009), Camera et al. (2012), Dal Bo and Frechette (2011a), Dal Bo and Frechette (2011b), and Fudenberg et al. (2012) to infer strategies from observed actions in their repeated prisoner's dilemma experiments.
The maximum likelihood technique works as follows. Each subject is assumed to use the same strategy in each repeated game. In each stage of a repeated game, there is some probability that a subject deviates from the action prescribed by the chosen strategy. In stage t of repeated game r, I assume that subject i who uses strategy $s^k$ cooperates if the indicator function $y_{irt}(s^k) = 1\{s_{irt}(s^k) + \varepsilon_{irt} \geq 0\}$ takes a value of 1 and defects otherwise, where $s_{irt}(s^k)$ is the action prescribed by strategy $s^k$ (1 for cooperate and $-1$ for defect) given the history of repeated game r up to stage t, $\varepsilon_{irt}$ is the error term, and $\gamma$ is the variance of the error. The likelihood function of strategy $s^k$ for subject i has the logistic form

$$ p_i(s^k) = \prod_{r,t} \left\{ \frac{1}{1 + \exp(-s_{irt}(s^k)/\gamma)} \right\}^{y_{irt}} \left\{ \frac{1}{1 + \exp(s_{irt}(s^k)/\gamma)} \right\}^{1 - y_{irt}}. $$

The resulting log-likelihood function has the form

$$ \sum_{i=1}^{I} \ln\!\left( \sum_{s^k \in K} p(s^k)\, p_i(s^k) \right), $$

where K is the set of candidate strategies $s^1, \ldots, s^K$ and $p(s^k)$ is the proportion of the data explained by $s^k$. The entire sequence of actions across repeated games is observed for each subject, and the log-likelihood function is maximized to estimate the proportion of the data explained by each candidate strategy.

I use two iterations of this technique to analyze the behavior of subjects in the experiment. The first iteration uses a set of twenty candidate strategies, which are the same as those used by Fudenberg et al. (2012) in analyzing their experiment on prisoner's dilemmas with exogenously imposed noisy implementation of intended actions.28 In this first iteration of estimates, only three of the twenty candidate strategies are estimated to explain a significant proportion of individual treatment data: AD, GT and STFT.29 Therefore, I conduct a second iteration, which repeats the same procedure using a set of only four candidate strategies: the three strategies that were significantly prevalent in the first iteration, with the addition of TFT due to its exceptional general popularity in prisoner's dilemma experiments. The estimated coefficients of the second iteration and their bootstrapped standard errors are reported in Table 1.
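A compact sketch of this estimator is given below; the array layout, variable names, parameterization of the mixture weights, and choice of optimizer are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of the strategy-frequency estimator described above.
# prescribed[i, k, j] in {+1, -1} is the action strategy k prescribes to subject i
# at observation j (stages stacked across repeated games; +1 = cooperate, -1 = defect),
# and cooperated[i, j] in {0, 1} is the observed choice.

def subject_likelihood(prescribed_ik, cooperated_i, gamma):
    """p_i(s^k): probability of subject i's observed choices under strategy s^k."""
    p_coop = 1.0 / (1.0 + np.exp(-prescribed_ik / gamma))
    return np.prod(np.where(cooperated_i == 1, p_coop, 1.0 - p_coop))

def neg_log_likelihood(params, prescribed, cooperated):
    n_subjects, n_strategies, _ = prescribed.shape
    shares = np.exp(params[:n_strategies])
    shares /= shares.sum()                    # mixture weights p(s^k), summing to one
    gamma = np.exp(params[n_strategies])      # error parameter, kept positive
    total = 0.0
    for i in range(n_subjects):               # sum of log mixture likelihoods over subjects
        mix = sum(shares[k] * subject_likelihood(prescribed[i, k], cooperated[i], gamma)
                  for k in range(n_strategies))
        total += np.log(mix)
    return -total

# Illustrative call on synthetic placeholder data (34 subjects, 4 strategies,
# 32 stacked stages); fitted shares are recovered with a softmax of the parameters.
prescribed = np.where(np.random.rand(34, 4, 32) < 0.5, 1.0, -1.0)
cooperated = (np.random.rand(34, 32) < 0.4).astype(int)
fit = minimize(neg_log_likelihood, x0=np.zeros(5), args=(prescribed, cooperated),
               method="Nelder-Mead")
shares = np.exp(fit.x[:4]) / np.exp(fit.x[:4]).sum()
```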

28 See Section B of the Online Appendix for details of the first iteration and a description of the 20 candidate strategies in Table B.1.
29 Results of the first iteration are reported in full in Table B.2 in Section B of the Online Appendix.


Table 2
Marginal effects of test scores on cooperation, by treatment.

                                 BASE                  C-SWITCH              D-SWITCH
                                 Estimate    S.E.      Estimate    S.E.      Estimate    S.E.
All Stages
  Mean                           0.401                 0.369                 0.294
  ACT Top 5%                     0.209***    (0.038)   0.023       (0.231)   0.048***    (0.011)
  ACT <Top 20%                   0.212***    (0.007)   0.079***    (0.021)   0.083       (0.055)
  ACT Percentile (continuous)    .0067***    (.0009)   .0019       (.0092)   .0067***    (.0021)
  Observations                   1056                  992                   1024
1st Stage
  Mean                           0.550                 0.442                 0.429
  ACT Top 5%                     0.359***    (0.006)   0.107       (0.193)   0.057       (0.046)
  ACT <Top 20%                   0.377***    (0.082)   0.065       (0.067)   0.134**     (0.062)
  ACT Percentile (continuous)    .0099***    (.0026)   .0062       (.0065)   .0084       (.0070)
  Observations                   231                   217                   224

Excludes subjects with no ACT or SAT scores reported. Standard errors clustered at session level. * Significant at 0.1 level. ** Significant at 0.05 level. *** Significant at 0.01 level.

Result 3. Subjects are more likely to use a simple selfish strategy of Always Defect in treatments with increased complexity of cooperative strategies. Subjects are less likely to use a Grim Trigger strategy as the complexity of this strategy increases between treatments.
According to a Wald test for joint differences, the sets of coefficients estimated in each treatment are not jointly significantly different between treatments.30 However, I find some interesting differences in individual coefficients between treatments. AD is significantly prevalent in all three treatments. Furthermore, the estimated prevalence of the simple AD strategy is 22 percentage points greater in D-SWITCH than in BASE (marginally significant) and 15 percentage points greater in C-SWITCH than in BASE (not significant). These estimates are consistent with the prediction that subjects are more likely to adopt a simple selfish strategy when the complexity of implementing cooperative strategies increases. TFT is significantly prevalent in all three treatments (at the 0.1 level), while STFT is significant in BASE and D-SWITCH only, but differences in the prevalence of these strategies between treatments are not statistically significant.
Interestingly, variation in the estimated prevalence of GT between treatments is consistent with a ranking of treatments by the complexity of this strategy. The estimated prevalence of GT is significantly greater (at the 0.1 level) in BASE, where its minimal finite automaton has 2 states, than in C-SWITCH, where its minimal finite automaton has 3 states. The estimated prevalence of GT is significantly greater than zero in both of these treatments. GT is most complex in D-SWITCH, where its minimal finite automaton has 4 states. Though GT's estimated prevalence in D-SWITCH is not significantly different from its estimated prevalence in the other treatments, its point estimate is smallest in this treatment and not significantly different from zero. As the estimated prevalence of GT declines across treatments, the estimated prevalence of AD increases, suggesting that as the complexity of GT increases subjects switch from this strategy to the simpler AD strategy. This effect on strategies, which is consistent with a model of costly strategic complexity, provides an explanation for the aggregate differences in overall cooperation between BASE and C-SWITCH and between C-SWITCH and D-SWITCH.
4.3. Analysis of ACT/SAT scores
In this section, I test for a relationship between cognitive ability and cooperation using probit regressions with the choice to cooperate as the dependent variable and regressors related to the subject's ACT or SAT-ACT concordance score. I use two specifications: one with indicator variables for whether a subject has a test score in the top 5% of all test-takers or below the top 20% of all test-takers,31 and one with the ACT percentile as a continuous variable. ACT scores were obtained for 62 of the 102 subjects who participated in the three main treatments of the experiment. SAT scores were obtained for 34 of the remaining 40 subjects, and SAT-ACT concordance scores were used for these subjects.32
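Continuing the hedged statsmodels sketch from Section 4.1, the two specifications could be written as follows; the score variables (act_top5, act_below20, act_pctile) and file name are assumed column names, not the author's data set.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Sketch of the two test-score specifications, estimated on one treatment's data
# with session-clustered standard errors (illustrative only).
df = pd.read_csv("choices_with_scores.csv")
base = df[df["treatment"] == "BASE"]

indicators = smf.probit("cooperate ~ act_top5 + act_below20", data=base).fit(
    cov_type="cluster", cov_kwds={"groups": base["session"]})
continuous = smf.probit("cooperate ~ act_pctile", data=base).fit(
    cov_type="cluster", cov_kwds={"groups": base["session"]})
```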
For subjects reporting a test score, Table 2 shows the results of probit regressions with the choice to cooperate in any
stage of a repeated game and in the rst stage only as the dependent variables, with separate regressions for each treatment.
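A regression of this form could be sketched as follows. The snippet is a hypothetical illustration using Python and statsmodels rather than the estimation code used for the paper; the data file and column names (coop, act_top5, act_below_top20, act_percentile, session, treatment) are assumptions.

```python
# Hypothetical sketch of the per-treatment probit regressions described above.
# Column names and the data file are assumed for illustration; they are not the
# paper's actual variable names or data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("experiment_choices.csv")   # assumed file: one row per subject-stage

for treatment, sub in df.groupby("treatment"):
    # Specification with score-category indicators (top 5% / below top 20%);
    # the omitted category is a score in the top 20% but not the top 5%.
    m_cat = smf.probit("coop ~ act_top5 + act_below_top20", data=sub)
    r_cat = m_cat.fit(cov_type="cluster", cov_kwds={"groups": sub["session"]})

    # Specification with ACT percentile entered as a continuous regressor.
    m_pct = smf.probit("coop ~ act_percentile", data=sub)
    r_pct = m_pct.fit(cov_type="cluster", cov_kwds={"groups": sub["session"]})

    print(treatment)
    print(r_cat.get_margeff().summary())     # marginal effects, as reported in the tables
    print(r_pct.get_margeff().summary())
```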

30 I also conduct Wald tests for differences of individual coefficients from zero and differences in individual coefficients between treatments.
31 I use ACT percentile as a measure of cognitive ability because ACT scores are based on a rank-order scale and not an additive scale. These categories were selected because they represent roughly symmetric tails of the distribution of ACT scores for subjects in this experiment. Other reasonable score cutoffs do not produce symmetric tails. Summary statistics are reported in Table C.2 in Section C of the Online Appendix.
32 See http://professionals.collegeboard.com/profdownload/act-sat-concordance-tables.pdf for SAT-ACT concordance tables. Six subjects were transfer students who reported neither test score.


Table 3
Marginal effects of treatments, test scores, and history of play on cooperation.
Rows (reported separately for all stages and for first-stage choices): C-SWITCH; D-SWITCH; ACT Top 5%; ACT Top 5% × C-SWITCH; ACT Top 5% × D-SWITCH; ACT <Top 20%; ACT <Top 20% × C-SWITCH; ACT <Top 20% × D-SWITCH; Coop. 1st Stage, Prev. Rd.; Opp. Coop. 1st Stage, Prev. Rd.; # of Stages, Prev. Rd.; Coop. 1st Stage, Rd. 1; Opp. Coop. 1st Stage, Rd. 1; Observations.
Columns: specifications (1)-(4), each reporting a marginal-effect coefficient and its standard error. Observations per specification: 2688 (all stages) and 576 (first stage).
[Individual cell estimates are not recoverable from this extraction; the key estimates are discussed in the text below.]
Excludes subjects with no ACT or SAT scores reported and results of the first repeated game (used as a regressor).
Standard errors clustered at session level.
* Significant at 0.1 level. ** Significant at 0.05 level. *** Significant at 0.01 level.

Result 4. In the baseline setting, there is a positive relationship between ACT percentile and cooperation.

For the BASE data, both regressions indicate that having a test score in the top 5% of all test-takers increases the likelihood of cooperation significantly compared to those with a score in the top 20% but not the top 5%, the omitted category. I also find that having a test score below the top 20% decreases the likelihood of cooperation compared to the omitted category in this treatment. Results of the specification using ACT percentile as a continuous variable are similar: ACT percentile has a significantly positive effect on cooperation overall and in the first stage. Hence, ACT scores provide strong evidence of a positive relationship between cognitive ability and cooperation in the standard repeated prisoner's dilemma environment. Because cooperative strategies are relatively complex in this environment and players with high cognitive ability should be more able to bear the cognitive costs of using complex strategies, this evidence is consistent with the idea that strategy choice is influenced by cognitive costs of strategic complexity.
For the treatments which increase the complexity of cooperative strategies beyond the baseline, the results are weaker and sometimes inconsistent with the above hypothesis. Most of the estimated coefficients for C-SWITCH and D-SWITCH have the expected sign, but only some of these are statistically significant. In the regressions using all stages of C-SWITCH, two estimates are inconsistent with the expected relationship: the estimated effect of having a score below the top 20% is positive and significant, while the coefficient on ACT percentile as a continuous measure is negative (but not significant). Hence, these regressions fail to show strong evidence of a correlation between cognitive ability and cooperation in treatments that increase the complexity of cooperative strategies relative to the baseline.
In order to isolate the relationship between test scores and cooperation and test for differences in this relationship between treatments, I conduct a series of four additional probit regressions which pool data from all three treatments and include a variety of controls. Results of these regressions are reported in Table 3. Specification (1) includes only indicator variables for the C-SWITCH and D-SWITCH treatments. Specification (2) adds a set of regressors that measure the effect of the history of play on the likelihood of cooperation, including the subject's action and the action of the subject's opponent (1 if cooperate and 0 otherwise) in the first stage of the previous repeated game and the first stage of the first repeated game, as well as the number of rounds in the previous repeated game. Specification (3) includes indicator variables for the complexity treatments and having an ACT score in the top 5% or below the top 20%, as well as terms for interactions between
the treatment and test score variables. Specification (4) includes all of the regressors from (2) and (3). All four of these specifications are estimated separately for the full data and for first-stage actions only. Because they are used as explanatory variables in some specifications, choices in the first repeated game are excluded from the set of observations, as are the choices of subjects with no test scores reported.
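As a rough sketch of the pooled specification (4), the snippet below combines treatment indicators, test-score indicators, their interactions, and the history-of-play controls in a single probit with standard errors clustered at the session level, and reports marginal effects. The data file and variable names are assumptions introduced for illustration, not the paper's estimation code.

```python
# Hypothetical sketch of pooled specification (4): treatment dummies, test-score
# indicators, their interactions, and history-of-play controls, with session-level
# clustering. Variable names are assumed for illustration.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("pooled_choices.csv")          # assumed file name
df = df[df["round"] > 1]                        # drop the first repeated game (used as a regressor)
df = df[df["has_test_score"] == 1]              # drop subjects with no reported score

formula = (
    "coop ~ c_switch + d_switch"
    " + act_top5 + act_top5:c_switch + act_top5:d_switch"
    " + act_below20 + act_below20:c_switch + act_below20:d_switch"
    " + coop_first_prev + opp_coop_first_prev + stages_prev"
    " + coop_first_rd1 + opp_coop_first_rd1"
)

result = smf.probit(formula, data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["session"]}
)
print(result.get_margeff(at="overall").summary())   # marginal effects, as in Table 3
```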
Results of specification (1) confirm that the average effects of both complexity treatments on cooperation are negative and, for first stage actions, statistically significant.33 The effect in D-SWITCH remains significant when additional regressors are included in specification (2), although the effect in C-SWITCH does not. Hence, increasing the complexity of cooperative strategies by making the defect action state-dependent has a robust cooperation-reducing effect. Making the cooperate action state-dependent has a consistent, though less statistically robust, effect. The likely reason for this difference between C-SWITCH and D-SWITCH is the greater tendency of subjects to stabilize on cooperation in one of the C-SWITCH sessions (discussed in detail below and in Section 4.1), which is not explained by test scores.
Specifications (2) and (4) reveal that the propensity to cooperate in a given stage is correlated to a large and significant extent with whether the subject cooperated in the first stages of the previous repeated game and the first repeated game. In contrast, the behavior of the subject's opponent in these past stages does not have a significant effect on the propensity to cooperate. Together, these results indicate that a subject's choice to use a cooperative or a selfish strategy tends to be relatively stable across repeated games. The number of stages in the previous repeated game is estimated to have a small but significant positive effect on the propensity to cooperate.34
Result 5. The relationship between ACT scores and cooperation found in the baseline setting is mitigated in treatments that
increase the complexity of cooperative strategies.
In both specifications (3) and (4), having an ACT score in the top 5% significantly increases the propensity to cooperate, and having a score below the top 20% significantly decreases the propensity to cooperate. Hence, in the baseline setting the estimated relationship between cognitive ability and cooperation is robust to controlling for the history of play. However, in most instances the C-SWITCH and D-SWITCH treatments interact significantly with the test score indicator variables, with the opposite sign of the main test score effect. These results suggest that the relationship between test scores and cooperation is attenuated in the complexity treatments.
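The attenuation can be read directly off the coefficients: the net marginal effect of a test-score indicator in a complexity treatment is its main effect plus the relevant interaction term, so an interaction of opposite sign shrinks the baseline relationship. A minimal arithmetic sketch with hypothetical numbers (not the Table 3 estimates):

```python
# Hypothetical illustration of how an interaction term of opposite sign attenuates
# the main test-score effect. The numbers below are placeholders, not estimates
# from Table 3.
main_effect_top5 = 0.20           # effect of a top-5% score in BASE (hypothetical)
interaction_top5_cswitch = -0.15  # top-5% x C-SWITCH interaction (hypothetical)

net_effect_cswitch = main_effect_top5 + interaction_top5_cswitch
print(f"Net effect of a top-5% score in C-SWITCH: {net_effect_cswitch:+.2f}")  # +0.05
```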
Because the experiment was primarily designed to seek evidence of a general relationship between strategic complexity and cooperation, the data do not permit a precise explanation for the difference in the effect of ACT scores on cooperation across treatments. One interpretation would be that individuals with high cognitive ability are more likely to use cooperative strategies in a standard prisoner's dilemma environment, but further increasing the implementation complexity of these strategies eliminates the advantage for these individuals. In terms of the minimal finite automata that implement these strategies, the relationship between cognitive ability and cooperation appears to hold when cooperative strategies are implemented by 2-state automata, but it disappears when they require automata of 3 or more states. Of course, this objective measure should not be taken too literally. The important implication of this result is that additional complexity attenuates the relationship between cognitive ability and cooperation in the standard prisoner's dilemma, which is observed on the aggregate level by Jones (2008) and on the individual level in this experiment. Deeper understanding of what drives this result is an avenue for future research.
Because the complexity treatment effects are attenuated when ACT scores are included in these regressions, an alternative interpretation of the main experimental results seems plausible: if the proportion of subjects with high cognitive ability varies between sessions of the experiment, their greater tendency to cooperate may lead to differences between treatments independent of any complexity effect. While I cannot completely rule out this alternative explanation, several features of the data are inconsistent with this hypothesis. First, even when controlling for ACT scores, the D-SWITCH treatment effect remains significant. Second, a breakdown of ACT scores and cooperation rates by session (see Table C.3 in Section C of the Online Appendix) does not support this explanation. Session 2 of BASE and session 1 of D-SWITCH have similar ACT score distributions, but lower cooperation rates prevail in the D-SWITCH session. Compared to these sessions, session 2 of C-SWITCH has a higher proportion of subjects with ACT scores in the top 5% but lower cooperation rates. The session with the highest overall cooperation rate is C-SWITCH session 1, which is also the session with the lowest proportion of subjects with an ACT score in the top 5%. However, in all sessions of C-SWITCH and D-SWITCH the first stage cooperation rate is lower than in either session of BASE, indicating that subjects in the treatment sessions are less likely to adopt cooperative strategies than those in the control. Therefore, though I cannot rule out that differences in cognitive ability may be partially responsible for differences in cooperation rates between sessions, the complexity treatments appear to have a clearer and more consistent effect on strategy choice.

33 Results of specification (1) differ somewhat from the overall treatment effects presented in Fig. 5 because these regressions exclude the results of the first repeated game (used as a regressor), while Fig. 5 summarizes all action choices.
34 Because the same sequence of repeated game lengths is used in all sessions of this experiment, this effect on cooperation does not confound the complexity interpretation of between-treatment differences.


5. Conclusion
In this paper, I study whether cooperation in the indefinitely repeated prisoner's dilemma is sensitive to cognitive costs associated with strategic complexity. Strategic complexity in this game is increased through random switching between permutations of the payoff table during repeated games. This manipulation increases the information processing necessary to implement strategies supporting cooperation. Results indicate that increasing the complexity of cooperative strategies makes subjects less likely to adopt them. The effect appears robust because cooperation is reduced regardless of whether the cooperate action or the defect action is manipulated to increase the complexity of cooperative strategies.
Analysis of subjects' ACT and SAT scores reveals evidence of a link between cooperation and cognitive ability. A correlation between average SAT scores in the subject pool and aggregate cooperation levels was previously reported by Jones (2008) in a meta-study of prisoner's dilemma experiments, but to my knowledge this study is the first to identify a link between cognitive ability and cooperation at the individual level in the repeated game. Because cooperative strategies are relatively complex and players with high cognitive ability should be more able to bear the cognitive costs of implementing complex strategies, this relationship supports the idea that strategy choice is influenced by cognitive costs of strategic complexity.
This experiment contributes to the literature on how cognitive costs and abilities affect behavior in games by adapting a well-developed model of boundedly rational strategy selection to a familiar experimental setting. Prior research has established that additional cognitive costs such as external memory tasks (Milinski and Wedekind, 1998; Duffy and Smith, 2011) and playing other games simultaneously (Bednar et al., 2012) can affect behavior in a repeated prisoner's dilemma, but the mechanism for this effect remains unknown. This experimental design is grounded in an existing theoretical framework describing a potential mechanism for the above results: cognitive costs of information processing associated with strategic complexity. The model provides testable predictions which this experiment confirms with results that resemble those of other studies, suggesting that the complexity of repeated game strategies carries information processing costs that interact with other sources of cognitive burden in influencing strategy choice.
On a practical level, experimental evidence such as this may help to improve the applicability of game-theoretic predictions to real-world problems. The results suggest that the cognitive cost of implementation complexity can influence strategy choice and ultimately the efficiency of outcomes, and they may be particularly relevant to some specific applications. Sustaining cooperation in the complex world in which we live often requires individuals to condition their actions not only on the behavior of others, but also on the observable state of nature. The experimental design simulates this source of complexity and shows that it can have an impact on cooperation. For example, consider collusion in a duopoly with a fluctuating but publicly observable demand state, where only the collusive price (if the competitive price is determined by a constant cost) or only the competitive price (if the collusive price is a constant focal point price) depends on demand fluctuations. Results of this experiment suggest that in either case, sustaining collusion is less likely than in an environment with relatively constant demand.
These results suggest several possible lines of future experimental research. A natural extension would attempt to describe in more detail the relationship between cooperation and strategic complexity, using similar manipulations to iteratively increase the complexity of cooperative strategies until cooperation stops. A similar design could also be used to study the importance of strategic complexity in other games (for instance, public goods or network formation games). The present work addresses the importance of implementation complexity in strategy choice, but a different and equally important type of complexity, computational complexity, also deserves investigation. Finally, the results found using data on subjects' ACT and SAT scores highlight the value of collecting such data in experimental research and point to another line of future work exploring in more depth the link between cognitive ability and cooperation found in this study.
Acknowledgements
The author would like to thank Dan Levin and James Peck for valuable guidance and support. This work benefited from comments by Asen Ivanov, Mark R. Johnson, John Kagel, Ehud Kalai, Brandon Restrepo, Lixin Ye, participants of the theory/experimental reading group at Ohio State, anonymous referees, and seminar participants at the 2011 ESA International Meeting, the 2011 PEA Conference, and the Kent State University business school.
Appendix A. Supplementary data
Supplementary data associated with this article can be found, in the online version, at http://dx.doi.org/10.1016/j.jebo.2014.07.005.
References
Abreu, D., Rubinstein, A., 1988. The structure of Nash equilibrium in repeated games with finite automata. Econometrica 56 (6), 1259–1281.
Agranov, M., Caplin, A., Tergiman, C., 2011. The process of choice in guessing games (Working paper).
Aoyagi, M., Fréchette, G., 2009. Collusion as public monitoring becomes noisy: experimental evidence. J. Econ. Theor. 144 (3), 1135–1165.
Aumann, R.J., 1981. Survey of repeated games. In: Aumann, R.J. (Ed.), Essays in Game Theory and Mathematical Economics in Honor of Oskar Morgenstern. Bibliographisches Institut, Mannheim, Vienna, Zurich.
Banks, J.S., Sundaram, R.K., 1990. Repeated games, finite automata, and complexity. Games Econ. Behav. 2 (2), 97–117.
Baron, D., Kalai, E., 1993. The simplest equilibrium of a majority rule division game. J. Econ. Theor. 61 (2), 290–301.
Bednar, J., Chen, Y., Liu, T.X., Page, S., 2012. Behavioral spillovers and cognitive load in multiple games: an experimental study. Games Econ. Behav. 74 (1), 12–31.
Benjamin, D.J., Shapiro, J.M., 2005. Does Cognitive Ability Reduce Psychological Bias? (Working paper).
Binmore, K.G., Samuelson, L., 1992. Evolutionary stability in repeated games played by finite automata. J. Econ. Theor. 57 (2), 278–305.
Blonski, M., Ockenfels, P., Spagnolo, G., 2011. Equilibrium selection in the repeated prisoner's dilemma: axiomatic approach and experimental evidence. Am. Econ. J.: Microecon. 3 (3), 164–192.
Brañas-Garza, P., Espinosa, M.P., Rey-Biel, P., 2011a. Travelers' types. J. Econ. Behav. Organ. 78 (1–2), 25–36.
Brañas-Garza, P., García-Muñoz, T., Hernán, R., 2011b. Cognitive effort in the beauty contest game (Working paper).
Burks, S.V., Carpenter, J.P., Goette, L., Rustichini, A., 2009. Cognitive skills affect economic preferences, strategic behavior, and job attachment. Proc. Natl. Acad. Sci. U. S. A. 106 (19), 7745–7750.
Burnham, T., Cesarini, D., Johannesson, M., Lichtenstein, P., Wallace, B., 2009. Higher cognitive ability is associated with lower entries in a p-beauty contest. J. Econ. Behav. Organ. 72 (1), 171–175.
Camera, G., Casari, M., 2009. Cooperation among strangers under the shadow of the future. Am. Econ. Rev. 99 (3), 979–1005.
Camera, G., Casari, M., Bigoni, M., 2012. Cooperative strategies in anonymous economies: an experiment. Games Econ. Behav. 75 (2), 570–586.
Casari, M., Ham, J.C., Kagel, J.H., 2007. Selection bias, demographic effects, and ability effects in common value auction experiments. Am. Econ. Rev. 97 (4), 1278–1304.
Cason, T.N., Gangadharan, L., 2012. Cooperation spillovers and price competition in experimental markets. Econ. Inq. 51 (3), 1715–1730.
Cason, T.N., Savikhin, A., Sheremeta, R.M., 2010. Behavioral spillovers in coordination games. Eur. Econ. Rev. 56 (2), 233–245.
Chatterjee, K., Sabourian, H., 2009. Game theory and strategic complexity. In: Meyers, R.A. (Ed.), Encyclopedia of Complexity and System Science, vol. I. Springer, New York.
Cooper, D.J., 1996. Supergames played by finite automata with finite costs of complexity in an evolutionary setting. J. Econ. Theor. 68 (1), 266–275.
Dal Bó, P., 2005. Cooperation under the shadow of the future: experimental evidence from infinitely repeated games. Am. Econ. Rev. 95 (5), 1591–1604.
Dal Bó, P., Fréchette, G., 2011a. The evolution of cooperation in repeated games: experimental evidence. Am. Econ. Rev. 101 (1), 411–429.
Dal Bó, P., Fréchette, G., 2011b. Strategy Choice in the Infinitely Repeated Prisoner's Dilemma (Working paper).
Dreber, A., Rand, D.G., Fudenberg, D., Nowak, M.A., 2008. Winners don't punish. Nature 452, 348–351.
Duffy, J., Ochs, J., 2009. Cooperative behavior and the frequency of social interaction. Games Econ. Behav. 66 (2), 785–812.
Duffy, S., Smith, J., 2011. Cognitive Load in the Multi-player Prisoner's Dilemma Game: Are There Brains in Games? (Working paper).
El-Gamal, M.A., Grether, D.M., 1995. Are people Bayesian? Uncovering behavioral strategies. J. Am. Stat. Assoc. 90, 1137–1145.
Embrey, M., Mengel, F., Peeters, R., 2012. Strategic commitment and cooperation in experimental games of strategic complements and substitutes (Working paper).
Engle-Warnick, J., McCausland, W.J., Miller, J.H., 2007. The Ghost in the Machine: Inferring Machine-Based Strategies from Observed Behavior (Working paper).
Engle-Warnick, J., Slonim, R.L., 2006. Inferring repeated-game strategies from actions: evidence from trust game experiments. Econ. Theor. 28 (3), 603–632.
Feinberg, R.M., Husted, T.A., 1993. An experimental test of discount-rate effects on collusive behaviour in duopoly markets. J. Ind. Econ. 41 (2), 153–160.
Fershtman, C., Kalai, E., 1993. Complexity considerations and market behavior. Rand J. Econ. 24 (2), 224–235.
Fischbacher, U., 2007. z-Tree: Zurich Toolbox for ready-made economic experiments. Exp. Econ. 10 (2), 171–178.
Frey, M.C., Detterman, D.K., 2004. Scholastic assessment or g? The relationship between the SAT and general cognitive ability. Psychol. Sci. 15 (6), 373–378.
Fudenberg, D., Rand, D.G., Dreber, A., 2012. Slow to anger and fast to forgive: cooperation in an uncertain world. Am. Econ. Rev. 102 (2), 720–749.
Gale, D., Sabourian, H., 2005. Complexity and competition. Econometrica 73 (3), 739–769.
Georganas, S., Healy, P.J., Weber, R.A., 2012. On the Persistence of Strategic Sophistication (Working paper).
Gill, D., Prowse, V., 2012. Cognitive ability and learning to play equilibrium: a level-k analysis (Working paper).
Güth, W., Hager, K., Kirchkamp, O., Schwalbach, J., 2010. Testing Forbearance Experimentally: Duopolistic Competition of Conglomerate Firms (Working paper).
Hauk, E., 2003. Multiple prisoner's dilemma games with(out) an outside option: an experimental study. Theor. Decis. 54 (3), 207–229.
Hauk, E., Nagel, R., 2001. Choice of partners in multiple two-person prisoner's dilemma games: an experimental study. J. Conflict Resol. 45 (6), 770–793.
Hawes, D.R., DeYoung, C.G., Gray, J.R., Rustichini, A., 2012. Intelligence Moderates Responses to Gains and Losses by Enriching Task Representations (Working paper).
Ho, T.-H., 1996. Finite automata play repeated prisoner's dilemma with information processing costs. J. Econ. Dynam. Control 20, 173–207.
Holt, C.A., 1985. An experimental test of the consistent-conjectures hypothesis. Am. Econ. Rev. 75 (3), 314–325.
Hopcroft, J.E., Ullman, J.D., 1979. Introduction to Automata Theory, Languages, and Computation. Addison-Wesley, Reading, MA.
Ivanov, A., Levin, D., Peck, J., 2009. Hindsight, foresight, and insight: an experimental study of a small-market investment game with common and private values. Am. Econ. Rev. 99 (4), 1484–1507.
Ivanov, A., Levin, D., Peck, J., 2013. Behavioral biases in endogenous-timing herding games: an experimental study. J. Econ. Behav. Organ. 87, 25–34.
Johnson, M.R., 2006a. Economic Choice Semiautomata: Structure, Complexities and Aggregations (Working paper).
Johnson, M.R., 2006b. Algebraic Complexity of Strategy-implementing Semiautomata for Repeated-Play Games (Working paper).
Jones, G., 2008. Are smarter groups more cooperative? Evidence from prisoner's dilemma experiments, 1959–2003. J. Econ. Behav. Organ. 68, 489–497.
Jones, M., 2013. Nobody goes there anymore, it's too crowded: Level-k Thinking in the Restaurant Game (Working paper).
Kalai, E., Stanford, W., 1988. Finite rationality and interpersonal complexity in repeated games. Econometrica 56 (2), 397–410.
Koenig, K.A., Frey, M.C., Detterman, D.K., 2008. ACT and general cognitive ability. Intelligence 36 (2), 153–160.
Linster, B.G., 1992. Evolutionary stability in the infinitely repeated prisoners' dilemma played by two-state Moore machines. Southern Econ. J. 58 (4), 880–903.
Lipman, B.L., Srivastava, S., 1990. Informational requirements and strategic complexity in repeated games. Games Econ. Behav. 2 (3), 273–290.
Milinski, M., Wedekind, C., 1998. Working memory constrains human cooperation in the prisoner's dilemma. Proc. Natl. Acad. Sci. U. S. A. 95 (23), 13755–13758.
Millet, K., Dewitte, S., 2007. Altruistic behavior as a costly signal of general intelligence. J. Res. Pers. 41 (2), 316–326.
Murnighan, J.K., Roth, A.E., 1983. Expecting continued play in prisoner's dilemma games. J. Conflict Resol. 27 (2), 279–300.
Palfrey, T.R., Rosenthal, H., 1994. Repeated play, cooperation and coordination: an experimental study. Rev. Econ. Stud. 61 (3), 545–565.
Roth, A.E., Murnighan, J.K., 1978. Equilibrium behavior and repeated play of the prisoner's dilemma. J. Math. Psychol. 17 (2), 189–198.
Rubinstein, A., 1986. Finite automata play the repeated prisoner's dilemma. J. Econ. Theor. 39 (1), 83–96.
Salant, Y., 2011. Procedural analysis of choice rules with applications to bounded rationality. Am. Econ. Rev. 101 (2), 724–748.
Savikhin, A., Sheremeta, R.M., 2012. Simultaneous decision-making in competitive and cooperative environments. Econ. Inq. 51 (2), 1311–1323.
Stevens, J.R., Volstorf, J., Schooler, L.J., Rieskamp, J., 2011. Forgetting constrains the emergence of cooperative decision strategies. Front. Psychol. 1, 1–12.
Volij, O., 2002. In defense of DEFECT. Games Econ. Behav. 39 (2), 309–321.
Winkler, I., Joseph, K., Rudolph, U., 2008. On the usefulness of memory skills in social interactions: modifying the iterated prisoner's dilemma. J. Conflict Resol. 52 (3), 375–384.
