Theory of Games and Statistical Decisions
Ebook · 608 pages · 9 hours

About this ebook

Evaluating statistical procedures through decision and game theory, as first proposed by Neyman and Pearson and extended by Wald, is the goal of this problem-oriented text in mathematical statistics. First-year graduate students in statistics and other students with a background in statistical theory and advanced calculus will find a rigorous, thorough presentation of statistical decision theory treated as a special case of game theory.
The work of Borel, von Neumann, and Morgenstern in game theory, of prime importance to decision theory, is covered in its relevant aspects: reduction of games to normal forms, the minimax theorem, and the utility theorem. With this introduction, Blackwell and Professor Girshick look at: Values and Optimal Strategies in Games; General Structure of Statistical Games; Utility and Principles of Choice; Classes of Optimal Strategies; Fixed Sample-Size Games with Finite Ω and with Finite A; Sufficient Statistics and the Invariance Principle; Sequential Games; Bayes and Minimax Sequential Procedures; Estimation; and Comparison of Experiments.
A few topics not directly applicable to statistics, such as perfect information theory, are also discussed. Prerequisites for full understanding of the procedures in this book include knowledge of elementary analysis, and some familiarity with matrices, determinants, and linear dependence. For purposes of formal development, only discrete distributions are used, though continuous distributions are employed as illustrations.
The number and variety of problems presented will be welcomed by all students, computer experts, and others using statistics and game theory. This comprehensive and sophisticated introduction remains one of the strongest and most useful approaches to a field which today touches areas as diverse as gambling and particle physics.

Language: English
Release date: June 14, 2012
ISBN: 9780486150895
    Book preview


    THEORY OF GAMES AND STATISTICAL DECISIONS

    DAVID BLACKWELL
    Professor of Statistics
    University of California, Berkeley

    M. A. GIRSHICK
    Late Professor of Statistics
    Stanford University

    Dover Publications, Inc.
    New York

    Copyright © 1954 by David Blackwell and Paula Girshick Ben-Amos.

    All rights reserved under Pan American and International Copyright Conventions.

    Published in Canada by General Publishing Company, Ltd., 30 Lesmill Road, Don Mills, Toronto, Ontario.

    This Dover edition, first published in 1979, is an unabridged and unaltered republication of the work originally published in 1954 by John Wiley & Sons, Inc.

    International Standard Book Number: 978-0-486-15089-5

    Library of Congress Catalog Card Number: 79-87808

    Manufactured in the United States of America

    Dover Publications, Inc.

    31 East 2nd Street, Mineola, N.Y. 11501

    Dedicated

    To the Memory of

    ABRAHAM WALD

    Preface

    Decision theory applies to statistical problems the principle that a statistical procedure should be evaluated by its consequences in various circumstances, a principle first clearly emphasized by Neyman and Pearson in their theory of testing hypotheses. The extension of this principle to all statistical problems was proposed by A. Wald in 1939, and developed by him in a series of papers culminating in his book, Statistical Decision Functions, Wiley, 1950. The importance of Wald’s approach has been widely recognized, and the decision-theory viewpoint has been adopted in much recent research in statistics. This book is intended primarily as a textbook in decision theory for first-year graduate students in statistics.

    Wald’s mathematical model for decision theory is a special case of that for game theory, as introduced by E. Borel in 1921 and more generally by J. von Neumann in 1928, and expanded in the definitive book by von Neumann and O. Morgenstern, Theory of Games and Economic Behavior, Princeton, 1944. Many of the results in von Neumann and Morgenstern’s book, e.g., the reduction of games to normal form, the minimax theorem, and the utility theorem, and much of the research stimulated by that book are of basic importance for decision theory, so that our book begins with a treatment of the relevant parts of game theory. Occasionally we have permitted ourselves the luxury of discussing topics in game theory not strictly relevant to statistics: perfect-information games, for example. For an excellent treatment of numerous aspects of game theory not treated here, the reader is referred to J. C. C. McKinsey’s Introduction to the Theory of Games, McGraw-Hill, 1952.

    Many have contributed to the development of game theory and decision theory, and in most cases the reader will find in the references indicated at the end of the book a more complete treatment of the topics presented here.

    The mathematical prerequisite for reading this book is a course in elementary analysis, covering such topics as limits of sequences, uniform convergence, the Riemann integral, and the Heine-Borel theorem. For a few sections, some knowledge of matrices, determinants, and linear dependence is required. The statistical concepts used are defined in the book, but the reader who has no previous knowledge of statistics may find some parts difficult. Only discrete distributions appear in the formal development, but continuous distributions, e.g., the normal, are used for illustrative purposes. For a fuller treatment of the probability and statistical concepts used here, the reader is referred to two excellent books: W. Feller, An Introduction to Probability Theory and Its Applications, Wiley, 1950; and A. M. Mood, Introduction to the Theory of Statistics, McGraw-Hill, 1950.

    We are indebted to the Office of Naval Research for its continuing support, which made it possible for us to write this book. We are also indebted to our former colleagues at the Rand Corporation, countless discussions with whom helped to clarify our ideas, to Herman Rubin and Patrick Suppes, whose participation amounts practically to co-authorship, to J. C. C. McKinsey for numerous improvements, particularly in Chapters 1 and 2, to Oscar Wesler for introducing a great deal of clarity and rigor in the final revision of this book, to Rosedith Sitgreaves and Russell N. Bradt for substantial contributions, to Gladys Garabedian for preparing the figures for the draftsman, and to many other people for assistance and criticisms. We are grateful to Phyllis Winkler, who typed two versions of this book, performing 1-1 transformations of notation and deciphering cryptic instructions from the authors, without ever losing her sunny disposition.

    DAVID BLACKWELL

    M. A. GIRSHICK

    February 1954

    Contents

    1      GAMES IN NORMAL FORM

    1.1   Introduction

    1.2   The Concept of a Strategy

    1.3   The Normal Form

    1.4   Equivalent Games

    1.5   Illustrative Examples

    1.6   Lower and Upper Pure Value

    1.7   Perfect-Information Games

    1.8   Mixed Strategies

    2      VALUES AND OPTIMAL STRATEGIES IN GAMES

    2.1   Introduction

    2.2   Convex Sets and Convex Functions

    2.3   Games with a Value

    2.4   S Games

    2.5   Games with Convex Payoff

    2.6   Extended Mixed Strategies

    2.7   Solving Games

    3      GENERAL STRUCTURE OF STATISTICAL GAMES

    3.1   Introduction

    3.2   The Sample Space

    3.3   The Space of Pure Strategies for the Statistician

    3.4   The Notion of a Random Variable

    3.5   Statistician’s Space of Strategies for Single Experiments

    3.6   Mixed Strategies for Single-Experiment Games

    3.7   Games Involving Densities

    3.8   Preliminary Remarks Concerning Sequential Games

    3.9   Space of Statistician’s Strategies in Truncated Sequential Games

    3.10 Definition of Truncated Sequential Games

    3.11 Some Further Theorems on Probability

    4      UTILITY AND PRINCIPLES OF CHOICE

    4.1   Introduction

    4.2   Utility

    4.3   Principles of Choice

    5      CLASSES OF OPTIMAL STRATEGIES

    5.1   Introduction

    5.2   Complete Classes of Strategies in S Games

    5.3   The Class of Games

    5.4   Definition of Classes of Optimal Strategies

    5.5   Set-Theoretic Relations among the Classes of Strategies

    5.6   Conditions under Which the Classes of Strategies Are Complete

    5.7   Completeness of the Class of Admissible Strategies

    6      FIXED SAMPLE-SIZE GAMES WITH FINITE Ω

    6.1   Introduction

    6.2   Complete Classes of Strategies in Games with Finite Ω

    6.3   Bayes Solutions for Finite Ω

    6.4   Illustrative Examples of Fixed Sample-Size Statistical Games in Which Ω Is Finite

    7      FIXED SAMPLE-SIZE GAMES WITH FINITE A

    7.1   Introduction

    7.2   On the Equivalence of Two Methods of Randomization

    7.3   Bayes Solutions in Fixed Sample-Size Games with Finite A

    7.4   Fixed Sample-Size Multidecision Games Where Is the Exponential Class

    7.5   Minimax Strategies in Fixed Sample-Size Multidecision Games

    7.6   The Completeness of the Classes , , and

    7.7   Tests of Composite Hypotheses

    8      SUFFICIENT STATISTICS AND THE INVARIANCE PRINCIPLE IN STATISTICAL GAMES

    8.1   Introduction

    8.2   Partitions of Z Which Are Sufficient on

    8.3   The Principle of Sufficiency

    8.4   Minimal Sufficient Partitions

    8.5   Sufficient Statistics for Densities

    8.6   Principle of Invariance for Finite Groups

    8.7   Application of the Invariance Principle to Sampling from a Finite Population

    8.8   A Special Case of the Invariance Principle with an Infinite Group

    9      SEQUENTIAL GAMES

    9.1   Introduction

    9.2   Bayes Procedures for Sequential Games

    9.3   Bayes Sequential Procedures for Constant Cost and Identically and Independently Distributed Observations

    9.4   Bayes Sequential Procedures for Finite Ω

    10      BAYES AND MINIMAX SEQUENTIAL PROCEDURES WHEN BOTH Ω AND A ARE FINITE

    10.1   Introduction

    10.2   Method for Determining the Boundaries of the Stopping Regions for Truncated Sequential Dichotomies

    10.3   Method for Determining the Boundaries of the Stopping Regions for Non-Truncated Sequential Dichotomies

    10.4   Some Theory in Sequential Analysis

    10.5   Approximation for and for (n)

    10.6   The Determination of the Stopping Regions for a Special Class of Dichotomies

    10.7   Examples of Trichotomies, i.e., Ω = (1, 2, 3), A = (1, 2, 3)

    10.8   Minimax Strategies in Sequential Games with Finite Ω and A

    10.9   Another Optimal Property of the Sequential-Probability Ratio Test

    11      ESTIMATION

    11.1   Introduction

    11.2   Bayes Estimates for Special Loss Functions

    11.3   The Translation-Parameter Problem

    11.4   The Scale-Parameter Problem

    11.5   Admissible Minimax Estimates for the Exponential Class

    12      COMPARISON OF EXPERIMENTS

    12.1   Introduction

    12.2   Some Equivalent Conditions

    12.3   Combinations of Experiments

    12.4   Dichotomies

    12.5   Binomial Dichotomies

    REFERENCES

    INDEX

    CHAPTER 1

    Games in Normal Form

    1.1. Introduction

    A game is characterized by a set of rules having a certain formal structure, governing the behavior of certain individuals or groups, the players. Chess and bridge are examples of games in the sense considered here, and will be used for illustrative purposes.

    Broadly speaking, the rules provide that the game shall consist of a finite sequence of moves in a specified order, and the nature of each move is prescribed. Moves are of two kinds, personal moves and chance moves. A personal move is a choice by one of the players of one of a specified, possibly infinite, set of alternatives; for instance, each move in chess is a personal move; the first move is a choice by White of 1 of 20 specified alternatives. The actual decision made in a particular play of a game at a given personal move we shall call the choice at that move. A chance move also results in the choice of one of a specified set of alternatives; here the alternative is selected not by one of the players, but by a chance mechanism, with the probabilities with which the mechanism selects the various alternatives specified by the rules of the game. For instance, the first move in bridge consists of dealing the first card to a specified player. This is a chance move with 52 alternatives; the rules require that each alternative shall have probability 1/52 of being selected. The actual selection made in a particular play of a game at a given chance move we shall call the outcome at that move.

    In terms of moves, the rules of a game have the following structure. For the first move, the rules specify whether it is to be a personal move or a chance move. If it is a personal move, the rules list the available alternatives and specify which player is to make the choice; if it is a chance move, the rules list the available alternatives and specify the probabilities with which they are to be selected. For moves after the first, say the kth move with k > 1, the rules specify, as a function of the choices and outcomes at the first k – 1 moves (a) whether the kth move is to be a personal move or a chance move, (b) if a chance move, the alternatives and their probabilities of selection, and (c) if a personal move, the alternatives, the player who is to make the choice, and the information concerning the choices and outcomes at the first k – 1 moves that is given to him before he makes his choice. Finally, the rules specify, as a function of the choices and outcomes at the successive moves, when the game shall terminate and the score, not necessarily numerical, that is to be assigned to each player.

    Various points should be noted in connection with the above description. (1) For k > 1, the characteristics of the kth move depend on the results of previous moves. In bridge, for instance, when the first three bids are pass, the move after the fourth bid will be a chance move, i.e., a new deal, if the fourth bid is pass, and a personal move, i.e., a bid by the next player, if the fourth bid is anything other than pass. (2) The information given to a player when he is to make a certain move does not necessarily include the information that was given to him at an earlier move. In bridge, for instance, two partners are best considered as a single player; each partner has complete knowledge of his own hand throughout the play, but not of his partner’s hand. (3) The number of moves occurring in an actual play is not fixed in advance but depends on the successive outcomes and choices. The rules must guarantee, however, that the game will terminate eventually. (4) The concept of move is to some extent indeterminate. For instance, the deal in bridge may be considered either as a single chance move or as a succession of 52 chance moves.

    Figure 1

    An interesting graphical representation of a game can be given in terms of a tree as is illustrated in Figure 1. In this figure the vertices represent various positions for the players as the result of the outcomes of a sequence of moves. No vertex is occupied by more than one player. Play begins at a specified vertex O, and each move is a change in position from a vertex to one of the adjoining higher vertices. When any vertex without branches is reached, the play of the game is over. Each non-terminal vertex has a label (in Figure 1, the labels I and II are for players I and II, respectively), specifying who is to move if that vertex is reached, and the number of alternatives is simply the number of branches at the vertex. The information available to the players is specified by a partition S of the vertices into sets V, called information sets, such that, when a vertex v ε V is reached, the player who is to move is told only the set V, and not, unless V contains only a single element, the exact position v. Certain obvious requirements are imposed on the information partition S: (1) all v ε V must have the same mover and the same number of alternatives, and (2) if v1 ε V and v2 is attainable from v1, then v2 ∉ V. In the game given by Figure 1, the information partition S contains five information sets as members, and three of these, V1, V3, V4, are sets containing only one element. When every information set V of a game contains only one element, the game is said to be a game of perfect information. This special class of games is discussed in Section 1.7.
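    The tree-and-partition description above can be made concrete in code. The sketch below (Python, with an invented miniature tree, since Figure 1 itself is not reproduced here) stores each vertex's mover and branches and checks requirement (1) on an information partition: all vertices in one information set must have the same mover and the same number of alternatives.

```python
# A small hypothetical game tree (not Figure 1, which is unavailable here).
# Each non-terminal vertex records its mover ("I", "II", or "chance")
# and its branches; terminal vertices have no entries.
tree = {
    "O":  {"mover": "I",  "children": ["a", "b"]},
    "a":  {"mover": "II", "children": ["a1", "a2"]},
    "b":  {"mover": "II", "children": ["b1", "b2"]},
}

# Information partition: II cannot distinguish vertices a and b,
# so they lie in one information set.
partition = [{"O"}, {"a", "b"}]

# Requirement (1): within one information set, a single mover and a
# single number of alternatives.
for V in partition:
    movers = {tree[v]["mover"] for v in V}
    sizes = {len(tree[v]["children"]) for v in V}
    assert len(movers) == 1 and len(sizes) == 1
```

Requirement (2) would additionally forbid any information set from containing both a vertex and one of its descendants.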

    A formal mathematical description of the structure of a game in terms of the concepts of moves and information will not be attempted here. One of the most important results in the theory of games is the fact that all games, no matter what their detailed formal structure, may be reduced to an exceedingly simple form, the so-called normal form. In the next two sections we give an informal description of how this reduction is effected.

    1.2. The Concept of a Strategy

    Imagine that you are to play the White pieces in a single game of chess, and that you discover you are unable to be present for the occasion. There is available a deputy, who will represent you on the occasion, and who will carry out your instructions exactly, but who is absolutely unable to make any decisions of his own volition. Thus, in order to guarantee that your deputy will be able to conduct the White pieces throughout the game, your instructions to him must envisage every possible circumstance in which he may be required to move, and must specify, for each such circumstance, what his choice is to be. Any such complete set of instructions constitutes what we shall call a strategy. Thus, a strategy for White must specify a first move and, for each possible reply by Black, a corresponding next move and, in general, for each possible sequence of choices m1, · · ·, m2k, a choice c(m1, · · ·, m2k) for his (k + 1)st move (the [2k + 1]st move of the game) among the alternatives available to him as a result of the situation m1, · · ·, m2k. Of course, the specification of a strategy that has a reasonable chance of winning against an opponent who will actually be present would be a formidable job, mainly because you would have to make a tremendous number of decisions, most of which would turn out to be irrelevant in any particular play of the game, since most of the situations for which decisions had to be made will simply not turn up. For instance, a White player who is actually present must make two decisions in order to make his first two moves, whereas one who is operating by deputy must make 21 decisions to guarantee that his deputy will be able to make two moves. Thus the use of strategies would make the actual play of a single chess game a major undertaking; nevertheless, we shall see that the concept of strategy leads to great simplification for theoretical purposes.

    In general, a strategy for a given player in a given game consists of a specification, for each situation that could arise in which he would be required to make a personal move, of his choice in that situation. It is to be emphasized that the choice of a strategy imposes no theoretical handicap on a player; specifically he is not required to make any decision on the basis of less information than he would have in the actual play at the stage where this decision was required, since the decisions whose totality constitutes a strategy are conditional ones of the form "If a situation should arise in which I am informed that it is my move and I am given information z about the results of previous moves (chance and personal) and I am told that the set of alternatives open to me is S, my choice will be s (a point of S)." Properly speaking, the information z about the results of previous moves includes the information that it is his turn to move and the specification of the alternatives open to him, since these latter facts in general constitute information about the results of previous moves. With the understanding that information z includes specification of S and of the fact that it is his turn to move, we may describe a player’s strategy as a listing of all possible z’s and an assignment to each z of an s in the set S of available alternatives specified by z, i.e., if Z is the space of possible z’s (in terms of the tree, the space of all information sets V with the player’s label) and S(z) is the set of alternatives specified by z (the set of branches leading out of a vertex of the particular information set), the player’s strategy is a function f such that f(z) ε S(z) for all z ε Z (i.e., f is a rule that specifies, for each information set, which branch is to be selected). The space of all possible strategies f for a given player will be designated by F. 
(The number of elements in F is clearly given by the product r1r2 · · · rn, where n is the number of information sets with the player’s label and ri is the number of alternatives for the ith information set.)
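    The parenthetical count can be verified mechanically. In the sketch below (Python; the information-set labels and alternative sets are invented for illustration), a strategy is one choice of branch per information set, so F is the Cartesian product of the sets S(z), and its size is the product r1r2 · · · rn.

```python
from itertools import product

# Invented information sets for one player: label z -> alternatives S(z).
S = {"z1": ["a", "b", "c"],   # r1 = 3
     "z2": ["d", "e"],        # r2 = 2
     "z3": ["f", "g"]}        # r3 = 2

labels = list(S)
# Each strategy f assigns to every z some element of S(z).
F = [dict(zip(labels, choice)) for choice in product(*(S[z] for z in labels))]

count = 1
for z in labels:
    count *= len(S[z])        # r1 * r2 * ... * rn

assert len(F) == count == 12  # 3 * 2 * 2
```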

    Let Z1, Z2 be the Z spaces for White and Black in chess, and let f1, f2 be any strategies for White and Black respectively. A referee, given f1 and f2, could conduct the entire game without further instructions from either player and announce the result. For the information z1 that no previous moves have been made, that it is now White’s turn, and that S(z1) consists of the 20 legal opening moves for White is a point in Z1, so that m1 = f1(z1) ε S(z1) is White’s opening move. The information z2 that White’s opening move was m1, that no other moves have been made by either player, that it is now Black’s turn to move, and that S(z2) consists of the 20 legal moves for Black at this stage will be an element of Z2, so that f2(z2) = m2 will be Black’s reply, etc. Thus the outcome of the game is determined uniquely by the strategies f1, f2 chosen by the players.

    The fact noted above for chess, that the outcome of the game is determined once each player has selected a strategy, will clearly continue to hold for any game in which each move is a personal move by some player, but will fail for bridge, for instance, where chance moves occur and influence the outcome. In discussing chance moves, we make the conceptual simplification corresponding to that of strategy for personal moves of a given player. Let W be the set of all possible circumstances w in which a chance move would be required. Each w includes specification of the nature of the chance move to be performed, i.e., of the set S(w) of alternatives among which one is to be selected, and for each w of the probability distribution p over S(w) according to which the selection of s ε S(w) is to be made. Suppose that a referee performs in advance each chance experiment that may be required in the course of the game, i.e., for each w he selects an element s ε S(w) according to the distribution p, the selections for different w’s being independent. Now a particular selection of an s from S(w) for each w constitutes a function h defined on W with h(w) ε S(w). (In terms of the tree, h is a rule which specifies, for each information set labeled chance move, which branch is selected.) Also, the distributions p, together with the requirement of independent selection, determine an over-all distribution P on these functions h. Consequently, the totality of experiments performed by the referee may be viewed as a single experiment, namely, selecting a function according to the probability function P. Thus, just as the totality of all decisions to be made by a given player can be described by a single decision–the choice of a strategy–so can the totality of chance experiments to be performed be replaced by a single over-all chance move.

    We can now describe quite simply the structure of a game. Suppose there are k players; let Fi be the space of possible strategies fi for player i, and let G be the space of (k + 1)-tuples g = (f1, · · ·, fk, h), where fi ε Fi and h is a possible outcome of the over-all chance experiment. Any (k + 1)-tuple g ε G determines uniquely the course of the play. For each play the rules of the game determine a unique outcome r belonging to the space of outcomes R, and consequently for each game there is a function q defined over the space G mapping G onto R; i.e., the range of q is the entire space R. For fixed f = (f1, · · ·, fk), the probability distribution P, according to which h is selected, determines a probability distribution Q over G, and thus a probability distribution πf over R. Hence, every game can be described by k spaces F1, · · ·, Fk, a space R, and the association with each k-tuple (f1, · · ·, fk), fi ε Fi, of a probability distribution πf over R. The game is played as follows: Player i chooses a point fi ε Fi, the k choices being made simultaneously and independently; a point r ε R is then selected according to the probability distribution πf associated with the k choices (f1, · · ·, fk).

    An analysis of the following simple game will provide an illustration of the concepts introduced in this section. Player I moves first and selects one of the two integers 1, 2. The referee then tosses a coin, and, if the outcome is head, he informs player II of player I’s choice, and not otherwise. Player II then moves and selects one of the two integers 3, 4. The fourth move is again a chance move by the referee and consists of selecting one of the three integers 1, 2, 3 with respective probabilities 0.4, 0.2, 0.4. The numbers selected in the first, third, and fourth moves are then added, and that amount in dollars is paid by II to I if the sum is even, and by I to II if the sum is odd.

    The tree of this game is shown in Figure 2. The numbers at the terminal vertices represent the possible winnings in dollars of the two players. The symbol 0 is used to designate chance moves.

    In terms of the two spaces F1 and F2, the strategies of the two players can be described as follows: The space F1 consists simply of the two single-element sets (1) and (2). That is,

    F1 = ((1), (2)).

    The space F2 consists of eight ordered triples (i, j, k) with i, j, k = 3, 4; i.e.,

    F2 = ((3, 3, 3), (3, 3, 4), (3, 4, 3), (3, 4, 4), (4, 3, 3), (4, 3, 4), (4, 4, 3), (4, 4, 4)),

    where the first position in the triple is conditioned upon the coin falling head and player I choosing 1, the second position in the triple is conditioned upon the coin falling head and player I choosing 2, and the third position in the triple is conditioned upon the coin falling tail. The space of outcomes h of the over-all chance move and the values of the probability distribution P for each h are given by

    h:    (H, 1)   (H, 2)   (H, 3)   (T, 1)   (T, 2)   (T, 3)
    P(h):  0.2      0.1      0.2      0.2      0.1      0.2

    Figure 2

    where the letter H stands for head and T for tail. The space G of all triples g = (f1, f2, h) clearly contains 2 × 8 × 6 = 96 elements. The space of outcomes R consists of the 5 elements $5.00, $6.00, $7.00, $8.00, $9.00. Typical values of the function q which maps G onto the space of outcomes are

    q((1), (3, 3, 4), (H, 1)) = $5.00,   q((2), (3, 4, 4), (T, 3)) = $9.00.

    These values are most easily computed by referring to Figure 2. The tree representation of the game also facilitates the computation of the probability distribution πf on R for each f = (f1, f2). Thus, for example, if f = ((1), (3, 3, 4)),

    πf($5.00) = 0.2,   πf($6.00) = 0.3,   πf($7.00) = 0.3,   πf($8.00) = 0.2,   πf($9.00) = 0,

    and, if f = ((2), (3, 4, 4)),

    πf($7.00) = 0.4,   πf($8.00) = 0.2,   πf($9.00) = 0.4,   πf($5.00) = πf($6.00) = 0.
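    The distributions πf can also be computed by brute force rather than from the tree. The sketch below (Python; the function name pi and the encoding of strategies are ours, not the text's) applies the rules directly: the over-all chance move is a pair (coin, integer), and II's choice is read off the coordinate of his triple that matches what he was told.

```python
from fractions import Fraction
from itertools import product

coin = [("H", Fraction(1, 2)), ("T", Fraction(1, 2))]
integer = [(1, Fraction(2, 5)), (2, Fraction(1, 5)), (3, Fraction(2, 5))]

def pi(f1, f2):
    """Distribution pi_f over the dollar sums for strategies f1 in F1, f2 in F2."""
    dist = {}
    for (c, pc), (n, pn) in product(coin, integer):
        if c == "H":                      # II is told I's choice
            choice = f2[0] if f1 == 1 else f2[1]
        else:                             # II learns nothing
            choice = f2[2]
        s = f1 + choice + n               # sum of moves 1, 3, and 4
        dist[s] = dist.get(s, Fraction(0)) + pc * pn
    return dist

# The two distributions worked out in the text.
assert pi(1, (3, 3, 4)) == {5: Fraction(1, 5), 6: Fraction(3, 10),
                            7: Fraction(3, 10), 8: Fraction(1, 5)}
assert pi(2, (3, 4, 4)) == {7: Fraction(2, 5), 8: Fraction(1, 5),
                            9: Fraction(2, 5)}
```

Iterating pi over all 2 × 8 pairs (f1, f2) produces the full table asked for in Problem 1.2.1.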

    PROBLEMS

    1.2.1. In the above illustrative example compute πf(r) for each f and r, and present the results in tabular form.

    1.2.2. From the results of Problem 1.2.1, compute the expected gain for player I for each f, and present the results in a 2-by-8 matrix with the rows representing the strategies of player I and the columns the strategies of player II.

    1.2.3. Reduce the following game to the matrix form as in the previous problem, and sketch its tree:

    Move 1. Player I chooses i = 0 or 1.

    Move 2. A chance move, selecting j = 0 or 1 with equal probabilities.

    Move 3. Player II chooses k = 0 or 1.

    Outcome: If i + j + k = 1, I pays II one unit; otherwise II pays I one unit.

    Information of II at move 3: II is told the value of j but not the value of i.

    1.2.4. Reduce to matrix form and sketch the tree for the variations of Problem 1.2.3, with the same moves and outcomes, but with the following information for II at move 3.

    (1) II is told neither the value of i nor the value of j.

    (2) II is told the values of both i and j.

    (3) II is told the value of i + j (but not the separate values of i and j).

    (4) II is told the value of the product of i and j (but not the separate values of i and j).

    1.3. The Normal Form

    So far we have not mentioned the motivations of the players–an essential element of any game. The description of a game at the end of Section 1.2 was simply a listing of the strategies of the players and the outcome of any set of choices of strategies, without regard to the attitudes of the players toward various outcomes. We now indicate briefly how the final simplification of a game–the normal form–is obtained, by taking into account the preferences of the players.

    We have seen in Section 1.2 that the result of any set of strategies f1, · · ·, fk is a probability distribution πf over the set R of possible outcomes. It would be particularly convenient if a given player could express his preference pattern in R by a bounded numerical function u defined on R, such that he prefers r1 to r2 if and only if u(r1) > u(r2) (so that u(r1) = u(r2) denotes indifference between r1 and r2) and such that, if for any probability distribution ξ over R, we define U(ξ) as the expected value of u(r) computed with respect to ξ, i.e.,

    U(ξ) = Σ u(r)ξ(r)   (the sum extending over all r ε R),

    he prefers ξ1 to ξ2 if and only if U(ξ1) > U(ξ2). It is a remarkable fact that, under extremely plausible hypotheses concerning the preference pattern, such a function u exists. The function U defined for all probability distributions ξ over R, is called the player’s utility function. It is unique, for a given preference pattern, up to a linear transformation. We shall discuss this question in some detail in Chapter 4; for the present we shall simply suppose that each player has such a utility function.
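    As a small numerical illustration (the outcome space and utility values below are invented, not from the text): U is linear in the lottery ξ, so comparing two distributions over R reduces to comparing two expectations of u.

```python
# Hypothetical outcome space R and utility function u on R.
R = ["win", "draw", "lose"]
u = {"win": 1.0, "draw": 0.5, "lose": 0.0}

def U(xi):
    """Expected utility of a probability distribution xi over R."""
    return sum(u[r] * xi[r] for r in R)

xi1 = {"win": 0.5, "draw": 0.0, "lose": 0.5}   # an even-money gamble
xi2 = {"win": 0.0, "draw": 1.0, "lose": 0.0}   # a certain draw

# With this particular u the player is exactly indifferent between
# the gamble and the sure draw: both have expected utility 0.5.
assert U(xi1) == U(xi2) == 0.5
```

Replacing u by any linear transformation au + b with a > 0 leaves all such comparisons unchanged, which is the sense in which the utility function is unique up to a linear transformation.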

    Thus the aim of each player in the game is to maximize his expected utility. If Ui is the utility function of player i, his aim is to make Mi(f1, · · ·, fk) = Ui(πf) as large as possible, where πf is the probability distribution (for fixed f1, · · ·, fk) over R determined by the over-all chance move. We are now in a position to give a description of the normal form of a game: A game consists of k spaces F1, · · ·, Fk and k bounded numerical functions Mi(f1, · · ·, fk) defined on the space of all k-tuples (f1, · · ·, fk), fi ε Fi, i = 1, · · ·, k. The game is played as follows: Player i chooses an element fi of Fi, the k choices being made simultaneously and independently; player i then receives the amount Mi(f1, · · ·, fk), i = 1, · · ·, k. The aim of player i is to make Mi as large as possible. The statement “Player i then receives the amount Mi(f1, · · ·, fk)” is a shorthand way of saying “a situation results whose utility for player i is Mi(f1, · · ·, fk).” We shall often speak as though the changes in utility are effected simply by money receipts or payments. This terminology is used because of its convenience and suggestiveness.

    In the theory of games it is usual to treat first a special class of games, the two-person zero-sum games. The theory of these games is particularly simple and complete, and only these games will be treated in this book. A two-person game is of course a game with k = 2; a zero-sum game is one in which M1 + · · · + Mk = 0 for all f1, · · ·, fk; more precisely, since each Mi is unique only up to linear transformations, a game is zero-sum if there is a determination of M1, · · ·, Mk for which M1 + · · · + Mk = 0 for all f1, · · ·, fk. Thus a two-person zero-sum game is a game between two players in which their interests are diametrically opposed: One player gains only at the expense of the other. Consequently, there is no motive for collusion between the players; it is precisely the fact that collusion is unprofitable that simplifies the theory.

    Notice that a constant-sum game, i.e., one in which M1(f1, · · ·, fk) + · · · + Mk(f1, · · ·, fk) = c for all f1, · · ·, fk, is zero-sum in the sense defined above, since an alternative choice of utility functions is M*1 = M1 – c, M*i = Mi for i ≠ 1, and M*1 + · · · + M*k = 0. Thus the theory developed below for zero-sum two-person games applies equally to constant-sum two-person games. A slightly different kind of game falling within our definition of zero-sum games is illustrated by the following example. Player I has two strategies, 1 and 2, and player II has a single strategy 1*. If I chooses 1, nothing happens; if he chooses 2, he loses $1 and II wins $10. If we suppose that I prefers the result of 1 to that of 2 and that II has the opposite preference, an appropriate choice of utility functions yields

    M1(1, 1*) = M2(1, 1*) = 0,   M1(2, 1*) = –1,   M2(2, 1*) = 1,

    and the game is zero-sum. According to the theory developed below, rational play for I in this situation is to choose strategy 1, so that nothing happens.
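This example can be checked mechanically. The sketch below assumes the natural normalization M1(1, 1*) = 0, M1(2, 1*) = −1 with M2 = −M1, which is one choice of utilities consistent with the stated preferences (utilities being unique only up to linear transformation):

```python
# Hypothetical utility assignment for the example: player II has the
# single strategy 1*, M1 + M2 = 0, and I prefers the result of 1 to 2.
M1 = {1: 0, 2: -1}              # player I's utility for strategies 1 and 2
M2 = {s: -M1[s] for s in M1}    # zero-sum determination: M2 = -M1

# The game is zero-sum: M1 + M2 vanishes at every strategy pair.
zero_sum = all(M1[s] + M2[s] == 0 for s in M1)

# Rational play for I is to maximize M1: he chooses strategy 1,
# so that nothing happens.
best_for_I = max(M1, key=M1.get)
```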

    What often does happen in situations of this kind is that II approaches I, offering him a fee of, say, $5 if I will choose strategy 2. If I accepts this offer, the result is that I nets $4, II nets $5, and both have improved their positions. This means simply that our original description of the game was incomplete, as we failed to list certain strategies, involving offering and acceptance of fees, which were in fact available to the players. A more complete description, taking these possibilities into account, would yield a non-zero-sum game. Thus it is important, in applying game theory to an actual situation, to be sure to list all available strategies for the players.
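The arithmetic of the fee arrangement is worth making explicit (amounts as in the text):

```python
# Side-payment bookkeeping: II offers I a fee of $5 to choose
# strategy 2, under which I loses $1 and II wins $10.
fee, loss_I, win_II = 5, 1, 10

net_I = fee - loss_I    # I nets $4
net_II = win_II - fee   # II nets $5

# Both improve on the no-deal outcome (strategy 1), where each nets $0.
improved = net_I > 0 and net_II > 0
```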

    Since for a two-person zero-sum game we have M2(f1, f2) = –M1(f1, f2) for all f1, f2, we need specify only M1. The term game will hereafter always mean zero-sum two-person game. Thus we have

    Definition 1.3.1. A game G in the normal form is a triple (X, Y, M), where X, Y are arbitrary spaces and M is a bounded numerical function defined on the product space X × Y of pairs (x, y), x ε X, y ε Y. The points x (y) are called strategies for player I (II), and the function M is called the payoff.

    The game G is played as follows: I chooses x ε X, II chooses y ε Y, the choices being made independently and simultaneously. II then pays I the amount M(x, y).

    Note that, although it was useful, for the understanding of the argument leading to the simplification of a game to the normal form, to restrict the spaces X and Y to be finite, this restriction is removed in Definition 1.3.1 and will not be assumed, unless specified otherwise, in the remainder of this book.
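A minimal sketch of Definition 1.3.1 (the spaces, payoff values, and names below are illustrative assumptions, not from the text):

```python
# A game in normal form is a triple (X, Y, M): X and Y are strategy
# spaces, and M is a bounded real function on the product space X x Y.
X = ("a", "b")                 # strategies for player I
Y = ("c", "d")                 # strategies for player II
payoff = {("a", "c"): 2, ("a", "d"): -1,
          ("b", "c"): 0, ("b", "d"): 3}

def play(x, y):
    """After simultaneous choices, II pays I the amount M(x, y)."""
    amount = payoff[(x, y)]
    return amount, -amount     # (gain to I, gain to II); their sum is 0
```

Because II pays I, a single function M determines both players' payoffs, exactly as the zero-sum reduction above requires.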

    1.4. Equivalent Games

    If in a given game one relabels the strategies of either player, the new game is not essentially different from the old: Every statement about either game can be translated into a corresponding statement about the other, and we wish to
