**ANARCHIST THEORY:**

**ANARCHISM AND GAME THEORY:**

**PART TWO:**

**NEWDICK'S OBJECTIONS:**

In Part One of Doug Newdick's essay, presented the day before yesterday here at Molly's Blog, the author outlined the basic format of what he calls the "anti-anarchist argument" of the prisoners' dilemma, as set out by Michael Taylor. The next part of the essay is devoted to Newdick's objections to this argument. It should be noted in passing that a rather simplistic form of the "Prisoner's Dilemma" is used by authors Joseph Heath and Andrew Potter in their book 'The Rebel Sell'. The book is somewhat reminiscent of older generations of political recanters as they went from being communists to being neo-conservatives. The main point of the book is a criticism of fashionable counter-culture leftism, a stance that the authors apparently held in their youth (or before their present academic and journalistic careers anyway) and a stance that the authors take to be some sort of "anarchism". To say that whatever their earlier views were they hardly resembled historical anarchism would be understating the case. The book's central claim is that so-called "culture-jamming" is both fruitless and ultimately hypocritical. That is largely true. The authors, however, don't go as far to the opposite extreme, from barbarism to barbarism, as ex-communist neo-cons do. They park themselves as basically right-wing social democrats, a political viewpoint whose goals are at least as self-serving to people like them as "trendy leftism" is to others. Better than signing up with the Ayatollah Bush for Jihad, I guess.

Whatever passed for "anarchism" in the rarified social circles that they travelled in when younger was, however, simply a crude "feeling", as is made plain by their discussions of it, where they expose what can only be termed "cosmic ignorance" of what the word actually means. So, rather than going from barbarism to barbarism they have gone from crude to crude. Their presentation of the Prisoner's Dilemma, often under the alias of "The Tragedy of the Commons", is crude in the extreme. It is as if everything that their professors threw at them in the economics classes where they "grew out" of their youthful naivety was taken from the state of game theory in the 1950s, without *any* regard to all the research that has been done since. Regular readers of Molly's Blog will know that I have a low estimate of the qualifications of many leftist academics. Reading Heath and Potter I came away with the "comforting" (???) feeling that the other side of the coin in academia is often just as lazy, thick and time-serving.

Heath and Potter's book has been reviewed in Issue Number One of the new Ontario platformist publication Linchpin (see earlier here at Molly's Blog) and has also been the subject of discussion in one of the forums over at LibCom. As might be expected, such reviewers made much of the authors' criticism of subcultural politics and its futility, but they devoted little space to how distorted both the view of anarchism and the presentation of game theory were in the book.

All this is well and good, and it shows the other side of the coin: how game theory has become very much an *in-topic* outside of the leftist ghetto, even if some of its uses have all the airworthiness of lead bricks. If the reader is interested in this subject here's a further reference, 'Can Cooperation Ever Occur Without the State?'. Also, the long-time zinester and sceptical anarchist John Johson has recently written on anarchism and game theory in his zine 'Imagine: Anarchism for the Real World', issue #7. Sorry guys, no internet reference here, but you can get a copy for (I presume) a small donation at Imagine, Box 8145, Reno, NV 89507, USA. Now what a place to write about game theory from! But now, on to the Newdick article...

**5. PROVISION OF PUBLIC GOODS ISN'T ALWAYS A PRISONERS' DILEMMA.**

**"**For a game to be a prisoners' dilemma it must fulfill certain conditions: "each player must (a) prefer non-cooperation if the other player does not cooperate, (b) prefer non-cooperation if the other player does cooperate. In other words: (a') neither individual finds it profitable to provide any of the public good by himself; and (b') the value to the player of the amount of the public good provided by the other player alone (i.e. the value of being a free rider) exceeds the value to him of the total amount of the public good provided by joint cooperation less his costs of cooperation" (Taylor 1987: 35).

**5.1 CHICKEN GAMES.** *For many public good situations either (a'), (b') or both fail to obtain. If condition (a') fails we can get what Taylor calls a Chicken Game, i.e. a situation where it pays a player to provide the public good even if the other player defects. But each player would prefer to let the other provide the good, and we get this payoff matrix:*

|       | C   | D   |
|-------|-----|-----|
| **C** | 3,3 | 2,4 |
| **D** | 4,2 | 1,1 |

*Taylor (1987: 36) gives an example of two neighbouring farms maintaining an irrigation system where the result of mutual defection is so disastrous that either individual would prefer to maintain the system herself. Thus this game will model certain kinds of reciprocal arrangements that are not appropriately modelled by a prisoners' dilemma.*

**5.2 ASSURANCE GAMES.** *If condition (b') fails to obtain we get what Taylor calls (1987: 38) an Assurance Game, that is, a situation where neither player can provide a sufficient amount of the good if they contribute alone. Thus for each player, if the other defects then she should also defect, but if the other cooperates then she should prefer to cooperate as well. The payoff matrix looks like this:*

|       | C   | D   |
|-------|-----|-----|
| **C** | 4,4 | 1,2 |
| **D** | 2,1 | 3,3 |

**5.3 COOPERATION IN A CHICKEN OR ASSURANCE GAME.** *There should be no problem with mutual cooperation in an Assurance Game (Taylor 1987: 39) because the preferred outcome for both players is that of mutual cooperation. In the one-off Chicken Game mutual cooperation is not assured. Mutual cooperation, however, is more likely than in a one-off Prisoners' Dilemma (5).*
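(MOLLY NOTE: Newdick's best-response reasoning for the two matrices above can be checked mechanically. The following sketch is my own illustration, not part of the essay; it simply enumerates the pure-strategy equilibria of each 2x2 game as printed.)

```python
# Find the pure-strategy equilibria of the Chicken and Assurance games above.
# Payoffs are (row player, column player); "C" = cooperate, "D" = defect.

def pure_equilibria(game):
    """Return strategy pairs from which neither player gains by deviating."""
    moves = ["C", "D"]
    eqs = []
    for r in moves:
        for c in moves:
            row_pay, col_pay = game[(r, c)]
            # The pair is an equilibrium if each move is a best response.
            row_ok = all(row_pay >= game[(r2, c)][0] for r2 in moves)
            col_ok = all(col_pay >= game[(r, c2)][1] for c2 in moves)
            if row_ok and col_ok:
                eqs.append((r, c))
    return eqs

chicken = {("C", "C"): (3, 3), ("C", "D"): (2, 4),
           ("D", "C"): (4, 2), ("D", "D"): (1, 1)}
assurance = {("C", "C"): (4, 4), ("C", "D"): (1, 2),
             ("D", "C"): (2, 1), ("D", "D"): (3, 3)}

print(pure_equilibria(chicken))    # [('C', 'D'), ('D', 'C')]
print(pure_equilibria(assurance))  # [('C', 'C'), ('D', 'D')]
```

Note how the output matches the text: Chicken has two equilibria, in each of which exactly one player provides the good, while the Assurance Game has both mutual cooperation and mutual defection as equilibria, with mutual cooperation preferred by both players.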

**6. COOPERATION IS RATIONAL IN AN ITERATED PRISONERS' DILEMMA.**

**6.1 WHY ITERATION?** *Unequivocally there is no chance for mutual cooperation in a one-off Prisoners' Dilemma, but, as has been pointed out, the one-off game is not a very realistic model of social interactions, especially public goods interactions (Taylor 1987: 60). Most social interactions are repeated, sometimes as a group (an N-person game) and sometimes between specific individuals (which might be modelled as a game between two players). The question then becomes: is mutual cooperation more likely with iterated games (specifically the iterated Prisoners' Dilemma)? As one would expect, the fact that the games are repeated (with the same players) opens up the possibility of conditional cooperation, i.e. cooperation dependent upon the past performance of the other player.*

**6.2 ITERATED PRISONERS' DILEMMA.** *There are two important assumptions to be made about iterated games. Firstly, it is assumed (very plausibly) that the value of future games to a player is less than the value of the current game. The amount by which the value of future games is discounted is called the discount value; the higher the discount value, the less future games are worth (Taylor 1987: 61). Secondly, it is assumed that the number of games to be played is indefinite. If the number of games is known to the players then the rational strategy will be to defect in the last game, because the player cannot be punished for this by the other. Once this is assumed by both players the second to last game becomes in effect the last game, and so on (Taylor 1987: 62).*

*Axelrod (1984) used an ingenious method to test what would be the best strategy for an iterated Prisoners' Dilemma. He held two round-robin computer tournaments where each different strategy (computer program) competed against each of its rivals a number of times. Surprisingly, the simplest program, one called TIT FOR TAT, won both tournaments as well as all but one of a number of hypothetical tournaments. Axelrod's results confirmed what Taylor had proven in 1976 (6). TIT FOR TAT is the strategy of choosing C in the first game and thereafter choosing whatever the other player chose in the last game (hereafter TIT FOR TAT will be designated strategy B, following Taylor (1987)).*
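(MOLLY NOTE: Axelrod's actual tournaments drew dozens of submitted programs. As a toy illustration only, here is a miniature round-robin of my own devising, using Axelrod's payoff values T=5, R=3, P=1, S=0 but a four-strategy field chosen for brevity, not his actual entrants.)

```python
# A miniature round-robin in the spirit of Axelrod's tournament.  Each
# strategy sees both histories and returns "C" (cooperate) or "D" (defect).

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, opp_hist):
    return "C" if not opp_hist else opp_hist[-1]

def always_defect(my_hist, opp_hist):
    return "D"

def always_cooperate(my_hist, opp_hist):
    return "C"

def grudger(my_hist, opp_hist):
    # Cooperate until the opponent's first defection, then defect forever.
    return "D" if "D" in opp_hist else "C"

def play(s1, s2, rounds=200):
    h1, h2, p1, p2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        a, b = PAYOFF[(m1, m2)]
        p1, p2 = p1 + a, p2 + b
        h1.append(m1)
        h2.append(m2)
    return p1, p2

strategies = {"TIT FOR TAT": tit_for_tat, "ALWAYS DEFECT": always_defect,
              "ALWAYS COOPERATE": always_cooperate, "GRUDGER": grudger}

totals = dict.fromkeys(strategies, 0)
names = list(strategies)
for i, n1 in enumerate(names):
    for n2 in names[i:]:               # include self-play, as Axelrod did
        p1, p2 = play(strategies[n1], strategies[n2])
        if n1 == n2:
            totals[n1] += p1
        else:
            totals[n1] += p1
            totals[n2] += p2

for name, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(name, total)
```

In this tiny field TIT FOR TAT ties the grudger for first place rather than winning outright; Axelrod's finding was that against much richer fields of entrants it was never far from the top, and won overall.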

*An equilibrium in an iterated game is defined as "a strategy vector such that no player can obtain a larger payoff using a different strategy while the other players' strategies remain the same. An equilibrium then is such that, if each player expects it to be the outcome, he has no incentive to use a different strategy" (Taylor 1987: 63). Put informally, an equilibrium is a pair of strategies such that any move by a player away from that strategy will not improve the player's payoff. Then mutual cooperation will arise if B is an equilibrium, because no strategy will do better than B when played against B (7).*

*The payoff for a strategy in an indefinitely iterated Prisoners' Dilemma is equal to the sum of an infinite series:*

*X/(1-w)*

*where X = the payoff per game and w = the discount parameter (1 - the discount value).*

*UD playing with UD gets a payoff of two per game for mutual defection. If we set w = 0.9, then UD's payoff is:*

*2/(1-0.9) = 20*

*(MOLLY NOTE: I retain the terminology "UD" as it appeared in the original essay, but I alert the reader that this probably should have been "AD" for "always defect" as it is usually referred to in game theory.)*

*B playing with B gets a payoff of three per game for mutual cooperation. Thus with w = 0.9, B gets:*

*3/(1-0.9) = 30*
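(MOLLY NOTE: The closed form X/(1-w) is just the sum of the geometric series X + Xw + Xw² + .... A quick numerical check of my own, not part of the essay, reproduces both figures:)

```python
# Approximate the infinite discounted sum X + X*w + X*w**2 + ... by a long
# finite sum and compare it with the closed form X/(1-w).
def discounted_sum(x, w, games=10_000):
    return sum(x * w**t for t in range(games))

w = 0.9
print(round(discounted_sum(2, w), 6))  # UD vs UD: 2/(1-0.9) = 20.0
print(round(discounted_sum(3, w), 6))  # B  vs B:  3/(1-0.9) = 30.0
```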

*(B,B) is an equilibrium when the payoff for B from (B,B) is higher than the payoff for UD from (UD,B):*

*B's payoff against B is*

*3/(1-w)*

*UD's payoff against B is*

*4 + 2w/(1-w)*

*Therefore UD cannot do better than B when:*

*3/(1-w) > 4 + 2w/(1-w)*

*i.e. w > (4 - 3)/(4 - 2)*

*i.e. w > 0.5*

*(Axelrod 1984: 208) (8) (9)*
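(MOLLY NOTE: The threshold can also be checked numerically. The sketch below is my own, using the per-game payoffs 4, 3, 2, 1 from the text; it compares the two discounted payoffs on either side of w = 0.5.)

```python
# B's discounted payoff against B versus UD's discounted payoff against B,
# with per-game payoffs T=4, R=3, P=2, S=1 as in the text.
def payoff_B_vs_B(w):
    return 3 / (1 - w)             # mutual cooperation in every game

def payoff_UD_vs_B(w):
    return 4 + 2 * w / (1 - w)     # one temptation payoff, then mutual defection

for w in (0.4, 0.5, 0.6, 0.9):
    diff = payoff_B_vs_B(w) - payoff_UD_vs_B(w)
    print(f"w = {w}: B minus UD = {diff:+.2f}")
```

The difference is negative below w = 0.5, exactly zero at w = 0.5, and positive above it, so (B,B) holds as an equilibrium precisely when w > 0.5.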

*Can any other strategy fare better against B than B itself? Informally we can see that this is not possible (assuming future interactions are not too heavily discounted). For any strategy to do better than B it must at some point defect. But if the strategy defects then B will punish this defection with a defection of its own, which must result in the new strategy doing worse than it would have had it cooperated. Thus no strategy can do better playing with B than B itself. Now, if B is an equilibrium then the payoff matrix for the iterated game is:*

|        | B   | UD  |
|--------|-----|-----|
| **B**  | 4,4 | 1,3 |
| **UD** | 3,1 | 2,2 |

*This is an Assurance Game. Thus if B is an equilibrium then we should expect mutual cooperation (Taylor 1987: 67). If, however, B isn't an equilibrium (i.e. the discount value is too high) then the payoffs resemble a Prisoners' Dilemma and thus mutual defection will be the result (Taylor 1987: 67).*
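(MOLLY NOTE: The assurance structure can be made concrete by computing all four discounted payoffs at w = 0.9. This sketch is my own; following the text, it assumes that B scores the sucker payoff of 1 in its first game against UD and then defects thereafter.)

```python
# All four discounted outcomes at w = 0.9, per-game payoffs T=4, R=3, P=2, S=1.
w = 0.9
b_vs_b   = 3 / (1 - w)             # mutual cooperation forever
ud_vs_b  = 4 + 2 * w / (1 - w)     # defect against a cooperator, then mutual defection
b_vs_ud  = 1 + 2 * w / (1 - w)     # suckered once, then mutual defection
ud_vs_ud = 2 / (1 - w)             # mutual defection forever

# The ordering 30 > 22 > 20 > 19 reproduces the ranks 4 > 3 > 2 > 1
# in the iterated-game matrix, i.e. an Assurance Game.
print(round(b_vs_b), round(ud_vs_b), round(ud_vs_ud), round(b_vs_ud))  # 30 22 20 19
```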

*This is a good place to stop until another day. The actual situation in game theory (and in real life) is much more complex than what has been described above. In particular, under a simple TIT FOR TAT strategy the game described above tends, once errors are possible, to drive towards iterated defection. Other refinements such as "forgiving tit for tat" are optimal in some situations, and the role of "spite" has been much further investigated in recent years. In the next section Newdick will describe "N-person" games. In such situations not just "spite" but also what has been called "altruistic punishment" comes into play. All this is to alert the reader that, while what will come in the next section is valuable, the present state of the theory is far more advanced than what will be presented here.*
