Bumping into people: the awkward dance

You know when you open the door, and you find someone else is trying to get in at the same time, and so you both end up right in each other’s faces? And then you each try to get out of the other’s way, only to go in the same direction and still be in each other’s face? And then you do a sort of weird dance?

I’ve been trying to find some good YouTube clips of this, and while I know they’re out there, I couldn’t turn any up in a brief search. But I think you know what I’m talking about.

Well, surprise surprise, we can model this interaction as a game! Each person has two strategies: move left (L), or move right (R).(1)  If they both move in the same direction, then they are still stuck doing the awkward dance, and get payoff -1. Otherwise, they move out of each other’s way, and so they happily go along their way, getting payoff 1.

1\2      Left       Right
Left     (-1,-1)    (1,1)
Right    (1,1)      (-1,-1)
Fig. 1: Awkward dance game

There are a couple of pure-strategy Nash equilibria: one player goes left, and the other goes right. But which equilibrium will be chosen? A priori, there is no way to tell. Here, social conventions can be useful, such as always keeping to the right (for a similar analysis, see Marli’s post, “When in New York, do as the New Yorkers do”). The problem is when some people didn’t get the memo (*sigh*).

There is a third Nash equilibrium in mixed strategies, in which each person goes in either direction with equal probability. Each time they play, they then have a 50-50 chance of bumping into each other again, so after perhaps dancing for a while, they will eventually get it right.
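If you want to see what that equilibrium implies, here is a minimal Python sketch (the simulation size is arbitrary): against a 50-50 opponent each move earns the same expected payoff, and the dance lasts two rounds on average.

```python
import random

# Player 1's payoffs: same direction means still stuck (-1); opposite means pass (+1).
payoff = {("L", "L"): -1, ("L", "R"): 1, ("R", "L"): 1, ("R", "R"): -1}

# Indifference check: against a 50-50 opponent, both moves earn the same.
for my_move in "LR":
    expected = 0.5 * payoff[(my_move, "L")] + 0.5 * payoff[(my_move, "R")]
    print(my_move, expected)  # both print 0.0

# Collisions happen with probability 1/2 per round, so the number of rounds
# until the pair finally passes is geometric with mean 2.
def rounds_until_pass():
    n = 1
    while random.choice("LR") == random.choice("LR"):
        n += 1
    return n

samples = [rounds_until_pass() for _ in range(100_000)]
print(sum(samples) / len(samples))  # ~2.0
```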

(1) This will all be from the perspective of player 1.


What do kittens have to do with rising tuition?

If you read the Financial Times, you might suspect from an article on Monday that kittens have something to do with rising tuition and Prisoners’ Dilemmas. Let me assure you that they don’t.

A friend of mine sent me the article, which cites a model designed by a team of Bank of America consultants who use the Prisoners’ Dilemma to explain rising college tuition. Here is the graphic they used:

[kitten image]
Fig. 1: Things that are pairwise irrelevant to each other: a kitten, the Prisoner’s Dilemma, and rising tuition.

They explain that the college ranking system (assuming two colleges) is a zero-sum game. If one college moves up, the other one moves down. “A college can move up in the rankings if it can raise tuition and therefore invest in the school by improving the facilities, hiring better professors and offering more extracurricular activities.” And therefore, they conclude, this is why college tuitions have been rising and why student debt will continue to rise.

First glaring problem: (raise, raise) is a Pareto-optimal outcome as they’ve set up this game, but what they probably meant to say is that it is a Nash equilibrium, or perhaps that “raise” is each college’s best response. In any case, in this game, (don’t raise, don’t raise) is also Pareto-optimal (but not a Nash equilibrium)!
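Since the article’s actual payoff numbers aren’t reproduced here, the following Python sketch uses stand-in payoffs with the structure just described, purely to make the Nash/Pareto distinction concrete.

```python
from itertools import product

# Stand-in payoffs (not the article's): "raise" is each college's best
# response, yet (dont, dont) leaves both colleges just as well off.
payoffs = {
    ("raise", "raise"): (1, 1),
    ("raise", "dont"):  (2, 0),
    ("dont",  "raise"): (0, 2),
    ("dont",  "dont"):  (1, 1),
}
strategies = ("raise", "dont")

def is_nash(profile):
    """No college gains by unilaterally deviating."""
    for i in (0, 1):
        for deviation in strategies:
            alt = list(profile)
            alt[i] = deviation
            if payoffs[tuple(alt)][i] > payoffs[profile][i]:
                return False
    return True

def is_pareto_optimal(profile):
    """No other outcome makes one college better off and neither worse off."""
    u = payoffs[profile]
    return not any(
        v[0] >= u[0] and v[1] >= u[1] and v != u
        for v in payoffs.values()
    )

for profile in product(strategies, repeat=2):
    print(profile, is_nash(profile), is_pareto_optimal(profile))
# With these payoffs every outcome is Pareto-optimal (the colleges just split
# the same total differently), but only (raise, raise) is a Nash equilibrium:
# "Pareto-optimal" and "equilibrium" are different claims.
```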

Secondly, they’re trying to illustrate a kind of ratcheting problem: both colleges raise tuition to raise the quality of their resources, in order to maintain their rankings. But this means it’s a repeated game. In repeated games with a finite horizon, defection happens at every step, but in infinite-horizon games, cooperation can occur. Now, let’s just assume that this is an infinite-horizon game, which is what the folks at B of A are assuming when they predict that college tuition will keep rising indefinitely, beyond mere inflation. What incentive is there to cooperate and keep tuition low? According to this game, none. And according to what you might expect in reality, none: is it plausible that, in the absence of antitrust laws, colleges would want to collude to keep tuition low, and that because they can’t collude, they are doomed to raise tuition every year against their wills? Nope.

Then we come to the matter that this game can’t actually be infinite-horizon as it is presented here. The simple reason is that, even if education is becoming a larger and larger share of a household’s spending, and even if the student is taking out loans and borrowing against his future expected earnings, he still has a budget set that he can’t exceed. Furthermore, the demand for attending a particular university should drop as soon as the tuition exceeds the expected lifetime earnings/utility advantage, for whatever the student sees himself doing in 4 (or more) years, over the alternative. So there will be some stage at which the utilities change and it becomes the best strategy for neither school to increase its tuition. It’s a finite stage game, and the increase will stop somewhere, namely, where price theory says it should. [1]

Finally, it’s not clear that increasing tuition actually has such a strong effect on school rankings or that colleges are in such a huge rankings race. And, even if students at colleges outside the very top schools tend to choose a college based on things like food quality and dorm rooms, students don’t demand infinitely luxurious college experiences at infinite prices. Evidence: Columbia students feel they’re overpaying for food, and feel entitled to steal Nutella.

The lessons here are these: It’s not a Prisoner’s Dilemma in a strong sense if the cooperative result isn’t strictly preferred to the Nash equilibrium. Don’t model a tenuous game where the game isn’t relevant to the ultimate result (tuitions will stop rising at some point). Don’t assume that trends are linear, when they are definitively not linear. And, don’t put a kitten on your figure just because you have some white space — it really doesn’t help.

———————

[1] Actually, the game doesn’t have to be finite-horizon. Suppose the upper limit that the colleges know they can charge is A, and the current tuition is B_t. Then, at each stage, they could increase tuition by (0.5)*(A - B_{t-1}). But as the tuition approaches A, the increases become smaller and smaller until they pretty much vanish, which is effectively the same as stopping: there comes a time at which the tuition stops affecting rank (a college isn’t going to improve its rank by charging each student an extra cent).
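The footnote’s arithmetic in a few lines of Python, with invented figures for the cap and the starting tuition:

```python
A, B = 80_000.0, 40_000.0   # invented cap and starting tuition
for year in range(1, 11):
    B += 0.5 * (A - B)      # each stage's increase: half the remaining gap
    print(year, round(B, 2))
# The remaining gap halves every year, so by about year 22 the increase is
# under a cent per student; effectively the same as stopping.
```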


Clearly, Sicilians do not know game theory

Relax, I’m not referring to actual Sicilians. I’m referring, of course, to Vizzini from the movie “The Princess Bride.” The hero, Westley, is trying to rescue his true love, Buttercup, from the clutches of Vizzini and his henchmen, Inigo Montoya and Fezzik. After outdueling Inigo and knocking out Fezzik, he overtakes Vizzini, who threatens to kill Buttercup if Westley comes any closer. This leads to an impasse: Vizzini cannot escape, but Westley cannot free Buttercup. So, Westley challenges Vizzini to a “battle of wits”:

The structure of the game is simple: there are two glasses of wine. Westley has placed poison (in the form of the odorless, tasteless, yet deadly iocaine powder) somewhere among the two cups, and allows Vizzini to choose which to take. Afterwards, they drink, and they see “who is right, and who is dead.”

Presumably, when Vizzini encounters the game, he is supposed to think that Westley has restricted himself to poisoning one of the glasses. In this case, we have a standard extensive form game of incomplete information, which is equivalent to a normal-form game:

Vizzini\Westley       Poison Westley’s cup   Poison Vizzini’s cup
Drink Westley’s cup   (Dead, Right)          (Right, Dead)
Drink Vizzini’s cup   (Right, Dead)          (Dead, Right)
Fig. 1: Battle of Wits (outcomes)

Immediately we see that this game is symmetric (or, more precisely, anti-symmetric), in that whatever doesn’t happen to one player happens to the other. In this way, this game is strategically equivalent to matching pennies. This tells us right away that the equilibrium outcome is for Westley to randomize 50-50 between the cups: do anything else, and Vizzini has a better chance of winning if he plays optimally, as he can just choose the cup that is less likely to have the poison. Similarly, if Vizzini were a priori less likely to choose a given cup, then that is where Westley should have put the poison.
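To put a number on that, suppose (hypothetically) that Westley poisons his own cup with probability w, and that Vizzini can discern w. A couple of lines of Python show why anything but w = 1/2 loses the battle of wits:

```python
# Vizzini drinks whichever cup is less likely to be poisoned, so he survives
# with probability max(w, 1 - w), which is minimized only at w = 0.5.
for w in (0.5, 0.6, 0.9):
    print(w, max(w, 1 - w))   # 0.5 is the only w that gives Vizzini no edge
```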

Yet Vizzini does not reason this way. Instead, he attempts to make vacuous arguments about the psyche of Westley, namely, where Westley would have put the poison. He may be reasoning as if Westley is a behavioral type, but clearly, that’s not the best thing to do in a “battle of wits,” where presumably everyone is rational. Instead of making the game-theoretic choice based on mixed strategies, he tries to find an optimal pure strategy.

In the end, Vizzini takes his own cup, which indeed contains the poison. As it turns out, both cups contained poison: Westley had built up a tolerance to iocaine, and so it didn’t make any difference which cup was chosen. So in a way, Westley did make Vizzini indifferent between the two choices; it’s just that Vizzini was mistaken about which game was being played. In reality, no matter what, Vizzini would be dead, and Westley would win. This makes one wonder whether Vizzini should have suspected something was afoot when Westley proposed the game in the first place, and even more so when Westley fell for such an obvious trick of misdirection, looking the other way long enough for Vizzini to switch the cups (see 3:04 in the video). But no matter: while Vizzini may have been smarter than Plato, Aristotle, and Socrates, he could have used some of the 20th-century wisdom of John Nash.


So what school should I go to?

The classic question for high school seniors: “So, what are you doing next year? What colleges are you applying to?” Please, give them a break. They get this question way too much, and it only makes them more nervous about their futures. After all, they seem to be under the impression that where they go is a make-or-break issue, and bringing up the subject as if it is important just reinforces that fear.

But while we’re on the topic, where should they apply?

For simplicity, let’s suppose that each college (indexed by a number i) has a particular quality level, q_{i}, at which every potential student values that college. This can be through the quality of academics, the alumni network, the cost, the location, you name it. One might think that it would be best to apply to as many colleges as possible, since that maximizes your chances of getting in somewhere good. But, like everything in life, there is a cost to doing so. This can be the actual application fee, the time involved in putting together the materials, getting ETS to send your SAT scores, whatever. Let’s fix this cost for each college at c_{i}. We can relax these assumptions, but the qualitative result will still be the same.

Suppose there is a large number of people, and we restrict each person to applying to one school. Yes, I know, this is an unreasonable assumption, but the qualitative results will again be the same even if we allow for applying to multiple schools; it just makes the math hairier. The probability of getting in is \frac{n_{i}}{a_{i}}, where n_{i} is the number of slots that the school has, and a_{i} is the number of people who apply.

Let’s consider the Nash equilibrium. Since everyone values each school equally, we will expect that everyone will be indifferent between applying to the various colleges. Thus, we will have, for every college i, j,

\frac{n_{i}}{a_{i}}q_{i}-c_{i}=\frac{n_{j}}{a_{j}}q_{j}-c_{j}

This can give us the relative admission rates of each school:

\frac{n_{j}}{a_{j}}=\frac{(n_{i}/a_{i})q_{i}+c_{j}-c_{i}}{q_{j}}

This equation is informative in and of itself. It shows that the admission rate of a school will be increasing in the cost of applying, all other things being equal. This makes sense: if you make it harder to apply, fewer people will do so, and this will drive up the admission rate needed by the school to fill all of its slots. Similarly, an increase in the quality of the school drives down the admission rate, since more people will then want to go there, making it more competitive.
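A small sketch with invented parameter values makes these comparative statics visible: fix school 1’s numbers and solve the indifference condition for school 2’s admission rate.

```python
# School 1: 1,000 slots, 10,000 applicants, quality 100, application cost 50.
# All numbers are invented for illustration.
n1, a1, q1, c1 = 1_000, 10_000, 100, 50

def admission_rate_2(q2, c2):
    """School 2's rate n2/a2 from (n1/a1)q1 - c1 = (n2/a2)q2 - c2."""
    return ((n1 / a1) * q1 + c2 - c1) / q2

print(admission_rate_2(q2=100, c2=50))   # 0.10: an identical school matches school 1
print(admission_rate_2(q2=100, c2=100))  # 0.60: a pricier application raises the rate
print(admission_rate_2(q2=200, c2=50))   # 0.05: a better school is more selective
```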

So, in summary, what should you do? You should apply to the school which maximizes \frac{n_{i}}{a_{i}}q_{i}-c_{i}, which is your expected benefit from applying there. Assuming that everyone else is being rational and doing the same thing, though, then it won’t make much difference where you apply. That being said, this last result will no longer hold if not all people value different schools the same (though the trends for the relative admission rates will still hold), but that makes the analysis too complicated for a mere blog post.

Edit: For a more sophisticated theoretical and empirical model whose basic idea is the same, click here.


Of skirts, judgement, and changing conventions

Jeff’s post last Sunday on Jewesses in Skirts got me thinking: how is it that we have Orthodox Jewish communities that are tolerant of pants-wearing by women, and communities that are not tolerant of pants-wearing, but rarely a community with large factions of each type? The question, of course, applies to more than just skirts worn by Jewish women; we can talk about many aspects of our culture, such as our changing views on LGBT people, or miscegenation, or a gold standard vs. a silver standard, using the same language.

We’ve established that, assuming the types of women in the previous post are static (that is, Nature or Circumstance assigned you to one group and you can’t change allegiances), it is optimal for the more conservative group to adhere to skirt-wearing, and for the other group not to bother. Those static proportions of types in the population affect how the “others” in Jeff’s game form their prior beliefs about which type a woman is based on her choice of clothing. But what if the women could choose or change their ideology, and what if we consider the effects of judgement and peer pressure?

In the following model, we can look at a scenario where the proportion of each type of player in the population is endogenous. Suppose that a new community forms, consisting of some random number of “conservative,” skirts-only types (Jeff puts them in class 1) and some number of “progressive” types who sometimes wear pants (Jeff puts them in class 2). This represents what we would expect to happen if everyone formed their own opinions and ideologies totally independently of everyone else. Each person will randomly encounter other members of the community on a one-on-one basis, and receive social payoffs from the encounter. If she encounters a like-minded person, they both feel validated in their choices; if not, they feel judged. As before, we assume some disutility for a restriction on wardrobe.

 
               Conservative   Progressive
Conservative   1,1            0,0
Progressive    0,0            2,2

Now, say that the population starts out with a fraction p of progressive types and 1-p of conservative types. Then, assuming that the population is large, of the people you meet, a fraction p will be progressive and 1-p will be conservative. Therefore, if you are a woman who chooses to be conservative and wear only skirts, your expected utility in any one encounter is 1(1-p) + 0(p) = 1-p, and if you choose to wear pants, it is (0)(1-p) + (2)(p) = 2p. You would be indifferent if 1-p = 2p — that is, if p (the fraction of progressive types) is 1/3.

Maintaining p = 1/3 is incredibly difficult, because it is so sensitive to shocks. If for any reason p becomes a little more or a little less — say, a contingent of pants-wearers suddenly moves in — the balance tilts, one of the ideologies starts providing the better payoff, the whole population snowballs in that direction, and it becomes the predominant convention (or evolutionarily stable state). That may be why communities with large factions of each type tend not to exist in real life: they are a lot like a flipped coin that lands on its edge.
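Here is a toy best-response dynamic in Python (the switching rate and starting shares are arbitrary) that shows the snowballing: start the progressive share just below or just above 1/3 and watch where the population ends up.

```python
def step(p, rate=0.05):
    """One period: a few women switch to whichever choice currently pays more."""
    u_prog, u_cons = 2 * p, 1 - p   # expected payoffs from the matrix above
    if u_prog > u_cons:
        p += rate
    elif u_prog < u_cons:
        p -= rate
    return min(1.0, max(0.0, p))

for p0 in (0.30, 0.36):   # just below and just above the 1/3 tipping point
    p = p0
    for _ in range(200):
        p = step(p)
    print(p0, "->", p)    # 0.30 -> 0.0 (all skirts); 0.36 -> 1.0 (all pants)
```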

So what kinds of things affect what p is? The payoff matrix, obviously, and how I’ve assigned the payoffs. No one said that I must assign those particular numbers (and indeed, I don’t: the exact numbers don’t really matter, only their ordering and their relative distances. Try it: multiply all of the numbers by a positive constant, or add a constant to all of them, and the solution will be the same). What if the inconvenience of wearing only skirts is very large? (Imagine replacing the 2s with, say, 5s.) Then p, the tipping point, could be much smaller, and it would take a much smaller group of rebels to send the equilibrium going the other way. Issues like women’s suffrage are like this: they are so significant that a grassroots movement picks up momentum very quickly. If the inconvenience is less, then p would be greater, and if the existing equilibrium is skirts-only/conservative, it would be harder to change. Equivalently, we can think about the effects of mutually judgemental behavior (making the 0s in the matrix more negative). If people are less tolerant when they meet the other type, conventions are harder to change. If they are more tolerant, change is easier.


——————————-
If you enjoyed the ideas in this post, you may also enjoy When in New York, do as the New Yorkers do, which describes a special, symmetrical case of the kind of game we’ve discussed here.


Is Pascal’s Wager Sound?

OK, this isn’t actually technically a game, but since a lot of people think of it as such (given the common depiction through payoff matrices, with probabilities of different scenarios), we’re going to cover it anyway.

The basic gist of Pascal’s Wager is that by hedging one’s bets and choosing the path of religion, one can expect a greater payoff than by being not religious. Its proponents offer two possible states of the world: either God exists, or God does not. From a person’s standpoint, one can choose to be religious, or not:

Choice          God Exists    God Does Not Exist
Religious       \infty - C    -C
Not Religious   G - \infty    G
Fig. 1: Standard Pascal’s Wager

One gets infinite payoff from being religious if God exists, as then one goes to heaven. If one is not, and God exists, one goes to hell, and gets a payoff of negative infinity. By being religious, one incurs a finite cost C, as religion isn’t so much fun apparently; if one is not religious, one gets positive payoff G since one gets to party all the time.

Now suppose God exists with some probability P>0, however small that might be. Then the expected payoff from being religious is, according to the argument, P(∞ – C) – (1 – P)C = ∞. The payoff from being irreligious is P(G – ∞) + (1 – P)G = -∞. Thus one is better off being religious.

A lot of people really hate this argument, and so do their utmost to bring it down. Yet this argument is not as bad as they think it is. Let’s go through some of the criticisms.

The first criticism is that Pascal automatically assumes that if God exists, then Catholicism is true. But there are many religions out there that posit the existence of heaven and hell, yet are mutually incompatible. For example, Catholics (at least conservative ones) would condemn Muslims for not accepting Jesus as their Lord and Savior; Muslims would condemn Catholics as polytheists for this very acceptance. Since one can do this calculus for both religions, the arguments negate each other in paradox, as we end up granting both infinite payoffs and negative infinite payoffs to the same groups!

The second criticism is the so-called Atheist’s Wager. It could be instead that God wants us to live good lives, and be rational. Since one can live a better life by being irreligious, this is what God would prefer of us. Hence, according to the Atheist’s Wager, the payoff matrix should look as follows:

Choice          God Exists     God Does Not Exist
Religious       -\infty - C    -C
Not Religious   G + \infty     G
Fig. 2: Atheist’s Wager

Thus, says the atheist, it is a dominant strategy to be irreligious.

The problem with both these arguments is actually a problem with the initial formulation of Pascal’s Wager. However, while we can tweak Pascal’s Wager to make the problem go away, this flaw is quite possibly fatal to the above two criticisms. So now that I’ve built up the drama, here’s the problem: infinity is not a well-defined number on the real line (which we use for expected utilities). Instead, we must use a limit of some number B as it increases toward infinity. Thus, Pascal’s Wager should look like:

Choice          God Exists                        God Does Not Exist
Religious       lim_{B\rightarrow\infty} B - C    -C
Not Religious   lim_{B\rightarrow\infty} G - B    G
Fig. 3: Revised Pascal’s Wager

Addressing the first criticism, we can then compare the possibilities that each religion is true by seeing which is the most probable among them. Thus, taking Catholicism and Islam, with probabilities P_{C} and P_{I}, and difficulties C_{C} and C_{I}, respectively, to establish Catholicism (without loss of generality) as the better way to go of the two, we just need to check whether

P_{C}B - P_{I}B - C_{C} > P_{I}B - P_{C}B - C_{I},

which, as B gets large, is equivalent to just checking whether P_{C} > P_{I}. As a Jew, I would probably argue that the evidence/support for Judaism is the greatest of all the religions that have a system of heaven/hell, even if the support for any of these religions is slight.
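The limit argument is easy to check numerically; the probabilities and costs below are purely illustrative.

```python
# Hypothetical numbers: P(Catholicism) = 0.02, P(Islam) = 0.01, costs 5 and 3.
P_C, P_I, C_C, C_I = 0.02, 0.01, 5.0, 3.0

for B in (10, 1_000, 100_000):
    catholic = P_C * B - P_I * B - C_C   # heaven if right, hell if the rival is right
    muslim   = P_I * B - P_C * B - C_I
    print(B, catholic > muslim)
# Prints False, True, True: at small B the cost difference dominates, but as
# B grows only the sign of P_C - P_I matters.
```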

Similarly, we can check whether, given the evidence in front of us, the probability that the Atheist’s Wager (AW) is true is greater than that of Pascal’s Wager (PW):

Choice          God Exists & PW                   God Exists & AW                    God Does Not Exist
Religious       lim_{B\rightarrow\infty} B - C    lim_{B\rightarrow\infty} -B - C    -C
Not Religious   lim_{B\rightarrow\infty} G - B    lim_{B\rightarrow\infty} G + B     G
Fig. 4: Revised Pascal’s Wager vs. Atheist’s Wager

Comparing the respective probabilities of the Atheist’s Wager and Pascal’s Wager, P_{A} and P_{P}, we check whether

P_{A}B - P_{P}B + G > P_{P}B - P_{A}B - C

Given that almost all purported religious claims in the past about heaven and hell have been based on being religious, and none have supported the Atheist’s Wager (the closest you get is some who claim that all people go to heaven, whether religious or not), I would think that P_{P} > P_{A} (though I admit the possibility that I am wrong). Thus, the Atheist’s Wager loses out to Pascal’s Wager by comparing expected utilities.

Thus, there is a very strong case to make that if one is merely comparing expected utilities, then no matter how small the probability is that God exists (as long as it is not zero), Pascal’s Wager is actually sound.

Of course, one might not be comparing expected utilities. After all, if there is only an extremely minuscule possibility of a hugely negative payoff, no matter how bad it is, it might not be a bad thing to completely ignore that possibility. But that is in the realm of decision theory, not game theory. Thus I’ll leave it to your intuitions: if you’re risk averse (as pretty much everyone is; that’s why we all buy insurance), and carry this reasoning even against remote possibilities, then by all means, Pascal’s Wager seems to work. But if you’re willing to take the risk, since it doesn’t matter if you think there’s only, say, a one-in-a-trillion chance that you’ll go to hell if you’re not religious, then go out and boogie.


Deficit Chicken

In recent months, there has been much coverage of sovereign debt crises around the world. Just a couple of weeks ago, Greece approved massive cuts to its budget, in a move designed to implement an austerity plan and avoid a devastating default on its debt (though some warn that even this is not enough to prevent a selective default). This past week, Moody’s (a rating agency) downgraded its rating of Portuguese bonds to “junk” status, meaning that there is a good possibility that the bonds will not be repaid. Other European countries, such as Italy and Spain, are also considered at risk, as they have large deficits and/or debts outstanding that imperil their ability to repay.

Even in the United States, there has been much talk recently of what to do about the federal debt limit, which was surpassed in June. The debt limit must be raised, or the United States will cease to be able to pay its obligations, raising the specter of default. Yet the two sides have been taking hard stances on how to avoid this possibility. Republicans want to reduce the deficit solely through spending cuts, on the premise that any tax increase would hamstring the economy at this delicate stage in the recovery from the recent recession.[1] Democrats want to do so through a blend of spending cuts and an increase in taxes on “the wealthy” (see Marli’s post on taxation). Despite the necessity of somehow bridging the gap, the debt talks have recently appeared to be on the verge of collapse. House Majority Leader Eric Cantor walked out of the debt talks in the past couple of weeks, citing irreconcilable differences. With both sides unwilling to give in, it seems like the United States is barreling toward a crisis.

With a scenario like this, it seems like a perfect time to whip out my chicken suit.

Both sides are heading straight toward the precipice; if neither swerves, disaster strikes: the government collapses, essential services are denied across the country, and politicians will likely be voted out of office for failure to govern effectively. Yet if anyone swerves first, it is also bad: it will mean giving up some of the sacred cows of their party’s platform, whether it be lower taxes for the Republicans (especially for “the wealthy”), or big-ticket items like healthcare for the Democrats. We can therefore model the game as such:

Reps\Dems       Lose healthcare   Stand strong
Higher taxes    (-5, -5)          (-10, 0)
Stand strong    (0, -10)          (-100, -100)
Fig. 1: Deficit Chicken
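As an aside, the matrix above also has a mixed-strategy equilibrium, and computing it is quick: each side concedes just often enough to make the other indifferent between conceding and standing strong.

```python
from fractions import Fraction

# Suppose the other side concedes with probability s. With the Fig. 1 payoffs,
# conceding earns -5s + (-10)(1 - s); standing strong earns 0s + (-100)(1 - s).
# Setting these equal gives s = (d - b) / ((a - c) + (d - b)), with:
a, b = -5, -10    # conceding, against an opponent who concedes / stands strong
c, d = 0, -100    # standing strong, likewise
s = Fraction(d - b, (a - c) + (d - b))
print(s)              # 18/19: in equilibrium each side concedes ~94.7% of the time
print((1 - s) ** 2)   # 1/361: yet the -100 crash still happens ~0.3% of the time
```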

Hopefully, they’ll be able to negotiate some sort of agreement before the August 2nd deadline, so we can avoid the “crash” solution. It seems like Minnesota hasn’t been able to avoid this, so a positive resolution is not guaranteed. We’ll see what happens…

[1] Though the idea that forgoing tax increases will somehow pay for itself is completely ridiculous (according to virtually all economists). Also, government spending cuts will impair the recovery of the economy as well, as such a move reduces aggregate demand. But I digress.


Bidding up blood

Mexican drug cartels, which control the tremendously lucrative flow of drugs into the US, have over the past several years begun to kill civilians with impunity. Bodies are displayed in public, severed limbs have been tossed onto dance floors, and the total body count continues to rise.

Until recently, civilians and children were off limits in the cartels’ informal codes of honor. The willingness to kill civilians is a signal of ruthlessness, informing citizens and the other cartels of who is winning the war[1].

As of 2010, the Mexican drug cartels have formed two tenuous alliances against each other, one composed of the Juárez Cartel, Tijuana Cartel, Los Zetas Cartel and the Beltrán-Leyva Cartel, and the other, the Gulf Cartel, Sinaloa Cartel and La Familia Cartel [2].

To see how the two alliances might be bidding up the violence, we can first model the civilian killings as an all-pay auction. After all, the cartel incurs some cost for each civilian it kills regardless of whether it wins, and the alliance that has the most kills at the end of each period becomes the more feared of the two among civilians.

In the classic War of Attrition game, the only pure-strategy Nash equilibrium outcomes are that one player bids 0 and the other bids V, the value of the territory under dispute for the period. This implies that in any given time period we should see a large number of killings by one alliance and none by the other, with the territory perhaps switching hands from period to period (as in one solution for repeated Battle of the Sexes). The expected utility for each alliance should be 0. Alternatively, in mixed strategies, each cartel randomizes over [0,N] for when it will stop killing civilians. If this is a good model, then the increase in killings might be explained by a decrease in the cost of killing civilians (law enforcement is getting less effective).

In the war of attrition game, once both players have made a positive bid, any victory will be a Pyrrhic victory — the expected payoff will be negative. Consider the classic example of the all-pay auction for a $20 bill — if one player bids $20 and the other, $0, they both get a payoff of 0. If one bids $20 and the other, $2, then the player who bids $2 will be forfeiting $2 anyway and might as well bid $22 and win the money. But, now the first player is out $20 — he would lose less if he could get by with winning with a bid less than $40. At some point one player should just take the hit and exit with a negative payoff.
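The textbook mixed equilibrium of that $20 all-pay auction has both players bidding uniformly on [0, 20], and a quick Monte Carlo (the trial count is arbitrary) confirms the zero expected payoff:

```python
import random

V, TRIALS = 20.0, 200_000
total = 0.0
for _ in range(TRIALS):
    mine, theirs = random.uniform(0, V), random.uniform(0, V)
    total += (V if mine > theirs else 0.0) - mine   # every bid is forfeited
print(total / TRIALS)   # ~0.0: competition dissipates the whole prize
```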

So, the body count continues being bid up as long as both alliances continue to kill civilians on every turn — and this is in fact the case. One explanation might be that the killings are not simply a signal to the civilian population, but also a signal to the other alliance.

We can consider a three-period game:

  1. Each alliance finds out whether it is strong or weak
  2. Given the first, each sends a signal (kill many or kill few)
  3. Each decides whether to attack the other, or to defend. Nonaggression only occurs when both defend.

Each alliance must assert that it is the “Strong” type rather than the “Weak” type in order to maintain a foothold on its piece of territory. If a strong alliance believes the other alliance is weak in a given period, it should attack and take over, since the weaker alliance cannot afford to retaliate.

Alliance j is strong
i\j       Attack    Defend
Attack    -2, V     -2, V
Defend    -2, V     0, 0
Fig. 1: If Alliance i is weak, j is strong

Alliance j is weak
i\j       Attack    Defend
Attack    -1, -1    -1, -1
Defend    -1, -1    0, 0
Fig. 2: If both are weak

i\j       Attack    Defend
Attack    -2, -2    -2, -2
Defend    -2, -2    0, 0
Fig. 3: If both are strong

We see that if you are weak, your subgame perfect equilibrium strategy in the last stage is to defend regardless of your opponent’s strength. What signal should you send? Since killing might be costly for a weak alliance, a strong alliance will never send a signal that it is weak (killing few people). Therefore, if the opponent receives the signal that few civilians were killed, he knows that this is a credible signal of weakness.

A weak alliance might signal from the set {many kills, few kills}. Since the players are in identical situations at t=0, the probability p that each will be strong or weak, the probability q that a weak player will give a false signal, and the additional cost c to a weak player of giving a high-kill signal will be the same for both. The expected payoff for the weak alliance if it sends a high-kill signal is

q[\mu_{i}(strong|many)U(strong, many, defend) + \mu_{i}(weak|many)U(weak, many, defend) - c] + (1-q)(-2)

= q\mu_{i}(strong|many)[\mu_{j}(strong|many)(0) + \mu_{j}(weak|many)(-2) - c] + \mu_{i}(weak|many)(0) + (1-q)(-2)

= q\mu_{i}(strong|many)[\mu_{j}(weak|many)(-2) - c] + (1-q)(-2)

It turns out that if sending a false signal is costless, then q is maximized at 1 and we have a pooling equilibrium. If it is costly enough, then there is a separating equilibrium (weak alliance sends low signal, strong sends high signal). What it means for our cartels is that as long as there is a pooling equilibrium, both sides will definitely enter a war of attrition and bid up the body count even beyond their valuations for the territory. It is when the cost of killing just one civilian becomes high enough that it creates a separating equilibrium that the weak alliance doesn’t kill anyone, and the strong alliance kills one[3]. Needless to say, without an honor code to raise this cost, and given the state of Mexican law enforcement, this is quite unlikely.
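As a toy numeric check (heavily simplified: assume a strong opponent attacks on seeing “few kills” and defends on “many kills,” and set aside the belief updating above), the weak alliance’s choice reduces to comparing -c against -2 times the probability the opponent is strong.

```python
def lie_pays(p_strong, c):
    """Does a weak alliance prefer the costly false 'many kills' signal?"""
    truth = p_strong * (-2)   # honest low signal: attacked when the opponent is strong
    lie = -c                  # false high signal deters everyone but costs c
    return lie > truth

for c in (0.1, 0.5, 1.0, 1.5):
    print(c, lie_pays(p_strong=0.5, c=c))
# Prints True, True, False, False: below the threshold c = 2 * p_strong = 1
# the weak type pools on "many kills"; above it, the signal separates the types.
```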

Thanks to Jeffrey Kang for bouncing ideas around with me.
————————-
[1] http://www.washingtonpost.com/world/mexican-drug-cartels-targeting-and-killing-children/2011/04/07/AFwkFb9C_story.html
[2]“Violence the result of fractured arrangement between Zetas and Gulf Cartel, authorities say”. The Brownsville Herald. March 9, 2010. Retrieved 2010-03-12.
[3] Why one? Because people are discrete. If the separating equilibrium were at 2 kills, then 1 kill might be a possible low signal, in which case the players may enter a war of attrition anyway.


How We Learned to Stop Worrying and Love the Game.

It is the mid-1950s, and the United States and Soviet Union are in the midst of a nuclear arms race. In the “War Room,” the President, a general, and an eccentric, wheelchair-bound genius with a thick European accent debate the merits of a first-strike attack that would obliterate all of the Soviet Union before it can retaliate. Here, fiction diverges from reality.

Indeed, a Hungarian wheelchair-confined mathematician on the verge of chemotherapy-induced dementia, John von Neumann, was transported to the White House to advise President Eisenhower[1]. Von Neumann, a founding father of game theory, believed that a first-strike that destroyed the Soviets before they could build an H-bomb was the key to ending the nuclear threat as well as allowing the US to maintain its position as the only nuclear superpower[2].

In Stanley Kubrick’s Dr. Strangelove, the Soviets have a deterrence plan — a Doomsday Device that automatically activates and destroys life on Earth for 100 years if Russia is bombed. The device also cannot be disarmed, and so by building such a machine, the Soviets have made a “completely credible and convincing” threat for deterrence[3]. However, it is revealed by the Soviet ambassador that no one yet knows of the device, since the Soviet Premier had wanted to unveil it with fanfare.

The Americans also have a retaliatory measure in place, “Wing Attack Plan R,” which allows field commanders to bomb the Soviet Union in the case that Washington is destroyed by a Soviet first strike. It happens that Brigadier General Jack D. Ripper believes that Communists are poisoning the water and, knowing nothing of the Doomsday Device, orders his nuclear-armed B-52s to attack. He too has effected measures to prevent the recall code from being obtained — and even when it is finally obtained and broadcast, one of the bombers has a defective radio and is beyond contact. Major Kong rides the bomb as it falls from the plane, and mushroom clouds erupt around the globe.

“The whole point of having a Doomsday Machine is lost if you keep it a secret. Why didn’t you tell the world?”

The nuclear game of Dr. Strangelove shares characteristics with the game of Chicken, where two vehicles accelerate toward each other and the loser is the driver who swerves. If neither swerves, mutual destruction results:

          Swerve       Straight
Swerve    Tie, Tie     Lose, Win
Straight  Win, Lose    Crash, Crash
Fig. 1: A payoff matrix of Chicken

          Swerve    Straight
Swerve    0, 0      -1, +1
Straight  +1, -1    -10, -10
Fig. 2: Chicken with numerical payoffs[4]

The only pure-strategy equilibria for Chicken are (straight, swerve) and (swerve, straight). Likewise, in Dr. Strangelove‘s game of nuclear chicken, the pure-strategy equilibria have one country as the sole nuclear power and the other refraining from threatening it:

         Disarm      Bomb
Disarm   Tie, Tie    Lose, Win
Bomb     Win, Lose   Armageddon, Armageddon
Fig. 3: Nuclear Chicken

As can be seen, the game might be won by either the US or USSR by striking first. One strategic move that can be made in Chicken is commitment to a credible threat (called Brinkmanship) — e.g. you could rip out your steering wheel and throw it out the window, demonstrating that your car must go straight and that your opponent must swerve to avoid mutually assured destruction.

This is the strategy that the USSR attempts to play in Dr. Strangelove, and it would have been effective if only the US and USSR both had known that the other had effectively ripped out their steering wheels. And even so, destruction of all life would not have been assured but for one deranged General Ripper’s conspiracy theories.
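With the Fig. 2 numbers, commitment works by deleting a row of the matrix; a tiny sketch:

```python
# Once the USSR credibly removes "swerve" from its own options (the Doomsday
# Device, or a torn-out steering wheel) and the US knows it, the US is left
# comparing its payoffs against an opponent locked on "straight".
us_payoff = {"swerve": -1, "straight": -10}   # Fig. 2, opponent plays Straight
print(max(us_payoff, key=us_payoff.get))      # 'swerve': losing beats crashing
```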

Game theory is often criticized for its flippancy toward irrationality. What about the General Rippers of humanity, the critics ask? Yet the whole point is far from lost if we keep it a secret, because rational choices are by their nature not a secret; they can be teased out and brought out into the light. Why not tell the world?

———————

[1] Paul Strathern, Dr Strangelove’s Game: A Brief History of Economic Genius. There’s also evidence that Dr. Strangelove was modelled on Wernher von Braun, the former Nazi inventor of the ballistic missile (Jeff).
[2] This is known as a first-mover advantage.
[3] Dr. Strangelove in the film.
[4] Example from Wikipedia.

