# Bumping into people: the awkward dance

You know when you open the door, and you find someone else is trying to get in at the same time, and so you both end up right in each other’s faces? And then you each try to get out of the other’s way, only to go in the same direction and still be in each other’s face? And then you do a sort of weird dance?

I’ve been trying to find some good YouTube clips of this, and while I know they’re out there, I couldn’t track one down in a brief search. But I think you know what I’m talking about.

Well, surprise surprise, we can model this interaction as a game! Each person has two strategies: move left (L), or move right (R).(1)  If they both move in the same direction, then they are still stuck doing the awkward dance, and get payoff -1. Otherwise, they move out of each other’s way, and so they happily go along their way, getting payoff 1.

| 1\2 | Left | Right |
| --- | --- | --- |
| **Left** | (-1, -1) | (1, 1) |
| **Right** | (1, 1) | (-1, -1) |

Fig. 1: Awkward dance game

There are a couple of pure-strategy Nash equilibria: one player goes left, and the other goes right. But which equilibrium is going to be chosen? A priori, there is no way to tell. Here, social conventions can be useful, such as always moving forward on the right side (for a similar post, see Marli’s post, “When in New York, do as the New Yorkers do“). The problem is when some people didn’t get the memo (*sigh*).

There is a third Nash equilibrium in mixed strategies, where each person chooses a direction with a 50-50 chance. Each time they play, there is then a 50% chance that they bump into each other, but eventually, after perhaps dancing for a while, they will get past one another.
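Under 50-50 mixing, the number of rounds until the two finally pass is a geometric random variable with mean 2. A quick simulation confirms this (the 50-50 probabilities come from the equilibrium above; everything else here is just illustration):

```python
import random

def rounds_until_passing(rng, p_left=0.5):
    """One play of the awkward dance under mixed strategies: each round,
    both players independently pick a direction (in player 1's frame of
    reference); they pass as soon as the picks differ."""
    rounds = 1
    while (rng.random() < p_left) == (rng.random() < p_left):
        rounds += 1
    return rounds

rng = random.Random(0)
mean_rounds = sum(rounds_until_passing(rng) for _ in range(100_000)) / 100_000
# The round count is geometric with success probability 1/2,
# so the empirical mean should sit near 2.
print(mean_rounds)
```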

(1) This will all be from the perspective of player 1.

# What do kittens have to do with rising tuition?

If you read the Financial Times, you might suspect from an article on Monday that kittens have something to do with rising tuition and Prisoners’ Dilemmas. Let me assure you that they don’t.

A friend of mine sent me the article, which cites a model designed by a team of Bank of America consultants who use the Prisoners’ Dilemma to explain rising college tuition. Here is the graphic they used:

Fig. 1: Things that are pairwise irrelevant to each other:

a kitten, the Prisoner’s Dilemma, and rising tuition.

They explain that the college ranking system (assuming two colleges) is a zero-sum game. If one college moves up, the other one moves down. “A college can move up in the rankings if it can raise tuition and therefore invest in the school by improving the facilities, hiring better professors and offering more extracurricular activities.” And therefore, they conclude, this is why college tuitions have been rising and why student debt will continue to rise.

First glaring problem: (raise, raise) is a Pareto-optimal outcome as they’ve set up this game, but what they probably meant to say was that it is a Nash equilibrium. Or maybe they meant to say that “raise” is the best response for each college. Anyway, in this game, (don’t raise, don’t raise) is also Pareto-optimal (but not a Nash equilibrium)!

Secondly, they’re trying to illustrate a kind of ratcheting problem: both colleges raise tuition to raise the quality of the resources at the school, in order to maintain their rankings. But this means it’s a repeated game. In finitely repeated games of this kind, defection happens at every stage, while in infinite-horizon games, cooperation can occur. Let’s just assume that this is an infinite-horizon game, which is what the folks at B of A are assuming when they predict that college tuition will keep rising indefinitely, beyond mere inflation. What incentive is there to cooperate and keep tuition low? According to this game, none. And according to what you might expect in reality, none – is it plausible that, in the absence of antitrust laws, colleges would want to collude to keep tuition low, and that because they can’t collude, they are doomed to raise tuition every year against their will? Nope.

Then there is the matter that this game can’t actually be infinite-horizon as presented here. The simple reason is that, even if education is becoming a larger and larger share of a household’s spending, and even if the student is taking out loans and borrowing against his future expected earnings, he still has a budget constraint that he can’t exceed. Furthermore, the demand for attending a particular university should drop as soon as the tuition exceeds the expected lifetime earnings/utility advantage that the student sees in 4 (or more) years there over the alternative. So, there will be some stage at which the utilities change and it becomes a best strategy for neither school to increase its tuition. It’s a finite stage game, and the increase will stop somewhere – namely, where price theory says it should. [1]

Finally, it’s not clear that increasing tuition actually has such a strong effect on school rankings or that colleges are in such a huge rankings race. And, even if students at colleges outside the very top schools tend to choose a college based on things like food quality and dorm rooms, students don’t demand infinitely luxurious college experiences at infinite prices. Evidence: Columbia students feel they’re overpaying for food, and feel entitled to steal Nutella.

The lessons here are these: It’s not a Prisoner’s Dilemma in a strong sense if the cooperative result isn’t strictly preferred to the Nash equilibrium. Don’t model a tenuous game where the game isn’t relevant to the ultimate result (tuitions will stop rising at some point). Don’t assume that trends are linear, when they are definitively not linear. And, don’t put a kitten on your figure just because you have some white space — it really doesn’t help.

———————

[1] Actually, the game doesn’t have to be finite horizon. Suppose the upper limit that the colleges know they can charge is $A$, and the current tuition is $B_t$. Then, at each stage, they could increase tuition by $0.5(A - B_{t-1})$. But as the tuition approaches $A$, the increases become smaller and smaller until they essentially vanish, which is the same as stopping: there is a point past which tuition stops affecting rank (a college isn’t going to improve its rank by charging each student an extra cent).
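The footnote’s halving rule is easy to check numerically. In the sketch below, the $80,000 ceiling and $40,000 starting tuition are made-up numbers, purely for illustration:

```python
def tuition_path(A, B0, years):
    """Apply the footnote's rule: each year, tuition rises by half the
    remaining gap to the known ceiling, B_t = B_{t-1} + 0.5 * (A - B_{t-1})."""
    path = [B0]
    for _ in range(years):
        path.append(path[-1] + 0.5 * (A - path[-1]))
    return path

path = tuition_path(A=80_000, B0=40_000, years=20)
increases = [later - earlier for earlier, later in zip(path, path[1:])]
# The yearly increase halves every year, so tuition approaches the
# ceiling but never crosses it.
print(round(path[-1], 2), round(increases[-1], 4))
```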

# Clearly, Sicilians do not know game theory

Relax, I’m not referring to actual Sicilians. I’m referring, of course, to Vizzini from the movie “The Princess Bride.” The hero, Westley, is trying to rescue his true love, Buttercup, from the clutches of Vizzini and his henchmen, Inigo Montoya and Fezzik. After outdueling Inigo and knocking out Fezzik, he overtakes Vizzini, who threatens to kill Buttercup if Westley comes any closer. This leads to an impasse: Vizzini cannot escape, but Westley cannot free Buttercup. So, Westley challenges Vizzini to a “battle of wits”:

The structure of the game is simple: there are two glasses of wine. Westley has placed poison (in the form of the odorless, tasteless, yet deadly iocaine powder) somewhere among the two cups, and allows Vizzini to choose which to take. Afterwards, they drink, and they see “who is right, and who is dead.”

Presumably, when Vizzini encounters the game, he is supposed to think that Westley has restricted himself to poisoning one of the glasses. In this case, we have a standard extensive form game of incomplete information, which is equivalent to a normal-form game:

| Vizzini\Westley | Poison Westley’s cup | Poison Vizzini’s cup |
| --- | --- | --- |
| **Drink Westley’s cup** | (Dead, Right) | (Right, Dead) |
| **Drink Vizzini’s cup** | (Right, Dead) | (Dead, Right) |

Fig. 1: Battle of Wits (outcomes)

Immediately we see that this game is symmetric (or, more precisely, anti-symmetric), in that whatever doesn’t happen to one player happens to the other. In this way, the game is strategically equivalent to matching pennies. This tells us right away that the equilibrium is for Westley to randomize 50-50 between the two cups: were he to do anything else, Vizzini could simply choose the cup that is less likely to hold the poison and win with better-than-even odds. Similarly, if Vizzini were a priori less likely to choose a given cup, then that is where Westley should have put the poison.
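We can verify Westley’s 50-50 logic with a quick scan over his possible mixing probabilities (the survival payoffs follow directly from the outcome table above):

```python
def vizzini_best_survival(q):
    """q = probability that Westley poisons his own cup. A rational
    Vizzini drinks whichever cup is less likely to hold the poison."""
    survive_if_drinks_westleys = 1 - q  # dies only if Westley's cup is poisoned
    survive_if_drinks_own = q           # survives if the poison is in Westley's cup
    return max(survive_if_drinks_westleys, survive_if_drinks_own)

# Scan Westley's mixing probability: any choice other than 50-50 hands a
# rational Vizzini a survival chance strictly above one half.
qs = [i / 100 for i in range(101)]
best_q = min(qs, key=vizzini_best_survival)
print(best_q, vizzini_best_survival(best_q))
```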

Yet Vizzini does not reason this way. Instead, he attempts to make vacuous arguments about the psyche of Westley, namely, where Westley would have put the poison. He may be reasoning as if Westley is a behavioral type, but clearly, that’s not the best thing to do in a “battle of wits,” where presumably everyone is rational. Instead of making the game-theoretic choice based on mixed strategies, he tries to find an optimal pure strategy.

In the end, Vizzini takes his own cup, which indeed contains the poison. As it turns out, both cups contained poison: Westley has built up tolerance to iocaine, and so it didn’t make any difference which was chosen. So in a way, Westley did make Vizzini indifferent between the two outcomes; it’s just that Vizzini was mistaken about which game was being played. In reality, no matter what, Vizzini would be dead, and Westley would win. This makes one wonder whether Vizzini should have suspected something was amiss when Westley proposed the game in the first place, and even more so when Westley fell for such an obvious trick of misdirection meant to get him to look the other way (see 3:04 in the video). But no matter – while Vizzini may have been smarter than Plato, Aristotle, and Socrates, he could have used some of the 20th-century wisdom of John Nash.

# So what school should I go to?

The classic question for high school seniors: “So, what are you doing next year? What colleges are you applying to?” Please, give them a break. They get this question way too much, and it only makes them more nervous about their futures. After all, they seem to be under the impression that where they go is a make-or-break issue, and bringing up the subject as if it is important just reinforces that fear.

But while we’re on the topic, where should they apply?

For simplicity, let’s suppose that each college (indexed by a number $i$) has a particular quality level, $q_{i}$, at which every potential student values that college. This can be through the quality of academics, the alumni network, the cost, the location, you name it. One might think that it would be best to apply to as many colleges as possible, since that maximizes your chances of getting in somewhere good. But, like everything in life, there is a cost to doing so. This can be the actual application fee, the time involved in putting together the materials, getting ETS to send your SAT scores, whatever. Let’s fix this cost for each college at $c_{i}$. We can relax these assumptions, but the qualitative result will still be the same.

Suppose there is a large number of people, and we restrict people to applying to one school. Yes, I know, this is an unreasonable assumption, but the qualitative results will again be the same even if we allow for applying to multiple schools; it just makes the math hairier. The probability of getting in is $\frac{n_{i}}{a_{i}}$, where $n_{i}$ is the number of slots that the school has, and $a_{i}$ is the number of people who apply.

Let’s consider the Nash equilibrium. Since everyone values each school equally, we will expect that everyone will be indifferent between applying to the various colleges. Thus, we will have, for every college $i, j$,

$\frac{n_{i}}{a_{i}}q_{i}-c_{i}=\frac{n_{j}}{a_{j}}q_{j}-c_{j}$

This can give us the relative admission rates of each school:

$\frac{n_{j}}{a_{j}}=\frac{(n_{i}/a_{i})q_{i}+c_{j}-c_{i}}{q_{j}}$

This equation in and of itself is informative. It shows that the admission rate to a school will be increasing in the cost of applying, all other things being equal. This makes sense: if you make it harder to apply, fewer people will do so, and this will drive up the admission rate the school needs in order to fill all of its slots. Similarly, an increase in the quality of the school drives down the admission rate, since more people will then want to go there, making it more competitive.

So, in summary, what should you do? You should apply to the school which maximizes $\frac{n_{i}}{a_{i}}q_{i}-c_{i}$, which is your expected benefit from applying there. Assuming that everyone else is being rational and doing the same thing, though, then it won’t make much difference where you apply. That being said, this last result will no longer hold if not all people value different schools the same (though the trends for the relative admission rates will still hold), but that makes the analysis too complicated for a mere blog post.
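As a sanity check on these comparative statics, here is a small sketch of the indifference condition; all the quality, cost, and admission-rate numbers are invented for illustration:

```python
def admission_rate(ref_rate, q_ref, c_ref, q_j, c_j):
    """Equilibrium admission rate n_j/a_j of school j implied by the
    indifference condition (n_i/a_i) q_i - c_i = (n_j/a_j) q_j - c_j,
    given a reference school's admission rate, quality, and cost."""
    return (ref_rate * q_ref + c_j - c_ref) / q_j

# Reference school: 10% admission rate, quality 100, application cost 5.
base = admission_rate(0.10, 100, 5, q_j=100, c_j=5)      # identical school
pricier = admission_rate(0.10, 100, 5, q_j=100, c_j=8)   # costlier application
better = admission_rate(0.10, 100, 5, q_j=120, c_j=5)    # higher quality

# A costlier application raises the equilibrium admission rate;
# higher quality lowers it, exactly as the equation predicts.
print(base, pricier, better)
```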

Edit: For a more sophisticated theoretical and empirical model whose basic idea is the same, click here.

# Of skirts, judgement, and changing conventions

Jeff’s post last Sunday on Jewesses in Skirts got me thinking: how is it that we have Orthodox Jewish communities that are tolerant of pants-wearing by women, and communities that are not tolerant of pants-wearing, but rarely a community with large factions of each type? The question, of course, applies to more than just skirts worn by Jewish women – we can talk about many aspects of our culture, such as our changing views on LGBT people, or miscegenation, or a gold standard vs. a silver standard, using the same language.

We’ve established that, assuming the types of women in the previous post are static (that is, Nature or Circumstance assigned you to one group and you can’t change allegiances), it is optimal for the more conservative group to adhere to skirt-wearing, and for the other group not to bother. Those static proportions of types in the population affect how the “others” in Jeff’s game form their prior beliefs about which type a woman is based on her choice of clothing. But what if the women could choose or change their ideology, and what if we consider the effects of judgement and peer pressure?

In the following model, we can look at a scenario where the proportion of each type of player in the population is endogenous. Suppose that a new community forms, consisting of some random number of “conservative,” skirts-only types (Jeff puts them in class 1) and some number of “progressive” types who sometimes wear pants (Jeff puts them in class 2). This represents what we would expect to happen if everyone formed their own opinions and ideologies totally independently of everyone else. Each person will randomly encounter other members of the community on a one-on-one basis, and receive social payoffs from the encounter. If she encounters a like-minded person, they both feel validated in their choices; if not, they feel judged. As before, we assume some disutility for a restriction on wardrobe.

| | Conservative | Progressive |
| --- | --- | --- |
| **Conservative** | 1, 1 | 0, 0 |
| **Progressive** | 0, 0 | 2, 2 |

Now, say that the population starts out with a percentage $p$ of progressive types and $1-p$ of conservative types. Then, assuming that the population is large, if you are one of the members, then of the people you meet, $p$ will be progressive and $1-p$ will be conservative. Therefore, if you are a woman who chooses to be conservative and wear only skirts, your expected utility is $1(1-p) + 0(p) = 1-p$ and if you choose to wear pants, then your expected utility in any one encounter is $(0)(1-p) + (2)(p) = 2p$. You would be indifferent if $1-p = 2p$ — that is, if $p$ (the fraction of progressive types) is 1/3.

Maintaining $p = 1/3$ is incredibly difficult, because it is so sensitive to shocks. If for any reason $p$ becomes a little larger or smaller – say, a contingent of pants-wearers suddenly moves in – the balance tilts: one of the ideologies starts providing the better payoff, the whole population snowballs in that direction, and it becomes the predominant convention (or evolutionarily stable state). That may be why communities with large factions of each type tend not to exist in real life: they are a lot like a flipped coin that lands on its edge.
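A crude simulation illustrates this snowballing. The expected payoffs come from the matrix above, but the 10% per-period switching rate is my own assumption, chosen only to make the dynamics visible:

```python
def evolve(p0, steps=200, rate=0.1):
    """Crude best-response dynamics for the coordination game above:
    progressives earn 2p per encounter, conservatives earn 1 - p, and each
    period a fraction `rate` of the population switches toward whichever
    type currently earns more."""
    p = p0
    for _ in range(steps):
        if 2 * p > 1 - p:        # progressive pays more: p drifts up
            p = p + rate * (1 - p)
        elif 2 * p < 1 - p:      # conservative pays more: p drifts down
            p = p - rate * p
        # at p = 1/3 exactly, nobody has a reason to switch
    return p

# Start just above and just below the p = 1/3 tipping point:
print(evolve(0.34), evolve(0.32))
```

Starting barely above 1/3, the population converges to all-progressive; barely below, to all-conservative: the mixed state is a knife edge.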

So what kinds of things affect what $p$ is? The payoff matrix, obviously, and how I’ve assigned the payoffs. No one said that I must assign those particular numbers (and indeed, the exact numbers don’t matter – only their ordering and relative distances. Try it: multiply all of the numbers by a positive constant, or add a constant to all of them; the solution stays the same). What if the inconvenience of wearing only skirts were very large? (Imagine replacing the 2s with, say, 5s.) Then the tipping point $p$ would be much smaller, and it would take a much smaller group of rebels to send the equilibrium going the other way. Issues like women’s suffrage are like this – they are so significant that a grassroots movement picks up momentum very quickly. If the inconvenience is less, then $p$ would be greater, and if the existing equilibrium is skirts-only/conservative, it would be harder to change. Equivalently, we can think about the effects of mutually judgemental behavior (making the 0s in the matrix more negative). If people are less tolerant when they meet the other type, conventions are harder to change. If they are more tolerant, change is easier.
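The claims about scaling and shifting payoffs are easy to verify. For a general version of the matrix above – conservative earning $(a, b)$ against (conservative, progressive) and progressive earning $(c, d)$ – the tipping point works out to $p^* = \frac{a-c}{(a-c)+(d-b)}$. A quick check of the invariance and of the replace-the-2s-with-5s example:

```python
def tipping_point(a, b, c, d):
    """Fraction p* of progressives at which both types earn the same
    expected payoff: a, b = conservative's payoffs vs. (cons., prog.);
    c, d = progressive's payoffs vs. (cons., prog.)."""
    return (a - c) / ((a - c) + (d - b))

base = tipping_point(1, 0, 0, 2)        # the matrix from the post: p* = 1/3
scaled = tipping_point(3, 0, 0, 6)      # multiply every payoff by 3
shifted = tipping_point(2, 1, 1, 3)     # add 1 to every payoff
bigger_gap = tipping_point(1, 0, 0, 5)  # replace the 2s with 5s

# Scaling or shifting leaves p* untouched; raising the inconvenience of
# skirts (the 2s -> 5s) pushes the tipping point down to 1/6.
print(base, scaled, shifted, bigger_gap)
```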

——————————-
If you enjoyed the ideas in this post, you may also enjoy When in New York, do as the New Yorkers do, which describes a special, symmetrical case of the kind of game we’ve discussed here.

# Is Pascal’s Wager Sound?

OK, this isn’t actually technically a game, but since a lot of people think of it as such (given the common depiction through payoff matrices, with probabilities of different scenarios), we’re going to cover it anyway.

The basic gist of Pascal’s Wager is that by hedging one’s bets and choosing the path of religion, one can expect a greater payoff than by being not religious. Its proponents offer two possible states of the world: either God exists, or God does not. From a person’s standpoint, one can choose to be religious, or not:

| Choice | God Exists | God Does Not Exist |
| --- | --- | --- |
| **Religious** | $\infty - C$ | $-C$ |
| **Not Religious** | $G - \infty$ | $G$ |

Fig. 1: Standard Pascal’s Wager

One gets infinite payoff from being religious if God exists, as then one goes to heaven. If one is not, and God exists, one goes to hell, and gets a payoff of negative infinity. By being religious, one incurs a finite cost C, as religion isn’t so much fun apparently; if one is not religious, one gets positive payoff G since one gets to party all the time.

Now suppose God exists with some probability $P > 0$, however small that might be. Then the expected payoff from being religious is, according to the argument, $P(\infty - C) - (1 - P)C = \infty$. The payoff from being irreligious is $P(G - \infty) + (1 - P)G = -\infty$. Thus one is better off being religious.

A lot of people really hate this argument, and so do their utmost to bring it down. Yet this argument is not as bad as they think it is. Let’s go through some of the criticisms.

The first criticism is that Pascal automatically assumes that if God exists, then Catholicism is true. But there are many religions out there that posit the existence of heaven and hell, yet are mutually incompatible. For example, Catholics (at least conservative ones) would condemn Muslims for not accepting Jesus as their Lord and Savior; Muslims would condemn Catholics as polytheists for this very acceptance. Since one can do this calculus for both religions, the arguments negate each other in paradox, as we end up granting both infinite payoffs and negative infinite payoffs to the same groups!

The second criticism is the so-called Atheist’s Wager. It could be instead that God wants us to live good lives, and be rational. Since one can live a better life by being irreligious, this is what God would prefer of us. Hence, according to the Atheist’s Wager, the payoff matrix should look as follows:

| Choice | God Exists | God Does Not Exist |
| --- | --- | --- |
| **Religious** | $-\infty - C$ | $-C$ |
| **Not Religious** | $G + \infty$ | $G$ |

Fig. 2: Atheist’s Wager

Thus, says the atheist, it is a dominant strategy to be irreligious.

The problem with both these arguments is actually a problem with the initial formulation of Pascal’s Wager. However, while we can tweak Pascal’s Wager to make the problem go away, this flaw is quite possibly fatal to the above two criticisms. So now that I’ve built up the drama, here’s the problem: infinity is not a well-defined number on the real line (which we use for expected utilities). Instead, we must use a limit of some number B as it increases toward infinity. Thus, Pascal’s Wager should look like:

| Choice | God Exists | God Does Not Exist |
| --- | --- | --- |
| **Religious** | $\lim_{B\rightarrow\infty} B - C$ | $-C$ |
| **Not Religious** | $\lim_{B\rightarrow\infty} G - B$ | $G$ |

Fig. 3: Revised Pascal’s Wager

Addressing the first criticism, we can then compare the possibilities that each religion is true by seeing which is the most probable among them. Taking Catholicism and Islam, with probabilities $P_C$ and $P_I$, and difficulties $C_C$ and $C_I$, respectively, to establish Catholicism (without loss of generality) as the better way to go of the two, we just need to check whether

$P_C B - P_I B - C_C > P_I B - P_C B - C_I,$

which, as $B$ gets large, is equivalent to just checking whether $P_C > P_I$. As a Jew, I would probably argue that the evidence/support for Judaism is the greatest of all the religions that have a system of heaven/hell, even if the support for any of these religions is slight.
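With the finite bound $B$ in place, the comparison is easy to sketch numerically. All the probabilities and costs below are invented; the point is that for small $B$ the cheaper religion can come out ahead, but once $B$ is large enough, only the probability comparison matters:

```python
def eu_follow(p_this, p_other, cost, B):
    """Expected utility of following this religion, with reward/punishment
    bound B: heaven (+B) if it is true, hell (-B) if the rival religion
    is true, minus the cost of observance."""
    return p_this * B - p_other * B - cost

# Hypothetical numbers: Catholicism twice as likely as Islam but far
# costlier to practice. For each B, check whether Catholicism wins.
P_C, P_I, C_C, C_I = 0.02, 0.01, 50, 1
for B in (10, 1_000, 100_000):
    print(B, eu_follow(P_C, P_I, C_C, B) > eu_follow(P_I, P_C, C_I, B))
```

At $B = 10$ the heavy cost $C_C$ dominates and Catholicism loses; by $B = 100{,}000$ the $P_C > P_I$ comparison takes over, exactly as the limit argument says.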

Similarly, we can check whether, given the evidence in front of us, the probability that the Atheist’s Wager (AW) is true is greater than that of Pascal’s Wager (PW):

| Choice | God Exists & PW | God Exists & AW | God Does Not Exist |
| --- | --- | --- | --- |
| **Religious** | $\lim_{B\rightarrow\infty} B - C$ | $\lim_{B\rightarrow\infty} -B - C$ | $-C$ |
| **Not Religious** | $\lim_{B\rightarrow\infty} G - B$ | $\lim_{B\rightarrow\infty} G + B$ | $G$ |

Fig. 4: Revised Pascal’s Wager vs. Atheist’s Wager

Comparing the respective probabilities of the Atheist’s Wager and Pascal’s Wager, $P_A$ and $P_P$, we check whether

$P_A B - P_P B + G > P_P B - P_A B - C.$

Given that almost all purported religious claims in the past about heaven and hell have been based on being religious, and none have supported the Atheist’s Wager (the closest you get is some who claim that all people go to heaven, whether religious or not), I would think that $P_P > P_A$ (though I admit the possibility that I am wrong). Thus, the Atheist’s Wager loses out to Pascal’s Wager by comparing expected utilities.

Thus, there is a very strong case to make that if one is merely comparing expected utilities, then no matter how small the probability is that God exists (as long as it is not zero), Pascal’s Wager is actually sound.

Of course, one might not be comparing expected utilities. After all, if there is only an extremely minuscule possibility of a hugely negative payoff, no matter how bad it is, it might not be a bad thing to completely ignore that possibility. But that is in the realm of decision theory, not game theory. Thus I’ll leave it to your intuitions: if you’re risk averse (as pretty much everyone is – that’s why we all buy insurance), and carry this reasoning even against remote possibilities, then by all means, Pascal’s Wager seems to work. But if you’re willing to take the risk, since it doesn’t matter if you think there’s only, say, a one-in-a-trillion chance that you’ll go to hell if you’re not religious, then go out and boogie.

# Deficit Chicken

In recent months, there has been much coverage of sovereign debt crises around the world. Just a couple of weeks ago, Greece approved massive cuts to its budget, in a move designed to implement an austerity plan to avoid a devastating default on its debt (though some warn that even this is not enough to prevent a selective default). This past week, Moody’s (a rating agency) downgraded its ratings of Portuguese bonds to “junk” status, meaning that there is a good possibility that its bonds will not be repaid. Other European countries, such as Italy and Spain, are also considered at risk, as they have large deficits and/or debts outstanding that imperil their abilities to repay.

Even in the United States, there has been much talk recently of what to do about the federal debt limit, which was reached in June. The debt limit must be raised, or the United States will cease to be able to pay its obligations, raising the specter of default. Yet the two sides have been taking hard stances over how to avoid this possibility. Republicans want to reduce the deficit solely through spending cuts, under the premise that any tax increases will hamstring the economy at this delicate stage in the recovery from the recent recession.[1] Democrats want to do so through a blend of spending cuts and an increase in taxes on “the wealthy” (see Marli’s post on taxation). Despite the necessity of somehow bridging the gap, the debt talks have recently appeared to be on the verge of collapse. House Majority Leader Eric Cantor walked out of the debt talks in the past couple of weeks, citing irreconcilable differences. With both sides unwilling to give in, it seems like the United States is barreling toward a crisis.

With a scenario like this, it seems like a perfect time to whip out my chicken suit.

Both sides are heading straight toward the precipice; if neither swerves, disaster strikes: the government collapses, essential services are denied across the country, and politicians will likely be voted out of office for failure to govern effectively. Yet if anyone swerves first, it is also bad: it will mean giving up some of the sacred cows of their party’s platform, whether it be lower taxes for the Republicans (especially for “the wealthy”), or big-ticket items like healthcare for the Democrats. We can therefore model the game as such:

| | Lose healthcare | Stand strong |
| --- | --- | --- |
| **Higher taxes** | (-5, -5) | (-10, 0) |
| **Stand strong** | (0, -10) | (-100, -100) |

Fig. 1: Deficit Chicken

Hopefully, they’ll be able to negotiate some sort of agreement before the August 2nd deadline, so we can avoid the “crash” solution. It seems like Minnesota hasn’t been able to avoid this, so a positive resolution is not guaranteed. We’ll see what happens…

[1] Though the idea that forgone tax increases will somehow pay for themselves is completely ridiculous (according to virtually all economists). Also, government spending cuts will impair the recovery of the economy as well, since such a move reduces aggregate demand. But I digress.