Relax, I’m not referring to actual Sicilians. I’m referring, of course, to Vizzini from the movie “The Princess Bride.” The hero, Westley, is trying to rescue his true love, Buttercup, from the clutches of Vizzini and his henchmen, Inigo Montoya and Fezzik. After outdueling Inigo and knocking out Fezzik, he overtakes Vizzini, who threatens to kill Buttercup if Westley comes any closer. This leads to an impasse: Vizzini cannot escape, but Westley cannot free Buttercup. So, Westley challenges Vizzini to a “battle of wits”:
The structure of the game is simple: there are two glasses of wine. Westley has placed poison (in the form of the odorless, tasteless, yet deadly iocaine powder) somewhere among the two cups, and allows Vizzini to choose which to take. Afterwards, they drink, and they see “who is right, and who is dead.”
Presumably, when Vizzini encounters the game, he is supposed to think that Westley has restricted himself to poisoning one of the glasses. In this case, we have a standard extensive form game of incomplete information, which is equivalent to a normal-form game:
| Vizzini \ Westley | Poison Westley’s cup | Poison Vizzini’s cup |
| --- | --- | --- |
| Drink Westley’s cup | (Dead, Right) | (Right, Dead) |
| Drink Vizzini’s cup | (Right, Dead) | (Dead, Right) |

Fig. 1: Battle of Wits (outcomes)
Immediately we see that this game is symmetric (or, more precisely, anti-symmetric), in that whatever doesn’t happen to one player happens to the other. In this way, this game is strategically equivalent to the game of matching pennies. This lets us know right away that the equilibrium outcome is for Westley to randomize 50-50 between the choices: were he to do anything else, Vizzini would have a better chance of winning if he played optimally, as he could just choose the cup that is less likely to have the poison. Similarly, if Vizzini were a priori less likely to choose a given cup, then that is where Westley should have put the poison.
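To make the 50-50 logic concrete, here is a minimal sketch of the Battle of Wits as matching pennies (the function and variable names are mine, not from the film): any deviation from 50-50 by Westley hands Vizzini a winning response, while 50-50 guarantees Westley an even game.

```python
# Payoffs are to Westley: +1 if Vizzini drinks the poisoned cup, -1 otherwise.

def westley_payoff(p_poison_v, p_drink_v):
    """Expected payoff to Westley when he poisons Vizzini's cup with
    probability p_poison_v and Vizzini drinks his own cup with
    probability p_drink_v."""
    # Westley wins exactly when the cup Vizzini drinks is the poisoned one.
    win = p_poison_v * p_drink_v + (1 - p_poison_v) * (1 - p_drink_v)
    return win - (1 - win)  # +1 for a win, -1 for a loss

# If Westley deviates from 50-50, Vizzini's best response exploits him;
# at exactly 50-50, Vizzini's choice doesn't matter.
for p in [0.3, 0.5, 0.7]:
    guaranteed = min(westley_payoff(p, d) for d in [0.0, 1.0])
    print(f"Westley poisons Vizzini's cup w.p. {p}: guarantees {guaranteed:+.1f}")
```

Only the 50-50 row guarantees a payoff of 0; the others leave Westley exposed to a −0.4 expected loss against an optimal Vizzini.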
Yet Vizzini does not reason this way. Instead, he attempts to make vacuous arguments about the psyche of Westley, namely, where Westley would have put the poison. He may be reasoning as if Westley is a behavioral type, but clearly, that’s not the best thing to do in a “battle of wits,” where presumably everyone is rational. Instead of making the game-theoretic choice based on mixed strategies, he tries to find an optimal pure strategy.
In the end, Vizzini takes his own cup, which indeed contains the poison. As it turns out, both cups contained poison: Westley had built up a tolerance to iocaine, and so it didn’t make any difference which cup was chosen. So in a way, Westley did make Vizzini indifferent between the two outcomes; it’s just that Vizzini was mistaken about which game was being played. In reality, no matter what, Vizzini would be dead, and Westley would win. This makes one wonder whether Vizzini should have suspected something was amiss when Westley proposed the game in the first place, and even more so when Westley apparently fell for Vizzini’s obvious trick of misdirection to get him to look the other way (see 3:04 in the video). But no matter – while Vizzini may have been smarter than Plato, Aristotle, and Socrates, he could have used some of the 20th century wisdom of John Nash.
When I was in middle school, I consumed a lot of typical nerd literature, like Richard Feynman’s “Surely You’re Joking, Mr. Feynman!” and Martin Gardner’s anthologies of mathematics puzzles from Scientific American. In the latter, I first encountered the Monty Hall Problem, which goes something like this:
Suppose you’re on a game show, and you’re given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what’s behind the doors, opens another door, say No. 3, which has a goat. He then says to you, “Do you want to pick door No. 2?” Is it to your advantage to switch your choice?
It turns out that, yes, it is always to your advantage to switch your choice. This is a solution that has been notoriously difficult for people to wrap their heads around. After all, when you picked a door, the probability of having picked the door with the car was 1/3, and after a door was opened, there would still be a car and a goat behind the remaining two doors – it seems as though the probability of choosing the door with the car ought to be 1/2 regardless of the door chosen.
The Monty Hall Paradox is in fact not a paradox at all, but rather just some clever sleight of hand. The trick is that people are drawn to the fact that only two of the three doors remain, and assume that the host’s having opened a door is favorable to the player. People tend not to realize that the game has imperfect information – the player does not know where on the game tree he is, whereas the host does. Additionally, people assume that the host has no stake in the game (and this is not unreasonable, because the problem does not explicitly describe a self-interested host; on the other hand, intuitively, we know that the host isn’t going to ruin the game by opening the door with the car). So, if we assume that the host is profit-maximizing and we model the problem as an extensive-form game with imperfect information, the conditional probabilities become easy to see.
Now, just for fun, we’ll assign some utilities to the outcomes. What is a goat worth? According to a popular Passover song in Aramaic, a (small) goat is worth about 2 Zuz, and according to the traditional Jewish prenuptial document, a wife is worth about 200 Zuz. So, a goat is worth about 1/100th of a wife. I asked my roommate, Anna, how many cars she thought a wife was worth, and she determined that a wife was worth three cars. By transitivity, then, a car is worth about 33 goats. (I think goats have become quite a bit less valuable since that song was written, or maybe goats back then were a lot better than our goats.) So, if the player wins the game, he will walk away with a utility of 33, and the host will walk away with the 2 goats.
In this game, the light gray branches are dominated because the host has no incentive to open the door that the player has already chosen, and the dark gray branches are dominated because, of the remaining two doors, the host would not open the one that has the car. We can tell that in the top branch, the host has 2 possible choices of doors to open, whereas in the lower two branches, the host is constrained to only one door (since, if the player has chosen a goat door, there is only one goat door left to open).
So, since the player has no idea after the first stage which door has the car, we assume he picks door No. 1 (as in the game). If he observes that the host opens door 3, he should consider the two cases in which the host opens door 3: in the world where the car is behind door 2, the host chooses door 3 100% of the time, and in the world where the car is behind door 1, the host chooses door 3 only 50% of the time. It’s actually twice as likely that we are on the 100% branch as on the 50% branch – and that’s the branch where the car is hidden behind the other door.
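The 2:1 odds in favor of switching are easy to verify by brute force. Here is a small simulation (my own sketch, with the player always picking door 0 by symmetry):

```python
import random

def monty_hall_trial(switch, rng):
    """One round: car placed uniformly, player picks door 0, host opens a
    goat door the player didn't pick, player then switches or stays."""
    car = rng.randrange(3)
    pick = 0
    # The host opens a door that is neither the pick nor the car.
    openable = [d for d in range(3) if d != pick and d != car]
    opened = rng.choice(openable)
    if switch:
        pick = next(d for d in range(3) if d != pick and d != opened)
    return pick == car

rng = random.Random(0)
n = 100_000
stay = sum(monty_hall_trial(False, rng) for _ in range(n)) / n
switch = sum(monty_hall_trial(True, rng) for _ in range(n)) / n
print(f"stay: {stay:.3f}, switch: {switch:.3f}")  # stay ≈ 1/3, switch ≈ 2/3
```

Staying wins about a third of the time, switching about two thirds, exactly as the branch-counting argument predicts.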
What if we know that the host has opened a door, but we don’t know which one? Then we can’t update our beliefs, because we never observe the host’s choice – we get no new information, and switching (to one of the other doors at random) would not help.
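A related way to see how the host’s knowledge drives the result is the “blind host” variant (my own illustrative sketch, not from Gardner): if the host opens one of the two unpicked doors at random and just happens to reveal a goat, then conditioning on that event, staying and switching are equally good.

```python
import random

rng = random.Random(1)
n = 200_000
kept = switched = valid = 0
for _ in range(n):
    car = rng.randrange(3)
    pick = 0
    opened = rng.choice([1, 2])   # host opens blindly; he may reveal the car
    if opened == car:
        continue                   # discard spoiled rounds (car revealed)
    valid += 1
    other = 3 - opened             # the remaining unopened, unpicked door (1+2=3)
    kept += (pick == car)
    switched += (other == car)
print(kept / valid, switched / valid)  # both ≈ 0.5
```

The asymmetry in the original problem comes entirely from the host knowingly steering away from the car; remove that knowledge and the 2/3 advantage evaporates.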
The classic question for high school seniors: “So, what are you doing next year? What colleges are you applying to?” Please, give them a break. They get this question way too much, and it only makes them more nervous about their futures. After all, they seem to be under the impression that where they go is a make-or-break issue, and bringing up the subject as if it is important just reinforces that fear.
But while we’re on the topic, where should they apply?
For simplicity, let’s suppose that each college (indexed by a number $i$) has a particular quality level, $q_i$, at which every potential student values that college. This can be through the quality of academics, the alumni network, the cost, the location, you name it. One might think that it would be best to apply to as many colleges as possible, since that maximizes your chances of getting in somewhere good. But, like everything in life, there is a cost to doing so. This can be the actual application fee, the time involved in putting together the materials, getting ETS to send your SAT scores, whatever. Let’s fix this cost for each college at $c_i$. We can relax these assumptions, but the qualitative result will still be the same.
Suppose there is a large number of people, and we restrict people to applying to one school. Yes, I know, this is an unreasonable assumption, but the qualitative results will again be the same even if we allow for applying to multiple schools; it just makes the math hairier. The probability of getting in is $s_i/n_i$, where $s_i$ is the number of slots that the school has and $n_i$ is the number of people who apply.
Let’s consider the Nash equilibrium. Since everyone values each school equally, we will expect that everyone will be indifferent between applying to the various colleges. Thus, we will have, for every college $i$, the same expected payoff $\bar{u}$:

$$\frac{s_i}{n_i} q_i - c_i = \bar{u}.$$

This can give us the relative admission rates of each school:

$$\frac{s_i/n_i}{s_j/n_j} = \frac{q_j}{q_i} \cdot \frac{\bar{u} + c_i}{\bar{u} + c_j}.$$
This equation in and of itself is informative. It shows that the admission rate to a school will be increasing in the cost of applying, all other things being equal. This makes sense: if you make it harder to apply, fewer people will do so, and this will drive up the admission rate the school needs to fill all of its slots. Similarly, an increase in the quality of the school drives down the admission rate, since more people will then want to go there, making it more competitive.
So, in summary, what should you do? You should apply to the school which maximizes $\frac{s_i}{n_i} q_i - c_i$, your expected benefit from applying there. Assuming that everyone else is being rational and doing the same thing, though, it won’t make much difference where you apply. That being said, this last result will no longer hold if not all people value different schools the same (though the trends for the relative admission rates will still hold), but that makes the analysis too complicated for a mere blog post.
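Here is a quick numeric sketch of the equilibrium with made-up schools (the numbers are purely illustrative): each applicant’s expected benefit, admit probability times quality minus cost, must come out the same everywhere, and we can solve for the applicant counts by bisecting on that common payoff.

```python
# q = quality, s = slots, c = application cost; N applicants in total.
schools = {"A": (100, 50, 2), "B": (80, 100, 1), "C": (60, 200, 1)}
N = 10_000

def applicants(u):
    # n_i solves (s_i / n_i) * q_i - c_i = u  =>  n_i = s_i * q_i / (u + c_i)
    return {k: s * q / (u + c) for k, (q, s, c) in schools.items()}

# Bisect on the common utility u so the applicant counts sum to N.
lo, hi = 0.0, 100.0
for _ in range(100):
    mid = (lo + hi) / 2
    if sum(applicants(mid).values()) > N:
        lo = mid   # u too low: applying is too attractive everywhere
    else:
        hi = mid
u = (lo + hi) / 2

for k, n in applicants(u).items():
    q, s, c = schools[k]
    print(f"{k}: {n:.0f} applicants, admit rate {s/n:.1%}")
```

Note how the admit rate comes out as $(u + c_i)/q_i$: raising a school’s application cost raises its admission rate, and raising its quality lowers it, matching the comparative statics above.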
Edit: For a more sophisticated theoretical and empirical model whose basic idea is the same, click here.
Jeff’s post last Sunday on Jewesses in Skirts got me thinking: how is it that we have Orthodox Jewish communities that are tolerant of pants-wearing by women, and communities that are not tolerant of pants-wearing, but rarely a community with large factions of each type? The question, of course, applies to more than just skirts worn by Jewish women – we can talk about many aspects of our culture, such as our changing views on LGBT people, or miscegenation, or a gold standard vs. a silver standard, using the same language.
We’ve established that, if the types of women in the previous post are static (that is, Nature or Circumstance assigned you to one group and you can’t change allegiances), it is optimal for the more conservative group to adhere to skirt-wearing, and for the other group not to bother. Those static proportions of types in the population affect how the “others” in Jeff’s game form their prior beliefs about which type a woman is based on her choice of clothing. But, what if the women could choose or change their ideology, and what if we consider the effects of judgement and peer-pressure?
In the following model, we can look at a scenario where the proportion of each type of player in the population is endogenous. Suppose that a new community forms, consisting of some random number of “conservative,” skirts-only types (Jeff puts them in class 1) and some number of “progressive” types who sometimes wear pants (Jeff puts them in class 2). This represents what we would expect to happen if everyone formed their own opinions and ideologies totally independently of everyone else. Each person will randomly encounter other members of the community on a one-on-one basis, and receive social payoffs from the encounter. If she encounters a likeminded person, they both feel validated in their choices, and if not, they feel judged. As before, we assume some disutility for a restriction on wardrobe.
Now, say that the population starts out with a fraction $p$ of progressive types and $1-p$ of conservative types. Then, assuming that the population is large, if you are one of the members, then of the people you meet, a fraction $p$ will be progressive and $1-p$ will be conservative. Therefore, if you are a woman who chooses to be conservative and wear only skirts, your expected utility in any one encounter is $6(1-p) - 2$ (say a likeminded meeting is worth 6, a mismatch 0, and the skirts-only restriction costs 2), and if you choose to wear pants, then your expected utility in any one encounter is $6p$. You would be indifferent if $6(1-p) - 2 = 6p$ — that is, if $p$ (the fraction of progressive types) is 1/3.
Maintaining $p = 1/3$ is incredibly difficult, because it is so sensitive to shocks. If for any reason $p$ becomes a little more or a little less — say, a contingent of pants-wearers suddenly moves in — the balance would tilt and one of the ideologies would start providing the better payoff, the whole population would start snowballing in that direction, and it would become the predominant convention (or evolutionarily stable state). That may be why communities with large factions of each type tend not to exist in real life: they are a lot like a flipped coin that lands on its edge.
So what kinds of things affect what $p^*$, the tipping point, is? The payoff matrix, obviously, and how I’ve assigned the payoffs. No one said that I must assign those particular numbers (and indeed, I don’t. The numbers that go into those matrices don’t really matter. What matters is their order of size and relative distance to each other. Try it: multiply all of the numbers by a positive constant, or add a constant to all of them. The solution should be the same.) What if the inconvenience of wearing only skirts is very large? (Imagine replacing the 2s with, say, 5s.) Then, $p^*$ (the tipping point) could be much smaller, and it would take a much smaller group of rebels to send the equilibrium going the other way. Issues like women’s suffrage are like this — they are so significant that a grassroots movement picks up momentum very quickly. If the inconvenience is less, then $p^*$ would be greater, and if the existing equilibrium is skirts-only/conservative, it would be harder to change. Equivalently, we can think about the effects of mutually judgemental behavior (making the 0s in the matrix more negative). If people are less tolerant when they meet the other type, conventions are harder to change. If they are more tolerant, change is easier.
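The coin-on-its-edge dynamic can be sketched with a tiny best-response-style simulation. The payoffs here are illustrative (a likeminded meeting worth 6, a mismatch 0, a skirts-only inconvenience of 2), which puts the tipping point at $6(1-p) - 2 = 6p$, i.e. $p = 1/3$; starting just below or just above it sends the population to opposite conventions.

```python
def step(p, rate=0.1):
    """Nudge the progressive share p toward whichever type does better."""
    progressive_payoff = 6 * p            # validated by fraction p of meetings
    conservative_payoff = 6 * (1 - p) - 2 # validated by 1-p, minus skirt cost
    if progressive_payoff > conservative_payoff:
        return min(1.0, p + rate * p * (1 - p))
    elif progressive_payoff < conservative_payoff:
        return max(0.0, p - rate * p * (1 - p))
    return p

for p0 in [0.30, 0.36]:
    p = p0
    for _ in range(2000):
        p = step(p)
    print(f"start {p0:.2f} -> long-run {p:.2f}")
```

From 0.30 the population snowballs to all-conservative, and from 0.36 to all-progressive: small shocks around the tipping point decide which convention wins.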
If you enjoyed the ideas in this post, you may also enjoy When in New York, do as the New Yorkers do, which describes a special, symmetrical case of the kind of game we’ve discussed here.
Let me explain. Among Orthodox Jews, many of the women refuse to wear pants – they will only wear skirts of knee length or longer. Yet the reason for this practice is not so clear. Many reasons are given by different people. I’ll list a few, along with a sentence about why I don’t think they make so much sense:
(i) Modesty: Pants are immodest because they are form-fitting to the leg. Yet there are many other parts of the body that equally may not be uncovered under Orthodox Jewish law, and for which equally form-fitting clothes are perfectly acceptable. Besides, no one said you had to be wearing skinny-jeans.
(ii) Men’s clothing: Under Biblical law, women may not wear men’s clothing. Yet by now, it is normal for women to wear pants. Indeed, most Orthodox rabbis agree that this is not the main reason.
(iii) Suggestive: Certain parts of the pants might be sexually suggestive. But if this were a problem for women’s pants, it would surely be a problem for men’s as well.
You might be asking yourself at this point: “What the heck does this have to do with game theory?” Well, I actually have a good reason for why many Orthodox Jewish women wear skirts, and it involves a perfect Bayesian equilibrium of an extensive form game.
We divide Orthodox Jews into two classes, each of which holds different communal standards. The first group adheres to a rather stricter standard, which may include certain norms about interactions with men, more stringencies regarding keeping kosher, etc. The second group is a little bit more laid back, though they may also be fully observant according to what they believe is necessary. I’m not making any claims about which one is better – I don’t want to go there – just bear with me.
Orthodox Jewish women in each class prefer others to recognize that they are in the correct class. This is perfectly understandable – one in the first class would not like others to offer food that didn’t meet their standards of keeping kosher, or would not appreciate certain advances from men; one in the second class might not appreciate being pressured into adhering to (from their perspective) unnecessary strictures. We therefore assign each class a payoff of C for being correctly labeled, and W for being incorrectly labeled, where C > W. For similar reasons, other Orthodox Jews would want to correctly label Orthodox Jewish women into these two classes.
To complete the model, we condition the beliefs of other Orthodox Jews on the signal of whether a given woman wears only skirts (we assume observers can tell if she wears pants even some of the time). If yes, they place her in the first class; if no, they place her in the second. Since wearing only skirts is a restriction on a woman’s fashion choices, we’ll assign it a modest loss of payoff, S, where C – W > S.
Given their belief structures, other Orthodox Jews will assume that if you wear only skirts, you’re in Class #1; if you sometimes wear pants, you’re in Class #2. Knowing that others hold this belief, the best strategy for an Orthodox Jewish woman is indeed to always wear a skirt if she is in Class #1, and not to bother if she is in Class #2. In this way, the beliefs of others are self-fulfilling in the dress code of Orthodox Jewish women.
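We can check that these strategies really are mutual best responses with placeholder numbers satisfying the two assumptions C > W and C – W > S (the particular values below are mine, chosen only for illustration):

```python
C, W, S = 10, 4, 3   # correctly labeled, wrongly labeled, skirt inconvenience
assert C > W and C - W > S

# Class 1 (stricter): skirts => labeled class 1 (correct); pants => class 2.
class1_skirts = C - S
class1_pants  = W
# Class 2: pants => labeled class 2 (correct); skirts => class 1 (wrong).
class2_pants  = C
class2_skirts = W - S

print(class1_skirts > class1_pants)   # True: class 1 prefers skirts
print(class2_pants > class2_skirts)   # True: class 2 prefers pants
```

Class 1 sticks with skirts because C – S > W (guaranteed by C – W > S), and Class 2 sticks with pants because C > W – S, so neither type wants to imitate the other: a separating perfect Bayesian equilibrium.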
Of course, this model isn’t always true – I’m sure there are some people who have strong reasons (aside from those mentioned here) to choose to deviate from the equilibrium path described in this model. Yet I think this actually, to a large extent, gives the most compelling, and most credible, reason for why this dress code exists.
There was a British game show in the early 2000s called “Touch the Truck,” where, according to the rules, the truck would go to whoever could keep his hands on it the longest. Similar games have been featured in a number of other shows, including Survivor, where the last person to keep his hand on the totem would win immunity, and That ’70s Show, in the following episode (start at 3:37 — you don’t need to watch the whole episode, unless you want to):
The three games that I’ve just mentioned differ in a couple of ways. In the British show, the contestants are able to take breaks, and are disqualified if they fall asleep, so it is primarily a contest of sleep-deprivation. In Survivor and That 70’s Show, bathroom breaks are not allowed, so the contest is much shorter and contestants might gain an advantage by wearing (ew) diapers or catheters. Obviously, pulling a stunt like that incurs costs — does the contestant really want to wear a diaper?
We can also think of Touch the Truck as a war of attrition or all-pay auction — any amount of time that the contestants spend holding their hands to the truck is a sunk cost. They’re not getting that time back. In most all-pay auctions or wars of attrition, the outcome can sometimes be that the players bid more than the object is worth — suppose you auction off a $20 bill, and everyone pays their bid regardless of whether they win. Then, you can imagine a scenario in which someone has bid up to $20, and another player who already has $18 in the game might bid $21 so as to only lose $1 rather than $18. If he wins by bidding more than the value of the object, we call it a Pyrrhic victory.
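The $20-bill example has a tidy textbook equilibrium worth a quick sanity check: in the two-bidder all-pay auction, each bidder draws a bid uniformly from [0, 20], and against such an opponent every bid in [0, 20] earns the same expected profit — zero — so no deviation helps. A sketch of that check:

```python
V = 20.0  # value of the bill

def expected_payoff(b, v=V):
    """Expected profit of bidding b against an opponent uniform on [0, v]."""
    win_prob = b / v          # opponent bids below b with probability b/v
    return win_prob * v - b   # win the bill w.p. b/v, but pay b regardless

for b in [0.0, 5.0, 12.5, 20.0]:
    print(f"bid {b:5.1f}: expected payoff {expected_payoff(b):+.2f}")
```

Every bid nets zero in expectation, which is exactly why all-pay bidding wars so easily escalate: all the surplus gets competed away, and any mistake pushes someone into Pyrrhic territory.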
However, there is a slight (and largely inconsequential) difference between Touch the Truck and most wars of attrition or all-pay auctions. In all-pay auctions, players can bid as high as they like, whereas in Touch the Truck, the human body can endure only so much. The world record for staying awake is something like 11 days, and for most of the last 8 days or so the person is basically not functioning. (The longest I’ve ever continuously stayed up is probably around 50 hours, and by then I pretty much couldn’t do anything.) The truck, however, we can say is worth at least $15,000 — a conservative estimate. So, even if someone were able to stay up for 11 days, and took another 4 days off to recuperate, he has still earned $15K in 15 days. Not too shabby. And, since none of the contestants can actually stay awake long enough for the time lost to be worth $15K, there will always be additional rent that each contestant can earn (the winner will never have a Pyrrhic victory).
So what’s the equilibrium outcome? Well, the winner’s outcome is similar to the winner’s outcome in second-price or English auctions. Since the contestants have differentiated abilities to stay awake, if you’re not the person who can stay awake the longest, then you might as well lose immediately. If you are the contestant who can endure sleep-deprivation the longest, then you should stay up as long as (or longer than) the second most fortitudinous contestant could. You should commit to this even if no one else enters the contest, because if you would quit any sooner, someone else might be able to outlast you and so would want to enter.
Of course, in this example, there is perfect information — everyone knows everyone else’s level of fortitude, and so on. The entertainment value comes from everyone’s thinking that they might actually be able to endure longer than anyone else, and of course, this comes at the expense of the contestants. Everyone would be better off if he could reliably and accurately communicate how long he could stay touching the truck, but most efforts at communicating this information are cheap talk (as Daniel does in the show).
 Generally we would have to account for different people’s different valuations of the truck, but, since this is a game show and the organization running the game wants to pick the most enthusiastic bunch of people it can find, we will safely assume that every contestant values the truck at market price.
You gotta keep him close to first. Otherwise he might as well walk to second. Don’t give him anything for free.
But what’s the best way to do that? How often should one throw over to the first baseman to make sure that he doesn’t steal?
Let’s model this through the payoff to the pitcher and the runner. The runner, obviously, wants to steal second, or advance an extra base on a ball put in play. The pitcher wants the exact opposite: he wants to prevent that from happening. To do so, he will attempt to pick off the runner at first if he gets too greedy.
So let’s now assign values to this, as a function of how big a lead the runner takes ($x$), and how often the pitcher throws the ball over to first ($t$). We say that the runner gets payoff $u$ from gaining an extra base, and does so with probability $p(x)$, with $p'(x) > 0$. Meanwhile, the pitcher gets payoff $v$ from picking off the runner, which occurs with probability $q(x,t)$, where $\partial q/\partial x > 0$ and $\partial q/\partial t > 0$ for $x > 0$, while $q(0,t) = 0$. In other words, the probability you get picked off is greater the bigger a lead you take, and the more often the pitcher throws over to first, assuming you’re not actually standing on first. Finally, since you win in baseball if and only if the other team loses, we model this as a zero-sum game: so far, the pitcher’s expected payoff is $v\,q(x,t) - u\,p(x)$, and the runner’s is the negative of that.
If this were our entire expression, it would be obvious that the pitcher should throw to first base as much as possible. In other words, he should try to pick off the runner at first at literally every single moment, and ignore the batter at home. So why don’t we see this?
The reason is that the pitcher is only human. Every time he throws the ball to first, there’s a chance that it will get away from him and sail over the first baseman’s head. If so, the runner will get his extra base anyway as the team on the field scrambles to recover the ball. This means that throwing more frequently creates more opportunities to screw up. Assuming each throw goes awry with the same probability $e$, this gives the runner an additional expected payoff of $u\,e\,t$. Thus, our expressions for the payoffs of the runner and the pitcher are:

$$U_R(x,t) = u\,p(x) + u\,e\,t - v\,q(x,t), \qquad U_P(x,t) = -U_R(x,t).$$
A Nash equilibrium will occur whenever the pitcher cannot do better for his team by throwing more or less frequently, and the runner cannot further improve his chances of taking an extra base without risking himself too much. Thus, in equilibrium, the first-order conditions hold:

$$u\,p'(x) = v\,\frac{\partial q}{\partial x} \quad \text{and} \quad u\,e = v\,\frac{\partial q}{\partial t}.$$
Of course, it’s possible that the pitcher is so good at picking people off that the runner doesn’t dare step off the bag, in which case we have an equilibrium which doesn’t satisfy the above two equations. Alternatively, the runner could be so good that no matter how often the pitcher tries to pick him off, he gets the extra base anyway, in which case the equations don’t hold either. But these cases are not normally found in the big leagues, so we’ll leave it as is.
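To see an interior equilibrium numerically, here is a toy version of the pickoff game with illustrative functional forms of my own choosing (not from the post): steal-success probability $p(x) = 0.5x$, pickoff probability $q(x,t) = xt$, payoffs $u = v = 1$, and per-throw error chance $e = 0.1$. A grid search confirms that the minimax and maximin values coincide, so a saddle point exists (at lead $x = 0.1$, throw rate $t = 0.5$).

```python
def runner_payoff(x, t, u=1.0, v=1.0, e=0.1):
    p = 0.5 * x          # chance of taking the extra base
    q = x * t            # chance of being picked off (zero when x = 0)
    return u * p + u * e * t - v * q

grid = [i / 100 for i in range(101)]   # leads and throw rates in [0, 1]
# Pitcher minimizes the runner's payoff; runner maximizes it.
minimax = min(max(runner_payoff(x, t) for x in grid) for t in grid)
maximin = max(min(runner_payoff(x, t) for t in grid) for x in grid)
print(round(minimax, 6), round(maximin, 6))   # equal => saddle point, value 0.05
```

This matches the first-order conditions: $u\,p'(x) = v\,\partial q/\partial x$ gives $0.5 = t$, and $u\,e = v\,\partial q/\partial t$ gives $0.1 = x$, so the pitcher throws over at a moderate rate and the runner takes only a modest lead.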