How to Win at “Shotgun” (or not)

I’ve known about this game for quite a while. I learned it in camp as a kid, and loved it at the time. In college, some of my friends rediscovered this game, and would play it at the dining hall table. So, I thought I would write about it in a post.

This game is a little bit more obscure than some of the others I’ve covered, so let me go through some of the details (which can be found here, anyway). Basically, you move simultaneously, by slapping your lap twice, then doing one of three moves: reload, shoot, or shield. Reloading gives your gun an additional bullet. Shooting means you’re trying to kill the other guy. Shielding blocks the (potential) bullet that the other guy is shooting at you. You win by shooting the other guy while he’s reloading; if you shoot each other simultaneously, the game ends in a draw.
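To make the rules concrete, here's a minimal sketch of how one simultaneous round resolves. The move and outcome labels are my own, not standard notation:

```python
def resolve(move_x, move_y, ammo_x, ammo_y):
    """Play one simultaneous round of Shotgun; returns (outcome, ammo_x, ammo_y)."""
    assert move_x in {"reload", "shoot", "shield"}
    assert move_y in {"reload", "shoot", "shield"}
    # Shooting spends a bullet; reloading adds one.
    if move_x == "shoot":
        assert ammo_x > 0, "cannot shoot with an empty gun"
        ammo_x -= 1
    if move_y == "shoot":
        assert ammo_y > 0, "cannot shoot with an empty gun"
        ammo_y -= 1
    if move_x == "reload":
        ammo_x += 1
    if move_y == "reload":
        ammo_y += 1
    # You win by shooting the other guy while he reloads;
    # simultaneous shots are a draw; shields block everything else.
    if move_x == "shoot" and move_y == "shoot":
        return "draw", ammo_x, ammo_y
    if move_x == "shoot" and move_y == "reload":
        return "x wins", ammo_x, ammo_y
    if move_y == "shoot" and move_x == "reload":
        return "y wins", ammo_x, ammo_y
    return "continue", ammo_x, ammo_y
```

For example, `resolve("shoot", "reload", 1, 0)` ends the game in X's favor, while `resolve("shield", "shield", 0, 0)` just continues, which is exactly the equilibrium we're about to discuss.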

Let’s assign the winner 1 point, the loser 0 points, and each player 1/2 a point in case of a draw. That seems pretty reasonable, right?

So, first thing, we can see that there’s a very simple subgame-perfect Nash equilibrium (SPNE): each player always shields, no matter how many bullets he or the other player has. That this is an SPNE is pretty clear: nobody can do better by ever doing something else, since the opportunity to successfully shoot the other guy never arises – his shield is always up. Thus the game ends in a draw (or goes on forever, you pick). It’s pretty boring, but it works.

The question is, can we come up with something more interesting? Can anyone actually ever win in an SPNE? As we’ll see, the answer is NO.

Notice that in any SPNE, the expected number of points at any time cannot be less than 1/2 for either player: if it ever were, that player could do better by simply shielding forever, guaranteeing himself at least 1/2 a point. Since the total number of points is 1, this means that each player can always expect exactly 1/2 a point.

This immediately implies that neither player will ever reload (with non-zero probability) when the other guy has a bullet in the chamber. If there were ever the slightest chance that he would, the other guy could shoot him, guaranteeing himself an expected number of points greater than 1/2: in the event that he was reloading, the other guy would win (and get one point), while if he turned out to be shielding, the other guy could just shield forever afterwards (and still get 1/2 a point). But this contradicts the point in the previous paragraph: in any SPNE, the expected number of points for each player is exactly 1/2. Thus, in any SPNE, one never reloads when there’s a possibility one could be shot.

But this means that no one ever wins. Sounds pretty boring. I don’t think I’ll be playing this anymore.


Are you going to Hell?

One often hears Bible-thumpers declaring that anyone who does not adhere to (a rather narrow) religious tradition will be going to Hell when he or she dies. Often, their theology includes a God that is entirely good. Thus, when pressed why such a good God would institute such an awful thing as Hell, they claim that the purpose is to discourage people from deviating from the appropriate religious path. Since, if one does not adhere to the aforesaid religion, one will end up having very bad things happen, one will have incentive to actually do what is right.

We can write out a game tree to express this idea:

Here, H>L, and C,R>0. We illustrate God’s preferences of how the world should be through the “payoff” He gets; this is independent of the (much more controversial) thesis that God somehow enjoys certain states of the world in a hedonistic fashion.

The problem here with this reasoning for the existence of Hell is that it does not constitute a subgame-perfect Nash equilibrium. We’ve assumed that God only does something bad (sending people to Hell) to prevent worse things from happening (mass sinning). Consider the possibility that 100% of people adhere to the proper religion, and then God decides to send all those who acted righteously to heaven (100% of all people), and all those who sinned (egregiously?) to Hell (0% – no one). Then God is indeed doing what He prefers most by sending all sinners to Hell; since all sinners go to Hell, it is actually a Nash equilibrium for God to send all sinners to Hell.

The thing is, if someone actually did sin, God would have to decide what to do then and there. Once the sinner is dead, sending him to Hell won’t (retroactively) prevent him from having sinned, so doing so would not serve its purported purpose. Moreover, Hell is unobservable to those who are still living, so it does not deter them from sinning either – their conditional beliefs are the same regardless of what God actually does to the sinner. Thus, even if 100% of the world’s population were sin-free, this scenario would not be a subgame-perfect Nash equilibrium, and the optimal thing for people to do would be to live it up: since the sinners would have already committed all of their sins, and there would be nothing to do about them after their deaths, the optimal thing for God to do would be not to send them to Hell. Indeed, since I don’t think there is any religion that denies somebody has existed who deserves to go to Hell, this isn’t even a Nash equilibrium – the past sinners who should’ve gone to Hell under this worldview would not go, since God (being good) would prefer not to send them there.
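The unraveling argument above can be sketched as a tiny backward induction. The payoff numbers here are illustrative assumptions of my own, not taken from the game tree: a good God prefers forgiving a past sinner to punishing him, and the person gains something worldly from sinning.

```python
# Assumed payoffs (hypothetical, for illustration only):
GOD_PAYOFF = {"forgive": 1.0, "hell": 0.0}   # a good God prefers not to punish
HEAVEN, SIN_GAIN = 1.0, 0.5                  # person's payoffs from heaven and from sinning

def god_move():
    # God moves after the sin is already committed; punishment can't undo it,
    # so at this node God simply picks His preferred action.
    return max(GOD_PAYOFF, key=GOD_PAYOFF.get)

def person_move():
    # The person anticipates God's sequentially rational response.
    if_sin = SIN_GAIN + (HEAVEN if god_move() == "forgive" else 0.0)
    if_righteous = HEAVEN
    return "sin" if if_sin > if_righteous else "righteous"
```

Under these assumed numbers, backward induction gives `god_move() == "forgive"` and hence `person_move() == "sin"`: the threat of Hell is not credible, exactly as argued above.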

If one is then to posit the existence of Hell, one will have to address the issue that it cannot be based upon some purpose of deterrence where God acts completely rationally. It will probably have to be instead based on something else, such as a deontological framework, which may include a notion of just deserts.


How to win big in eBay auctions, Part II: Winning stuff for free

So I promised I’d show how there’s a perfect Bayesian equilibrium in the eBay setup where all people pay only the price at which the bidding started, \pi_0. How’s that, you say? Won’t people want to bid up the price, if they’re willing to pay more?

The key to this equilibrium is, obviously, to remove the incentive to bid any higher. Since, in eBay, the highest bid up to any given point is not observed, we can describe perfect Bayesian equilibria based on beliefs for those situations if they were to come up. Thus, if any bidder bids something other than \pi_0 at any time except for the last moment, the other bidders can plausibly believe that this bidder has bid something ridiculously high, and so will respond by bidding up the price of the item, knowing that they have nothing to lose by doing so. This way, they can punish any bidder who deviates by bidding higher than \pi_0, so no one will do so.

For this equilibrium to work, we must also consider the timing and tie-breakers. eBay has structured its auction mechanism so that, once there is at least one bid on an item, all bids must be greater than the current price. This means that at most one person can bid \pi_0; the rest, if they bid at all, must bid more. We can resolve this issue by stipulating that all bidders try to bid \pi_0 at some time \tau, and one of these, chosen randomly, does so successfully. Meanwhile, anyone who bids at any other time (other than the last moment) is again believed to have bid something ridiculously high, and punished in the same way as one who bids something different from \pi_0. This ensures that everyone bids only at \tau, except perhaps at the last moment, when no one has time to respond to their bid.

We have to (finally) deal with what happens at the last moment. Suppose bidder i is the lucky guy who successfully submitted his bid earlier at time \tau. At the last moment, to discourage others from trying to outbid him, now he is the one who bids ridiculously high. Knowing that they cannot win the item, no one else tries to submit a bid at the last moment.
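The incentive logic can be checked with a toy numeric sketch, using made-up numbers of my own: n symmetric bidders who each value the item at v, a start price pi0 < v, conforming bidders winning at the start price with probability 1/n, and a detected deviator being bid up to his full valuation.

```python
def conforming_payoff(v, pi0, n):
    # On the equilibrium path: win at the start price with probability 1/n,
    # pay nothing otherwise.
    return (v - pi0) / n

def deviating_payoff(v):
    # Off the path: the others believe the deviator bid ridiculously high and
    # bid the price up to v, so his surplus is zero (he either pays his full
    # value or loses the item).
    return 0.0

v, pi0, n = 10.0, 1.0, 4   # hypothetical numbers for illustration
assert conforming_payoff(v, pi0, n) > deviating_payoff(v)
```

With these numbers, conforming yields an expected surplus of 2.25 versus 0 from deviating, so no bidder wants to bid above \pi_0.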

One can formally check that this is indeed a perfect Bayesian equilibrium. Though this is unlikely to ever happen in reality, this shows the lack of uniqueness of symmetric equilibria (in the sense that all people’s strategies are ex ante the same) in eBay’s setup, and that we can get a pretty sweet outcome given the right beliefs. Pretty cool, huh?


How did the chicken cross the road?

While reading an open newspaper, according to my host for lunch a couple of weeks ago. Apparently this is the most efficient way for a pedestrian to cross a street in Boston, where signs are few and confusing and drivers aren’t fond of traffic laws. (Sound like any other city we know?) An open paper is a big gray signal to the driver that you have no idea she’s there, and if she doesn’t want to run you over, she has to stop. It’s not unlike ripping out your steering wheel and waving it out the window in a game of Chicken.

The city of Philadelphia is aware of this, of course, and the Philly police are cracking down on pedestrians who text and walk. Clearly a cell phone is just not a visible enough signal.

Disclaimer: Play at your own risk, and look both ways. We do not endorse breaking traffic laws.


How to win big in eBay auctions: Part I

OK, I lied a little bit: I can’t guarantee that you will win lots of really awesome things for dirt cheap. But I did stay at a Holiday Inn Express last night. What I can provide, though, is a perfect Bayesian equilibrium strategy that will mean you are bidding optimally (assuming others are also bidding in a similarly defined manner). Basically, I wrote my senior thesis at Princeton on eBay, so I know how it works pretty well.

I’m not going to go through the exact details of how eBay works, since I assume if you care, you already know more or less about that (you can look here for more information). I’ll also omit most of the gory details of how exactly to rigorously demonstrate that these strategies mathematically form an equilibrium, since I assume most readers will care mostly about the practical implications. I hope to upload a link to my thesis, so if you want, you can take a look at that yourselves.

At first glance, eBay seems to work exactly like a sealed-bid second-price auction. Indeed, eBay itself suggests that one should always bid exactly how much one values the item up for sale. But there are two potential issues. First, there are likely to be multiple items of the same type. For example, suppose I collect stamps: there are likely to be multiple copies of the same stamp (unless it is extremely rare). Thus, I might want to avoid bidding on an earlier auction of a stamp, if I could get the same one a little later for a better price (we’ll assume no discounting, since it doesn’t much change the reasoning). Second, eBay complicates the general second-price auction setup by showing the price history; that way, it might be possible to infer how badly others want, say, the stamp, and use that information to one’s advantage.[1]

Fortunately, in the single-item case, the reasoning for second-price auctions works for eBay auctions as well: one should always bid exactly the value of the stamp (though it’s no longer a dominant strategy). There is also a well-established result for multiple items, first shown by Robert Weber, which is that if there are N items, and M bidders (where M>N, and each person wants exactly one item), then we can rank the bidders’ valuations from high to low as (V_{1},V_{2},...V_{M}); it is then a subgame-perfect equilibrium for a person with valuation V to bid
b_l(V) = E[V_{N+1} | V_{l+1} = V]     (a)
in the lth round of bidding; that is, one bids what one expects the (N+1)^{th} highest valuation to be, if one were to have the (l+1)^{th} highest valuation. For period N, this works out to bidding one’s valuation, so it fits nicely with the single-item case.
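Equation (a) can be sanity-checked by Monte Carlo under an assumed prior (my assumption, not the post's): valuations drawn iid Uniform(0,1). Conditional on V_{l+1} = V, the bidders ranked below have valuations iid Uniform(0,V), and V_{N+1} is the (N-l)-th highest of them; for this prior the conditional expectation works out to V·(M-N)/(M-l).

```python
import random

def weber_bid(V, l, N, M, samples=100_000):
    """Estimate b_l(V) = E[V_(N+1) | V_(l+1) = V] by simulation,
    assuming valuations are iid Uniform(0,1)."""
    k = N - l                 # rank of V_(N+1) among the bidders below V_(l+1)
    if k == 0:                # round N: one simply bids one's own valuation
        return V
    below = M - l - 1         # number of bidders ranked below V_(l+1)
    total = 0.0
    for _ in range(samples):
        # Conditional on V_(l+1) = V, lower valuations are iid Uniform(0, V).
        draws = sorted((random.uniform(0, V) for _ in range(below)), reverse=True)
        total += draws[k - 1]
    return total / samples
```

For example, with M=5 bidders and N=2 items, a bidder with V=0.8 in round l=1 should bid about 0.8·(5-2)/(5-1) = 0.6, and in round l=N=2 exactly his valuation, matching the single-item case.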

Can we generalize this to the eBay model? Fortunately, as I showed in my thesis, we can: all it takes is to construct a set of beliefs under which the bidders will ignore the previous bids of others, and bid as they would in the sealed-bid case. The best way to break the cases down is into those of two items, and those of three or more.

In the two-item case, given that, in the second period, everyone is just going to bid as in the single-item case anyway, they will bid as in the sealed-bid, two-item case at the last moment no matter what. We just need to make sure that no one screws things up by bidding earlier. We can ensure this by, say, constructing an equilibrium in which bidders believe that only people who value the stamp very highly would bid earlier than the last moment; so, if someone else has bid earlier, the rest of the bidders would assume they had lost anyway. Since one can’t see the top bid (only the current stamp price, based on the second-highest bid up to then), this belief is plausible. And if they’ve lost, they might as well just bid what they would otherwise, as given by equation (a), since bids are costless. This solves the two-item case.

In the three-plus-item case, we have to be a little more careful: if everyone just blindly bid as in equation (a), you might learn others’ valuations in earlier rounds, realize that you weren’t going to win at all if you continued bidding as in (a), and outbid others who valued the stamp more highly to “steal” it. To get around this, one can construct an equilibrium where the bids are staggered – those who want the stamp more bid earlier. Since equation (a) is monotonic in V, this means that no more than two bids can occur in any given period; those who are supposed to bid earlier cannot, since the item price is already greater than what they are willing to bid (eBay requires that all bids be greater than the current item price, since otherwise they are not relevant for determining the item’s price under eBay’s mechanism). By assuming plausible beliefs under which bidders ignore anyone who deviates from this strategy profile, we can ensure that all bidders adhere to it – they’ll have no incentive to deviate.

Do people actually bid as I described in the multiple-item cases? Obviously not – most people don’t even bid their valuations, let alone think things through this much. Nor does it seem that people time their bids as in the equilibria described here. Yet the basic idea – that one can ignore the bids of other people in determining how much one wants the item, and just bid up to the value suggested in equation (a) – continues to make sense. After all, does anyone actually try to memorize how much someone was willing to bid last time a stamp came up for auction? Come on. Since no one tracks bids like this, it’s safe to bid as described here in general.

Note that, unlike the sealed-bid case, the equilibrium described here is not the unique symmetric one. In fact, there is an equilibrium in which all items are sold for the start price, if sold at all. But we’ll get to that in part II.

[1]Indeed, this issue causes there to not be any pure-strategy, increasing bidding function for multiple-item auctions with bid revelations. See Cai, Wurman and Chao (2007).


The veto donation paradox

We think of the veto as a very powerful (perhaps even unfairly powerful) bargaining chip, but this is not always the case. Sometimes having a veto is not as good as giving it away.

In this example, you want to select a juror. Candidates arrive randomly — most are acceptable but mediocre for both sides, a few are great for one side and terrible for the other, and a few are pretty good for both.

This is a variation of the Secretary Game. The central question for secretary games is, “Since, once a candidate is rejected, he does not apply again, when should we stop interviewing?”

For simplicity, we assume that there are only three types:

Type (utility to X, utility to Y)        Probability of arrival
(b, b), where 1/2 < b < 1                1 - 2\epsilon
(1, 0)                                   \epsilon
(1 - \epsilon, 1 - \epsilon)             \epsilon

[1]

Each candidate is voted on as follows:

1. If both players reject the candidate, he is rejected.
2. If both players accept the candidate, he is accepted.
3. If one player accepts the candidate and one player rejects, then the candidate is accepted unless someone uses a veto.

This is a sequential game, so it is a game of perfect information. It is also Markovian, which means that if the candidate is rejected, we return to the same state in which we started.

Suppose neither side has any vetoes. Then, X always accepts (1,0) and Y always rejects this, so it is accepted (since there are no vetoes). Y always accepts (1- \epsilon, 1 - \epsilon) since it is Y’s best outcome, so it is accepted. X rejects (b,b), but Y accepts (b,b) because b > 1/2 and so it improves Y’s average. (If Y also rejected it, then Y’s payoff would be the average of 0 and 1- \epsilon.)

Therefore, the expected utility is:

U(X) = (1- \epsilon)(\epsilon) + (b)(1- 2\epsilon) + (1)(\epsilon)
U(Y) = (1- \epsilon)(\epsilon) + (b)(1- 2\epsilon)

If X has exactly one veto and Y doesn’t, then X would use up the veto the first time Y accepts (b,b), returning us to the no-veto starting point. This makes X slightly better off and Y slightly worse off:

U^*(X) = (1- \epsilon)(\epsilon) + (U(X))(1- 2\epsilon) + (1)(\epsilon)
U^*(Y) = (1- \epsilon)(\epsilon) + (U(Y))(1- 2\epsilon)

However, if Y has one veto and X doesn’t, then X would need to reject (1,0) (since Y would veto it, and X would end up with an expected payoff close to b). If (b,b) arrives, X and Y both reject, and if (1- \epsilon, 1- \epsilon) arrives, both accept. So, Y’s having the veto is actually better for both players than X’s having it (and, if X could, he should give his veto to Y). Since this arrangement guarantees a higher expected payoff for both sides, giving one side a veto can even be Pareto-improving.
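The three arrangements can be compared numerically. Here \epsilon = 0.01 and b = 0.7 are assumed values of my own; any 1/2 < b < 1 and small enough \epsilon > 0 give the same ordering.

```python
eps, b = 0.01, 0.7   # assumed parameter values for illustration

# No vetoes: (1,0) and (1-eps, 1-eps) are accepted, and Y accepts (b,b).
U_X = (1 - eps) * eps + b * (1 - 2 * eps) + 1 * eps
U_Y = (1 - eps) * eps + b * (1 - 2 * eps)

# X has one veto: X vetoes the first accepted (b,b), after which play
# continues as in the no-veto game.
Ustar_X = (1 - eps) * eps + U_X * (1 - 2 * eps) + 1 * eps
Ustar_Y = (1 - eps) * eps + U_Y * (1 - 2 * eps)

# Y has one veto: only (1-eps, 1-eps) is ever accepted, so both get 1-eps.
V_X = V_Y = 1 - eps

assert Ustar_X > U_X and Ustar_Y < U_Y   # X's veto helps X, hurts Y
assert V_X > Ustar_X and V_Y > U_Y       # Y's veto is better for BOTH players
```

With these numbers, the no-veto payoffs are roughly (0.706, 0.696), X's veto shifts them to about (0.712, 0.692), and Y's veto yields (0.99, 0.99) for both: the Pareto improvement claimed above.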

Extension: What happens if both sides have a positive, finite number of vetoes?

It’s easy for X to guarantee himself an expected payoff of 1- \epsilon. He can simply accept (1,0) and (1- \epsilon, 1- \epsilon) every time they appear and reject (b,b) until Y has only one veto left, then play as in the previous case. He can’t do any better, since there are no cases where Y has vetoes and (1,0) is accepted, and there are no cases where Y has no vetoes and X’s expectation is greater than 1- \epsilon.

——————–
[1] Mathematicians and other quantitative people talk about \epsilon (epsilon, pronounced either “EP-si-lon” in the US or “ep-SIGH-len” in the UK) a lot. We’ve certainly used it often. You can think of it as an arbitrarily small positive number, or “a number as small as you need it to be, but not 0.”

Example is based on Shmuel Gal, Steve Alpern, and Eilon Solan’s A Sequential Selection Game with Vetoes (2008)