Bidding up blood

Mexican drug cartels, which control the tremendously lucrative flow of drugs into the US, have over the past several years begun to kill civilians with impunity. Bodies are displayed in public, severed limbs have been tossed onto dance floors, and the total body count continues to rise.

Until recently, civilians and children were off limits under the cartels’ informal codes of honor. The willingness to kill civilians is a signal of ruthlessness, informing citizens and rival cartels alike of who is winning the war[1].

As of 2010, the Mexican drug cartels have formed two tenuous alliances against each other, one composed of the Juárez Cartel, Tijuana Cartel, Los Zetas Cartel and the Beltrán-Leyva Cartel, and the other, the Gulf Cartel, Sinaloa Cartel and La Familia Cartel [2].

To see how the two alliances might be bidding up the violence, we can first model the civilian killings as an all-pay auction. After all, the cartel incurs some cost for each civilian it kills regardless of whether it wins, and the alliance that has the most kills at the end of each period becomes the more feared of the two among civilians.

In the classic War of Attrition game, the only pure-strategy Nash equilibrium outcomes are that one player bids 0 and the other bids V, the value of the territory under dispute for the period. This implies that in any given time period we should see a large number of killings by one alliance and none by the other, and perhaps the territory would switch hands from period to period (as in one solution to the repeated Battle of the Sexes). The expected utility for each alliance should be 0. Alternatively, in a mixed-strategy equilibrium, each cartel draws from a probability distribution over [0,N] for when it will stop killing civilians. If this is a good model, then the increase in killings might be explained by a decrease in the cost of killing civilians (law enforcement is getting less effective).

In the war of attrition game, once both players have made a positive bid, any victory will be Pyrrhic — the expected payoff is negative. Consider the classic example of the all-pay auction for a $20 bill: if one player bids $20 and the other $0, they both get a payoff of 0. If one bids $20 and the other $2, then the player who bid $2 forfeits that $2 anyway and might as well bid $22 and win the money. But now the first player is out $20 — he would lose less if he could win the bill back with a total bid under $40. At some point one player should just take the hit and exit with a negative payoff.
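The zero-expected-payoff claim can be sanity-checked numerically. In the symmetric mixed-strategy equilibrium of a two-player all-pay auction for a prize worth V, each player bids uniformly on [0, V]; a quick Monte Carlo sketch (the function name and setup are mine, for illustration):

```python
import random

def all_pay_expected_payoff(v=20.0, trials=200_000, seed=0):
    """Average net payoff to player 1 when both players bid
    uniformly on [0, v] and pay their bids win or lose."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        b1, b2 = rng.uniform(0, v), rng.uniform(0, v)
        total += (v if b1 > b2 else 0.0) - b1  # all-pay: the bid is sunk
    return total / trials
```

The average comes out near zero, matching v·P(win) − E[bid] = 20·(1/2) − 10 = 0.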

So, the body count continues being bid up as long as both alliances continue to kill civilians on every turn — and this is in fact the case. One explanation might be that the killings are not simply a signal to the civilian population, but also a signal to the other alliance.

We can consider a three-period game:

  1. Each alliance finds out whether it is strong or weak
  2. Given the first, each sends a signal (kill many or kill few)
  3. Each decides whether to attack the other, or to defend. Nonaggression only occurs when both defend.

Each alliance must assert that it is the “Strong” type rather than the “Weak” type in order to maintain a foothold in its territory. If a strong alliance believes the other alliance is weak in a period, it should attack and take over, since the weaker alliance cannot afford to retaliate.

Alliance i is the row player; payoffs are listed as (i, j).

            j: Attack    j: Defend
i: Attack    -2, V        -2, V
i: Defend    -2, V         0, 0
Fig. 1: Alliance i is weak, alliance j is strong

            j: Attack    j: Defend
i: Attack    -1, -1       -1, -1
i: Defend    -1, -1        0, 0
Fig. 2: Both alliances are weak

            j: Attack    j: Defend
i: Attack    -2, -2       -2, -2
i: Defend    -2, -2        0, 0
Fig. 3: Both alliances are strong

We see that if you are weak, your subgame perfect equilibrium strategy in the last stage is to defend regardless of your opponent’s strength. What signal should you send? Since killing might be costly for a weak alliance, a strong alliance will never send a signal that it is weak (killing few people). Therefore, if the opponent receives the signal that few civilians were killed, he knows that this is a credible signal of weakness.
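The claim that a weak alliance defends in the last stage can be checked by brute force from the payoffs in Figs. 1 and 2 (a small sketch; the dictionary encoding is mine):

```python
# Weak alliance i's payoffs from Figs. 1-2, indexed by
# (opponent type, i's action, j's action); "A" = attack, "D" = defend.
payoff_weak_i = {
    ("strong", "A", "A"): -2, ("strong", "A", "D"): -2,
    ("strong", "D", "A"): -2, ("strong", "D", "D"): 0,
    ("weak",   "A", "A"): -1, ("weak",   "A", "D"): -1,
    ("weak",   "D", "A"): -1, ("weak",   "D", "D"): 0,
}

def weakly_dominates(action, other):
    """True if `action` never does worse than `other` and does strictly
    better against at least one opponent type/action combination."""
    cells = [(t, aj) for t in ("strong", "weak") for aj in ("A", "D")]
    no_worse = all(payoff_weak_i[(t, action, aj)] >= payoff_weak_i[(t, other, aj)]
                   for t, aj in cells)
    better = any(payoff_weak_i[(t, action, aj)] > payoff_weak_i[(t, other, aj)]
                 for t, aj in cells)
    return no_worse and better
```

Defend weakly dominates Attack for the weak type, regardless of the opponent's strength.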

A weak alliance might send a signal from the set {many kills, few kills}. Since the players are in identical situations at t = 0, the probability p that each will be strong, the probability q that a weak alliance will send a false (high-kill) signal, and the additional cost c to a weak alliance of sending a high-kill signal will be the same for both. The expected payoff for a weak alliance that sees a high-kill signal is

(q)[μ_i(strong|many)·U(strong, many, defend) + μ_i(weak|many)·U(weak, many, defend) - c] + (1-q)(-2)

= (q)[μ_i(strong|many)·(μ_j(strong|many)(0) + μ_j(weak|many)(-2)) + μ_i(weak|many)(0) - c] + (1-q)(-2)

= (q)[μ_i(strong|many)·μ_j(weak|many)(-2) - c] + (1-q)(-2)

It turns out that if sending a false signal is costless, then q is maximized at 1 and we have a pooling equilibrium. If it is costly enough, then there is a separating equilibrium (weak alliance sends low signal, strong sends high signal). What it means for our cartels is that as long as there is a pooling equilibrium, both sides will definitely enter a war of attrition and bid up the body count even beyond their valuations for the territory. It is when the cost of killing just one civilian becomes high enough that it creates a separating equilibrium that the weak alliance doesn’t kill anyone, and the strong alliance kills one[3]. Needless to say, without an honor code to raise this cost, and given the state of Mexican law enforcement, this is quite unlikely.

Thanks to Jeffrey Kang for bouncing ideas around with me.
[2] “Violence the result of fractured arrangement between Zetas and Gulf Cartel, authorities say”. The Brownsville Herald. March 9, 2010. Retrieved 2010-03-12.
[3] Why one? Because people are discrete. If the separating equilibrium were at 2 kills, then 1 kill might be a possible low signal, in which case the players may enter a war of attrition anyway.

How to win big at Chinese Auctions

(Disclaimer: I wrote my Senior Thesis at Princeton about eBay’s auction mechanism, so I’m kinda obsessed with auctions)

So, there are lots of different types of auctions out there, and lots of different auction houses. There’s Christie’s, a major art auctioneer; eBay, the #1 online auction site; English auctions, Dutch auctions – the list goes on and on. As one would then expect, the academic literature on auctions is huge. To attempt to summarize it here would be impossible. For the barest of an overview, check out Wikipedia’s page. I can, though, provide a couple of brief sentences: in the first-price auction (where you pay what you bid if you win), the symmetric Nash equilibrium strategy is to bid what you expect the bidder with the next-highest valuation to value the object at. In the second-price auction (where you pay the next-highest bid), you should always bid exactly how you value the object, no matter what anybody else does.

Surprisingly, there is virtually no literature on “Chinese auctions,” even though this type of mechanism is not at all uncommon. The Wikipedia page says that Chinese auctions are “typically featured at charity, church festival and numerous other events.” I know that every year the local Mikvah (Jewish ritual bathhouse) holds a Chinese auction to raise money (my family has actually done quite well with winning stuff, so I don’t know how they would make money were the items for auction not donated, but that’s another story). Yet a Google Scholar search turns up only one relevant article (which you can’t even access, and which has been cited only twice). Compare that to, say, second-price auctions, which generate hundreds, if not thousands, of relevant hits. Not 100% sure why that is.

Anyway, I guess I should describe just how a Chinese auction works. Basically, an item/good is up for sale (say, two tickets to that thing you love), which different people can value differently. I suppose we can make things simple here by having each person value the object independently (not depending on how others do), with a uniform distribution over [0,M]. Each person i of the total of N people buys a certain number of tickets x_i, and a ticket is chosen at random from those bought, selecting the winner. Thus the probability that person i wins the item is x_i/T, i.e. the number of tickets they bought out of the total number of tickets sold, T = x_1 + … + x_N. Again for simplicity, we assume that tickets are divisible (any nonnegative quantity may be bought), and the cost is $1/ticket.

Let’s assume for now that if you like the item, you’re going to buy more tickets. Makes sense – you’re willing to invest more to ensure that you win. Let’s just check to see that we can find a Nash equilibrium with people following a strategy like this one.

At equilibrium, no person wants to buy any more or any fewer tickets – they are best off buying exactly x_i(v_i), based on how much they like the good. So, they cannot improve their expected payoffs by buying a different number of tickets. That is, for all i, x_i maximizes

(x_i/T)·v_i - x_i, where T = x_1 + … + x_N.

This is way too complicated to try to analyze directly. So let’s make things simpler. Say there are K other tickets in the pot. Then one will want to buy x tickets so that

(x/(x + K))·v_i - x

is maximized; setting the derivative to zero gives the first-order condition K·v_i/(x + K)² = 1.

Thus, when the number of other tickets in the pot is known to be K, it is best to put in exactly √(K·v_i) - K tickets. Of course, this value could be negative – in which case, it would be best to buy no tickets at all! This is because the cost of any ticket is greater than the return one can expect from it, and one might as well sit out the auction. Notice that this gives us the characteristic we wanted – people who want the good more will buy more tickets.
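Under the assumptions above (divisible tickets at $1 each), the closed form can be checked against a brute-force grid search; a quick sketch, with illustrative numbers of my choosing:

```python
import math

def payoff(x, K, v):
    """Expected payoff from holding x tickets when K other tickets
    are in the pot and the prize is worth v ($1 per ticket)."""
    return (x / (x + K)) * v - x if x > 0 else 0.0

def best_response(K, v):
    """Closed form from the first-order condition K*v/(x+K)^2 = 1,
    truncated at zero when even one ticket isn't worth buying."""
    return max(math.sqrt(K * v) - K, 0.0)

K, v = 100.0, 400.0
x_star = best_response(K, v)   # sqrt(100 * 400) - 100 = 100 tickets
# A fine grid over [0, 400] should agree with the closed form.
grid_best = max((i * 0.01 for i in range(40001)), key=lambda x: payoff(x, K, v))
```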

Now, let’s try the case where each person knows exactly how much everyone else likes the good. If there are T total tickets, where T = x_1 + … + x_N, then, substituting K = T - x_i into the formula above, we can rewrite the optimal number of tickets that person i purchases as

x_i = T - T²/v_i.

Thus we immediately see that only those people who value the object at more than T, the total value of tickets in the pot, will actually submit any tickets at all. We’ll assume that’s the case for everyone for simplicity’s sake; otherwise, we can just ignore those people who don’t want to buy anything.

Summing the N equations of the above form, we get

T = N·T - T²·Σ_i(1/v_i), and hence T = (N - 1)/Σ_i(1/v_i).

From there, it is easy to plug T back into the equations x_i = T - T²/v_i to solve for each x_i.

Example: Suppose there are two people going for an iPad (I’m still a PC person, but whatever). The first values it at $1000, while the second at $500. By the arguments above, we plug in v_1 = 1000, v_2 = 500, and N = 2, and get

T = (2 - 1)/(1/1000 + 1/500) = 1000/3 ≈ 333.3 tickets, x_1 = T - T²/1000 ≈ 222.2, and x_2 = T - T²/500 ≈ 111.1.
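The example can be reproduced in a few lines (a sketch assuming, as above, that every listed player values the prize above T and so participates):

```python
def chinese_auction_equilibrium(values):
    """Full-information equilibrium ticket counts: total tickets
    T = (N-1)/sum(1/v_i), and each player buys x_i = T - T^2/v_i."""
    n = len(values)
    T = (n - 1) / sum(1.0 / v for v in values)
    return T, [T * (1 - T / v) for v in values]

T, (x1, x2) = chinese_auction_equilibrium([1000.0, 500.0])
# T = 1000/3 ~ 333.3, x1 ~ 222.2, x2 ~ 111.1; the x_i sum back to T.
```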

One more note: when everyone’s valuations are known, it is still a dominant strategy to submit your valuation as your bid in the second-price auction. This yields a revenue (on average) of ((N - 1)/N)·v_max, where v_max is the highest appraisal of the item. Yet here, the ticket sales are (at most) just (N - 1)/N times the harmonic mean of how much everyone values the item. Indeed, when we take a second look at how many tickets each person is willing to buy, we see that the total number of tickets must be less than how much at least two people value the item, which is less than the amount paid by the winner in the second-price auction. Thus it appears that Chinese auctions yield lower revenues to the auctioneer than second-price auctions. Perhaps this is not the best way to run a charity auction. Something to keep in mind; maybe I’ll look more into it at a later point.
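To put a rough number on the revenue gap, here is a small simulation of my own: it finds the participating set by repeatedly dropping anyone whose value is not above the implied ticket total T, then compares T with the second-highest valuation, which is the second-price revenue when valuations are known.

```python
import random

def chinese_auction_revenue(values):
    """Total ticket sales T in the full-information equilibrium:
    keep the highest-value players, dropping anyone whose value is
    not above the implied total T, until the set is consistent."""
    players = sorted(values, reverse=True)
    while len(players) >= 2:
        T = (len(players) - 1) / sum(1.0 / v for v in players)
        if players[-1] > T:          # everyone remaining wants to play
            return T
        players.pop()                # lowest-value player sits out
    return 0.0                       # a lone interested player buys ~0 tickets

rng = random.Random(1)
lower_every_time = all(
    chinese_auction_revenue(vals) < sorted(vals)[-2]  # second-price revenue
    for vals in ([rng.uniform(1, 1000) for _ in range(5)] for _ in range(1000))
)
```

In every random draw the Chinese auction raises strictly less, as the argument in the text predicts.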

Marli will write another post later this week.

(Final disclaimer: the arguments in this post constitute a sketch of an argument, not a rigorous proof. As such, the results here should be considered tentative, and this post should not be construed to be the final word on the subject.)

When in New York, do as the New Yorkers do.

I was recently walking through a busy section of midtown Manhattan with Jeffrey and another friend, and, as almost all foot-travelers on busy sidewalks do, we walked on the right. Now and again there would be the odd tourist, camera around neck, standing blissfully oblivious in the sidewalk, taking in the sights of Times Square as the traffic flows around him [1].

Now if you live in North America, you most likely drive on the right side of the road. It’s the law. Likewise, in the UK, Japan, India, Australia, and a handful of other countries, you would drive on the left. There are no such laws for pedestrians, and indeed there is no need — for the most part, we follow the convention.

The convention is an equilibrium — a focal point in a coordination game. Americans could just as well all walk (or drive) on the left with no ill consequences. Imagine that you are on a sidewalk with a number of other pedestrians, who all walk on the right. You could walk on the left, but even if the oncoming traffic were not shooting you glares of death, you would waste precious time dodging them. Your optimal strategy is then to walk on the right, the path of no resistance. The same is true for each other pedestrian on that sidewalk, whose situations are (with some abuse of terminology) symmetric.

Tourists, who happen to be abundant in the theatre district, add a random element to the game. Pedestrian conventions are nowhere near as well established in less-trafficked locales, since, really, walking on either side of the street is equally good when encounters with other pedestrians are infrequent. Even when we introduce this random factor, traffic tends toward an equilibrium. Imagine a block on which roughly equal numbers of pedestrians travel in each direction on both sides of the sidewalk: if even slightly more people follow the right-hand traffic rule (or the left-hand one), every pedestrian gains an advantage from following that rule.

In other parts of the city, composed almost exclusively of tourists, there may exist a third equilibrium, an unstable knife-edge equilibrium in which every pedestrian randomly chooses between walking on the left or on the right (0.5, 0.5). Since every other pedestrian is doing the same, it is not advantageous for any one pedestrian to change her strategy (the options are equally bad). As soon as a large enough group of convention-following natives joins the sidewalk that the change in proportion is locally perceivable, the traffic is tipped toward one of the pure strategy equilibria.
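A toy best-response dynamic (entirely my own construction, not from the post) illustrates the knife edge: let r be the fraction walking on the right, and suppose pedestrians drift toward whichever side currently carries more same-direction traffic.

```python
def step(r):
    """One round: pedestrians drift toward the side that currently
    carries more same-direction traffic (r = fraction walking right)."""
    return r**2 / (r**2 + (1 - r)**2)

def settle(r, rounds=50):
    """Iterate the adjustment dynamic from a starting mix r."""
    for _ in range(rounds):
        r = step(r)
    return r
```

Starting exactly at r = 0.5 the mix never moves, but starting at 0.51 the dynamic converges to everyone walking on the right, and at 0.49 to everyone walking on the left.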

In fact, we found ourselves on a block along Broadway on which more people seemed to be walking on the left than on the right, and, since we wanted to escape an impending storm as soon as possible, we walked on the left. But no sooner had a long gap appeared on the right than the two people immediately ahead switched, one after the other, to walking on the right, and, perceiving this and the oncoming right-walkers approaching from the next block, my companions and I followed suit. Convention won out again.


[1] If you do this downtown during rush hours, you will be stampeded.

“Why does the supermarket only carry Tree Crap??”

I’ve often noticed that the supermarket next door to my parents’ home has very poor inventory control. They buy huge quantities of brands that not many people buy (seriously? half an aisle of Goya beans?), while they quickly run out of their meager supplies of the good stuff and fail to replace it – obviously, if something doesn’t sell much, they shouldn’t carry much of it – and so the cycle of unnecessarily low sales continues.

So why the heck does this happen? How are the store managers so bone-headed as to think that people actually want to eat Goya beans? Why can I only buy Tree Crap orange juice when I would like Tropicana? Don’t they want to make more money?

Perhaps we could explain the problem as one of an extensive-form game, in which the first period sees the consumer choosing whether or not to buy the product; the second period sees the seller keeping or replacing the product with a different brand; intuitively, the seller will choose to keep the same brand if the consumer buys in the previous round, and replace otherwise. We then repeat this indefinitely, with a discount factor δ (0 < δ < 1). For the lay people, the discount factor indicates how much one cares about future times one will have to consume Tree Crap or Tropicana.



We don’t assume that the seller is all stupid, in that they know which one commands a greater price and still won’t sell it, but that they are only mostly stupid; and there is a big difference between mostly stupid, and all stupid. We assume that they can only tell how much people like the item by whether they are willing to buy it when it is sold. Now, if they actually had the brains to do a SURVEY or something (God, why can’t they do something so simple?), then this would be a moot point, as they could easily figure out which was more liked, but as it is, they can’t figure that out since they ain’t got no brains. (Did someone say brains? mmm….)

Let the net value of Tree Crap to the consumer be v_C, and the net value of Tropicana to the consumer be v_T, with v_C < v_T. If the discount factor δ is sufficiently small, so that the payoff to the consumer from buying the Tree Crap every period,

v_C + δ·v_C + δ²·v_C + … = v_C/(1 - δ),

is greater in each period than from sitting it out and waiting for the next time for Tropicana,

δ·v_T + δ²·v_T + … = δ·v_T/(1 - δ),

that is, if δ < v_C/v_T – ya just gotta get yer orange juice, even if it’s Crappy.
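The geometric sums are easy to check numerically (v_c, v_t, and delta mirror the symbols in the text; the function names are mine):

```python
def present_value(per_period, delta):
    """PV of receiving `per_period` every period, starting now."""
    return per_period / (1 - delta)

def buys_tree_crap(v_c, v_t, delta):
    """Buy now iff v_c/(1-delta) > delta*v_t/(1-delta),
    which simplifies to delta < v_c/v_t."""
    return present_value(v_c, delta) > delta * present_value(v_t, delta)
```

With v_c = 1 and v_t = 3, an impatient shopper (delta = 0.25) buys the Tree Crap, while a patient one (delta = 0.5) sits out and waits for Tropicana.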

Now of course, there is more than one consumer involved in the supermarket, so the model above isn’t quite right. Besides, if there were only one consumer, the supermarket wouldn’t be able to make much of a profit now, would it?

So let’s change what happens in the first period to adapt to these circumstances. We still consider two brands, Tree Crap and Tropicana, whose values, v_C and v_T (respectively), to consumers are the same across all N of them. We set it so the seller only provides the same brand in the next period if at least K of the consumers buy the item, where 0 < K < N. In non-technical terms, this means that the seller only keeps on selling Tree Crap if enough people keep on buying it.

Consider the following three cases:

(1) Suppose fewer than K - 1 other consumers are willing to purchase Tree Crap, and the rest refuse to stoop to that level. Then from the perspective of the individual consumer, he might as well purchase Tree Crap in that period, since his purchase cannot push the count to K and so will not lead to the seller providing Tree Crap at the next opportunity to buy orange juice.

(2) Suppose exactly K - 1 other consumers are willing to purchase Tree Crap. Then the situation for the individual consumer is exactly the same as in our initial case with only one consumer, since he makes the difference between the seller offering Tree Crap or Tropicana in the next period. Thus he will buy if and only if v_C > δ·v_T, just as before.

(3) Suppose at least K other consumers are willing to purchase Tree Crap. Then the individual consumer might as well buy the Tree Crap, since he’s doomed to more Tree Crap next time, too.

Notice that situation (3) does not depend on δ. Thus, no matter how much one cares about future opportunities to buy orange juice, it is a Bayesian Nash equilibrium to buy Tree Crap in every period if everyone else is, too. Which of course means that the seller will keep on selling it. Why can’t they just do a frikkin’ survey???
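The three cases can be written out as a single best-response rule (a sketch under the model's assumptions; the parameter names are mine):

```python
def best_response_buy(others_buying, K, v_c, v_t, delta):
    """Whether buying Tree Crap this period is a best response,
    given how many OTHER consumers buy (the three cases above)."""
    if others_buying >= K:
        return True                 # (3): Tree Crap returns regardless, so grab value now
    if others_buying == K - 1:
        return v_c > delta * v_t    # (2): pivotal, so the one-consumer condition applies
    return True                     # (1): too few buyers to matter; might as well buy
```

With K = 10, v_c = 1, v_t = 3, delta = 0.5, the consumer buys in cases (1) and (3) but not when pivotal, confirming that only case (2) depends on patience.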

How We Learned to Stop Worrying and Love the Game.

It is the mid-1950s, and the United States and Soviet Union are in the midst of a nuclear arms race. In the “War Room,” the President, a general, and an eccentric, wheelchair-bound genius with a thick European accent debate the merits of a first-strike attack that would obliterate all of the Soviet Union before it can retaliate. Here, fiction diverges from reality.

Indeed, a Hungarian wheelchair-confined mathematician on the verge of chemotherapy-induced dementia, John von Neumann, was transported to the White House to advise President Eisenhower[1]. Von Neumann, a founding father of game theory, believed that a first-strike that destroyed the Soviets before they could build an H-bomb was the key to ending the nuclear threat as well as allowing the US to maintain its position as the only nuclear superpower[2].

In Stanley Kubrick’s Dr. Strangelove, the Soviets have a deterrence plan — a Doomsday Device that automatically activates and destroys life on Earth for 100 years if Russia is bombed. The device also cannot be disarmed, and so by building such a machine, the Soviets have made a “completely credible and convincing” threat for deterrence[3]. However, it is revealed by the Soviet ambassador that no one yet knows of the device, since the Soviet Premier had wanted to unveil it with fanfare.

The Americans also have a retaliatory measure in place, “Wing Attack Plan R,” which allows field commanders to bomb the Soviet Union in case Washington is destroyed by a Soviet first strike. It happens that Brigadier General Jack D. Ripper believes that Communists are poisoning the water and, knowing nothing of the Doomsday Device, orders his nuclear-armed B-52s to attack. He too has effected measures to prevent the recall code from being obtained — and even when it is finally obtained and broadcast, one of the bombers has a defective radio and cannot be reached. Major Kong rides the bomb as it falls from the plane, and mushroom clouds erupt around the globe.

“The whole point of having a Doomsday Machine is lost if you keep it a secret. Why didn’t you tell the world?”

The nuclear game of Dr. Strangelove shares characteristics with the game of Chicken, where two vehicles accelerate toward each other and the loser is the driver who swerves. If neither swerves, mutual destruction results:

            Swerve       Straight
Swerve      Tie, Tie     Lose, Win
Straight    Win, Lose    Crash, Crash
Fig. 1: A payoff matrix of Chicken

            Swerve       Straight
Swerve      0, 0         -1, +1
Straight    +1, -1       -10, -10
Fig. 2: Chicken with numerical payoffs[4]
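Besides its pure equilibria, Chicken also has a symmetric mixed equilibrium, found by making each driver indifferent between swerving and going straight. A small sketch using the Fig. 2 payoffs (exact arithmetic via fractions; the encoding is mine):

```python
from fractions import Fraction

# Row player's payoffs from Fig. 2: payoff[my_action][their_action],
# with action 0 = Swerve, 1 = Straight.
payoff = [[Fraction(0), Fraction(-1)],
          [Fraction(1), Fraction(-10)]]

def equilibrium_straight_prob():
    """Probability q of Straight that leaves the other driver
    indifferent between Swerve and Straight (symmetric mixed NE)."""
    # Swerve payoff:   (1-q)*payoff[0][0] + q*payoff[0][1]
    # Straight payoff: (1-q)*payoff[1][0] + q*payoff[1][1]
    # Setting them equal and solving for q:
    num = payoff[1][0] - payoff[0][0]
    den = (payoff[1][0] - payoff[0][0]) + (payoff[0][1] - payoff[1][1])
    return num / den

q = equilibrium_straight_prob()   # each driver goes straight with prob. 1/10
```

Making the crash payoff more catastrophic pushes the equilibrium probability of going straight even lower, which is the intuition behind deterrence.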

The only pure strategy equilibria for Chicken are (straight, swerve) and (swerve, straight). Likewise, in Dr. Strangelove‘s game of nuclear chicken, the pure strategy equilibria have one country as the sole nuclear power and the other posing no threat:

            Disarm       Bomb
Disarm      Tie, Tie     Lose, Win
Bomb        Win, Lose    Armageddon, Armageddon
Fig. 3: Nuclear Chicken

As can be seen, the game might be won by either the US or USSR by striking first. One strategic move that can be made in Chicken is commitment to a credible threat (called Brinkmanship) — e.g. you could rip out your steering wheel and throw it out the window, demonstrating that your car must go straight and that your opponent must swerve to avoid mutually assured destruction.

This is the strategy that the USSR attempts to play in Dr. Strangelove, and it would have been effective if only the US and USSR both had known that the other had effectively ripped out their steering wheels. And even so, destruction of all life would not have been assured but for one deranged General Ripper’s conspiracy theories.

Game theory is often criticized for its flippancy toward irrationality. What about the General Rippers of humanity, critics ask? The whole point is far from lost if we keep it a secret, because rational choices are by their nature not secret; they can be teased out and brought into the light. Why not tell the world?


[1] Paul Strathern, Dr Strangelove’s Game: A Brief History of Economic Genius. There’s also evidence that Dr. Strangelove was modelled on Wernher von Braun, the former Nazi inventor of the ballistic missile (Jeff).
[2] This is known as a first-mover advantage.
[3] Dr. Strangelove in the film.
[4] Example from Wikipedia.