Why do women (almost) never ask men on dates?

This is something I’ve asked a few people about. It seems odd that, in our modern, post-feminist age, it is almost always men who do the asking out. This is not so good for either men or women. For men, it puts a lot of pressure on them to make all of the moves. For women, I cite Roth and Sotomayor’s classic textbook on matching, which shows that, though the outcome from men always choosing partners is stable, it is the worst possible stable outcome for women. That is, women could get better guys to date if they made the moves.
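The Roth–Sotomayor point is easy to see in a toy matching market. Below is a sketch of the deferred acceptance (Gale–Shapley) algorithm on made-up preferences (the names and preference lists are purely illustrative, chosen so that each man's favorite woman ranks him last):

```python
def deferred_acceptance(proposer_prefs, receiver_prefs):
    """Proposer-proposing deferred acceptance (Gale-Shapley).
    Each prefs dict maps a name to a list of names, best first."""
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in receiver_prefs.items()}
    next_choice = {p: 0 for p in proposer_prefs}  # index of next name to propose to
    engaged = {}                                  # receiver -> proposer
    free = list(proposer_prefs)
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in engaged:                      # receiver unmatched: accept
            engaged[r] = p
        elif rank[r][p] < rank[r][engaged[r]]:    # receiver prefers p: trade up
            free.append(engaged[r])
            engaged[r] = p
        else:                                     # rejected: propose again later
            free.append(p)
    return {p: r for r, p in engaged.items()}

# Illustrative preferences: each man's favorite woman ranks him last.
men   = {"A": ["X", "Y", "Z"], "B": ["Y", "Z", "X"], "C": ["Z", "X", "Y"]}
women = {"X": ["B", "C", "A"], "Y": ["C", "A", "B"], "Z": ["A", "B", "C"]}

print(deferred_acceptance(men, women))  # men propose: each man gets his 1st choice
print(deferred_acceptance(women, men))  # women propose: each woman gets her 1st choice
```

Both outcomes are stable, but the proposing side gets its first choices and the receiving side its last choices — the proposer-optimality that Roth and Sotomayor formalize.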

I have a few hypotheses, but none of them seem particularly appealing:

1) Women aren’t as liberated as we think.

Pro: There doesn’t seem to be any point in history where this was any different, so this social practice may indeed be a holdover from the Stone Age (i.e. before 1960).

Con: If this is true, then it is a very bad social practice, and we should buck it! This is not a good reason to maintain it!

2) If a woman asks a man out, it reveals information about her. This could be a case of multiple equilibria. Suppose that a small percentage of “crazy types” of both men and women exists, and under no circumstances do you ever want to date one of them. The equilibrium we are in is fully separating for women: the “normal types” always wait for men to ask them out, while the “crazy types” ask men out. Since this is a perfect Bayesian equilibrium, men know that if they get asked out, the woman must be crazy, and so they reject. Knowing this, the “normal” women would never want to ask a man out, since it would involve the cost of effort/rejection with no chance of success.

Suppose the chance that someone is crazy is some very small \epsilon > 0. Consider the game tree:

[Figure: game tree for the asking-out game]

Notice that the crazy women always want to ask the guy out, no matter what the beliefs of the guy are.

There are a few perfect Bayesian equilibria of this game, but I will highlight two. The first is that the normal women never ask guys out, and guys never accept. As \epsilon \rightarrow 0, this gives expected payoff to people of (0,0). No one wants to deviate, because only crazy women ask guys out, and so a guy would never accept an offer, as that would give payoff -10 instead of 0; knowing this, normal women will never ask men out, because that gives them payoff -1 instead of 0.

Another equilibrium is that all women ask men out, and men always accept. As \epsilon \rightarrow 0, the expected payoff vector is (2,2). Thus the former is a “bad” equilibrium, while the latter is a “good” one. In other words, we may be stuck in a bad equilibrium.
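Since the game-tree image hasn’t survived here, let me note the payoffs I’m reading off the text — 2 to each side for a successful date, -1 to a woman who asks and is rejected, -10 to a man who dates a crazy type — and stress that these exact numbers are my reconstruction. With them, a short sketch confirms the two payoff vectors:

```python
def payoffs(eps, normal_asks, man_accepts):
    """Expected (woman, man) payoffs under reconstructed numbers:
    a successful date is worth 2 to each side, asking and being
    rejected costs the woman 1, and dating a crazy type costs the
    man 10. Crazy women (fraction eps) always ask."""
    def woman(asks):
        if not asks:
            return 0.0
        return 2.0 if man_accepts else -1.0
    # woman's expected payoff, averaging over her type
    w = (1 - eps) * woman(normal_asks) + eps * woman(True)
    if not man_accepts:
        return w, 0.0
    p_ask = (1 - eps) * normal_asks + eps      # prob. the man is asked at all
    p_crazy = eps / p_ask                      # prob. the asker is crazy
    m = p_ask * ((1 - p_crazy) * 2.0 + p_crazy * (-10.0))
    return w, m

eps = 1e-6
print(payoffs(eps, normal_asks=False, man_accepts=False))  # bad equilibrium: ~(0, 0)
print(payoffs(eps, normal_asks=True,  man_accepts=True))   # good equilibrium: ~(2, 2)
```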

Pro: I think there are definitely some guys out there who consider women who would ask them out “aggressive” or “desperate,” and so wouldn’t go out with them.

Con: I don’t think the above sentiment is true in general, at least for guys worth dating! If a guy has that attitude, he’s probably an @$$#0!3 who’s not worth your time.

There may also be some elements of the problem with (1), but these would be harder to overcome, as the scenario here is an equilibrium.

Finally, while this might have some plausibility for people who don’t yet know each other well, I definitely don’t think it holds for people who know each other somewhat better, and who would therefore already know whether the woman in question was crazy. That said, I would expect a woman who has known the man in question for longer to be proportionally more likely (relative to the man) to ask him out, even if still less likely overall.

3) Women just aren’t as interested. If he’s willing to ask her out, then fine, she’ll go, but otherwise the cost outweighs the benefit.

Pro: It doesn’t have any glaring theoretical problems.

Con: I want you to look me in the eyes and tell me you think this is actually true.

4) They already do. At least, implicitly, that is. Women can signal interest by trying to spend significant amounts of time with men in whom they have interest, and eventually the guys will realize and ask them out.

Pro: This definitely happens.

Con: I’m not sure it’s sufficient to even out the scorecard. Also, this raises the question: if they do that, why can’t they be explicit?

When I originally showed this to some friends, they liked most of these possibilities (especially (1) and (2)), but they had some additional suggestions:

5) Being asked out is self-validating. To quote my (female) friend who suggested this,

…many girls are insecure and being asked out is validation that you are pretty/interesting/generally awesome enough that someone is willing to go out on a limb and ask you out because they want you that badly. If, on the other hand, the girl makes the first move and the guy says yes it is much less clear to her how much the guy really likes her as opposed to is ambivalent or even pitying her.

Pro: This is true of some women.

Con: Again to quote my friend, “There are lots of very secure, confident girls out there, so why aren’t they asking guys out?”


6) Utility from a relationship is correlated with interest, and women have a shorter window. This one was actually suggested by Marli:

 If asking someone out is a signal of interest level X > x, and higher interest level is correlated with higher longterm/serious relationship probability, then women might be interested in only dating people with high interest level because they have less time in which to date.

Pro: It is true that women are often thought to have a shorter “window,” in that they are of child-bearing age (for those for whom that matters) for a shorter period.

Con: This doesn’t seem very plausible. Going on a date doesn’t take very long, at least in terms of opportunity cost relative to the length of the “window.” As a friend put it in response,

Obviously one date doesn’t take up much time; the point of screening for interest X > x is to prevent wasting a year or two with someone who wasn’t that into you after all. But then it would seem rational for (e.g.) her to ask him on one date, and then gauge his seriousness from how he acts after that. Other people’s liking of us is endogenous to our liking of them, it really seems silly to assume that “interest” is pre-determined and immutable.

So overall, it seems like there are reasons which explain how it happens, but no good reason why it should happen. I hope other people have better reasons in mind, with which they can enlighten me!


The Monty Hall Deception

When I was in middle school, I consumed a lot of typical nerd literature, like Richard Feynman’s “Surely You’re Joking, Mr. Feynman!” and Martin Gardner’s anthologies of mathematics puzzles from Scientific American. In the latter, I first encountered the Monty Hall Problem, which goes something like this:

Suppose you’re on a game show, and you’re given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what’s behind the doors, opens another door, say No. 3, which has a goat. He then says to you, “Do you want to pick door No. 2?” Is it to your advantage to switch your choice?

It turns out that, yes, it is always to your advantage to switch your choice. This solution has been notoriously difficult for people to wrap their heads around. After all, when you picked a door, the probability of picking the door with the car was 1/3, and after a door is opened, there is still one car and one goat behind the remaining two doors – it seems as though the probability of choosing the door with the car ought to be 1/2 regardless of the door chosen.

The Monty Hall Paradox is in fact not a paradox at all, but rather just some clever sleight of hand. The trick is that people are drawn to the fact that only two doors, rather than three, remain, and assume that the host’s having opened a door is favorable to the player. People tend not to realize that the game has imperfect information – the player does not know where he is on the game tree, whereas the host does. Additionally, people assume that the host has no stake in the game (and this is not unreasonable, because the problem never explicitly describes the host’s incentives! On the other hand, intuitively, we know that the host isn’t going to ruin the game by opening the door with the car). So, if we assume that the host is profit-maximizing and model the problem as an extensive-form game with imperfect information, the conditional probabilities become easy to see.

Now, just for fun, we’ll assign some utilities to the outcomes. What is a goat worth? According to a popular Passover song in Aramaic, a (small) goat is worth about 2 Zuz, and according to the traditional Jewish prenuptial document, a wife is worth about 200 Zuz. So, a goat is worth about 1/100th of a wife. I asked my roommate, Anna, how many cars she thought a wife was worth, and she determined that a wife was worth three cars. By transitivity, then, a car is worth about 33 goats. (I think goats have become quite a bit less valuable since that song was written, or maybe goats back then were a lot better than our goats.) So, if the player wins the game, he will walk away with a utility of 33, and the host will walk away with the 2 goats.

[Figure: Monty Hall game tree]

In this game, the light gray branches are dominated because the host has no incentive to open the door that the player has already chosen, and the dark gray branches are dominated because, of the remaining two doors, the host would not open the one hiding the car. We can see that in the top branch, the host has 2 possible choices of door to open, whereas in the lower two branches, the host is constrained to only one (since, if the player has chosen a goat door, only one goat door remains to open).

So, since the player has no idea after the first stage which door has the car, we assume he picks door No. 1 (as in the game). If he observes that the host opens door 3, he knows there are two cases in which the host opens door 3: in the world where the car is behind door 2, the host chooses door 3 100% of the time, and in the world where the car is behind door 1, the host chooses door 3 50% of the time. It’s actually twice as likely that we are on the 100% branch as on the 50% branch – and that’s the branch where the car is hidden behind the other door.
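If the conditional-probability argument still feels slippery, a quick Monte Carlo (a sketch, with a host who knows the doors and never reveals the car or the player’s pick) reproduces the 1/3-versus-2/3 split:

```python
import random

def monty_trial(switch):
    """One round of the game: the host knows where the car is and
    opens a door that is neither the player's pick nor the car."""
    car = random.randrange(3)
    pick = random.randrange(3)
    opened = random.choice([d for d in range(3) if d != pick and d != car])
    if switch:
        # switch to the one remaining unopened door
        pick = next(d for d in range(3) if d != pick and d != opened)
    return pick == car

random.seed(0)
n = 100_000
stay_rate   = sum(monty_trial(False) for _ in range(n)) / n
switch_rate = sum(monty_trial(True)  for _ in range(n)) / n
print(f"stay: {stay_rate:.3f}, switch: {switch_rate:.3f}")  # ≈ 0.333 vs ≈ 0.667
```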

What if we know that the host has opened a door, but we don’t know which one? Then, we can’t condition on a prior, because we don’t know what the prior is – we don’t get any new information by observing which door was opened, and switching doors would not help.


Jewesses in Skirts

Let me explain. Among Orthodox Jews, many of the women refuse to wear pants – they will only wear skirts of knee length or longer. Yet the reason for this practice is not so clear. Many reasons are given by different people. I’ll list a few, along with a sentence about why I don’t think they make so much sense:

(i) Modesty: Pants are immodest because they are form-fitting to the leg. Yet there are other parts of the body that equally may not be uncovered under Orthodox Jewish law, for which similarly form-fitting clothes are considered fine. Besides, no one said you had to be wearing skinny jeans.

(ii) Men’s clothing: Under Biblical law, women may not wear men’s clothing. Yet by now, it is normal for women to wear pants. Indeed, most Orthodox rabbis agree that this is not the main reason.

(iii) Suggestive: Certain parts of the pants might be sexually suggestive. If this is a problem for women, it would definitely be a problem for men as well.

You might be asking yourself at this point: “What the heck does this have to do with game theory?” Well, I actually have a good reason for why many Orthodox Jewish women wear skirts, and it involves a perfect Bayesian equilibrium of an extensive form game.

We divide Orthodox Jews into two classes, each of which holds different communal standards. The first group adheres to a rather stricter standard, which may include certain norms about interactions with men, more stringencies regarding keeping kosher, etc. The second group is a little bit more laid back, though they may also be fully observant according to what they believe is necessary. I’m not making any claims about which one is better – I don’t want to go there – just bear with me.

Orthodox Jewish women in each class prefer others to recognize that they are in the correct class. This is perfectly understandable – one in the first class would not like others to offer food that didn’t meet their standards of keeping kosher, or would not appreciate certain advances from men; one in the second class might not appreciate being pressured into adhering to (from their perspective) unnecessary strictures. We therefore assign each class a payoff of C for being correctly labeled, and W for being incorrectly labeled, where C > W. For similar reasons, other Orthodox Jews would want to correctly label Orthodox Jewish women into these two classes.

To complete the model, we condition the beliefs of other Orthodox Jews on the signal of whether a given woman wears only skirts (we assume observers can tell if she wears skirts only some of the time). If yes, they place her in the first class; if no, they place her in the second. Since wearing only skirts is a restriction on women’s fashion choices, we’ll assign it a modest loss of payoff (S), where C – W > S.

[Figures: game trees for the two classes]

Given their belief structures, other Orthodox Jews will assume that if you wear a skirt, you’re in Class #1; if you sometimes wear pants, you’re in Class #2. Knowing that others have this belief, the best strategy for Orthodox Jewish women is to actually always wear a skirt if they are in Class #1, while to not bother if they are in Class #2. In this way, the beliefs of others are self-fulfilling in the dress code of Orthodox Jewish women.
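This separating equilibrium can be checked mechanically. The sketch below uses illustrative numbers for C, W, and S (any values with C > W and C – W > S > 0 work) and verifies that, given the observers’ labeling rule, neither class gains by deviating:

```python
def payoff(cls, wears_only_skirts, C=5.0, W=1.0, S=2.0):
    """Payoff to a class-`cls` woman (1 or 2), given that observers
    label skirt-only wearers as class 1 and everyone else as class 2.
    C: payoff when correctly labeled, W: when mislabeled, S: cost of
    restricting oneself to skirts (illustrative numbers, C - W > S)."""
    label = 1 if wears_only_skirts else 2
    base = C if label == cls else W
    return base - (S if wears_only_skirts else 0.0)

# Class 1: wearing only skirts beats deviating (C - S > W).
assert payoff(1, True) > payoff(1, False)
# Class 2: not restricting one's dress beats deviating (C > W - S).
assert payoff(2, False) > payoff(2, True)
print("separating equilibrium conditions hold")
```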

Of course, this model isn’t always true – I’m sure there are some people who have strong reasons (aside from those mentioned here) to choose to deviate from the equilibrium path described in this model. Yet I think this actually, to a large extent, gives the most compelling, and most credible, reason for why this dress code exists.


How to resolve a hostage crisis: Part II

Last week, I showed that if the villains are perfectly rational, and this is “common knowledge,” then the villains will be best off by just surrendering immediately without killing any hostages. But the rationality assumption is a big one – let’s see what happens if we drop it.

Once there’s a good chance that the villains are “crazy,” even criminals who are perfectly sane will pretend to be crazy so they can extract money as ransom. We can then model this as a Bayesian game, with N periods corresponding to the N hostages holed up in the bank. Each period consists of two stages: in the first, the SWAT team decides whether to pay up or not; in the second, the villains choose whether or not to kill a hostage; if they don’t, it’s tantamount to surrender. If they run out of hostages to kill, they are forced to surrender, and the surrender outcome gets progressively worse as they kill more hostages.

The payoffs are described as such:

Each dead hostage: gives -H to the SWAT team, and -C to the villains if they end up surrendering.

Surrender: gives -S to the villains.

Pay up: gives -P to the SWAT team, P to the villains, where H > P (1)

Finally, we assume that there is a probability k (initially) that the villains are nuts.

We’re going to construct a mixed strategy solution to this problem, which will yield a perfect Bayesian equilibrium. That is, in a given period i, the SWAT team pays up with probability p_{i}, and the villains execute a hostage with probability k_{i}. For those who are unfamiliar with mixed strategies, the basic idea is that each side is indifferent between (at least) two options, so it may as well flip a coin to decide between them. The trick is that each side’s coin is weighted precisely so as to make the other side indifferent. This forms a Nash equilibrium – neither side can benefit by unilaterally changing its strategy.
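For readers new to this logic, here is the indifference calculation in a generic zero-sum 2×2 game (an asymmetric matching-pennies game of my own choosing, not the hostage model): each player’s mixing weights are pinned down entirely by the opponent’s payoffs.

```python
from fractions import Fraction

# Zero-sum 2x2 game (illustrative, not the hostage payoffs):
# entries are the row player's payoffs; the column player gets the negatives.
A = [[Fraction(2), Fraction(-1)],
     [Fraction(-1), Fraction(1)]]

# Column mixes q on the left column so the ROW player is indifferent:
#   A[0][0]*q + A[0][1]*(1-q) == A[1][0]*q + A[1][1]*(1-q)
q = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
# Row mixes p on the top row so the COLUMN player is indifferent
# (same denominator, since the game is zero-sum):
p = (A[1][1] - A[1][0]) / (A[0][0] - A[1][0] - A[0][1] + A[1][1])

print(p, q)  # prints 2/5 2/5 -- each coin is weighted by the opponent's payoffs
```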

The derivation is a little drawn-out, so I’m going to break it into steps:

(i) If the SWAT team always pays up in some period i, then the villains may as well execute their hostages until they reach that stage; after all, this guarantees them an automatic victory. But if they will do that, then the SWAT team should pay up immediately (i.e. in period 1): since they’ll lose anyway, they may as well save the lives of the hostages.

(ii) Conversely, if the villains will always execute a hostage at a certain stage, then the SWAT team should always pay up then. But then we run into the same issue as in (i), so the SWAT team will end up paying at the beginning. Combined, (i) and (ii) limit the types of equilibria we have to analyze.

(iii) If there is a period i at which it is known that the SWAT team will not pay up, no matter what, then the normal (not crazy) types, knowing this, will surrender in period i-1. But this possibility leads to an inconsistency. Knowing that the normal types will do this, the SWAT team will believe that anyone who hasn’t surrendered must be crazy. If so, it is a best response to pay up in order to avoid more casualties.

(iv) Thus, from (iii), there cannot be a perfect Bayesian equilibrium in which the SWAT team will maintain the siege to the bitter end – they must always cave in to the villains’ demands with some positive probability. Indeed, we see from (i) and (iii) that either they pay up immediately, or they pay up in period N with some nonzero likelihood.

(v) There is no benefit to the villains in executing the N^{th} hostage. Thus, if they are normal, they will not execute, and so this hostage will be killed if and only if the villains are crazy, which occurs with probability k_{N}.

(vi) In period N, by (iv), the SWAT team must still be indifferent between paying up and maintaining the siege. This implies that

-P-(N-1)H=-(N-1)H(1-k_{N})-NHk_{N}

k_{N}=\frac{P}{H}

(vii) In all periods i, the normal-type villains must be indifferent between surrendering and killing a hostage. Thus, if their expected payoff, if the game reaches the second stage of period i+1, is \Pi_{villains}^{i+1}, then we get

-(i-1)C-S=p_{i+1}P+(1-p_{i+1})\Pi_{villains}^{i+1}

But we know that since the villains will be indifferent as well in the next period (unless i=N-1), \Pi_{villains}^{i+1}=-iC-S. In any case, this formula still holds for i=N-1, since the normal types surrender in period N anyway. Thus we get, after substitution,

p_{i+1}=\frac{C}{P+iC+S}

As expected, this probability goes down as more hostages are executed – after all, there is less of a risk of more future casualties, since there are fewer hostages left – a dark thought, indeed.

(viii) We now derive the probability that the villains execute a hostage in the periods other than the last. The SWAT team is indifferent between maintaining the siege and paying up. Hence if their expected payoff at the beginning of period i+1 is \Pi_{SWAT}^{i+1}, then

-(i-1)H-P=-(1-k_{i})(i-1)H+k_{i}\Pi_{SWAT}^{i+1}

But because the SWAT team will also be indifferent between paying up and maintaining the siege in the next period, \Pi_{SWAT}^{i+1}=-iH-P. Hence

-P=-k_{i}H-k_{i}P

k_{i}=\frac{P}{P+H}

This value is constant! Note that it is decreasing in H – if the hostage is more valuable to the SWAT team, then the likelihood that they will kill another doesn’t have to be as high in order to deter the SWAT team from maintaining the siege. Conversely, if the hostage is less valuable, the villains will have to be more likely to kill the hostage to show they mean business.
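Steps (vi)–(viii) can be sanity-checked by plugging the closed forms back into the indifference conditions; the numbers below are arbitrary (any positive P, H, C, S with H > P will do):

```python
P, H, C, S = 3.0, 10.0, 4.0, 6.0   # arbitrary values with H > P
N = 5

k_N = P / H                         # step (vi)
k_i = P / (P + H)                   # step (viii), constant across periods

# SWAT indifference in period N (step vi):
#   -P - (N-1)H == -(N-1)H(1 - k_N) - N*H*k_N
assert abs((-P - (N-1)*H) - (-(N-1)*H*(1-k_N) - N*H*k_N)) < 1e-9

for i in range(1, N):
    p_next = C / (P + i*C + S)      # step (vii)
    # villains' indifference: surrender now vs. kill and continue
    assert abs((-(i-1)*C - S) - (p_next*P + (1-p_next)*(-i*C - S))) < 1e-9
    # SWAT indifference: pay up now vs. maintain the siege
    assert abs((-(i-1)*H - P) - (-(1-k_i)*(i-1)*H + k_i*(-i*H - P))) < 1e-9

print("all indifference conditions check out")
```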

(ix) Combining (vi) and (viii) gives us that the proportion of crazy types at the beginning of any period i (after the first) in the perfect Bayesian equilibrium will be (\frac{P}{H+P})^{N-i}\frac{P}{H}. Thus, if at period 1 the initial probability that the villains are crazy, k, is greater than this amount, then the risk that they are nuts (or wannabe-nuts) is too high, and the SWAT team should fold immediately. Otherwise, they should maintain the siege in the first period, while the villains execute the hostages with exactly the probability that makes the likelihood that they are nuts, as of the start of period 2, exactly (\frac{P}{H+P})^{N-2}\frac{P}{H}. Afterwards, the villains follow the strategy given by (vi) and (viii), while the SWAT team follows the strategy given by (vii).

(x) As a final note, we see from (ix) that as N\rightarrow\infty, the threshold for the initial likelihood of craziness necessary to enforce a SWAT team payout goes to 0. This makes sense – there’s a larger potential for more casualties as the number of hostages goes up.
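The threshold from (ix) is easy to tabulate. With arbitrary illustrative numbers, one can watch it shrink geometrically in N, as step (x) claims:

```python
P, H = 3.0, 10.0   # arbitrary illustrative values with H > P

def craziness_threshold(N):
    """Max initial probability of crazy villains for which the SWAT
    team maintains the siege in period 1, as I read step (ix):
    (P/(H+P))^(N-1) * P/H."""
    return (P / (H + P)) ** (N - 1) * (P / H)

for N in (1, 2, 5, 10, 20, 50):
    print(N, craziness_threshold(N))
# the threshold decays geometrically toward 0 as N grows (step x)
```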

(1) (footnote: though dropping this assumption only slightly changes the outcome. Also, we exclude the possibility of storming the bank, since it’s similar in concept to the other options).


The problem with insider trading: a game theory perspective

I’m no expert at finance, so I can’t tell you in general how you should invest your money. What I can tell you, though, is why insider trading is such a harmful phenomenon, and why it should be curtailed to the best of the government’s abilities. To illustrate this, I’m going to use a simple Bayesian model.

To begin with, it is reasonably clear why someone would execute an inside trade. If some executive-honcho dude has some extra knowledge, not generally available to the public, which leads him or her to conclude that a particular financial asset (we’ll assume it’s a stock) is worth some amount different from its market value, he or she can use this to advantage. If the stock is undervalued, he can purchase the asset and make money for nothing. If it’s overvalued, he can short it, again making money for nothing. Either way, he wins at the expense of other financial agents.

So far, pretty clear. But hold on a second: investors know that executives can do this, and so anticipate the relevant market moves. So, if the executive sells, the market for the stock responds by lowering the price; if he buys, it rises by a certain amount as well. Does this ability help the other investors to avoid the pitfalls of insider trading?

Unfortunately, it does not, for a few reasons. The first is pretty obvious from the definition of insider trading – though the market may expect the stock to be worth a certain amount more or less than before, it can’t know the exact amount. Only the executive, with the inside scoop, knows this. Hence he will be able to capitalize on any difference between what the asset actually should be worth (based on all the information) and what investors expect it to be worth based on their more limited information (which includes the fact that the insider is taking action).

Second, it is important to take note of the advantage in timing that the executive has. Since the market can only correct itself in response to his or her moves, he or she gets to make the first move. So, if the asset is worth more, he can buy more shares before others realize what is going on. Similarly, he can sell before others notice that he or she thinks the stock is overvalued, still making a profit through these means as well.

Finally, we come to what I think is the most interesting problem: the executive can take advantage of investor expectations even when the fundamentals of the company actually remain the same as perceived in public. Thus, if investors believe that, say, a sale by the executive indicates that the stock is worth less than previously thought, they will reduce the price at which it trades. But if so, the executive, knowing that the other investors have this belief, can sell for no reason whatsoever, and then repurchase at a lower price, making a net profit without a loss of shares that he or she holds. Similarly, if investors expect the asset to be worth more if the executive buys more shares, then the executive can buy shares and resell at the higher price.

Combined, the first and last reasons demonstrate that there cannot be a perfect Bayesian equilibrium which allows for insider trading. For if the price falls with a sale (so the investors think the stock is worth less), the executive will take advantage of this investor belief through selling without reason (i.e. the stock is not worth less); a similar situation occurs if the investor believes that the asset is worth more, as then the executive will buy without fundamental reason. Hence the decrease in the market value for the stock will occur based on false beliefs due to the executive’s action. Meanwhile, if the investors respond to executive stock purchases by, say, lowering their expected values of the stocks (or keeping the price the same), the first reason above demonstrates that the executive can increase his or her payoff by purchasing when the stock is worth more than publicly thought. Thus, in the latter case, the investors also have false beliefs (undervaluing the stock).

In order to allow for the possibility of a perfect Bayesian equilibrium in the stock market, one must therefore eliminate the possibility of insider trading. Under a rigid enforcement mechanism, such an equilibrium does exist: the market does not respond to actions of the executive. Since the executive cannot use private information to purchase or sell stocks, he or she can’t take advantage of the first or second reason; and since the market does not expect a change in value based on his or her actions, he or she cannot take advantage of the third reason. From the other end, since the investors know that the executive can’t use any information that they do not have, they are not taken aback by any executive stock moves, and so they don’t change their expected valuation of the stock. Thus we see the importance of effective prevention of insider trading – otherwise, the stock market is open to the games of executives with too much information.


How to win big in eBay auctions, Part II: Winning stuff for free

So I promised I’d show how there’s a perfect Bayesian equilibrium in the eBay setup where all people pay only the price at which the bidding started, \pi_0. How’s that, you say? Won’t people want to bid up the price, if they’re willing to pay more?

The key to this equilibrium is, obviously, to remove the incentive to bid any higher. Since, in eBay, the highest bid up to any given point is not observed, we can describe perfect Bayesian equilibria based on beliefs for those situations if they were to come up. Thus, if any bidder bids something other than \pi_0 at any time except for the last moment, the other bidders can plausibly believe that this bidder has bid something ridiculously high, and so will respond by bidding up the price of the item, knowing that they have nothing to lose by doing so. This way, they can punish any bidder who deviates by bidding higher than \pi_0, so no one will do so.

For this equilibrium to work, we must also consider the timing and tie-breakers. eBay has structured its auction mechanism so that, once there is at least one bid on an item, all bids must be greater than the current price. This means that at most one person can bid \pi_0; the rest, if they will bid, must bid more. We can resolve this issue by stipulating that all bidders try to bid \pi_0 at some time \tau, and one of these, chosen randomly, does so successfully. Meanwhile, anyone who bids at some other time (rather than the last minute) is believed to have again bid something ridiculously high, and punished in the same way as the one who bids something different from \pi_0. Again, this will ensure that everyone bids only at \tau, except for maybe at the last moment when no one has time to respond to their bid.

We have to (finally) deal with what happens at the last moment. Suppose bidder i is the lucky guy who successfully submitted his bid earlier at time \tau. At the last moment, to discourage others from trying to outbid him, now he is the one who bids ridiculously high. Knowing that they cannot win the item, no one else tries to submit a bid at the last moment.

One can formally check that this is indeed a perfect Bayesian equilibrium. Though this is unlikely to ever happen in reality, this shows the lack of uniqueness of symmetric equilibria (in the sense that all people’s strategies are ex ante the same) in eBay’s setup, and that we can get a pretty sweet outcome given the right beliefs. Pretty cool, huh?


How to win big in eBay auctions: Part I

OK, I lied a little bit: I can’t guarantee that you will win lots of really awesome things for dirt cheap. But I did stay at a Holiday Inn Express last night. What I can provide, though, is a perfect Bayesian equilibrium strategy that will mean you are bidding optimally (assuming others are also bidding in a similarly defined manner). Basically, I wrote my senior thesis at Princeton on eBay, so I know how it works pretty well.

I’m not going to go through the exact details of how eBay works, since I assume if you care, you already know more or less about that (you can look here for more information). I’ll also omit most of the gory details of how exactly to rigorously demonstrate that these strategies mathematically form an equilibrium, since I assume most readers will care mostly about the practical implications. I hope to upload a link to my thesis, so if you want, you can take a look at that yourselves.

At first glance, eBay seems to work exactly like a sealed-bid second-price auction. Indeed, eBay itself suggests that one should always bid exactly how much one values the item up for sale. But there are two potential issues. First, there are likely to be multiple items of the same type. For example, suppose I collect stamps: there are likely to be multiple copies of the same stamp (unless it is extremely rare). Thus, I might want to avoid bidding on an earlier auction of a stamp, if I could get the same one a little later for a better price (we’ll assume no discounting, since it doesn’t much change the reasoning). Second, eBay complicates the general second-price auction setup by showing the price history; that way, it might be possible to infer how badly others want, say, the stamp, and use that information to one’s advantage.[1]

Fortunately, in the single-item case, the reasoning for second-price auctions works for eBay auctions as well: one should always bid exactly the value of the stamp (though it’s no longer a dominant strategy). There is also a well-established result for multiple items, first shown by Robert Weber, which is that if there are N items, and M bidders (where M>N, and each person wants exactly one item), then we can rank the bidders’ valuations from high to low as (V_{1},V_{2},...V_{M}); it is then a subgame-perfect equilibrium for a person with valuation V to bid
b_{l}(V)=E[V_{N+1} |V_{l+1}=V] (a)
in the lth round of bidding; that is, one bids what one expects the (N+1)^{th} highest valuation to be, if one were to have the (l+1)^{th} highest valuation. For period N, this works out to bidding one’s valuation, so it fits nicely with the single-item case.
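To make equation (a) concrete, take i.i.d. uniform[0,1] valuations (my own worked special case, not from the post). Conditional on V_{l+1} = v, the M-l-1 lower valuations are i.i.d. uniform on [0, v], and a standard order-statistic fact gives E[V_{N+1} | V_{l+1} = v] = v(M-N)/(M-l); note this equals v when l = N, matching the claim that the last round reduces to bidding one’s valuation. A Monte Carlo check:

```python
import random

def weber_bid(v, M, N, l):
    """Round-l bid for value v with M bidders, N items, and uniform[0,1]
    valuations: E[V_(N+1) | V_(l+1) = v] = v*(M-N)/(M-l)."""
    return v * (M - N) / (M - l)

def simulated_bid(v, M, N, l, trials=200_000, seed=0):
    """Monte Carlo: given the (l+1)-th highest value is v, draw the
    M-l-1 lower values i.i.d. uniform on [0, v] and average the
    (N-l)-th highest of them, i.e. V_(N+1)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        lower = sorted((rng.uniform(0, v) for _ in range(M - l - 1)),
                       reverse=True)
        total += lower[N - l - 1]        # (N-l)-th highest below v
    return total / trials

M, N, l, v = 6, 3, 1, 0.8
print(weber_bid(v, M, N, l))             # ≈ 0.48
print(simulated_bid(v, M, N, l))         # ≈ 0.48
print(weber_bid(v, M, N, N))             # last round: bid one's value, ≈ 0.8
```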

Can we generalize this to the eBay model? Fortunately, as I showed in my thesis, we can: all it takes is to construct a set of beliefs under which the bidders will ignore the previous bids of others, and bid as they would in the sealed-bid case. The best way to break down the cases is into those with two items and those with three or more.

In the former, given that in the second period everyone is just going to bid as in the single-item case anyway, at the last moment they will bid as in the sealed-bid, two-item case no matter what. We just need to make sure that no one will screw things up by bidding earlier. We can ensure this by, say, constructing an equilibrium in which bidders believe that only people who value the stamp very highly would bid earlier than the last moment; so, if someone else has bid earlier, the rest of the bidders would assume they had lost anyway. Since one can’t see the top bid (only the current stamp price, based on the second-highest bid up to then), this belief is plausible. And if they’ve lost, they might as well just bid what they would otherwise, as given by equation (a), since bids are costless. This solves the two-item case.

In the three-plus item case, we have to be a little more careful: if everyone just blindly bid as in equation (a), you might learn others’ valuations in earlier rounds, realize that you weren’t going to win at all if you continued bidding as in (a), and outbid others who valued the stamp more highly to “steal” the stamp[1]. To get around this, one could construct an equilibrium where the bids are staggered – those who want the stamp more bid earlier. Since equation (a) is monotonic in V, this means that no more than two bids can occur in any given period; those who are supposed to bid earlier cannot, since the item price is already greater than what they are willing to bid (eBay requires that all bids be greater than the current item price, since otherwise they are not relevant for determining the price of the given item, given the mechanism eBay uses). By assuming some plausible beliefs under which bidders ignore anyone who deviates from this strategy profile, we can ensure that all bidders adhere to it – they’ll have no incentive to deviate.

Do people actually bid as I described in the multiple-item cases? Obviously not – most people don’t even bid their valuations, let alone think things through this far. Nor does it seem that people time their bids as in the equilibria described here. Yet the basic idea continues to make sense: one can ignore the bids of other people in determining how much one wants the item, and simply bid up to the value suggested in equation (a). After all, does anyone actually try to memorize how much someone was willing to bid the last time a stamp came up for auction? Come on. Since no one tracks bids like this, it’s safe to bid as described here in general.

Note that, unlike the sealed-bid case, the equilibrium described here is not the unique symmetric one. In fact, there is an equilibrium in which all items are sold for the start price, if sold at all. But we’ll get to that in part II.

[1]Indeed, this issue causes there to not be any pure-strategy, increasing bidding function for multiple-item auctions with bid revelations. See Cai, Wurman and Chao (2007).


Sketchy dating after breakups

(I thought I’d put this post up now, since it relates to a friend’s recent post elsewhere.)

Generally, when people end a long-term relationship, they want to take a bit of a break from dating to get their feet back on the ground. Break-ups can be very emotionally taxing, and recovery takes some time. There are several rules of thumb as to how long one should wait; I won’t go into those, since that’s not the point of this post. What is interesting, though, is that often these rules are not well-kept. The question is, why?

For starters, let’s model a person’s payoff for entering a relationship. Let’s assume for now the person is a woman (also, let’s call her Fiona). Obviously she doesn’t want to enter one immediately after the breakup; but how much she does not want to do so depends on how much time has elapsed. More specifically, the payoff increases, eventually approaching a certain (bounded) value, at which point she is totally over her ex.

(Formally, we assign her a utility function U(t), where U(0)=0, and lim_{t\rightarrow\infty}U(t)=B, where B is some positive number. For example, when B=1, we could have a function like this:)

Fig. 1: sample graph

The guy who wants to ask her out shares the same payoff (and we’ll call him Scotty). After all, of course he would – he’s only happy if she is, right? Thus it’s better for both of them if they wait longer to start up the relationship.

The thing is, Scotty doesn’t know if other people will have their eyes on Fiona. So, if he wants to lock her up as his one and only, he’s got to act quickly (by some time \tau). Suppose, for simplicity, he’s the first one to arrive on the scene (the same reasoning applies even more strongly if there are others already competing with him for Fiona’s attention). Other suitors can be expected to arrive at a roughly constant rate r as long as she’s still single, and if Scotty is willing to ask her out at time \tau, they certainly will be at any later time t>\tau, since they would get an even higher payoff. Fearing this competition, Scotty will ask out Fiona at exactly the point where the gains from waiting are balanced by the losses from potential competition. (That is, U^{\prime}(\tau)=rU(\tau).)
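To see what this condition implies, here’s a small sketch with a concrete functional form the post doesn’t commit to – U(t)=B(1-e^{-kt}), which satisfies U(0)=0 and U(t)\rightarrow B – under which U^{\prime}(\tau)=rU(\tau) can be solved in closed form (B cancels out):

```python
import math

def tau(k, r):
    """Time at which Scotty asks, for U(t) = B*(1 - exp(-k*t)).

    U'(t) = B*k*e^{-kt} and r*U(t) = r*B*(1 - e^{-kt}), so the condition
    U'(tau) = r*U(tau) gives e^{-k*tau} = r/(k+r), i.e.
    tau = ln((k + r)/r) / k.  (B drops out entirely.)
    """
    return math.log((k + r) / r) / k

# The faster rival suitors arrive (larger r), the sooner he has to ask:
print(round(tau(k=1.0, r=0.5), 3))  # 1.099
print(round(tau(k=1.0, r=2.0), 3))  # 0.405
```

The comparative static is the point: a thicker market of competing suitors (higher r) pushes the ask earlier, before Fiona is anywhere near over her ex.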

As the model is set up now, Fiona still has no reason to accept Scotty’s request. But if we introduce a cost of rejection (C) into the model, things change, even if such a cost is small. We can account for this as a natural consequence of social interactions: for example, things might be awkward between them if she turns him down. And no matter what, she cannot get more than a payoff of B later. Thus, she will certainly accept as long as U(t)\geq B-C.[1] Though she’d have a higher payoff if he asked later, accepting this request is the best response to his move.

To close things off, I should explain why I assumed at the outset that the person was a woman. Since, even in this age of gender equality, guys are generally the ones doing the asking, women face the possibility of being asked out even when they are not yet looking for new opportunities to date; hence they incur the cost C. By contrast, men might simply not look until they can get a payoff closer to B, without ever incurring the cost C. This makes it more likely that this situation will come up when women have recently broken up with their boyfriends than the other way around.

Obviously, both sides in this equation would rather wait longer to start something up. But it’s just too risky to do so, since they might lose out altogether. So, we end up with much sketchiness. Haaaaaaai!

[1] That is, this is a sufficient condition; she might accept an even lower payoff depending on how frequently she expects guys to ask her out later.


Bidding up blood

Mexican drug cartels, which control the tremendously lucrative flow of drugs into the US, have over the past several years begun to kill civilians with impunity. Bodies are displayed in public, severed limbs have been tossed onto dance floors, and the total body count continues to rise.

Until recently, civilians and children were off limits under the cartels’ informal codes of honor. Now, the willingness to kill civilians has become a signal of ruthlessness, informing citizens and rival cartels alike of who is winning the war[1].

As of 2010, the Mexican drug cartels have formed two tenuous alliances against each other, one composed of the Juárez Cartel, Tijuana Cartel, Los Zetas Cartel and the Beltrán-Leyva Cartel, and the other, the Gulf Cartel, Sinaloa Cartel and La Familia Cartel [2].

To see how the two alliances might be bidding up the violence, we can first model the civilian killings as an all-pay auction. After all, the cartel incurs some cost for each civilian it kills regardless of whether it wins, and the alliance that has the most kills at the end of each period becomes the more feared of the two among civilians.

In the classic war-of-attrition game, the only pure-strategy Nash equilibrium outcomes have one player bidding 0 and the other bidding V, the value of the territory under dispute for the period. This implies that in any given time period we should see a large number of killings by one alliance and none by the other, and perhaps the territory would switch hands from period to period (as in one solution to the repeated Battle of the Sexes). The expected utility for each alliance should be 0. Alternatively, in a mixed-strategy equilibrium, each cartel has a probability distribution over [0,N] for when it will stop killing civilians. If this is a good model, then the increase in killings might be explained by a decrease in the cost of killing civilians (law enforcement is getting less effective).

In the war of attrition game, once both players have made a positive bid, any victory will be a Pyrrhic victory — the expected payoff will be negative. Consider the classic example of the all-pay auction for a $20 bill — if one player bids $20 and the other, $0, they both get a payoff of 0. If one bids $20 and the other, $2, then the player who bids $2 will be forfeiting $2 anyway and might as well bid $22 and win the money. But, now the first player is out $20 — he would lose less if he could get by with winning with a bid less than $40. At some point one player should just take the hit and exit with a negative payoff.
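The zero-expected-payoff claim is easy to check numerically in the simplest complete-information version of the $20 example above: in a two-player all-pay auction for a prize worth V, the symmetric mixed equilibrium has each player bid uniformly on [0, V], and simulating that equilibrium shows each player’s average payoff hovering around zero:

```python
import random

def all_pay_payoff(V=20.0, trials=200_000, seed=1):
    """Average payoff to player 1 when both bid uniformly on [0, V].

    In an all-pay auction each player forfeits her bid win or lose;
    the high bidder also gets the prize V (ties have probability zero).
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        b1, b2 = rng.uniform(0, V), rng.uniform(0, V)
        total += (V if b1 > b2 else 0.0) - b1
    return total / trials

print(all_pay_payoff())  # close to 0
```

So even the “winning” alliance expects to gain nothing from the contest – all the surplus is burned in bids (here, bodies).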

So, the body count continues being bid up as long as both alliances continue to kill civilians on every turn — and this is in fact the case. One explanation might be that the killings are not simply a signal to the civilian population, but also a signal to the other alliance.

We can consider a three-period game:

  1. Each alliance finds out whether it is strong or weak
  2. Given the first, each sends a signal (kill many or kill few)
  3. Each decides whether to attack the other, or to defend. Nonaggression only occurs when both defend.

Each alliance must assert that it is the “Strong” type rather than the “Weak” type in order to maintain a foothold on the piece of territory. If a strong alliance believes the other alliance is weak in a given period, it should attack and take over, since the weaker alliance cannot afford to retaliate.

Alliance j is strong
            Attack    Defend
Attack      -2, V     -2, V
Defend      -2, V      0, 0
Fig. 1: If Alliance i is weak, j is strong (payoffs listed as i, j; i chooses the row)

Alliance j is weak
            Attack    Defend
Attack      -1, -1    -1, -1
Defend      -1, -1     0, 0
Fig. 2: If both are weak

Alliance j is strong
            Attack    Defend
Attack      -2, -2    -2, -2
Defend      -2, -2     0, 0
Fig. 3: If both are strong

We see that if you are weak, your subgame perfect equilibrium strategy in the last stage is to defend regardless of your opponent’s strength. What signal should you send? Since killing might be costly for a weak alliance, a strong alliance will never send a signal that it is weak (killing few people). Therefore, if the opponent receives the signal that few civilians were killed, he knows that this is a credible signal of weakness.

A weak alliance might signal from the set {many kills, few kills}. Since the players are in identical situations at t=0, the probability p that each will be strong, the probability q that a weak alliance will give a false signal, and the additional cost c to a weak alliance of giving a high-kill signal will be the same for both. The expected payoff for a weak alliance that mixes in this way is

q[\mu_i(strong|many)U(strong, many, defend) + \mu_i(weak|many)U(weak, many, defend) - c] + (1-q)(-2)

= q[\mu_i(strong|many)(\mu_j(strong|many)(0) + \mu_j(weak|many)(-2)) + \mu_i(weak|many)(0) - c] + (1-q)(-2)

= q[\mu_i(strong|many)\mu_j(weak|many)(-2) - c] + (1-q)(-2)

It turns out that if sending a false signal is costless, then q is maximized at 1, and we have a pooling equilibrium. If it is costly enough, then there is a separating equilibrium (the weak alliance sends the low signal, the strong the high signal). What this means for our cartels is that, as long as there is a pooling equilibrium, both sides will definitely enter a war of attrition and bid up the body count even beyond their valuations for the territory. Only when the cost of killing even one civilian becomes high enough to create a separating equilibrium will the weak alliance kill no one while the strong alliance kills just one[3]. Needless to say, without an honor code to raise this cost, and given the state of Mexican law enforcement, this is quite unlikely.

Thanks to Jeffrey Kang for bouncing ideas around with me.
————————-
[1] http://www.washingtonpost.com/world/mexican-drug-cartels-targeting-and-killing-children/2011/04/07/AFwkFb9C_story.html
[2]“Violence the result of fractured arrangement between Zetas and Gulf Cartel, authorities say”. The Brownsville Herald. March 9, 2010. Retrieved 2010-03-12.
[3] Why one? Because people are discrete. If the separating equilibrium were at 2 kills, then 1 kill might be a possible low signal, in which case the players may enter a war of attrition anyway.


“Why does the supermarket only carry Tree Crap??”

I’ve often noticed that the supermarket next door to my parents’ home has very poor inventory control. They buy huge quantities of brands that not many people buy (seriously? half an aisle of Goya beans?), while they quickly run out of their meager supplies of the good stuff and fail to replace it – after all, if something doesn’t sell much, why carry much of it? – and so the cycle of unnecessarily low sales continues.

So why the heck does this happen? How are the store managers so bone-headed as to think that people actually want to eat Goya beans? Why can I only buy Tree Crap orange juice when I would like Tropicana? Don’t they want to make more money?

Perhaps we could explain the problem as an extensive-form game, in which the first period sees the consumer choosing whether or not to buy the product, and the second period sees the seller keeping the product or replacing it with a different brand; intuitively, the seller will choose to keep the same brand if the consumer bought in the previous round, and replace it otherwise. We then repeat this indefinitely, with a discount factor \delta. For the lay people: the discount factor indicates how much one cares about future times one will have to consume Tree Crap or Tropicana.

We don’t assume that the seller is all stupid – knowing full well which brand commands a greater price and still refusing to sell it – but only mostly stupid; and there is a big difference between mostly stupid and all stupid. We assume that the seller can only tell how much people like an item by whether they are willing to buy it when it is sold. Now, if they actually had the brains to do a SURVEY or something (God, why can’t they do something so simple?), then this would be a moot point, as they could easily figure out which brand was more liked; but as it is, they can’t figure that out since they ain’t got no brains. (Did someone say brains? mmm….)

Let the net value of Tree Crap to the consumer be v_{TC}, and the net value of Tropicana to the consumer be v_{T}, with v_{T} > v_{TC} > 0. If \delta is sufficiently small, so that

\delta < v_{TC}/v_{T},

the payoff to the consumer from buying the Tree Crap:

v_{TC} + \delta v_{TC} + \delta^{2}v_{TC} + \cdots = v_{TC}/(1-\delta)

is greater than from sitting it out and waiting for the next time for Tropicana:

\delta v_{T} + \delta^{2}v_{T} + \cdots = \delta v_{T}/(1-\delta)

– ya just gotta get yer orange juice, even if it’s Crappy.
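Here’s a minimal sketch of that single-consumer comparison (the names v_tc, v_t and all the numbers are my own illustrations, not from the post): the consumer keeps buying Tree Crap exactly when the discounted stream of Crappy juice beats sitting out one period and drinking Tropicana forever after, which reduces to \delta < v_{TC}/v_{T}:

```python
def buys_tree_crap(v_tc, v_t, delta):
    """True iff buying Tree Crap every period beats waiting for Tropicana.

    Buying forever:            v_tc + d*v_tc + ... = v_tc / (1 - d)
    Skip once, then Tropicana: 0 + d*v_t + d^2*v_t + ... = d*v_t / (1 - d)
    Comparing the two streams is equivalent to checking d < v_tc / v_t.
    """
    keep_buying = v_tc / (1 - delta)
    wait_for_tropicana = delta * v_t / (1 - delta)
    return keep_buying > wait_for_tropicana

# An impatient consumer settles for Tree Crap; a patient one holds out:
print(buys_tree_crap(v_tc=1.0, v_t=3.0, delta=0.2))  # True
print(buys_tree_crap(v_tc=1.0, v_t=3.0, delta=0.5))  # False
```

The threshold depends only on the value ratio, not on the absolute values – which is why even pretty bad juice keeps selling.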

Now of course, there is more than one consumer involved in the supermarket, so the model above isn’t quite right. Besides, if there were only one consumer, the supermarket wouldn’t be able to make much of a profit now, would it?

So let’s change what happens in the first period to adapt to these circumstances. We still consider two brands, Tree Crap and Tropicana, whose values v_{TC} and v_{T} (respectively) to consumers are the same across all N of them. We set it so that the seller only provides the same brand in the next period if at least K of the consumers buy the item, where 0 < K < N. In non-technical terms, this means that the seller only keeps on selling Tree Crap if enough people keep on buying it.

Consider the following three cases:

(1) Suppose fewer than K − 1 other consumers are willing to purchase Tree Crap, and the rest refuse to stoop to that level. Then, from the perspective of the individual consumer, he might as well purchase Tree Crap in that period, since his purchase will not lead the seller to provide Tree Crap at the next opportunity to buy orange juice.

(2) Suppose exactly K − 1 other consumers are willing to purchase Tree Crap. Then the situation for the individual consumer is exactly the same as that in our initial case with only one consumer, since he makes the difference between the seller offering Tree Crap or Tropicana in the next period. Thus he will buy if and only if \delta < v_{TC}/v_{T}.

(3) Suppose at least K other consumers are willing to purchase Tree Crap. Then the individual consumer might as well buy the Tree Crap, since he’s doomed to more Tree Crap next time, too.

Notice that situation (3) does not depend on \delta. Thus, no matter how much one cares about future opportunities to buy orange juice, it is a Bayesian Nash equilibrium to buy Tree Crap in every period if everyone else does, too. Which of course means that the seller will keep on selling it. Why can’t they just do a frikkin’ survey???

