Why do women (almost) never ask men on dates?

This is something I’ve asked a few people about. It seems odd that, in our modern, post-feminist age, it is almost always men who do the asking out. This is not so good for either men or women. For men, it puts a lot of pressure on them to make all of the moves. For women, I cite Roth and Sotomayor’s classic textbook on matching, which shows that, while the outcome when men always do the proposing is stable, it is the worst possible stable outcome for women. That is, women could get better guys to date if they made the moves.
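To see what that result looks like, here is a minimal sketch of the deferred-acceptance (Gale–Shapley) algorithm with two men and two women; the names and preference lists are invented for illustration and are not taken from Roth and Sotomayor.

```python
# A minimal sketch of deferred acceptance (Gale-Shapley), with invented
# preferences, to illustrate the Roth-Sotomayor point: the proposing side
# gets its best stable partner, the receiving side its worst.

def deferred_acceptance(proposer_prefs, receiver_prefs):
    """Stable matching when the `proposer_prefs` side does the asking.

    Both arguments map a name to a list of names on the other side,
    best first. Returns a dict: proposer -> receiver.
    """
    # rank[r][p] = how highly receiver r ranks proposer p (lower is better)
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in receiver_prefs.items()}
    free = list(proposer_prefs)              # proposers without a partner
    next_choice = {p: 0 for p in proposer_prefs}
    engaged = {}                             # receiver -> proposer
    while free:
        p = free.pop(0)
        r = proposer_prefs[p][next_choice[p]]    # best receiver not yet tried
        next_choice[p] += 1
        if r not in engaged:
            engaged[r] = p                       # tentative acceptance
        elif rank[r][p] < rank[r][engaged[r]]:
            free.append(engaged[r])              # old partner is jilted
            engaged[r] = p
        else:
            free.append(p)                       # proposal rejected
    return {p: r for r, p in engaged.items()}

# Invented preferences: each woman's favorite man prefers the other woman.
men = {"m1": ["w1", "w2"], "m2": ["w2", "w1"]}
women = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}

print("men propose:  ", deferred_acceptance(men, women))
# {'m1': 'w1', 'm2': 'w2'} -- each man gets his first choice
print("women propose:", deferred_acceptance(women, men))
# {'w1': 'm2', 'w2': 'm1'} -- each woman gets her first choice
```

Both matchings are stable, but the women strictly prefer the one they get by doing the proposing, which is exactly the gap I am complaining about.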

I have a few hypotheses, but none of them seem particularly appealing:

1) Women aren’t as liberated as we think.

Pro: There doesn’t seem to be any point in history where this was any different, so this social practice may indeed be a holdover from the Stone Age (i.e. before 1960).

Con: If this is true, then it is a very bad social practice, and we should buck it! This is not a good reason to maintain it!

2) If a woman asks a man out, it reveals information about her. This could be a case of multiple equilibria. Suppose that a small percentage of “crazy types” of both men and women exists, and under no circumstances do you ever want to date one of them. The equilibrium we are in is fully separating for women: the “normal types” always wait for men to ask them out, while the “crazy types” ask men out. Since this is a perfect Bayesian equilibrium, men know that if they get asked out, the woman must be crazy, and so they reject. Knowing this, the “normal” women would never want to ask a man out, since it would involve the cost of effort/rejection with no chance of success.

Suppose the chance that someone is crazy is some very small \epsilon > 0. Consider the game tree:

[Figure: game tree for the asking-out game]

Notice that the crazy women always want to ask the guy out, no matter what the beliefs of the guy are.

There are a few perfect Bayesian equilibria of this game, but I will highlight two. The first is that normal women never ask guys out, and guys never accept. As \epsilon \rightarrow 0, this gives an expected payoff vector of (0,0). No one wants to deviate: only crazy women ask guys out, so a guy would never accept an offer, as that would give him payoff -10 instead of 0; knowing this, normal women will never ask men out, because that would give them payoff -1 instead of 0.

Another equilibrium is that all women ask men out, and men always accept. As \epsilon \rightarrow 0, the expected payoff vector is (2,2). Thus the former is a “bad” equilibrium, while the latter is a “good” one. In other words, we may be stuck in a bad equilibrium.
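Here is a quick sketch comparing the two equilibria. The payoff numbers are the ones quoted above (a rejected ask costs the woman 1, a date with a crazy partner costs the man 10, a normal match gives each side 2, and doing nothing gives 0); what a crazy woman gets out of a date is not specified in the text, so the value used for her below is an assumption.

```python
# Sketch of the two perfect Bayesian equilibria discussed above. Payoffs are
# the ones quoted in the text; the crazy woman's payoff from a date (set to 2
# here) is an assumption made only for illustration.

def expected_payoffs(eps, normal_asks, man_accepts):
    """Expected (woman, man) payoffs, averaging over the woman's type.

    eps          -- probability that the woman is the 'crazy' type
    normal_asks  -- do normal women ask men out in this equilibrium?
    man_accepts  -- do men accept when asked out?
    """
    # Crazy women always ask, whatever the man's beliefs.
    crazy = (2, -10) if man_accepts else (-1, 0)
    if normal_asks:
        normal = (2, 2) if man_accepts else (-1, 0)
    else:
        normal = (0, 0)                      # no ask, no date
    woman = eps * crazy[0] + (1 - eps) * normal[0]
    man = eps * crazy[1] + (1 - eps) * normal[1]
    return (round(woman, 4), round(man, 4))

for eps in (0.1, 0.01, 0.001):
    bad = expected_payoffs(eps, normal_asks=False, man_accepts=False)
    good = expected_payoffs(eps, normal_asks=True, man_accepts=True)
    print(f"eps={eps}: bad equilibrium {bad}, good equilibrium {good}")
# As eps -> 0, the bad equilibrium's payoffs go to (0, 0) and the good
# equilibrium's go to (2, 2), matching the discussion above.
```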

Pro: I think there are definitely some guys out there who think that women who would ask them out are “aggressive” or “desperate,” and so they wouldn’t go out with them.

Con: I don’t think the above sentiment is true in general, at least for guys worth dating! If a guy has that attitude, he’s probably an @$$#0!3 who’s not worth your time.

There may also be some elements of the problem from (1) at work, but these would be harder to overcome, since the scenario here is an equilibrium.

Finally, while this might have some plausibility for people who don’t really know each other yet, I definitely don’t think it holds for people who know each other somewhat better, and who would therefore already know whether the woman in question was crazy. That being said, I would expect a woman who has known the man in question for longer to be proportionally more likely to ask him out (relative to the man asking her), even if it is still less likely overall.

3) Women just aren’t as interested. If he’s willing to ask her out, then fine, she’ll go, but otherwise the cost outweighs the benefit.

Pro: It doesn’t have any glaring theoretical problems.

Con: I want you to look me in the eyes and tell me you think this is actually true.

4) They already do. At least, implicitly, that is. Women can signal interest by trying to spend significant amounts of time with men in whom they have interest, and eventually the guys will realize and ask them out.

Pro: This definitely happens.

Con: I’m not sure it’s sufficient to even out the scorecard. Also, this raises the question: if they do that, why can’t they be explicit?

When I originally showed this to some friends, they liked most of these possibilities (especially (1) and (2)), but they had some additional suggestions:

5) Being asked out is self-validating. To quote my (female) friend who suggested this,

…many girls are insecure and being asked out is validation that you are pretty/interesting/generally awesome enough that someone is willing to go out on a limb and ask you out because they want you that badly. If, on the other hand, the girl makes the first move and the guy says yes it is much less clear to her how much the guy really likes her as opposed to is ambivalent or even pitying her.

Pro: This is true of some women.

Con: Again to quote my friend, “There are lots of very secure, confident girls out there, so why aren’t they asking guys out?”


6) Utility from a relationship is correlated with interest, and women have a shorter window. This one was actually suggested by Marli:

 If asking someone out is a signal of interest level X > x, and higher interest level is correlated with higher longterm/serious relationship probability, then women might be interested in only dating people with high interest level because they have less time in which to date.

Pro: It is true that women are often conceived to have a shorter “window,” in that they are of child-bearing age (for those for whom that matters) for a shorter period.

Con: This doesn’t seem very plausible. Going on a date doesn’t take very long, at least in terms of opportunity cost relative to the length of the “window.” As a friend put it in response,

Obviously one date doesn’t take up much time; the point of screening for interest X > x is to prevent wasting a year or two with someone who wasn’t that into you after all. But then it would seem rational for (e.g.) her to ask him on one date, and then gauge his seriousness from how he acts after that. Other people’s liking of us is endogenous to our liking of them, it really seems silly to assume that “interest” is pre-determined and immutable.

So overall, it seems like there are reasons which explain how it happens, but no good reason why it should happen. I hope other people have better reasons in mind, with which they can enlighten me!


Bumping into people: the awkward dance

You know when you open the door, and you find someone else is trying to get in at the same time, and so you both end up right in each other’s faces? And then you each try to get out of the other’s way, only to go in the same direction and still be in each other’s face? And then you do a sort of weird dance?

I’ve been trying to find some good YouTube clips of this, and while I know they’re out there, I can’t find them in a brief search. But I think you know what I’m talking about.

Well, surprise surprise, we can model this interaction as a game! Each person has two strategies: move left (L), or move right (R).(1)  If they both move in the same direction, then they are still stuck doing the awkward dance, and get payoff -1. Otherwise, they move out of each other’s way, and so they happily go along their way, getting payoff 1.

 1\2      Left       Right
 Left     (-1,-1)    (1,1)
 Right    (1,1)      (-1,-1)
Fig. 1: Awkward dance game

 

There are a couple of pure-strategy Nash equilibria: one player goes left, and the other goes right. But which equilibrium is going to be chosen? A priori, there is no way to tell. Here, social conventions can be useful, such as always moving forward on the right side (for a similar post, see Marli’s post, “When in New York, do as the New Yorkers do”). The problem is when some people didn’t get the memo (*sigh*).

There is a third Nash equilibrium in mixed strategies, where each person chooses a direction with a 50-50 chance. This means that each time they play, there is a 50-50 chance that they bump into each other; but eventually, after perhaps dancing for a while, they will get it right.
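As a quick check on both claims, here is a minimal sketch that enumerates the pure-strategy equilibria of Fig. 1 and verifies the 50-50 indifference condition; the payoff numbers are exactly those in the table.

```python
# Sketch: verify the equilibria of the awkward-dance game in Fig. 1.
# Payoffs: same direction -> (-1, -1); opposite directions -> (1, 1).

import itertools

MOVES = ["L", "R"]

def payoff(a, b):
    return (-1, -1) if a == b else (1, 1)

# Pure-strategy Nash equilibria: neither player gains by deviating alone.
pure_ne = []
for a, b in itertools.product(MOVES, repeat=2):
    u1, u2 = payoff(a, b)
    best1 = all(payoff(a2, b)[0] <= u1 for a2 in MOVES)
    best2 = all(payoff(a, b2)[1] <= u2 for b2 in MOVES)
    if best1 and best2:
        pure_ne.append((a, b))
print("Pure-strategy NE:", pure_ne)     # [('L', 'R'), ('R', 'L')]

# Mixed equilibrium: against an opponent who goes left with probability 1/2,
# both of a player's moves give the same expected payoff, so mixing 50-50
# is a best response for each player.
p = 0.5
u_left = -1 * p + 1 * (1 - p)
u_right = 1 * p + -1 * (1 - p)
print("Expected payoff of L vs R against a 50-50 opponent:", u_left, u_right)
```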

(1) This will all be from the perspective of player 1.


What do kittens have to do with rising tuition?

If you read the Financial Times, you might suspect from an article on Monday that kittens have something to do with rising tuition and the Prisoner’s Dilemma. Let me assure you that they don’t.

A friend of mine sent me the article, which cites a model designed by a team of Bank of America consultants who use the Prisoner’s Dilemma to explain rising college tuition. Here is the graphic they used:

Fig. 1: Things that are pairwise irrelevant to each other: a kitten, the Prisoner’s Dilemma, and rising tuition.

They explain that the college ranking system (assuming two colleges) is a zero-sum game. If one college moves up, the other one moves down. “A college can move up in the rankings if it can raise tuition and therefore invest in the school by improving the facilities, hiring better professors and offering more extracurricular activities.” And therefore, they conclude, this is why college tuitions have been rising and why student debt will continue to rise.

First glaring problem: (raise, raise) is a Pareto-optimal outcome as they’ve set up this game, but what they probably meant to say was that it is a Nash equilibrium. Or maybe they meant to say that “raise” is the best response for each college. Anyway, in this game, (don’t raise, don’t raise) is also Pareto-optimal (but not a Nash equilibrium)!
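To make the distinction concrete, here is a small sketch with hypothetical payoffs (they are not the numbers from the FT graphic): rankings are treated as zero-sum, so raising while the rival stands still is worth +1/-1, and matching moves leave both at 0.

```python
# Hypothetical payoffs for the tuition game, assumed for illustration only:
# rankings are zero-sum, so unilateral raising gains a rank (+1/-1) and
# matching moves change nothing (0, 0).

import itertools

ACTIONS = ["raise", "don't raise"]
PAYOFFS = {
    ("raise", "raise"):             (0, 0),
    ("raise", "don't raise"):       (1, -1),
    ("don't raise", "raise"):       (-1, 1),
    ("don't raise", "don't raise"): (0, 0),
}

def is_nash(a, b):
    u1, u2 = PAYOFFS[(a, b)]
    return (all(PAYOFFS[(a2, b)][0] <= u1 for a2 in ACTIONS) and
            all(PAYOFFS[(a, b2)][1] <= u2 for b2 in ACTIONS))

def is_pareto_optimal(a, b):
    u = PAYOFFS[(a, b)]
    return not any(v[0] >= u[0] and v[1] >= u[1] and (v[0] > u[0] or v[1] > u[1])
                   for v in PAYOFFS.values())

for a, b in itertools.product(ACTIONS, repeat=2):
    print((a, b), "Nash:", is_nash(a, b), "Pareto-optimal:", is_pareto_optimal(a, b))
# Only (raise, raise) is a Nash equilibrium, while both (raise, raise) and
# (don't raise, don't raise) are Pareto-optimal; since the "cooperative"
# outcome is not strictly better, this isn't a Prisoner's Dilemma in the
# strong sense discussed in the lessons below.
```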

Secondly, they’re trying to illustrate a kind of ratcheting problem: both colleges raise tuition to raise the quality of the resources at the school, in order to maintain their rankings. But this means it’s a repeated game. In repeated games with a finite horizon, defection happens at every step, but in infinite-horizon games, cooperation can occur. Now, let’s just assume that this is an infinite-horizon game, which is what the folks at B of A are assuming when they predict that college tuition will keep rising indefinitely, beyond mere inflation. What incentive is there to cooperate and keep tuition low? According to this game, none. And according to what you might expect in reality, none – is it plausible that, in the absence of antitrust laws, colleges would want to collude to keep tuition low, and that because they can’t collude, they are doomed to raise tuition every year against their wills? Nope.

Then, we come to the matter that this game can’t in fact be infinite horizon as it is presented here. The simple reason is that, even if education is becoming a larger and larger share of household spending, and even if the student is taking out loans and borrowing against his future expected earnings, he still has a budget set that he can’t exceed. Furthermore, the demand for attending a particular university should drop as soon as the tuition exceeds the expected lifetime earnings/utility advantage of whatever the student sees himself doing in 4 (or more) years over the alternative. So, there will be some stage at which the utilities change and not raising tuition becomes the best response for each school. In other words, it’s a finite-stage game, and the increase will stop somewhere, namely, where price theory says it should. [1]

Finally, it’s not clear that increasing tuition actually has such a strong effect on school rankings or that colleges are in such a huge rankings race. And, even if students at colleges outside the very top schools tend to choose a college based on things like food quality and dorm rooms, students don’t demand infinitely luxurious college experiences at infinite prices. Evidence: Columbia students feel they’re overpaying for food, and feel entitled to steal Nutella.

The lessons here are these: It’s not a Prisoner’s Dilemma in a strong sense if the cooperative result isn’t strictly preferred to the Nash equilibrium. Don’t model a tenuous game where the game isn’t relevant to the ultimate result (tuitions will stop rising at some point). Don’t assume that trends are linear, when they are definitively not linear. And, don’t put a kitten on your figure just because you have some white space — it really doesn’t help.

———————

[1] Actually, the game doesn’t have to be finite horizon. Suppose the upper limit that the colleges know they can charge is A, and the tuition at stage t is B_t. Then, at each stage, they could increase tuition by (0.5)*(A - B_{t-1}). But, as the tuition approaches A, the increases become smaller and smaller until they pretty much vanish, which would be the same as stopping, because there is a point at which the tuition stops affecting rank (a college isn’t going to improve its rank by charging each student an extra cent).
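For concreteness, here is a tiny sketch of that tuition path; the ceiling A = 100 and the starting tuition of 40 are made-up numbers.

```python
# Sketch of the footnote's tuition path: each stage raises tuition by half
# the remaining gap to the ceiling A, so B_t converges to A geometrically.
# A = 100 and the starting tuition of 40 are invented for illustration.

A, B = 100.0, 40.0
for year in range(1, 11):
    B += 0.5 * (A - B)          # raise by half the remaining headroom
    print(year, round(B, 4))
# The yearly increases shrink toward zero, so tuition flattens out near A
# instead of rising without bound.
```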


Clearly, Sicilians do not know game theory

Relax, I’m not referring to actual Sicilians. I’m referring, of course, to Vizzini from the movie “The Princess Bride.” The hero, Westley, is trying to rescue his true love, Buttercup, from the clutches of Vizzini and his henchmen, Inigo Montoya and Fezzik. After outdueling Inigo and knocking out Fezzik, he overtakes Vizzini, who threatens to kill Buttercup if Westley comes any closer. This leads to an impasse: Vizzini cannot escape, but Westley cannot free Buttercup. So, Westley challenges Vizzini to a “battle of wits”:

The structure of the game is simple: there are two glasses of wine. Westley has placed poison (in the form of the odorless, tasteless, yet deadly iocaine powder) somewhere in the two cups, and allows Vizzini to choose which to take. Afterwards, they drink, and they see “who is right, and who is dead.”

Presumably, when Vizzini encounters the game, he is supposed to think that Westley has restricted himself to poisoning one of the glasses. In this case, we have a standard extensive-form game of imperfect information, which is equivalent to a normal-form game:

 Vizzini\Westley        Poison Westley’s cup     Poison Vizzini’s cup
 Drink Westley’s cup    (Dead, Right)            (Right, Dead)
 Drink Vizzini’s cup    (Right, Dead)            (Dead, Right)
Fig. 1: Battle of Wits (outcomes)

Immediately we see that this game is symmetric (or, more precisely, anti-symmetric), in that whatever doesn’t happen to one player happens to the other. In this way, this game is strategically equivalent to the game of matching pennies. This lets us know right away that the equilibrium outcome is for Westley to randomize 50-50 between the choices: do anything else, and Vizzini has a better chance of winning if he plays optimally, as he could just choose the cup that is less likely to have the poison. Similarly, if Vizzini was a priori less likely to choose a given cup, then that is where Westley should have put the poison.
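Here is a minimal sketch of that matching-pennies logic: if Westley poisons his own cup with probability p, an optimally playing Vizzini simply drinks whichever cup is less likely to be poisoned, so any p other than 1/2 helps Vizzini.

```python
# Sketch of the matching-pennies logic above. Suppose Westley poisons his
# own cup with probability p; Vizzini, playing optimally, drinks whichever
# cup is less likely to hold the poison.

def vizzini_survival(p):
    """Vizzini's best survival probability against Westley's mixture p."""
    # Drinking Westley's cup survives with probability 1 - p;
    # drinking his own cup survives with probability p.
    return max(1 - p, p)

for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"p = {p:.2f} -> Vizzini survives with probability {vizzini_survival(p):.2f}")
# The survival probability is minimized (at 1/2) only when p = 1/2, which is
# why Westley should randomize 50-50 in the game as Vizzini understands it.
```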

Yet Vizzini does not reason this way. Instead, he attempts to make vacuous arguments about the psyche of Westley, namely, where Westley would have put the poison. He may be reasoning as if Westley is a behavioral type, but clearly, that’s not the best thing to do in a “battle of wits,” where presumably everyone is rational. Instead of making the game-theoretic choice based on mixed strategies, he tries to find an optimal pure strategy.

In the end, Vizzini takes his own cup, which indeed contains the poison. As it turns out, both cups contained poison: Westley had built up a tolerance to iocaine, and so it didn’t make any difference which cup was chosen. So in a way, Westley did make Vizzini indifferent between the two outcomes; it’s just that Vizzini was mistaken about which game was being played. In reality, no matter what, Vizzini would be dead, and Westley would win. This makes one wonder whether Vizzini should have suspected something was amiss when Westley proposed the game in the first place, and even more so when Westley fell for such an obvious trick of misdirection meant to get him to look the other way (see 3:04 in the video). But no matter – while Vizzini may have been smarter than Plato, Aristotle, and Socrates, he could have used some of the 20th-century wisdom of John Nash.


The Monty Hall Deception

When I was in middle school, I consumed a lot of typical nerd literature like Richard Feynman’s “Surely You’re Joking, Mr. Feynman” and anthologies of mathematics puzzles from the Scientific American by Martin Gardner. In the latter, I first encountered the Monty Hall Problem, and it goes something like this:

Suppose you’re on a game show, and you’re given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what’s behind the doors, opens another door, say No. 3, which has a goat. He then says to you, “Do you want to pick door No. 2?” Is it to your advantage to switch your choice?

It turns out that, yes, it is always to your advantage to switch your choice. This is a solution that has been notoriously difficult for people to wrap their heads around. After all, when you picked a door, the probability of having picked the door with the car was 1/3, and after a door is opened, there are still a car and a goat behind the remaining two doors – it seems as though the probability of choosing the door with the car ought to be 1/2 regardless of the door chosen.

The Monty Hall Paradox is in fact not a paradox at all, but rather just some clever sleight of hand. The trick is that people are drawn to the fact that only two doors, rather than three, remain, and assume that the host’s having opened a door is favorable to the player. People tend not to realize that the game has imperfect information – the player does not know where on the game tree he is, whereas the host does. Additionally, people assume that the host has no stake in the game (and this is not unreasonable, because the problem does not explicitly describe a parsimonious host! On the other hand, intuitively, we know that the host isn’t going to ruin the game by opening the door with the car.) So, if we assume that the host is profit-maximizing and we model the problem as an extensive-form game with imperfect information, the conditional probabilities become easy to see.

Now, just for fun, we’ll assign some utilities to the outcomes. What is a goat worth? According to a popular Passover song in Aramaic, a (small) goat is worth about 2 Zuz, and according to the traditional Jewish prenuptial document, a wife is worth about 200 Zuz. So, a goat is worth about 1/100th of a wife. I asked my roommate, Anna, how many cars she thought a wife was worth, and she determined that a wife was worth three cars. By transitivity, then, a car is worth about 33 goats. (I think goats have become quite a bit less valuable since that song was written, or maybe goats back then were a lot better than our goats.) So, if the player wins the game, he will walk away with a utility of 33, and the host will walk away with the 2 goats.

[Figure: Monty Hall game tree]

In this game, the light gray branches are dominated because the host has no incentive to open the door that the player has already chosen, and the dark gray branches are dominated because, of the remaining two doors, the host would not open the door that has the car. We can tell that in the top branch, the host has 2 possible choices of doors to open, whereas in the lower two branches, the host is constrained to only one door (since, if the player has chosen a goat door, there is only one goat door left to open).

So, since the player has no idea after the first stage which door has the car, we assume he picks door No. 1 (as in the game). If he observes that the host opens door 3, he would know that there are two cases in which the host opens door 3: in the world where the car is behind door 2, the host chooses door 3 100% of the time, and in the world where the car is behind door 1, the host chooses door 3 50% of the time. It’s actually twice as likely that we are on the 100% branch as on the 50% branch – and that’s the branch where the car is hidden behind the other door.
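A quick Monte Carlo check of that conditional-probability argument (this simulation is mine, with the host assumed to choose uniformly at random whenever he has two goat doors to pick from):

```python
# Simulation of the Monty Hall setup above: the player picks door 1, the host
# opens a goat door (uniformly at random when he has a choice), and we
# condition on the host having opened door 3.

import random

def trial():
    car = random.randint(1, 3)
    # The host opens a door that is neither the player's pick (door 1) nor the car.
    opened = random.choice([d for d in (2, 3) if d != car])
    return car, opened

wins_switch = wins_stay = 0
for _ in range(200_000):
    car, opened = trial()
    if opened != 3:
        continue                  # condition on observing door 3 opened
    wins_switch += (car == 2)     # switching means taking the unopened door 2
    wins_stay += (car == 1)       # staying means keeping door 1
total = wins_switch + wins_stay
print("P(win | switch, host opened door 3):", round(wins_switch / total, 3))  # about 2/3
print("P(win | stay,   host opened door 3):", round(wins_stay / total, 3))    # about 1/3
```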

What if we know that the host has opened a door, but we don’t know which one? Then we have nothing new to condition on – since we don’t observe which door was opened, we gain no new information, and switching doors would not help.