This is something I’ve asked a few people about. It seems odd that, in our modern, post-feminist age, it is almost always men who do the asking out. This is bad for both men and women. For men, it puts a lot of pressure on them to make all of the moves. For women, I cite Roth and Sotomayor’s classic textbook on matching, which shows that, though the outcome when men always choose partners is stable, it is the worst possible stable outcome for women. That is, women could get better guys to date if they made the moves.
I have a few hypotheses, but none of them seem particularly appealing:
1) Women aren’t as liberated as we think.
Pro: There doesn’t seem to be any point in history where this was any different, so this social practice may indeed be a holdover from the Stone Age (i.e. before 1960).
Con: If this is true, then it is a very bad social practice, and we should buck it! This is not a good reason to maintain it!
2) If a woman asks a man out, it reveals information about her. This could be a case of multiple equilibria. Suppose that a small percentage of “crazy types” of both men and women exists, and under no circumstances do you ever want to date one of them. The equilibrium in which we are is fully separating for women, where the “normal types” always wait for men to ask them out, while the “crazy types” ask men out. Since this is a perfect Bayesian equilibrium, men know that if they get asked out, the woman must be crazy, and so they reject. Knowing this, the “normal” women would never want to ask a man out, since it would involve the cost of effort/rejection with no chance of success.
Suppose the chance that someone is crazy is some very small probability ε. Consider the game tree:
Notice that the crazy women always want to ask the guy out, no matter what the beliefs of the guy are.
There are a few perfect Bayesian equilibria of this game, but I will highlight two. The first is that the normal women never ask guys out, and guys never accept. As ε → 0, this equilibrium leaves essentially everyone without a date. No one wants to deviate: since only crazy women ask guys out, a guy would never accept an offer, as that would mean dating a crazy type; knowing this, normal women will never ask men out, because asking would only incur the cost of effort/rejection with no chance of success.
Another equilibrium is that all women ask men out, and men always accept. As ε → 0, almost every offer comes from a normal type, so nearly everyone ends up happily paired. Thus the former is a “bad” equilibrium, while the latter is a “good” one. In other words, we may be stuck in a bad equilibrium.
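To make the multiple-equilibria point concrete, here is a tiny numerical sketch. The payoff numbers (+1 for dating a normal type, −2 for dating a crazy type, −1 for asking and being rejected) are my own illustrative assumptions, not taken from the game tree above:

```python
# Sketch of the two perfect Bayesian equilibria in the "asking out" game.
# All payoff numbers are illustrative assumptions, not from the post.
EPS = 0.01  # prior probability that a woman is a "crazy type"

# Man's payoffs: accept a normal type +1, accept a crazy type -2, reject 0.
def man_accept_value(p_crazy):
    return (1 - p_crazy) * 1 + p_crazy * (-2)

# --- Bad equilibrium: only crazy types ask, men always reject. ---
p_crazy_given_ask = 1.0          # Bayes: only crazy types are asking
assert man_accept_value(p_crazy_given_ask) < 0   # rejecting is optimal
# A normal woman who deviates and asks gets rejected: payoff -1 < 0 (waiting),
# so she prefers to wait -- the equilibrium is self-enforcing.

# --- Good equilibrium: everyone asks, men always accept. ---
p_crazy_given_ask = EPS          # asking reveals nothing
assert man_accept_value(p_crazy_given_ask) > 0   # accepting is optimal
# A normal woman who asks is accepted: payoff +1 > 0, so asking is optimal too.
print("both equilibria check out for eps =", EPS)
```

Both belief systems are internally consistent, which is exactly why we can be stuck in the bad one.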
Pro: I think that there are definitely some guys out there who think that women who would ask them out are “aggressive” or “desperate,” and so they wouldn’t go out with them.
Con: I don’t think the above sentiment is true in general, at least for guys worth dating! If a guy has that attitude, he’s probably an @$$#0!3 who’s not worth your time.
There may also be some of the same elements as in problem (1), but these would be harder to overcome, as the scenario here is an equilibrium.
Finally, while this might have some plausibility for people who don’t really know each other yet, I definitely don’t think it holds for people who know each other somewhat better, and who would therefore already know whether the woman in question was crazy. That being said, I would expect a woman who has known the man in question for longer to be proportionally more likely (relative to the man) to ask him out, even if she is still less likely to do so overall.
3) Women just aren’t as interested. If he’s willing to ask her out, then fine, she’ll go, but otherwise the cost outweighs the benefit.
Pro: It doesn’t have any glaring theoretical problems.
Con: I want you to look me in the eyes and tell me you think this is actually true.
4) They already do. At least, implicitly, that is. Women can signal interest by trying to spend significant amounts of time with men in whom they have interest, and eventually the guys will realize and ask them out.
Pro: This definitely happens.
Con: I’m not sure it’s sufficient to even out the scorecard. Also, this raises the question: if they do that, why can’t they be explicit?
When I originally showed this to some friends, they liked most of these possibilities (especially (1) and (2)), but they had some additional suggestions:
5) Being asked out is self-validating. To quote my (female) friend who suggested this,
…many girls are insecure and being asked out is validation that you are pretty/interesting/generally awesome enough that someone is willing to go out on a limb and ask you out because they want you that badly. If, on the other hand, the girl makes the first move and the guy says yes it is much less clear to her how much the guy really likes her as opposed to is ambivalent or even pitying her.
Pro: This is true of some women.
Con: Again to quote my friend, “There are lots of very secure, confident girls out there, so why aren’t they asking guys out?”
6) Utility from a relationship is correlated with interest, and women have a shorter window. This one is actually suggested by Marli:
If asking someone out is a signal of interest level, and a higher interest level is correlated with a higher probability of a long-term/serious relationship, then women might be interested in only dating people with a high interest level because they have less time in which to date.
Pro: It is true that women are often thought to have a shorter “window,” in that they are of child-bearing age (for those for whom that matters) for a shorter period.
Con: This doesn’t seem very plausible. Going on a date doesn’t take very long, at least in terms of opportunity cost relative to the length of the “window.” As a friend put it in response,
Obviously one date doesn’t take up much time; the point of screening for interest is to prevent wasting a year or two with someone who wasn’t that into you after all. But then it would seem rational for (e.g.) her to ask him on one date, and then gauge his seriousness from how he acts after that. Other people’s liking of us is endogenous to our liking of them; it really seems silly to assume that “interest” is pre-determined and immutable.
So overall, it seems like there are reasons which explain how it happens, but no good reason why it should happen. I hope other people have better reasons in mind, with which they can enlighten me!
When I was in middle school, I consumed a lot of typical nerd literature like Richard Feynman’s “Surely You’re Joking, Mr. Feynman” and anthologies of mathematics puzzles from Scientific American by Martin Gardner. In the latter, I first encountered the Monty Hall Problem, which goes something like this:
Suppose you’re on a game show, and you’re given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what’s behind the doors, opens another door, say No. 3, which has a goat. He then says to you, “Do you want to pick door No. 2?” Is it to your advantage to switch your choice?
It turns out that, yes, it is always to your advantage to switch your choice. This is a solution that has been notoriously difficult for people to wrap their heads around. After all, when you picked a door, the probability of having picked the door with the car was 1/3, and after a door was opened, there would still be a car and a goat behind the remaining two doors – it seems as though the probability of choosing the door with the car ought to be 1/2 regardless of the door chosen.
The Monty Hall Paradox is in fact not a paradox at all, but rather just some clever sleight of hand. The trick is that people are drawn to the fact that only two doors rather than three remain, and assume that the host’s having opened a door is favorable to the player. People tend not to realize that the game has imperfect information – the player does not know where on the game tree he is, whereas the host does. Additionally, people assume that the host has no stake in the game (and this is not unreasonable, because the problem does not explicitly describe a parsimonious host! On the other hand, intuitively, we know that the host isn’t going to ruin the game by opening the door with the car). So, if we assume that the host is profit-maximizing and we model the problem as an extensive-form game with imperfect information, then the conditional probabilities become easy to see.
Now, just for fun, we’ll assign some utilities to the outcomes. What is a goat worth? According to a popular Passover song in Aramaic, a (small) goat is worth about 2 Zuz, and according to the traditional Jewish prenuptial document, a wife is worth about 200 Zuz. So, a goat is worth about 1/100th of a wife. I asked my roommate, Anna, how many cars she thought a wife was worth, and she determined that a wife was worth three cars. By transitivity, then, a car is worth about 33 goats. (I think goats have become quite a bit less valuable since that song was written, or maybe goats back then were a lot better than our goats.) So, if the player wins the game, he will walk away with a utility of 33, and the host will walk away with the 2 goats.
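Just to spell out the conversion chain (the exchange rates are the post’s playful assumptions, not market data), the arithmetic works out as follows:

```python
from fractions import Fraction

# The zuz arithmetic above, made explicit. A goat is 2 zuz, a wife is
# 200 zuz, and (per Anna) a wife is worth three cars.
goat = Fraction(2)      # zuz
wife = Fraction(200)    # zuz
car = wife / 3          # zuz per car

assert goat / wife == Fraction(1, 100)   # a goat is 1/100 of a wife
print(f"one car is worth about {float(car / goat):.1f} goats")
```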
In this game, the light gray branches are dominated because the host has no incentive to open the door that the player has already chosen, and the dark gray branches are dominated because, of the remaining two doors, the host would not open the door that has the car. We can tell that in the top branch, the host has 2 possible choices of doors to open, whereas in the lower two branches, the host is constrained to only one door (since, if the player has chosen a goat door, there is only one goat door left to open).
So, since the player has no idea after the first stage which door has the car, we assume he picks door No. 1 (as in the game). If he observes that the host opens door 3, he should consider the two cases in which the host opens door 3: in the world where the car is behind door 2, the host chooses door 3 100% of the time, and in the world where the car is behind door 1, the host chooses door 3 50% of the time. It’s actually twice as likely that we are on the 100% branch as that we are on the 50% branch – and that’s the branch where the car is hidden behind the other door.
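A quick Monte Carlo simulation (a sketch of mine, not part of the original argument) confirms the conditional-probability reasoning:

```python
import random

# Monte Carlo check of the Monty Hall argument: the player picks door 0,
# the host opens a goat door at random among those allowed, and we compare
# win rates for staying versus switching.
def play(switch, rng):
    car = rng.randrange(3)
    pick = 0
    # Host opens a door that is neither the pick nor the car.
    openable = [d for d in range(3) if d != pick and d != car]
    opened = rng.choice(openable)
    if switch:
        pick = next(d for d in range(3) if d != pick and d != opened)
    return pick == car

rng = random.Random(42)
n = 100_000
stay = sum(play(False, rng) for _ in range(n)) / n
swap = sum(play(True, rng) for _ in range(n)) / n
print(f"stay: {stay:.3f}, switch: {swap:.3f}")  # roughly 1/3 vs 2/3
```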
What if we know that the host has opened a door, but we don’t know which one? Then, we can’t condition on a prior, because we don’t know what the prior is – we don’t get any new information by observing which door was opened, and switching doors would not help.
Let me explain. Among Orthodox Jews, many of the women refuse to wear pants – they will only wear skirts of knee length or longer. Yet the reason for this practice is not so clear. Many reasons are given by different people. I’ll list a few, along with a sentence about why I don’t think they make so much sense:
(i) Modesty: Pants are immodest because they are form-fitting to the leg. Yet there are many other parts of the body for which clothes that are just as form-fitting are fine, yet are equally not allowed to be uncovered under Orthodox Jewish law. Besides, no one said you had to be wearing skinny-jeans.
(ii) Men’s clothing: Under Biblical law, women may not wear men’s clothing. Yet by now, it is normal for women to wear pants. Indeed, most Orthodox rabbis agree that this is not the main reason.
(iii) Suggestive: Certain parts of the pants might be sexually suggestive. If this is a problem for women, this would then definitely be a problem for men.
You might be asking yourself at this point: “What the heck does this have to do with game theory?” Well, I actually have a good reason for why many Orthodox Jewish women wear skirts, and it involves a perfect Bayesian equilibrium of an extensive form game.
We divide Orthodox Jews into two classes, each of which holds different communal standards. The first group adheres to a rather stricter standard, which may include certain norms about interactions with men, more stringencies regarding keeping kosher, etc. The second group is a little bit more laid back, though they may also be fully observant according to what they believe is necessary. I’m not making any claims about which one is better – I don’t want to go there – just bear with me.
Orthodox Jewish women in each class prefer others to recognize that they are in the correct class. This is perfectly understandable – one in the first class would not like others to offer food that didn’t meet their standards of keeping kosher, or would not appreciate certain advances from men; one in the second class might not appreciate being pressured into adhering to (from their perspective) unnecessary strictures. We therefore assign each class a payoff of C for being correctly labeled, and W for being incorrectly labeled, where C > W. For similar reasons, other Orthodox Jews would want to correctly label Orthodox Jewish women into these two classes.
To complete the model, we condition the beliefs of other Orthodox Jews on the signal of whether a given woman wears only skirts (we assume that others can tell whether she wears only skirts or sometimes wears pants). If yes, they place her in the first class; if no, they place her in the second. Since wearing only skirts is a restriction on the fashion choices of women, we’ll assign it a modest loss of payoff (S), where C – W > S.
Given their belief structures, other Orthodox Jews will assume that if you wear a skirt, you’re in Class #1; if you sometimes wear pants, you’re in Class #2. Knowing that others have this belief, the best strategy for Orthodox Jewish women is to actually always wear a skirt if they are in Class #1, while to not bother if they are in Class #2. In this way, the beliefs of others are self-fulfilling in the dress code of Orthodox Jewish women.
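A minimal check of this equilibrium, with arbitrary payoff values of my choosing that satisfy C > W and C – W > S:

```python
# Check that the separating dress-code profile is an equilibrium whenever
# C > W and C - W > S (numbers below are arbitrary, chosen to satisfy both).
C, W, S = 10, 2, 3   # correct label, wrong label, cost of skirts-only
assert C > W and C - W > S

# Class 1 (stricter standard): skirts-only -> labeled correctly.
stick_1   = C - S      # follow the norm, pay the fashion cost
deviate_1 = W          # wear pants sometimes -> mislabeled as Class 2
assert stick_1 > deviate_1   # equivalent to C - W > S

# Class 2: pants sometimes -> labeled correctly, no skirt cost.
stick_2   = C
deviate_2 = W - S      # skirts-only -> mislabeled as Class 1, plus the cost
assert stick_2 > deviate_2
print("neither class wants to deviate")
```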
Of course, this model isn’t always true – I’m sure there are some people who have strong reasons (aside from those mentioned here) to choose to deviate from the equilibrium path described in this model. Yet I think this actually, to a large extent, gives the most compelling, and most credible, reason for why this dress code exists.
Last week, I showed that if the villains are perfectly rational, and this is “common knowledge,” then the villains will be best off by just surrendering immediately without killing any hostages. But the rationality assumption is a big one – let’s see what happens if we drop it.
Once there’s a good chance that the villains are “crazy,” even criminals who are perfectly sane will pretend to be crazy so they can extract money as ransom. We can then model this as a Bayesian game, with N periods corresponding to the hostages that are holed up in the bank. Each period will consist of two stages: in the first, the SWAT team decides whether or not to pay up; in the second, the villains choose whether or not to kill a hostage; if they don’t, it’s tantamount to surrender. If they run out of hostages to kill, they are forced to surrender, and the surrender outcome gets progressively worse as they kill more hostages.
The payoffs are described as follows, writing D for the value the SWAT team places on a hostage’s life, P for the ransom, S for the villains’ payoff from surrendering cleanly, and c for how much each kill worsens the villains’ eventual surrender:
Each dead hostage: gives –D to the SWAT team, and –c to the villains if they end up surrendering.
Surrender: gives S to the villains.
Pay up: gives –P to the SWAT team, P to the villains, where D > P > S. (1)
Finally, we assume that there is a probability z (initially) that the villains are nuts.
We’re going to construct a mixed strategy solution to this problem, which will yield a perfect Bayesian equilibrium. That is, in a given period i, the SWAT team pays up with probability p_i, and the villains execute a hostage with probability q_i. For those who are unfamiliar with mixed strategies, the basic idea is that both sides are indifferent between (at least) two options, so they may as well flip a coin as to which one to do. The trick is that the coin is weighted so as to make the other side indifferent as well. Thus they will flip their coins so as to make the first side indifferent. Hence this forms a Nash equilibrium – neither side can benefit by unilaterally changing their strategy.
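For readers new to mixed strategies, here is the “weighted coin” computation on an unrelated 2×2 example (battle-of-the-sexes payoffs, which I chose purely for illustration):

```python
from fractions import Fraction

# Mixed-strategy logic on a 2x2 example: each side mixes so as to make
# the *other* side indifferent between its two actions.
A = [[2, 0], [0, 1]]   # row player's payoffs
B = [[1, 0], [0, 2]]   # column player's payoffs

# p = Pr(row plays action 0) must equalize the column player's two payoffs:
#   p*B[0][0] + (1-p)*B[1][0] == p*B[0][1] + (1-p)*B[1][1]
p = Fraction(B[1][1] - B[1][0], B[0][0] - B[1][0] - B[0][1] + B[1][1])
# q = Pr(col plays action 0) must equalize the row player's two payoffs.
q = Fraction(A[1][1] - A[0][1], A[0][0] - A[0][1] - A[1][0] + A[1][1])

# Verify the column player's indifference directly:
col0 = p * B[0][0] + (1 - p) * B[1][0]
col1 = p * B[0][1] + (1 - p) * B[1][1]
assert col0 == col1
print(f"row mixes p={p}, column mixes q={q}")
```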
The derivation is a little drawn-out, so I’m going to break it into steps:
(i) If the SWAT team always pays up in some period i, then the villains may as well execute their hostages until they reach that stage; after all, this guarantees them an automatic victory. But if they will do that, then the SWAT team should pay up immediately (i.e. in period 1): since they’ll lose anyway, they may as well save the lives of the hostages.
(ii) Conversely, if the villains will always execute a hostage at a certain stage, then the SWAT team should always pay up then. But then we run into the same issue as in (i), so the SWAT team will end up paying at the beginning. Combined, (i) and (ii) limit the types of equilibria we have to analyze.
(iii) If there is a period i at which it is known that the SWAT team will not pay up, no matter what, then the normal (not crazy) types, knowing this, will surrender in period i. But this possibility leads to an inconsistency. Knowing that the normal types will do this, the SWAT team will believe that anyone who hasn’t surrendered must be crazy. If so, it is a best response to pay up in order to avoid more casualties.
(iv) Thus, from (iii), there cannot be a perfect Bayesian equilibrium in which the SWAT team will maintain the siege to the bitter end – they must always cave in to the villains’ demands with some positive probability. Indeed, we see from (i) and (iii) that either they pay up immediately, or they pay up in each period with some nonzero likelihood.
(v) There is no benefit to the villains in executing the last hostage. Thus, if they are normal, they will not execute, and so this hostage will be killed if and only if the villains are crazy; call the probability of this, as of period N, μ_N.
(vi) In period N, by (iv), the SWAT team must still be indifferent between paying up and maintaining the siege. Paying up costs them P, while maintaining the siege costs them D exactly when the villains are crazy, so –P = –μ_N D. This implies that
μ_N = P/D.
(vii) In all periods i < N, the normal-type villains must be indifferent between surrendering and killing a hostage. Thus, if their expected payoff, should the game reach the second stage of period i, is V_i, then we get
V_i = S – (i – 1)c = p_{i+1} P + (1 – p_{i+1}) V_{i+1}.
But we know that since the villains will be indifferent as well in the next period (unless i + 1 = N), V_{i+1} = S – ic. In any case, this formula still holds for i + 1 = N, since the normal types surrender in period N anyway. Thus we get, after substitution,
p_{i+1} = c / (P – S + ic).
As expected, this probability goes down as more hostages are executed – after all, there is less of a risk of more future casualties, since there are fewer hostages left – a dark thought, indeed.
(viii) We now derive the probability that the villains execute a hostage in the periods other than the last. The SWAT team is indifferent between maintaining the siege and paying up. Hence if their expected payoff at the beginning of period i is W_i, and the villains kill with total probability Q_i, then
W_i = –P = Q_i (–D + W_{i+1}).
But because the SWAT team will also be indifferent between paying up and maintaining the siege in the next period, W_{i+1} = –P. Hence
Q_i = P / (P + D).
This value is constant! Note that it is decreasing in D – if the hostage is more valuable to the SWAT team, then the likelihood that the villains will kill another doesn’t have to be as high in order to deter the SWAT team from maintaining the siege. Conversely, if the hostage is less valuable, the villains will have to be more likely to kill the hostage to show they mean business.
(ix) Combining (vi) and (viii) tells us that the proportion of crazy types at the beginning of any period i (after the first) in the perfect Bayesian equilibrium must be μ_i = (P/D)[P/(P + D)]^(N – i). Thus, if at period 1 the initial probability that the villains are crazy, z, is greater than this amount (evaluated at i = 1), then the risk that they are nuts (or wannabe-nuts) is too high, and the SWAT team should fold immediately. Otherwise, they should maintain the siege in the first period, while the villains execute the hostages with the exact probability that makes the likelihood that they are nuts, as of the start of period 2, exactly μ_2. Afterwards, the villains follow the strategy given by (vi) and (viii), while the SWAT team follows the strategy given by (vii).
(x) As a final note, we see from (ix) that as N → ∞, the threshold for the initial likelihood of craziness necessary to force a SWAT team payout goes to 0. This makes sense – there’s a larger potential for more casualties as the number of hostages goes up.
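As a sanity check on the derivation, the following sketch verifies the two indifference conditions numerically, using arbitrary payoff values of my choosing that satisfy D > P > S:

```python
# Numerical sanity check of the hostage-game equilibrium derived above,
# with invented payoffs: D = hostage's value to the SWAT team, P = ransom,
# S = clean-surrender payoff, c = per-kill surrender penalty, N = hostages.
D, P, S, c, N = 10.0, 4.0, 1.0, 1.0, 5
assert D > P > S

Q = P / (P + D)                    # villains' total kill probability, step (viii)
def p_pay(i):                      # SWAT pay-up probability in period i+1, step (vii)
    return c / (P - S + i * c)

# Villains' indifference in each period i < N: surrendering now equals
# killing and facing next period's (pay up / surrender) lottery.
for i in range(1, N):
    surrender_now = S - (i - 1) * c
    surrender_next = S - i * c
    kill_now = p_pay(i) * P + (1 - p_pay(i)) * surrender_next
    assert abs(surrender_now - kill_now) < 1e-12

# SWAT indifference: paying -P equals maintaining the siege one more period.
maintain = Q * (-D + (-P)) + (1 - Q) * 0.0
assert abs(maintain - (-P)) < 1e-12

# Step (ix): posterior that the villains are crazy at the start of period i.
mu = [(P / D) * Q ** (N - i) for i in range(1, N + 1)]
print("threshold for folding immediately:", mu[0])
```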
(1) Footnote: dropping this assumption only slightly changes the outcome. Also, we exclude the possibility of storming the bank, since it’s similar in concept to the other options.
I’m no expert at finance, so I can’t tell you in general how you should invest your money. What I can tell you, though, is why insider trading is such a harmful phenomenon, and why it should be curtailed to the best of the government’s abilities. To illustrate this, I’m going to use a simple Bayesian model.
To begin with, it is reasonably clear why someone would execute an inside trade. If some executive-honcho dude has some extra knowledge, not generally available to the public, which leads him (or her) to conclude that a particular financial asset (we’ll assume it’s a stock) is worth some amount different from its market value, he can use this to his or her advantage. If the stock is undervalued, he can purchase the asset and make money for nothing. If it’s overvalued, he can short it, again making money for nothing. Either way, he wins at the expense of other financial agents.
So far, pretty clear. But hold on a second: investors know that executives can do this, and so anticipate the relevant market moves. So, if the executive sells, the market for the stock responds by lowering the price; if he buys, it rises by a certain amount as well. Does this ability help the other investors to avoid the pitfalls of insider trading?
Unfortunately, it does not, for a few reasons. The first is pretty obvious from the definition of insider trading – though the market may expect the stock to be worth a certain amount more or less than before, it can’t know the exact amount. Only the executive, with the inside scoop, knows this. Hence he will be able to capitalize on any difference between what the asset actually should be worth (based on all the information), and what investors expect it to be worth based on their more limited information (which includes the fact that the insider is taking action).
Second, it is important to take note of the advantage in timing that the executive has. Since the market can only correct itself in response to his or her moves, he or she gets to make the first move. So, if the asset is worth more, he can buy more shares before others realize what is going on. Similarly, he can sell before others notice that he or she thinks the stock is overvalued, still making a profit through these means as well.
Finally, we come to what I think is the most interesting problem: the executive can take advantage of investor expectations, even when the fundamentals of the company remain the same as perceived in public. Thus, if investors believe that, say, a sale by the executive indicates that the stock is worth less than previously thought, they will reduce the price at which it trades. But if so, the executive, knowing that the other investors have this belief, can sell for no reason whatsoever, and then repurchase at a lower price, making a net profit without any loss of the shares that he or she holds. Similarly, if investors expect the asset to be worth more if the executive buys more shares, then the executive can buy shares and resell at the higher price.
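A toy numerical illustration of this last maneuver (all numbers are invented, and I simplify by assuming the executive sells at the pre-adjustment price before the market reacts):

```python
# Toy version of the third problem: the executive exploits the market's
# *belief* about sales, with no change in fundamentals. Numbers invented.
shares = 1000
p_before = 50.0        # market price before the executive acts
p_after_sale = 48.0    # price after the market observes the executive selling

sell_revenue = shares * p_before       # sell at the old price
buyback_cost = shares * p_after_sale   # repurchase after the belief-driven drop
profit = sell_revenue - buyback_cost   # same holdings, riskless gain
print(f"riskless profit with unchanged holdings: ${profit:.2f}")
```

The executive ends with exactly the shares he started with, plus the difference created purely by investor beliefs.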
Combined, the first and last reasons demonstrate that there cannot be a perfect Bayesian equilibrium which allows for insider trading. For if the price falls with a sale (so the investors think the stock is worth less), the executive will take advantage of this investor belief through selling without reason (i.e. the stock is not worth less); a similar situation occurs if the investor believes that the asset is worth more, as then the executive will buy without fundamental reason. Hence the decrease in the market value for the stock will occur based on false beliefs due to the executive’s action. Meanwhile, if the investors respond to executive stock purchases by, say, lowering their expected values of the stocks (or keeping the price the same), the first reason above demonstrates that the executive can increase his or her payoff by purchasing when the stock is worth more than publicly thought. Thus, in the latter case, the investors also have false beliefs (undervaluing the stock).
In order to allow for the possibility of a perfect Bayesian equilibrium in the stock market, one must therefore eliminate the possibility of insider trading. Under a rigid enforcement mechanism, such an equilibrium does exist: the market does not respond to actions of the executive. Since the executive cannot use private information to purchase or sell stocks, he or she can’t take advantage of the first or second reason; and since the market does not expect a change in value based on his or her actions, he or she cannot take advantage of the third reason. From the other end, since the investors know that the executive can’t use any information that they do not have, they are not taken aback by any executive stock moves, and so they don’t change their expected valuation of the stock. Thus we see the importance of effective prevention of insider trading – otherwise, the stock market is open to the games of executives with too much information.
So I promised I’d show how there’s a perfect Bayesian equilibrium in the eBay setup where everyone pays only the price at which the bidding started. How’s that, you say? Won’t people want to bid up the price, if they’re willing to pay more?
The key to this equilibrium is, obviously, to remove the incentive to bid any higher. Since, on eBay, the highest bid up to any given point is not observed, we can describe perfect Bayesian equilibria based on beliefs about those situations, should they come up. Thus, if any bidder bids something other than the start price at any time except for the last moment, the other bidders can plausibly believe that this bidder has bid something ridiculously high, and so will respond by bidding up the price of the item, knowing that they have nothing to lose by doing so. This way, they can punish any bidder who deviates by bidding higher than the start price, so no one will do so.
For this equilibrium to work, we must also consider the timing and tie-breakers. eBay has structured its auction mechanism so that, once there is at least one bid on an item, all bids must be greater than the current price. This means that at most one person can bid the start price; the rest, if they bid at all, must bid more. We can resolve this issue by stipulating that all bidders try to bid at some time t, and one of them, chosen randomly, does so successfully. Meanwhile, anyone who bids at some time other than t (rather than the last minute) is believed to have again bid something ridiculously high, and is punished in the same way as one who bids something different from the start price. Again, this will ensure that everyone bids only at time t, except maybe at the last moment, when no one has time to respond to their bid.
We have to (finally) deal with what happens at the last moment. Suppose bidder i is the lucky one who successfully submitted his bid earlier at time t. At the last moment, to discourage others from trying to outbid him, he is now the one who bids ridiculously high. Knowing that they cannot win the item, no one else tries to submit a bid at the last moment.
One can formally check that this is indeed a perfect Bayesian equilibrium. Though this is unlikely to ever happen in reality, this shows the lack of uniqueness of symmetric equilibria (in the sense that all people’s strategies are ex ante the same) in eBay’s setup, and that we can get a pretty sweet outcome given the right beliefs. Pretty cool, huh?
OK, I lied a little bit: I can’t guarantee that you will win lots of really awesome things for dirt cheap.
But I did stay at a Holiday Inn Express last night. What I can provide, though, is a perfect Bayesian equilibrium strategy that will mean you are bidding optimally (assuming others are also bidding in a similarly defined manner). Basically, I wrote my senior thesis at Princeton on eBay, so I know how it works pretty well.
I’m not going to go through the exact details of how eBay works, since I assume if you care, you already know more or less about that (you can look here for more information). I’ll also omit most of the gory details of how exactly to rigorously demonstrate that these strategies mathematically form an equilibrium, since I assume most readers will care mostly about the practical implications. I hope to upload a link to my thesis, so if you want, you can take a look at that yourselves.
At first glance, eBay seems to work exactly like a sealed-bid second-price auction. Indeed, eBay itself suggests that one should always bid exactly how much one values the item up for sale. But there are two potential issues. First, there are likely to be multiple items of the same type. For example, suppose I collect stamps: there are likely to be multiple copies of the same stamp (unless it is extremely rare). Thus, I might want to avoid bidding on an earlier auction of a stamp, if I could get the same one a little later for a better price (we’ll assume no discounting, since it doesn’t much change the reasoning). Second, eBay complicates the general second-price auction setup by showing the price history; that way, it might be possible to infer how badly others want, say, the stamp, and use that information to one’s advantage.
Fortunately, in the single-item case, the reasoning for second-price auctions works for eBay auctions as well: one should always bid exactly the value of the stamp (though it’s no longer a dominant strategy). There is also a well-established result for multiple items, first shown by Robert Weber: if there are k items and n bidders (where n > k, and each person wants exactly one item), then we can rank the bidders’ valuations from high to low as V_1 ≥ V_2 ≥ … ≥ V_n; it is then a subgame-perfect equilibrium for a person with valuation V to bid
(a) b_l(V) = E[V_{k+1} | V_{l+1} = V]
in the l-th round of bidding; that is, one bids what one expects the highest losing valuation to be, computed as though one’s own valuation were the lowest of the winning ones. For period l = k, this works out to bidding one’s valuation, so it fits nicely with the single-item case.
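For a concrete feel, suppose valuations are i.i.d. uniform on (0, 1) (my assumption, just for this sketch), and read the round-l bid as b_l(V) = E[V_{k+1} | V_{l+1} = V]. Then the conditional expectation has the closed form V(n – k)/(n – l), which a short Monte Carlo check confirms:

```python
import random

# Monte Carlo check of the sequential-auction bid function, for valuations
# i.i.d. uniform on (0,1) -- an assumed distribution for illustration only.
# Conjectured form: b_l(v) = E[V_{k+1} | V_{l+1} = v] = v * (n - k) / (n - l).
def simulated_bid(v, n, k, l, trials, rng):
    # Given V_{l+1} = v, the n-l-1 lower valuations are i.i.d. uniform on
    # (0, v); V_{k+1} is then the (k-l)-th largest among them.
    total = 0.0
    for _ in range(trials):
        lower = sorted((rng.uniform(0, v) for _ in range(n - l - 1)), reverse=True)
        total += lower[k - l - 1]
    return total / trials

rng = random.Random(0)
n, k, l, v = 10, 3, 1, 0.8
closed_form = v * (n - k) / (n - l)
mc = simulated_bid(v, n, k, l, 50_000, rng)
print(f"closed form: {closed_form:.4f}, simulated: {mc:.4f}")
assert abs(mc - closed_form) < 0.01
```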
Can we generalize this to the eBay model? Fortunately, as I showed in my thesis, we can: all it takes is to construct a set of beliefs where the bidders will ignore the previous bids of others, and bid as they would in the sealed-bid case. The best way to break down the cases is to those of two items, and those of three or more.
In the former case, given that in the second period everyone is just going to bid as in the single-item case anyway, at the last moment everyone will bid as in the sealed-bid, two-item case no matter what. We just need to make sure that no one screws things up by bidding earlier. We can ensure this by, say, constructing an equilibrium in which bidders believe that only people who value the stamp very highly would bid earlier than the last moment; so, if someone else has bid earlier, the rest of the bidders would assume they had lost anyway. Since one can’t see the top bid (only the current stamp price, based on the second-highest bid up to then), this belief is plausible. And if they’ve lost, they might as well just bid what they would otherwise, as given by equation (a), since bids are costless. This solves the two-item case.
In the three-plus item case, we have to be a little more careful: if everyone just blindly bid as in equation (a), you might learn others’ valuations in earlier rounds, realize that you weren’t going to win at all if you continued bidding as in (a), and outbid others who valued the stamp more highly to “steal” it. To get around this, one can construct an equilibrium in which the bids are staggered – those who want the stamp more bid earlier. Since equation (a) is monotonic in V, this means that no more than two bids can occur in any given period; those who were supposed to bid earlier can no longer do so, since the item price is already greater than what they are willing to bid (eBay requires that all bids be greater than the current item price, since otherwise they are not relevant for determining the price of the given item, given the mechanism eBay uses). By assuming some plausible beliefs under which bidders would ignore anyone who deviated from this strategy profile, we can ensure that all bidders adhere to it – they’ll have no incentive to deviate.
Do people actually bid as I described in the multiple-item cases? Obviously not – most people don’t even bid their valuations, let alone think things through as much as here. Nor does it seem that people time their bids as in the equilibria described here. Yet the basic idea, that one can ignore the bids of other people in determining how much one wants the item, and focus only on bidding up to the value suggested in equation (a), continues to make sense. After all, really, does anyone actually try to memorize how much someone was willing to bid last time a stamp came up for auction? Come on. Since no one tracks bids like this, it’s safe to bid as described here in general.
Note that, unlike the sealed-bid case, the equilibrium described here is not the unique symmetric one. In fact, there is an equilibrium in which all items are sold for the start price, if sold at all. But we’ll get to that in part II.
Indeed, this issue means that there is no pure-strategy, increasing bidding function for multiple-item auctions with bid revelations. See Cai, Wurman and Chao (2007).