Sketchy dating after breakups

(I thought I’d put this post up now, since it relates to a friend’s recent post elsewhere.)

Generally, when people end a long-term relationship, they want to take a bit of a break from dating to get back on their feet. Break-ups can be very emotionally taxing, and recovery takes some time. There are several rules of thumb as to how long one should wait; I won’t go into those, since that’s not the point of this post. What is interesting, though, is that these rules are often not followed. The question is, why?

For starters, let’s model a person’s payoff for entering a relationship. Let’s assume for now the person is a woman (also, let’s call her Fiona). Obviously she doesn’t want to enter one immediately after the breakup; but how much she does not want to do so depends on how much time has elapsed. More specifically, the payoff increases, eventually approaching a certain (bounded) value, at which point she is totally over her ex.

(Formally, we assign her an increasing utility function U(t), where U(0)=0 and \lim_{t\rightarrow\infty}U(t)=B, where B is some positive number. For example, when B=1, we could have a function like this:)

Fig. 1: sample graph

The guy who wants to ask her out also shares the same payoff (and we’ll call him Scotty). After all, of course he would – he’s only happy if she is, right? Thus it’s better for both of them if they wait longer to start up the relationship.

The thing is, Scotty doesn’t know if other people will have their eyes on Fiona. So, if he wants to lock her up as his only, he’s got to act quickly (by some time \tau). Suppose, for simplicity, he’s the first one to arrive on the scene (the same reasoning will apply even more strongly if there are others already competing with him for Fiona’s attention). Other suitors can be expected to arrive at a pretty much constant rate (r) as long as she’s still single. And if Scotty is willing to ask her out at time \tau, then later arrivals will certainly be willing to ask at any later time t>\tau, since they get an even higher payoff. Fearing this competition, Scotty will ask out Fiona at exactly the point where the gain from waiting is balanced by the loss from potential competition. (That is, U^{\prime}(\tau)=rU(\tau).)

As the model is set up now, Fiona still has no reason to accept Scotty’s request. But if we introduce a cost of rejection (C) into the model, things change, even if such a cost is small. We can account for this as a natural consequence of social interactions: for example, things might be awkward between them if she turns him down. And no matter what, she cannot get more than a payoff of B later. Thus, she will certainly accept as long as U(t)\geq B-C.[1] Though she’d have a higher payoff if he asked later, accepting this request is the best response to his move.
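To make this concrete, here is a minimal numeric sketch of the model, assuming the exponential form U(t) = B(1 - e^{-kt}); the particular values of B, k, r, and C are made up for illustration.

```python
import math

# Illustrative parameters (not from the post): payoff cap, recovery speed,
# suitor arrival rate, and the cost Fiona pays to turn someone down.
B, k, r, C = 1.0, 1.0, 0.2, 0.2

# With U(t) = B(1 - e^{-kt}), the indifference condition U'(tau) = r*U(tau)
# has the closed form tau = ln((k + r)/r) / k.
tau = math.log((k + r) / r) / k
U_tau = B * (1 - math.exp(-k * tau))

print(round(tau, 2))    # ~1.79: Scotty asks well before Fiona is fully over her ex
print(U_tau >= B - C)   # True: the sufficient condition for her to accept holds
```

Note that a larger arrival rate r pushes \tau earlier (since \tau = \ln(1 + k/r)/k), which is exactly where the sketchiness comes from.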

To close things off, I should explain why I assumed initially that the person was a woman. Even in this age of gender equality, guys are generally the ones doing the asking, so women will encounter the possibility of being asked out even if they are not yet looking around for new opportunities to date. Hence they incur the cost, C. By contrast, men might not start looking until they can get a payoff closer to B, without ever incurring the cost C. This makes it more likely that this situation will come up when women have recently broken up with their boyfriends, rather than the other way around.

Obviously, both sides in this equation would rather wait longer to start something up. But it’s just too risky to do so, since they might lose out altogether. So, we end up with much sketchiness. Haaaaaaai!

[1] That is, this is a sufficient condition; she might accept an even lower payoff depending on how frequently she expects guys to ask her out later.


Hot deficit potato!

I had originally wanted to share some really cool veto math today, but I’m really fascinated by the chicken endgame being played out — near the end, the game has to be played in periods, with moves in sequence rather than simultaneously. Vetoes will have to wait.

Deficit chicken is starting to look a lot like hot potato in these final days before August 2nd. Even though bank analysts believe the Treasury will be able to hold things together for a few days longer, it’s probably in everyone (in the District)’s interest to take the deadline seriously — better the deadline you know than the one you don’t.

Even if default and/or downgrade don’t happen, Congress is quite aware that the situation is FUBAR and that passing the blame is the name of the game. Whichever of the Dems, GOP, or Obama happens to say the last “no” via veto, filibuster, or voting a bill down by the time the deadline rolls around will be blamed for preventing legislation of any kind from passing. Whereas we have previously modeled the situation as a game of Chicken, by now we can pretty much count the number of days it will take to get anything through the legislature. Now, if you’re unfamiliar with the game, Hot Potato involves passing an object back and forth or in a circle while music plays. The loser is the player holding the potato when the music stops.

In our game, all parties know exactly when the music will stop, and if all threats to defeat, filibuster, and veto are actually carried out, one of the three players is going to have to say the last “no.”

This is why I don’t find Obama’s threat to veto House Speaker Boehner’s bill (if it passes in both houses) credible — if he vetoes, the legislature is basically out of time to draft something else that can pass in both houses. Obama would almost certainly get stuck with the hot potato.

No wonder he’s upset — it’s a Catch-22 for him, since he’s already looking weak to the Democrats for offering such a steep compromise. But, “singlehandedly” causing a default would be much, much worse for Obama, which is exactly what the Tea Party hobbits seem to want, according to Senator McCain.

So, why is Obama making this threat in the first place, if it’s not credible? If he makes it seem like vetoing the bill is on the equilibrium path for him, it saves the Senate Democrats some face if they kill Boehner’s bill if/when it arrives in the Senate. Otherwise, if the bill is stopped in the Senate, the Senate Democrats will be left holding the hot debt potato at the end of the game.

Meanwhile, Senator Reid is biding his time before putting his version of a compromise to a vote in the Senate. If Boehner’s bill fails to pass today, Reid will have the upper hand, because he figures there’s just enough time left to pass his bill in both houses. Obama is supportive, so there’s no threat of veto, and even if the Senate Republicans filibuster as they’ve threatened to do, the GOP will certainly be saddled with the blame, since filing for cloture will set the Senate back at least a day or two[1]. If it passes in the Senate, the House has little choice but to pass the bill, or be blamed. Unless, of course, they really believe there’s more time to continue tossing the potato, or that doing this might actually make Obama look bad.

Quote of the day, from Reid: “Magic things can happen here in Congress in a very short period of time under the right circumstances.” Right circumstances => Not when filibustering is involved.

A very simplified game tree follows:
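Here is a rough sketch of the same tree as a nested structure; the branches and blame assignments simply restate the paragraphs above, so treat it as a summary rather than a full model.

```python
# A rough text version of the simplified game tree, restating the paragraphs
# above: each leaf names who is left holding the potato if play reaches it.
game_tree = {
    "Does the Boehner bill pass the House?": {
        "yes": {
            "Senate Dems kill it": "Senate Dems risk the potato (unless Obama's veto threat gives them cover)",
            "it passes the Senate and Obama vetoes": "Obama holds the potato",
        },
        "no": {
            "Reid's bill goes to the Senate": {
                "Senate GOP filibusters past the deadline": "GOP holds the potato",
                "it passes the Senate": {
                    "House votes it down": "House GOP holds the potato",
                    "House passes it": "deal done; nobody holds the potato",
                },
            },
        },
    },
}
```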

As we can see, all current threats, if carried out, lead to someone’s becoming a scapegoat. Interestingly, even though it looks like Boehner might get the votes he needs today, the Senate Democrats are using an excellent commitment strategy by leaking a letter, signed by all 53 members (hey, it looks like they didn’t get Joe Lieberman the Independent on the bandwagon), stating that they do not support the Boehner bill.

Next time: how having veto-power can hurt your outcome.

——————–
[1] If the GOP decides to filibuster until the federal government defaults, it’s going to be pretty hard to pin this on Obama. Just saying.


How much should you wager in Final Jeopardy?

I think you all know how this one works, but let me sketch it out again anyway. You’ve been answering trivia questions, and now you have some dollar score going into the last round. Your two opponents have been doing the same. In the last round, you’re asked one question, for which you write down your answer, and how much you’re willing to bet on it, up to the amount of money you have; your opponents do the same. You’re trying to get as much money as you can from this game, so winning is not all that matters: you want to both win and have a higher score. The question is: how much should you wager? Your mother, Trebek.

We can model this by positing that players 1, 2, and 3 have scores D_{1}, D_{2}, D_{3}, respectively, going into the final round. They expect to answer correctly with probabilities p_{1}, p_{2}, p_{3}, respectively; we assume that these values are common knowledge. The wagers of the players are w_{1}, w_{2}, w_{3}, respectively, where w_{i}\leq D_{i} for each i.

We define F_{i}(N_{i}|D_{-i},s_{-i}) to be the probability that player i wins with a certain final score N_{i} (after the question has been answered correctly or incorrectly), given the strategies s_{-i} that the other players use and the scores D_{-i} they have going into Final Jeopardy. Obviously, since the person with the highest score wins, you have a better chance of winning if you have a higher final score. Thus, F_{i}(\cdot|D_{-i},s_{-i}) is nondecreasing in N_{i}, no matter what s_{-i} and D_{-i} are.

Claim: If p_{i}\geq\frac{1}{2}, then it is a dominant strategy to bid all your money; namely, w_{i}=D_{i}.

Proof of the claim: Fix strategies s_{-i} for the other players. Suppose you wager w_{i}. Then the expected amount of money you win is

\Pi_{i}(w_{i},D_{i}|D_{-i},s_{-i})=(D_{i}-w_{i})F_{i}(D_{i}-w_{i}|D_{-i},s_{-i})(1-p_{i})+(D_{i}+w_{i})F_{i}(D_{i}+w_{i}|D_{-i},s_{-i})p_{i}.

Meanwhile, if you wager D_{i}, you can expect to win 2D_{i}F_{i}(2D_{i}|D_{-i},s_{-i})p_{i}. Notice, though, that since F_{i}(\cdot|D_{-i},s_{-i}) is nondecreasing, and p_{i}\geq\frac{1}{2},

\Pi_{i}(w_{i},D_{i}|D_{-i},s_{-i})\leq(D_{i}+w_{i})F_{i}(D_{i}+w_{i}|D_{-i},s_{-i})p_{i}+(D_{i}-w_{i})F_{i}(D_{i}+w_{i}|D_{-i},s_{-i})p_{i}

=2D_{i}F_{i}(D_{i}+w_{i}|D_{-i},s_{-i})p_{i}\leq2D_{i}F_{i}(2D_{i}|D_{-i},s_{-i})p_{i}.

Hence, regardless of what D_{-i}, s_{-i} are, you can expect to get the most amount of money by bidding all you’ve got. Indeed, this argument is so general that it applies no matter how many people are playing Final Jeopardy against you – as long as you think you’re more likely to get it right than wrong, you should bid everything.

Now what should you do if you’re not so certain you’re going to get it right? If both other players are still more likely than not to answer correctly, then what you should do is pretty straightforward – you know what they’re going to wager (which is everything they’ve got), and so you can precisely calculate F_{i}(\cdot|D_{-i},s_{-i}). All you then have to do is choose the value of w_{i} which maximizes \Pi_{i}(w_{i},D_{i}), as defined above.
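If you would rather grind this out than do it by hand, here is a small brute-force sketch. It assumes both opponents wager everything (as they should when their p_{j}\geq\frac{1}{2}), ignores ties, and uses made-up scores and probabilities.

```python
from itertools import product

def best_wager(D1, p1, D2, p2, D3, p3):
    """Wager for player 1 maximizing expected winnings, assuming players 2
    and 3 bet everything and that ties are ignored (a simplification)."""
    def expected_winnings(w):
        total = 0.0
        for c1, c2, c3 in product([True, False], repeat=3):
            prob = ((p1 if c1 else 1 - p1) *
                    (p2 if c2 else 1 - p2) *
                    (p3 if c3 else 1 - p3))
            n1 = D1 + w if c1 else D1 - w
            n2 = 2 * D2 if c2 else 0    # opponent 2 wagers everything
            n3 = 2 * D3 if c3 else 0    # opponent 3 wagers everything
            if n1 > max(n2, n3):        # strict win only
                total += prob * n1
        return total
    return max(range(D1 + 1), key=expected_winnings)

# Hypothetical scores and (commonly known) probabilities:
print(best_wager(D1=10000, p1=0.4, D2=12000, p2=0.7, D3=8000, p3=0.6))
```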

Unfortunately, the equilibrium analysis is in general not very tractable for a 3-player round of Final Jeopardy. That is, it doesn’t give a very nice closed-form solution (or at least, I can’t find one). So, while you should go by the advice in the previous paragraph (assuming these probabilities are commonly known), I can’t say much about what’s going to happen in general. Also, be aware that the advice doesn’t hold if the probabilities are not commonly known, in which case you will need to consider the entire hierarchy of beliefs.

In the spirit of academic honesty, I am heavily indebted to the online Google Books edition of “Strategies and Games: Theory and Practice,” by Prajit K. Dutta, for his analysis of the case p_{i}\geq\frac{1}{2}, though the proof I use of the dominance of wagering everything is more formal than his. The analysis for when p_{i}<\frac{1}{2} is my own, and so should probably be taken with a grain of salt.

Edit: A previous edition of this post said that Final Jeopardy violated the conditions of the Kakutani fixed point theorem, and so there was likely no equilibrium. While it is true that these conditions are violated, it is not for the reason claimed, and so there actually will be an equilibrium, as demonstrated by Dasgupta & Maskin (1986). It will likely involve ties in some cases, for reasons that I’m too lazy to show.


Preemptive bribes in legislature

Last week, Jeff discussed Deficit Chicken and the consequences of playing out the game. The August 2nd deadline is almost upon us, and so far the debt-ceiling negotiations in Congress have failed to yield anything that has a shot at passing in both houses. How are the members of these parties “standing strong” together, anyhow?

It’s clear that some incentive prevents the moderates of each party from switching over — that’s why there’s a Gang of Six and not a Gang of Sixteen or Twenty (yet). Whether it’s pride, pork, or pressure, as Ted DiBiase would say, “Everybody has a price.”

This price, for an individual legislator, would be equivalent to the utility of standing his ground. Let’s suppose these utilities look like this for nine representative legislators on a committee. (Notice that I represent utility below the line as a negative utility from the status quo — it is a positive utility for the opposite outcome.)

Suppose the other party has a budget of 20 (say, in billions of dollars) to spend on bribing members of this party to join their coalition. The status quo party (call it X) has the first move and can pre-emptively give the committee members some additional utility for staying on this side, and afterward, the other party (Y) has the opportunity to make counteroffers.

Now, say that Y needs to ultimately win over 5 of these 9 committee members. As it stands, who would Y bribe? Certainly, Mr. I is already on Y’s side, so Y needs to bribe four more people — probably the least expensive of them, E, F, G, and H. Perhaps party X should devote its energy toward bolstering these weak members, possibly even try to win the dissenters back.

But, this isn’t always the case. Tim Groseclose and James Snyder in their celebrated 1996 paper offered a delightful best response strategy for party X in their vote-buying model.

To win over 5 members in the current state, Y just needs to spend 4 + \epsilon on E, 3 + \epsilon each on F and G, and 2 + \epsilon on H. Party X can make Y’s life harder by preemptively giving E, F, G, and H more money, say 1, 2, 2, and 3 respectively, to make their payoffs 5 each — this exhausts Y’s war chest. But now, there is a less expensive party member (D) for Y to bribe!

So, X must give all of the members (except I) enough to have a payoff of at least 5, so that there are no “soft spots” to target and Y cannot win over any of them. It turns out that the best strategy for X is always this kind of “leveling schedule,” and X often ends up bribing more members than Y needs to win. We can easily test this here: suppose the optimal strategy is not a leveling schedule. Giving anyone strictly more than the common level is wasted money. And if we shave something off anyone’s payoff in the leveled coalition (say, we bribe H with 2 instead of 3), then Y will certainly target H, and have 20 − 4 = 16 left to spend on winning 3 more votes. X would then need to add 1/3 to each of C, D, E, F, and G to keep Y from affording any three of them, which is just another leveling schedule (and one that costs more than the 1 saved on H).
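Here is a quick sketch of that logic in code. Only E through H (and the fact that I already leans toward Y) are pinned down by the example above; the values for A through D are made-up placeholders.

```python
# Utility each member gets from standing with party X; a negative value means
# the member already prefers Y's side. A-D are made-up placeholders.
utilities = {"A": 8, "B": 7, "C": 6, "D": 4.5,
             "E": 4, "F": 3, "G": 3, "H": 2, "I": -1}
Y_BUDGET = 20
NEEDED = 5      # members Y must end up with
EPS = 0.01      # the "epsilon" sweetener on each bribe

def y_cheapest_win(u):
    """What Y must spend to end up with NEEDED members, buying the cheapest ones."""
    already_with_y = sum(1 for v in u.values() if v < 0)
    prices = sorted(v + EPS for v in u.values() if v >= 0)
    return sum(prices[: NEEDED - already_with_y])

print(y_cheapest_win(utilities))            # ~12: E, F, G, H are easy targets

# X's leveling schedule: raise everyone still on X's side to a common level
# high enough that buying the required number of members busts Y's budget.
to_flip = NEEDED - sum(1 for v in utilities.values() if v < 0)   # Y must flip 4
level = Y_BUDGET / to_flip                                       # 20 / 4 = 5 each
leveled = {m: (max(v, level) if v >= 0 else v) for m, v in utilities.items()}

print({m: round(level - v, 2) for m, v in utilities.items() if 0 <= v < level})
print(y_cheapest_win(leveled) > Y_BUDGET)   # True: Y can no longer afford a win
```

Notice that X ends up topping up five members here (D included) even though Y only needs to flip four of them, which is the sense in which X bribes more members than Y needs to win.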

It could be, then, that “standing strong” for our status quo party really does mean emulating Aesop’s fable of the bundle of sticks (which, incidentally, is fascis in Latin).


Is Pascal’s Wager Sound?

OK, this isn’t actually technically a game, but since a lot of people think of it as such (given the common depiction through payoff matrices, with probabilities of different scenarios), we’re going to cover it anyway.

The basic gist of Pascal’s Wager is that by hedging one’s bets and choosing the path of religion, one can expect a greater payoff than by being not religious. Its proponents offer two possible states of the world: either God exists, or God does not. From a person’s standpoint, one can choose to be religious, or not:

Choice        | God Exists  | God Does Not Exist
Religious     | \infty - C  | -C
Not Religious | G - \infty  | G
Fig. 1: Standard Pascal’s Wager

One gets infinite payoff from being religious if God exists, as then one goes to heaven. If one is not, and God exists, one goes to hell, and gets a payoff of negative infinity. By being religious, one incurs a finite cost C, as religion isn’t so much fun apparently; if one is not religious, one gets positive payoff G since one gets to party all the time.

Now suppose God exists with some probability P>0, however small that might be. Then the expected payoff from being religious is, according to the argument, P(∞ – C) – (1 – P)C = ∞. The payoff from being irreligious is P(G – ∞) + (1 – P)G = -∞. Thus one is better off being religious.

A lot of people really hate this argument, and so do their utmost to bring it down. Yet this argument is not as bad as they think it is. Let’s go through some of the criticisms.

The first criticism is that Pascal automatically assumes that if God exists, then Catholicism is true. But there are many religions out there that posit the existence of heaven and hell, yet are mutually incompatible. For example, Catholics (at least conservative ones) would condemn Muslims for not accepting Jesus as their Lord and Savior; Muslims would condemn Catholics as polytheists for this very acceptance. Since one can do this calculus for both religions, the arguments negate each other in paradox, as we end up granting both infinite payoffs and negative infinite payoffs to the same groups!

The second criticism is the so-called Atheist’s Wager. It could be instead that God wants us to live good lives, and be rational. Since one can live a better life by being irreligious, this is what God would prefer of us. Hence, according to the Atheist’s Wager, the payoff matrix should look as follows:

Choice        | God Exists   | God Does Not Exist
Religious     | -\infty - C  | -C
Not Religious | G + \infty   | G
Fig. 2: Atheist’s Wager

Thus, says the atheist, it is a dominant strategy to be irreligious.

The problem with both these arguments is actually a problem with the initial formulation of Pascal’s Wager. However, while we can tweak Pascal’s Wager to make the problem go away, this flaw is quite possibly fatal to the above two criticisms. So now that I’ve built up the drama, here’s the problem: infinity is not a well-defined number on the real line (which we use for expected utilities). Instead, we must use a limit of some number B as it increases toward infinity. Thus, Pascal’s Wager should look like:

Choice        | God Exists                        | God Does Not Exist
Religious     | \lim_{B\rightarrow\infty} B - C   | -C
Not Religious | \lim_{B\rightarrow\infty} G - B   | G
Fig. 3: Revised Pascal’s Wager

Addressing the first criticism, we can then compare the possibilities that each religion is true by seeing which is the most probable among them. Thus, take Catholicism and Islam, with probabilities P_{C} and P_{I} and costs C_{C} and C_{I}, respectively. To establish Catholicism (without loss of generality) as the better way to go of the two, we just need to check whether

P_{C}B - P_{I}B - C_{C} > P_{I}B - P_{C}B - C_{I},

which, as B gets large, is equivalent to just checking whether P_{C} > P_{I}. As a Jew, I would probably argue that the evidence/support for Judaism is the greatest of all the religions that have a system of heaven/hell, even if the support for any of these religions is slight.
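To see the “as B gets large” step concretely, here is a tiny numeric check; the probabilities and costs are made-up placeholders, and the only point is that the sign of the comparison eventually comes down to P_{C} versus P_{I}.

```python
# Made-up probabilities and costs of observance, purely for illustration.
P_C, P_I = 0.02, 0.01
C_C, C_I = 3.0, 2.0

for B in (10, 1_000, 1_000_000):
    catholicism = P_C * B - P_I * B - C_C
    islam       = P_I * B - P_C * B - C_I
    print(B, catholicism > islam)   # False at B=10, True once B swamps the costs
```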

Similarly, we can check whether, given the evidence in front of us, the probability that the Atheist’s Wager (AW) is true is greater than that of Pascal’s Wager (PW):

Choice        | God Exists & PW                   | God Exists & AW                    | God Does Not Exist
Religious     | \lim_{B\rightarrow\infty} B - C   | \lim_{B\rightarrow\infty} -B - C   | -C
Not Religious | \lim_{B\rightarrow\infty} G - B   | \lim_{B\rightarrow\infty} G + B    | G
Fig. 4: Revised Pascal’s Wager vs. Atheist’s Wager

Comparing the respective probabilities of the Atheist’s Wager and Pascal’s Wager, P_{A} and P_{P}, we check whether

P_{A}B - P_{P}B + G > P_{P}B - P_{A}B - C.

Given that almost all purported religious claims in the past about heaven and hell have been based on being religious, and none have supported the Atheist’s Wager (the closest you get is some who claim that all people go to heaven, whether religious or not), I would think that P_{P} > P_{A} (though I admit the possibility that I am wrong). Thus, the Atheist’s Wager loses out to Pascal’s Wager by comparing expected utilities.

Thus, there is a very strong case to make that if one is merely comparing expected utilities, then  no matter how small the probability is that God exists (as long as it is not zero), Pascal’s Wager is actually sound.

Of course, one might not be comparing expected utilities. After all, if there is only an extremely minuscule possibility of a hugely negative payoff, no matter how bad it is, it might not be a bad thing to completely ignore that possibility. But that is in the realm of decision theory, not game theory. Thus I’ll leave it to your intuitions: if you’re risk averse (as pretty much everyone is – that’s why we all buy insurance), and carry this reasoning even against remote possibilities, then by all means, Pascal’s Wager seems to work. But if you’re willing to take the risk, figuring it doesn’t matter when there’s only, say, a one-in-a-trillion chance that you’ll go to hell if you’re not religious, then go out and boogie.


S/he wants the Margaritaville, so what’s a girl to do?

So, your significant other has been lusting after a flat screen TV, espresso machine, or other prohibitively expensive luxury good for the past umpteen weeks, and you just saw an amazing deal for one on Amazon. Regardless of whether he actually buys the gadget, it would be nice to let him know you’re thinking of him. Do you say anything?

The best case scenario might be that he sees the ad, you get brownie points for having thought of him, and in the end he decides not to buy the thing. After all, you would rather he save the money for a rainy day, a nice vacation, or that expensive gift you’ve been eyeing… The worst case is, you don’t show him the ad and he goes and buys the thing anyway. If you show him the ad, at least you can put an upper bound on how much he spends (though, if the thing is a game console or smartphone, your payoff might still take a hit in terms of time away from you.)

If this sounds very familiar, this is an extensive form game you’ve played before. It would be easy if you knew exactly what he would do in each circumstance, and you could simply use backward induction to determine your best strategy.

But you don’t perfectly know what his payoffs are going to be. The good news is, you can still make an optimal strategic decision! You probably know, for instance, the approximate probability that if you do nothing, he’ll buy the thing anyway. We’ll call this p, and let’s assume it’s smallish because it’s expensive. Let p = 0.15. So, your expectation is -1.5.

If you show him the ad, what would the probability of buying the Margaritaville become?

-1.5 = -4q + 1(1-q) = -5q + 1
q = 0.5

His likelihood of buying has to become at least 50% before showing him the ad becomes a bad idea! If you don’t think he’ll become so much more likely to buy, you might be better off just showing him the ad.
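If you want to redo this arithmetic with other numbers, here is the calculation in a few lines of code; the default payoffs (-10/0 if you stay quiet, -4/+1 if you show the ad) are the ones implied by the figures above, so treat them as an assumption.

```python
def breakeven_q(p, buy_quiet=-10, no_buy_quiet=0, buy_shown=-4, no_buy_shown=1):
    """The q at which showing the ad stops beating staying quiet.
    Default payoffs are those implied by the numbers above (an assumption)."""
    ev_quiet = p * buy_quiet + (1 - p) * no_buy_quiet
    # Solve ev_quiet = q*buy_shown + (1 - q)*no_buy_shown for q.
    return (no_buy_shown - ev_quiet) / (no_buy_shown - buy_shown)

print(breakeven_q(0.15))   # 0.5, as above
print(breakeven_q(0.05))   # 0.3, the indifference case discussed next
```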

What if p = 0.05, and q = 0.30? Your expected utility would be the same whether or not you show him the ad. Of course, if you are risk neutral like many of these games assume you are, just flip a coin. It doesn’t matter. If you’re a bit risk averse like most people are, you’ll find that the variance of the upper branch (~4.5) is higher than that of the lower branch (3.85), so it’s a mean preserving spread and you still prefer telling.

If you’re anything like me (not a given, because I’m kinda weird) you’ve found yourself in a similar situation at least once, and you’ve done a little bit of backward induction to figure out what to do. Game theorists are most often criticized for assuming players are “strategic and perfectly rational” — the latest such criticism comes from Cosma Shalizi, via Jordan Ellenberg at Quomodocumque:

What game theorists somewhat disturbingly call rationality is assumed throughout—in other words, game players are assumed to be hedonistic yet infinitely calculating sociopaths endowed with supernatural computing abilities.

Although you probably don’t draw a tree or calculate variances, a similar process of estimation happens in your head and you do indeed behave strategically, so it’s rational choice at work. And, as for sociopathy, economists are at worst amoral or mildly paternalistic (trust me, it’s for your own good 😉 ). It’s far from manipulating people for fun — cheap shot, Cosma.


Deficit Chicken

In recent months, there has been much coverage of sovereign debt crises around the world. Just a couple of weeks ago, Greece approved massive cuts to its budget, in a move designed to implement an austerity plan to avoid a devastating default on its debt (though some warn that even this is not enough to prevent a selective default). This past week, Moody’s (a rating agency) downgraded its ratings of Portuguese bonds to “junk” status, meaning that there is a good possibility that its bonds will not be repaid. Other European countries, such as Italy and Spain, are also considered at risk, as they have large deficits and/or debts outstanding that imperil their abilities to repay.

Even in the United States, there has been much talk recently of what to do about the federal debt limit, which was surpassed in June. The debt limit must be raised, or the United States will cease to be able to pay its obligations, raising the specter of default. Yet the two sides have been taking hard stances over how to avoid this possibility. Republicans want to reduce the deficit solely through spending cuts, under the premise that any tax increases will hamstring the economy at this delicate stage in the recovery from the recent recession.[1] Democrats want to do so through a blend of spending cuts and an increase in taxes on “the wealthy” (see Marli’s post on taxation). Despite the need to somehow bridge the gap, the debt talks have recently appeared to be on the verge of collapse. House Majority Leader Eric Cantor walked out of the debt talks in the past couple of weeks, citing irreconcilable differences. With both sides unwilling to give in, it seems like the United States is barreling toward a crisis.

With a scenario like this, it seems like a perfect time to whip out my chicken suit.

Both sides are heading straight toward the precipice; if neither swerves, disaster strikes: the government collapses, essential services are denied across the country, and politicians will likely be voted out of office for failure to govern effectively. Yet if anyone swerves first, it is also bad: it will mean giving up some of the sacred cows of their party’s platform, whether it be lower taxes for the Republicans (especially for “the wealthy”), or big-ticket items like healthcare for the Democrats. We can therefore model the game as such:

GOP \ Dems    | Lose healthcare | Stand strong
Higher taxes  | (-5, -5)        | (-10, 0)
Stand strong  | (0, -10)        | (-100, -100)
Fig. 1: Deficit Chicken
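For completeness, here is a quick best-response check of that 2x2 matrix (row player = Republicans, column player = Democrats, per the labels above); as in any game of Chicken, the pure-strategy equilibria are the two outcomes where exactly one side swerves.

```python
payoffs = {  # (GOP action, Dem action): (GOP payoff, Dem payoff), from Fig. 1
    ("Higher taxes", "Lose healthcare"): (-5, -5),
    ("Higher taxes", "Stand strong"):    (-10, 0),
    ("Stand strong", "Lose healthcare"): (0, -10),
    ("Stand strong", "Stand strong"):    (-100, -100),
}
rows = ["Higher taxes", "Stand strong"]
cols = ["Lose healthcare", "Stand strong"]

for r in rows:
    for c in cols:
        gop_best = payoffs[(r, c)][0] == max(payoffs[(r2, c)][0] for r2 in rows)
        dem_best = payoffs[(r, c)][1] == max(payoffs[(r, c2)][1] for c2 in cols)
        if gop_best and dem_best:
            print("Pure-strategy equilibrium:", r, "/", c)
```

Neither equilibrium is the crash, but the model doesn’t say which side swerves, which is exactly the problem.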

Hopefully, they’ll be able to negotiate some sort of agreement before the August 2nd deadline, so we can avoid the “crash” solution. It seems like Minnesota hasn’t been able to avoid this, so a positive resolution is not guaranteed. We’ll see what happens…

[1] Though the idea that forgoing these tax increases will somehow pay for itself is completely ridiculous (according to virtually all economists). Also, government spending cuts will impair the recovery of the economy as well, since such a move reduces aggregate demand. But I digress.