If you give your future self a carrot

I spend a lot of time trying to figure out how to be productive. How can I fit more work, sleep, exercise, and leisure into my day? How can I overcome procrastination? How can I focus on a task for longer periods of time? For a long time, my strategy was something like the one illustrated in the following (brilliant) comic:


Commitment devices like StickK received a lot of press when StickK was mentioned in Freakonomics. The premise was basically that people would commit to achieving a goal, like exercising, and pay a penalty when they didn’t stick to the task (the money could go to a person of their choice, a random charity, or an anti-charity – one whose cause the user opposes). Users lost more weight when there was money on the line.

StickK gets its name from the carrot-and-stick analogy. The idea is that people might respond better to sticks (punishments) than to carrots (rewards), because they are loss averse: assuming that the income effect is negligible, losing $5 hurts a lot more than winning $5 is pleasurable. So, when I create a StickK account and set a goal, I’m playing a game with my future self. I commit to, for instance, working out every day, and if I don’t succeed that day, I have to give my roommate $5. The commitment is self-executing – my roommate wants that $5, so she’s definitely going to come and collect if I deserve to lose it. Then, when my future self is debating whether to go to dance class, I’ll have to think, “Would I rather go to the class, or would I rather lose $5?” Of course, I’d rather not have to make the commitment at all, but Future Me won’t stick to the task if I don’t.

One problem is that I might value the time I would get back from being lazy far above $5. I might have to set the penalty higher than the most I would be willing to give up to get that chunk of time back at the moment Future Me is making the decision (and that might be a pretty high number). Another issue is that for many people, succeeding at something like “not procrastinating” might expose them to an even bigger loss than the procrastination itself – what if you don’t do as well as you would like at the task? What if you fail at it? Maybe you’d rather not find out – procrastinate instead. (In that particular case, the solution isn’t to penalize yourself with five pushups, punching yourself in the nose, and giving $1 to the NRA. You should probably figure out how to change how you evaluate your payoffs so that failing doesn’t hurt so much.)


The Monty Hall Deception

When I was in middle school, I consumed a lot of typical nerd literature like Richard Feynman’s “Surely You’re Joking, Mr. Feynman” and anthologies of Martin Gardner’s mathematics puzzles from Scientific American. In the latter, I first encountered the Monty Hall Problem, which goes something like this:

Suppose you’re on a game show, and you’re given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what’s behind the doors, opens another door, say No. 3, which has a goat. He then says to you, “Do you want to pick door No. 2?” Is it to your advantage to switch your choice?

It turns out that, yes, it is always to your advantage to switch your choice. This solution has been notoriously difficult for people to wrap their heads around. After all, when you picked a door, the probability of having picked the door with the car was 1/3, and after a door is opened, there is still one car and one goat behind the remaining two doors – it seems as though the probability of choosing the door with the car ought to be 1/2 regardless of the door chosen.

The Monty Hall Paradox is in fact not a paradox at all, but rather just some clever sleight of hand. The trick is that people are drawn to the fact that only two doors rather than three remain, and assume that the host’s having opened a door is favorable to the player. People tend not to realize that the game has imperfect information – the player does not know where on the game tree he is, whereas the host does. Additionally, people assume that the host has no stake in the game (and this is not unreasonable, because the problem does not explicitly describe a parsimonious host! On the other hand, intuitively, we know that the host isn’t going to ruin the game by opening the door with the car). So, if we assume that the host is profit maximizing and we model the problem as an extensive-form game with imperfect information, then the conditional probabilities become easy to see.
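Before building the game tree, it’s worth checking the claim empirically. Here’s a quick simulation sketch (the door labels and the host’s tie-breaking rule when both goat doors are available are my own modeling assumptions; they don’t affect the win rates):

```python
import random

def play(switch, trials=100_000):
    """Simulate Monty Hall; return the fraction of games won."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)    # door hiding the car
        pick = random.randrange(3)   # player's initial choice
        # Host opens some door that is neither the player's pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining unopened door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(f"stay:   {play(switch=False):.3f}")   # ~0.333
print(f"switch: {play(switch=True):.3f}")    # ~0.667
```

Staying wins about a third of the time; switching wins about two thirds.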

Now, just for fun, we’ll assign some utilities to the outcomes. What is a goat worth? According to a popular Passover song in Aramaic, a (small) goat is worth about 2 Zuz, and according to the traditional Jewish prenuptial document, a wife is worth about 200 Zuz. So, a goat is worth about 1/100th of a wife. I asked my roommate, Anna, how many cars she thought a wife was worth, and she determined that a wife was worth three cars. By transitivity, then, a car is worth about 33 goats. (I think goats have become quite a bit less valuable since that song was written, or maybe goats back then were a lot better than our goats.) So, if the player wins the game, he will walk away with a utility of 33, and the host will walk away with the 2 goats.
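Just to double-check the transitivity arithmetic (all the valuations here are the post’s own whimsical ones – 2 Zuz per goat, 200 Zuz per wife, 3 cars per wife):

```python
from fractions import Fraction

goat_in_zuz = 2           # "one little goat... two Zuz"
wife_in_zuz = 200         # per the ketubah
wife_in_cars = 3          # Anna's estimate

car_in_zuz = Fraction(wife_in_zuz, wife_in_cars)  # 200/3 Zuz per car
car_in_goats = car_in_zuz / goat_in_zuz           # cars measured in goats

print(car_in_goats)        # 100/3, i.e. about 33 goats per car
```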

[Figure: extensive-form game tree for the Monty Hall problem]

In this game, the light gray branches are dominated because the host has no incentive to open the door that the player has already chosen, and the dark gray branches are dominated because, of the remaining two doors, the host would not open the door that has the car. We can tell that in the top branch, the host has two possible doors to open, whereas in the lower two branches, the host is constrained to only one door (since, if the player has chosen a goat door, there is only one goat door left to open).

So, since the player has no idea after the first stage which door has the car, we assume he picks door No. 1 (as in the game). If he observes that the host opens door 3, he would know that there are two cases where the host opens door 3: in the world where the car is behind door 2, the host opens door 3 100% of the time, and in the world where the car is behind door 1, the host opens door 3 50% of the time. It’s actually twice as likely that we are on the 100% branch as that we are on the 50% branch – and that’s the branch where the car is hidden behind the other door.
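The branch-weighing argument above is just Bayes’ rule, and it can be spelled out in a few lines (assuming, as in the game tree, that the host opens a non-chosen goat door uniformly at random when he has a choice):

```python
from fractions import Fraction

# Player picks door 1. For each possible car location,
# the probability that the host opens door 3:
prior = Fraction(1, 3)
p_host_opens_3 = {
    1: Fraction(1, 2),  # car behind 1: host may open door 2 or 3
    2: Fraction(1, 1),  # car behind 2: host must open door 3
    3: Fraction(0, 1),  # car behind 3: host never reveals the car
}

evidence = sum(prior * p for p in p_host_opens_3.values())
posterior = {door: prior * p / evidence
             for door, p in p_host_opens_3.items()}

print(posterior[1])  # 1/3 -- staying with door 1
print(posterior[2])  # 2/3 -- switching to door 2
```

The 100% branch carries twice the posterior weight of the 50% branch, exactly as argued.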

What if we know that the host has opened a door, but we don’t know which one? Then we have nothing to condition on – merely learning that some door was opened gives us no new information, and switching doors would not help.

Assumptions Are Being Made.

If you’ve somehow stumbled onto this blog, and you’re still reading, I get to make some basic assumptions about you: that (1) you’ve had some basic exposure to game theory, and (2) you share my diabolical delight in turning everything into a game. If neither of the above is true, they soon will be.

Tune in every week for a new post on an interesting application of game theory. Our purpose is to make games accessible, and to share some seriously neat theory seriously*.


*Not a typo. 😛