Failing public schools

You might think that I’m going to be talking about the (somewhat) recent cheating scandals plaguing schools around the country. Sorry to disappoint. What I’m more interested in is discussing why there is such a large achievement gap between public schools and private schools. We often read about how the public school system in this country is a mess, and so we have various national initiatives to try to improve it, whether it be No Child Left Behind, or Race to the Top, or whatever. In any case, a lot of work has to be done on the public school system. For example, in New York City, fewer than half of public school students met the state English standards. Compare this with New York City private schools, many of which are so successful that the state doesn’t even bother to test their students, as their curricula go well beyond what the state tests cover.

I think that a large factor in determining the quality of public schools is how much the general public (including those with more resources to invest in education) is actually willing to contribute. If the wealthier students all opt out and go to private school, then the money their families could have been spending on the public school system is lost. Moreover, since students from wealthier backgrounds will probably have a home environment more conducive to studying (after all, if the parents want to send them to private school, it shows a commitment to their children’s education), there will likely also be, on average, more of an atmosphere of academic seriousness at school. In short, having students from more affluent backgrounds attend the public schools would probably improve their quality by quite a bit.

Obviously, things are not so simple to fix – the families whose kids go to private school prefer this option, and are free to choose to pursue it. To illustrate the dynamic of sending children to private schools, we have to consider the incentives of the wealthier families. To simplify things, we reduce the number of agents to two (it makes it easier to see the rationale). We’ll call them the Smiths and the Joneses. As it stands now, their kids are mostly going to private schools, so the equilibrium we are in has both the Smiths and the Joneses choosing this option. The question is, is this the only equilibrium?

Smiths \ Joneses     Public      Private
Public               (A, A)      (L, H)
Private              (H, L)      (B, B)

For the aforementioned equilibrium to hold, we require that B > L: if B were less than L, either family would rather switch back to the public school. Note that I don’t set H = B (though it might be), since it’s possible that private schools aren’t quite as good if fewer wealthy people attend; for example, they have the same fixed costs to cover with less tuition income, which may make it more difficult to maintain a higher quality.

We can say some more things about this. Given that schools are worse off when the wealthier students leave, we would expect this trend to hold even when only some of them leave; thus we can stipulate that L < H. This means that everything comes down to H: if it is higher than A, then the families have a dominant strategy to send their kids to private school. But if not, then (Public, Public) is also an equilibrium. Thus it might be possible to “shock” the system to get the parents to send their kids to public school instead. I’m no education policy expert, so I don’t have any suggestions as to how to execute this, or whether it’s even desirable: perhaps we just want to pour our efforts into the students with the best chance of succeeding. But it’s a thought worth considering.
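To make the equilibrium logic concrete, here is a minimal Python sketch that checks which cells of the 2x2 game are pure-strategy Nash equilibria. The numbers plugged in for A, B, H, and L are purely illustrative assumptions (chosen so that B > L and H < A), not estimates of anything real.

```python
# Minimal sketch: pure-strategy Nash equilibria of the Smiths/Joneses game.
# Payoff letters follow the post; the numbers are illustrative assumptions.
A, B, H, L = 10, 7, 8, 3   # here B > L and H < A, so two equilibria should appear

strategies = ["Public", "Private"]
# payoffs[(s_smiths, s_joneses)] = (payoff to Smiths, payoff to Joneses)
payoffs = {
    ("Public", "Public"):   (A, A),
    ("Public", "Private"):  (L, H),
    ("Private", "Public"):  (H, L),
    ("Private", "Private"): (B, B),
}

def is_nash(s1, s2):
    """Neither family can gain by unilaterally switching schools."""
    u1, u2 = payoffs[(s1, s2)]
    best1 = all(u1 >= payoffs[(d, s2)][0] for d in strategies)
    best2 = all(u2 >= payoffs[(s1, d)][1] for d in strategies)
    return best1 and best2

for s1 in strategies:
    for s2 in strategies:
        if is_nash(s1, s2):
            print(f"({s1}, {s2}) is a pure-strategy Nash equilibrium")
```

With these numbers both (Public, Public) and (Private, Private) come out as equilibria; raise H above A and only the private-school equilibrium survives.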

Note: This post was originally written in 2011, which is why the references are a little bit out of date, but the point is still valid.


Honor Thy Father and Thy Mother

Many of us are familiar with the Fifth Commandment, quoted in the title of this post. But what does it mean in practice? In the Jewish tradition, the Rabbis interpreted one’s obligations under this commandment as the requirement to feed and clothe one’s parents, along with other similar duties. In other words, one must take care of one’s parents in their old age.

We would like it if the kids had reason to actually fulfill these duties. For, as Immanuel Kant put it, “Nevertheless, in the practical problem of pure reason, i.e., the necessary pursuit of the summum bonum, such a connection is postulated as necessary: we ought to endeavour to promote the summum bonum, which, therefore, must be possible.”[1] In other words (abusing Kant a little bit), we would like the kids to actually take care of their parents in their old age in a Nash equilibrium solution.

Here’s the problem. The kids may well be rotten, and tell their parents, “So long, and thanks for all the fish!” Why should they waste their precious time and resources taking care of them when there’s nothing in it for them?

The standard reply is that, since you were raised by your parents from infancy, you ought to take care of them. Just as they clothed and fed you when you were incapable of maintaining yourself, you ought to do the same when they are incapable of maintaining themselves in their old age. If they had anticipated that you’d be so ungrateful, they wouldn’t have raised you in the first place! But, while the kid is happy that his parents raised him, this reply is still utterly absurd from a game theory perspective. Here’s the game tree:

[Figure: game tree in which the parents choose between raising a kid and getting a dog, and the kid then chooses whether to take care of them]

As per the story, T > D > N for the parents (taken care of by the kid, dog, betrayed by the kid), and K > C > 0 for the kid, where K is the benefit of being raised and C is the cost of taking care of one’s parents.

We see from the game tree that, if we reach the second stage (where the kid has to make his decision, and the parents have already decided to raise the kid), then the optimal thing for the kid to do is to betray his parents. Since there’s nothing they can do about it, they are sort of stuck if that happens. Knowing this, the parents should expect that, if they raise the kid, they will receive a payoff of N. So, they should stick with the dog.
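Since the game tree itself is an image, here is a minimal sketch of that backward-induction argument in code. The payoff letters follow the text (T > D > N for the parents, K > C > 0 for the kid); the specific numbers are assumptions chosen only to respect those inequalities.

```python
# Backward induction on the two-stage parent/kid game (illustrative numbers).
T, D, N = 10, 4, 1     # parents: taken care of > dog > betrayed
K, C = 8, 3            # kid: benefit of being raised K, cost of caring C (K > C > 0)

# Stage 2: if raised, the kid compares caring (K - C) with betraying (K).
kid_choice = "care" if K - C >= K else "betray"
parent_payoff_if_raise = T if kid_choice == "care" else N

# Stage 1: the parents compare raising the kid with getting the dog.
parent_choice = "raise kid" if parent_payoff_if_raise >= D else "get dog"

print(f"Kid's plan if raised: {kid_choice}")
print(f"Parents' anticipated payoff from raising: {parent_payoff_if_raise}")
print(f"Parents' choice: {parent_choice}")
```

Running this with any numbers satisfying the stated inequalities gives the same conclusion as the text: the kid betrays, so the parents stick with the dog.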

This solution seems paradoxical. After all, we don’t see parents choosing not to have children because of this. One might say that really, they’d still rather have the kid anyway, i.e. N>D; but this is not universally true, and was certainly more frequently not the case before the modern era. It used to be that children were considered a financial asset, as they could work the fields, bring in extra cash from factory work (in the 19th century), and/or support them in their old age. Without these incentives, the children would not have been thought worthwhile in the first place.

But in any case, we don’t see all kids abandoning their parents in their old age. And the stigma against failing to take care of them can’t be what’s stopping them, since that stigma no longer exists to a large extent: we hear all the time of children estranged from their parents, sending them to nursing homes, etc. Even if N > D, game theory would seem to predict that the kids would abandon their parents, or else they are not acting rationally; and as per the quote from Kant above, we’d like it to be in one’s rational interest to help one’s parents.

Kant would likely say that the game tree does not appear as it morally ought. That is, morally, the kid ought to have preferences under which taking care of his parents is better than betraying them (so that caring is worth more to him than the cost C); then children would choose to take care of their parents, parents would choose to have children, and we would reach the socially optimal outcome. This falls in line with Kant’s categorical imperative: one ought to act in the way that one wishes were universal law. Since one would wish it were universal law that children took care of their parents, one’s preferences and actions should fall into line.

But what if the preferences remain as above? Is there any way to save Kant, along with (more importantly) the incentive to honor one’s parents?

One (naïve) way to try to resolve this is by making the above game tree into a repeated game. If the kids are rotten, then this triggers the “dog strategy,” in which the parents expect that the kids will always be rotten going forward, and so always get the dog instead. Wary of this, the kids will be sure to toe the line. With discount factors sufficiently close to 1, this will be a subgame-perfect Nash equilibrium.

This, however, doesn’t quite work, since the kids are only kids once. By the time they have the ability to act rottenly, they have no concern for future stages of the game – the parents have no means by which to punish them for their deviance. So, the above proposed solution fails.

A more sophisticated method of enforcing taking care of one’s parents involves repeating this game with overlapping generations. After all, nobody lives forever: the kids will one day grow old and feeble themselves. So, we can have THEIR kids punish them for not taking care of their parents.

Here’s the idea: each generation lasts for two periods, each consisting of the two stages in the game tree above. In the first period, they are the kids; in the second, they are the parents. If the kids (Generation B) decide not to take care of their parents (Generation A), then their kids (Generation C) will not take care of them, either. Knowing that their own kids would punish their malfeasance by not taking care of them, Generation B would opt for the dog instead, and so would get a lifetime payoff of K+D.

Now, we might worry that Generation C has no incentive to punish their parents, out of fear of being punished in turn by their own kids (Generation D). We can resolve this by making an exception to the above rule: a generation (C) is not punished for failing to take care of its parents (Generation B) if those parents, in turn, did not take care of their own parents (here, Generation A). Thus Generation C gets the best of both worlds: free-riding off their parents (if Generation B is dumb enough to have kids anyway), and support from their own children (Generation D). Moreover, the entire process resets once we get to Generation D, so even if someone screws up and does the wrong thing, it doesn’t doom everyone forever.

Thus this proposed strategy profile is a subgame-perfect Nash equilibrium as long as K+D < (K-C)+T, i.e., T-D > C: every generation prefers taking care of its parents and being taken care of over backstabbing its parents and settling for a dog. I think this is likely the case for many individuals, so we can rest easy that our kids will probably not be so inconsiderate as to send us to a nursing home if they wouldn’t want that for themselves.
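As a quick sanity check on that condition, the sketch below compares a generation’s lifetime payoff on the equilibrium path with its payoff from deviating, using illustrative numbers satisfying T > D > N and K > C > 0 (the same assumptions as in the earlier sketch).

```python
# Lifetime payoffs under the overlapping-generations strategy (illustrative numbers).
T, D, N = 10, 4, 1
K, C = 8, 3

on_path   = (K - C) + T   # care for your parents as a kid, be cared for as a parent
deviation = K + D         # betray your parents, then settle for the dog

print(f"Follow the norm: {on_path}, deviate: {deviation}")
if on_path >= deviation:            # equivalent to T - D >= C
    print("Caring for one's parents is sustainable as an equilibrium.")
else:
    print("The deviation pays; the caring norm unravels.")
```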

That being said, as per a fact known as the folk theorem,[2] multiple equilibria will exist in the repeated game framework, so everyone not taking care of their parents will also be an equilibrium. This can explain why some people fail to do their moral duty. Kant would not approve.


[1] Critique of Practical Reason, Book II, Chapter II, Section V. No, I’m not quite a Kantian, at least in this regard, but I like the quote.

[2] So called because it was among the “folklore” of game theory, sort-of known by everyone before it was rigorously proven.


If you give your future self a carrot

I spend a lot of time trying to figure out how to be productive. How can I fit more work, sleep, exercise, and leisure into my day? How can I overcome procrastination? How can I focus on a task for longer periods of time? For a long time, my strategy was something like the one illustrated in the following (brilliant) comic:

[Comic]

Commitment devices like StickK got a lot of press after being mentioned in Freakonomics. The premise is that people commit to achieving a goal, like exercising, and pay a penalty when they don’t stick to the task (the money can go to a person of their choice, a random charity, or an anti-charity, i.e. one whose cause the user opposes). Users lost more weight when there was money on the line.

StickK gets its name from the carrot-or-stick analogy. The idea is that people might respond better to sticks (punishments) than to carrots (rewards), because they are loss averse: assuming that the income effect is negligible, losing $5 hurts a lot more than winning $5 is pleasurable. So, when I create a StickK account and set a goal, I’m playing a game with my future self. I commit to, for instance, working out every day, and if I don’t succeed that day, I have to give my roommate $5. The commitment is self-executing: say my roommate wants that $5, so she’s definitely going to come and get the money from me if I deserve to lose it. Then, when my future self is debating whether to go to dance class, I’ll have to think, “Would I rather go to the class, or would I rather lose $5?” Of course, I’d rather not have to make the commitment at all, but Future Me won’t stick to the task if I don’t.

One problem is that I might value the time I would get back from being lazy at far more than $5. I might have to set the penalty above the most I would be willing to pay to get that chunk of time back at the moment Future Me is making the decision (and that might be a pretty high number). Another issue is that, for many people, actually attempting something like “not procrastinating” can feel like a bigger potential loss than the procrastination itself: what if you don’t do as well as you would like at the task? What if you fail at it? Maybe you’d rather not find out: procrastinate instead. (In that particular case, the solution isn’t to penalize yourself with five pushups, punching yourself in the nose, and giving $1 to the NRA. You should probably figure out how to change how you evaluate your payoffs so that failing doesn’t hurt so much.)
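To put rough numbers on that first problem, here is a tiny sketch of Future Me’s choice once the commitment is in place. The dollar figure for how much Future Me values the lazy evening is a made-up assumption, and the decision rule is just the comparison described above, not anything StickK actually computes.

```python
# Future Me's decision once the commitment is in place (made-up numbers).
# net_value_of_skipping: how much Future Me values the lazy evening over the workout,
# measured in dollars at the moment of decision.
net_value_of_skipping = 12.0

def future_me_goes(penalty):
    """Skipping costs the penalty; Future Me goes only if that outweighs the lazy evening."""
    return penalty > net_value_of_skipping

for penalty in (5, 10, 20):
    action = "goes to class" if future_me_goes(penalty) else "skips and pays up"
    print(f"Penalty ${penalty}: Future Me {action}")
```

The point is simply that a $5 stake does nothing if the lazy evening is worth $12 to Future Me; the penalty has to clear that bar.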


Does White have an advantage in chess?

It seems likely, doesn’t it? After all, as White, you get to develop your pieces first, putting them in position to better attack Black, or at least defend against his/her attacks. And the statistics seem to bear this out (at least, according to Wikipedia, my source for all things true). Though it could turn out that Black has an advantage: it might be that any move by White fatally weakens his/her position, so that White ends up at a comparative disadvantage.

Whatever the outcome might be, it turns out that if either side has an advantage, that advantage is necessarily complete. Put more formally: either (a) there exists a strategy for White that guarantees a win, or (b) there exists a strategy for Black that guarantees a win, or (c) there exist strategies for both White and Black that each guarantee at least a draw. It sounds somewhat trivial, but it’s not: for example, if (a) is true, then no matter what Black does, the outcome of the game is not in doubt: White will win.

Results such as these hold for finite games of perfect information without exogenous uncertainty, and many such games have been solved, so we know which of the analogous possibilities is true. For example, checkers was recently shown to have strategies for both players that guarantee a draw. So it should not come as a surprise that something like this holds for chess as well.

To show that one of these three possibilities must hold, we can draw a game tree which contains all the possible move sequences in chess[1]. We can do this because chess ends in a finite number of moves: a draw can be claimed (assume, for the purposes of this argument, that it always is) once the same position is reached three times, or once fifty moves go by without a pawn move or a piece capture. Since there are only a finite number of possible pawn moves (given the size of the board) and piece captures (since there are only 32 pieces), the game is finite.

Next, we can use backward induction (as in my post on tic-tac-toe) from each possible ending of a game to determine the outcome from the beginning. At each node, the player involved (White or Black) deterministically selects the branch that leads to the best final outcome for him/her (using tie-breakers if necessary if several outcomes are equally good). We proceed in this manner all the way up to the initial node, corresponding to the starting position of the game. We can then go back down the tree, and since we have already determined the best response to any position, we can deterministically get to the best outcome for Black or White. This automatically yields a win for one of them, or a draw.
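Chess itself is far too large to solve this way, but the backward-induction procedure is easy to write down for any small two-player win/lose/draw game tree. The sketch below is generic, not a chess engine, and the toy tree at the end is an assumption made purely for illustration.

```python
# Generic backward induction on a finite two-player game tree.
# Leaves hold outcomes from White's point of view: 1 = White wins, 0 = draw, -1 = Black wins.
# Internal nodes are (player_to_move, [child, child, ...]).

def solve(node):
    """Return the outcome of the game under optimal play from this node."""
    if isinstance(node, int):          # leaf: the game is over
        return node
    player, children = node
    values = [solve(child) for child in children]
    # White picks the child with the highest value, Black the lowest.
    return max(values) if player == "White" else min(values)

# A toy tree (assumed for illustration): White moves, then Black replies.
toy_tree = ("White", [
    ("Black", [1, 0]),    # after White's first option, Black can force a draw
    ("Black", [0, -1]),   # after White's second option, Black can force a win
])

outcome = solve(toy_tree)
print({1: "White can force a win",
       0: "Optimal play is a draw",
       -1: "Black can force a win"}[outcome])
```

In principle the same recursion, applied to the (astronomically larger) chess tree, would tell us which of the three possibilities holds.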

Unfortunately, while this works well in theory, in practice it is virtually impossible. Given the combinatorial explosion of positions in chess, the computing necessary to determine which possibility is correct is infeasible. I guess we’ll be stuck with just a good game of chess.


[1] That is, theoretically; the actual game tree is WAY too big to actually depict


Why do women (almost) never ask men on dates?

This is something I’ve asked a few people about. It seems odd that in our modern, post-feminist age, it is almost always men who do the asking out. This is not so good for men or for women. For men, it puts a lot of pressure on them to make all of the moves. For women, I cite Roth and Sotomayor’s classic textbook on matching, which shows that, though the outcome when men do all the proposing is stable, it is the worst possible stable outcome for the women. That is, women could get better guys to date if they made the moves.

I have a few hypotheses, but none of them seem particularly appealing:

1) Women aren’t as liberated as we think.

Pro: There doesn’t seem to be any point in history where this was any different, so this social practice may indeed be a holdover from the Stone Age (i.e. before 1960).

Con: If this is true, then it is a very bad social practice, and we should buck it! This is not a good reason to maintain it!

2) If a woman asks a man out, it reveals information about her. This could be a case of multiple equilibria. Suppose that a small percentage of “crazy types” of both men and women exists, and under no circumstances do you ever want to date one of them. The equilibrium we are in is fully separating for women: the “normal types” always wait for men to ask them out, while the “crazy types” ask men out. In this perfect Bayesian equilibrium, a man who gets asked out concludes that the woman must be crazy, and so he rejects. Knowing this, the “normal” women would never want to ask a man out, since it would involve the cost of effort/rejection with no chance of success.

Suppose the chance that someone is crazy is some very small ε > 0. Consider the game tree:

[Figure: game tree for the asking-out game, with normal and crazy types of women]

Notice that the crazy women always want to ask the guy out, no matter what the beliefs of the guy are.

There are a few perfect Bayesian equilibria of this game, but I will highlight two. The first: normal women never ask guys out, and guys never accept. As ε → 0, this gives an expected payoff vector of (0, 0). No one wants to deviate, because only crazy women ask guys out, so a guy would never accept an offer, as that would give him a payoff of -10 instead of 0; knowing this, normal women will never ask men out, because that gives them a payoff of -1 instead of 0.

Another equilibrium: all women ask men out, and men always accept. As ε → 0, the expected payoff vector is (2, 2). Thus the former is a “bad” equilibrium, while the latter is a “good” one. In other words, we may be stuck in a bad equilibrium. (A quick numerical check of both equilibria appears below.)
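Here is a small sketch of those deviation checks, using payoffs consistent with the numbers quoted above: a successful match is worth 2 to each side, dating a crazy person costs the man 10, and an unsuccessful ask costs the woman 1. The exact payoffs in the original game tree are in the image, so treat these as assumptions.

```python
# Deviation checks for the two highlighted equilibria (payoffs assumed from the text).
eps = 0.01          # probability a woman is the "crazy" type

# --- Bad equilibrium: only crazy women ask, men reject ---
# Man's belief after being asked: the woman is crazy with probability 1.
accept_payoff = -10       # dating a crazy woman
reject_payoff = 0
print("Bad eq: man accepts?", accept_payoff > reject_payoff)      # False -> he rejects
# Normal woman, anticipating rejection: asking costs 1 for sure.
print("Bad eq: normal woman asks?", -1 > 0)                       # False -> she doesn't ask

# --- Good equilibrium: all women ask, men accept ---
# Man's belief after being asked: crazy with probability eps.
accept_payoff = (1 - eps) * 2 + eps * (-10)
print("Good eq: man accepts?", accept_payoff > reject_payoff)     # True for small eps
# Normal woman, anticipating acceptance: asking yields 2 instead of 0.
print("Good eq: normal woman asks?", 2 > 0)                       # True
```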

Pro: I think that there are definitely some guys out there who think that women who ask them out are “aggressive” or “desperate,” and so they wouldn’t go out with them.

Con: I don’t think the above sentiment is true in general, at least for guys worth dating! If a guy has that attitude, he’s probably an @$$#0!3 who’s not worth your time.

There may also be an element of the problem from (1) here, but it would be harder to overcome, since the scenario described here is an equilibrium.

Finally, while this might have some plausibility for people who don’t really know each other yet, I definitely don’t think it holds for people who know each other somewhat better, since the man would already know whether the woman in question is crazy. That being said, I would expect a woman who has known the man in question for longer to be relatively more likely to ask him out (compared to the man asking her), even if it is still less likely overall.

3) Women just aren’t as interested. If he’s willing to ask her out, then fine, she’ll go, but otherwise the cost outweighs the benefit.

Pro: It doesn’t have any glaring theoretical problems.

Con: I want you to look me in the eyes and tell me you think this is actually true.

4) They already do. At least, implicitly, that is. Women can signal interest by trying to spend significant amounts of time with men in whom they have interest, and eventually the guys will realize and ask them out.

Pro: This definitely happens.

Con: I’m not sure it’s sufficient to even out the scorecard. Also, this raises the question: if they do that, why can’t they be explicit?

When I originally showed this to some friends, they liked most of these possibilities (especially (1) and (2)), but they had some additional suggestions:

5) Being asked out is self-validating. To quote my (female) friend who suggested this,

…many girls are insecure and being asked out is validation that you are pretty/interesting/generally awesome enough that someone is willing to go out on a limb and ask you out because they want you that badly. If, on the other hand, the girl makes the first move and the guy says yes it is much less clear to her how much the guy really likes her as opposed to is ambivalent or even pitying her.

Pro: This is true of some women.

Con: Again to quote my friend, “There are lots of very secure, confident girls out there, so why aren’t they asking guys out?”


6) Utility from a relationship is correlated with interest, and women have a shorter window. This one is actually suggested by Marli:

 If asking someone out is a signal of interest level X > x, and higher interest level is correlated with higher longterm/serious relationship probability, then women might be interested in only dating people with high interest level because they have less time in which to date.

Pro: It is true that women are often perceived to have a shorter “window,” in that they are of child-bearing age (for those for whom that matters) for a shorter period.

Con: This doesn’t seem very plausible. Going on a date doesn’t take very long, at least in terms of opportunity cost relative to the length of the “window.” As a friend put it in response,

Obviously one date doesn’t take up much time; the point of screening for interest X > x is to prevent wasting a year or two with someone who wasn’t that into you after all. But then it would seem rational for (e.g.) her to ask him on one date, and then gauge his seriousness from how he acts after that. Other people’s liking of us is endogenous to our liking of them, it really seems silly to assume that “interest” is pre-determined and immutable.

So overall, it seems like there are reasons which explain how it happens, but no good reason why it should happen. I hope other people have better reasons in mind, with which they can enlighten me!


Bumping into people: the awkward dance

You know when you open the door, and you find someone else is trying to get in at the same time, and so you both end up right in each other’s faces? And then you each try to get out of the other’s way, only to go in the same direction and still be in each other’s face? And then you do a sort of weird dance?

I’ve been trying to find some good YouTube clips of this, and while I know they’re out there, I can’t find them in a brief search. But I think you know what I’m talking about.

Well, surprise surprise, we can model this interaction as a game! Each person has two strategies: move left (L), or move right (R).(1)  If they both move in the same direction, then they are still stuck doing the awkward dance, and get payoff -1. Otherwise, they move out of each other’s way, and so they happily go along their way, getting payoff 1.

1 \ 2     Left        Right
Left      (-1, -1)    (1, 1)
Right     (1, 1)      (-1, -1)

Fig. 1: Awkward dance game

There are a couple of pure-strategy Nash equilibria: one player goes left, and the other goes right. But which equilibrium is going to be chosen? A priori, there is no way to tell. Here, social conventions can be useful, such as always moving forward on the right side (for a similar post, see Marli’s post, “When in New York, do as the New Yorkers do“). The problem is when some people didn’t get the memo (*sigh*).

There is a third Nash equilibrium in mixed strategies, where each person chooses to go in one direction with a 50-50 chance. This means that they will have a 50-50 chance each time they play that they will bump into each other, but eventually, after perhaps dancing for a while, they will get it right.
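For the curious, here is a short sketch that checks that the 50-50 mixing really is an equilibrium (each player is indifferent between Left and Right against it) and computes how long the dance should be expected to last.

```python
# The awkward-dance game: verify the symmetric mixed equilibrium and the expected dance length.
p = 0.5   # probability the other player goes Left (directions from player 1's perspective)

# Player 1's expected payoffs against that mix: collide -> -1, pass -> +1.
u_left  = p * (-1) + (1 - p) * (1)
u_right = p * (1)  + (1 - p) * (-1)
print("Player 1 indifferent between Left and Right:", u_left == u_right)

# Each round they collide with probability 1/2, so the number of rounds until
# they finally pass each other is geometric with success probability 1/2.
collide = p * p + (1 - p) * (1 - p)
expected_rounds = 1 / (1 - collide)
print("Expected rounds of awkward dancing:", expected_rounds)   # 2.0
```

So on average the dance only lasts a couple of attempts, which matches everyday experience.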

(1) This will all be from the perspective of player 1.


What do kittens have to do with rising tuition?

If you read the Financial Times, you might suspect from an article on Monday that kittens have something to do with rising tuition and Prisoners’ Dilemmas. Let me assure you that they don’t.

A friend of mine sent me the article, which cites a model designed by a team of Bank of America consultants who use the Prisoners’ Dilemma to explain rising college tuition. Here is the graphic they used:

[Image: the graphic from the article]

Fig. 1: Things that are pairwise irrelevant to each other:

a kitten, the Prisoner’s Dilemma, and rising tuition.

They explain that the college ranking system (assuming two colleges) is a zero-sum game. If one college moves up, the other one moves down. “A college can move up in the rankings if it can raise tuition and therefore invest in the school by improving the facilities, hiring better professors and offering more extracurricular activities.” And therefore, they conclude, this is why college tuitions have been rising and why student debt will continue to rise.

First glaring problem: (raise, raise) is a Pareto-optimal outcome as they’ve set up this game, but what they probably meant to say was that it is a Nash equilibrium. Or maybe they meant to say that “raise” is the best response for each college. Anyway, in this game, (don’t raise, don’t raise) is also Pareto-optimal (but not a Nash equilibrium)!

Secondly, they’re trying to illustrate a kind of ratcheting problem: both colleges raise tuition to raise the quality of the resources at the school, in order to maintain their rankings. But this means it’s a repeated game. In repeated games with a finite horizon, defection happens at every step, but in infinite-horizon games, cooperation can occur. Now, let’s just assume that this is an infinite-horizon game, which is what the folks at B of A are assuming when they predict that college tuition will keep rising indefinitely, beyond mere inflation. What incentive is there to cooperate and keep tuition low? According to this game, none. And according to what you might expect in reality, none. Is it plausible that, in the absence of antitrust laws, colleges would want to collude to keep tuition low, and that because they can’t collude, they are doomed to raise tuition every year against their wills? Nope.

Then we come to the fact that this game can’t really be infinite horizon as it is presented here. The simple reason is that, even if education is becoming a larger and larger share of a household’s spending, and even if the student is taking out loans and borrowing against his future expected earnings, he still has a budget constraint that he can’t exceed. Furthermore, the demand for attending a particular university should drop as soon as the tuition exceeds the expected lifetime earnings/utility advantage, for whatever the student sees himself doing after 4 (or more) years there, over the alternative. So, there will be some stage at which the payoffs change and it becomes a best response for each school not to increase its tuition. So it’s a finite-stage game, and the increase will stop somewhere, namely, where price theory says it should.[1]

Finally, it’s not clear that increasing tuition actually has such a strong effect on school rankings or that colleges are in such a huge rankings race. And, even if students at colleges outside the very top schools tend to choose a college based on things like food quality and dorm rooms, students don’t demand infinitely luxurious college experiences at infinite prices. Evidence: Columbia students feel they’re overpaying for food, and feel entitled to steal Nutella.

The lessons here are these: It’s not a Prisoner’s Dilemma in a strong sense if the cooperative result isn’t strictly preferred to the Nash equilibrium. Don’t model a tenuous game where the game isn’t relevant to the ultimate result (tuitions will stop rising at some point). Don’t assume that trends are linear, when they are definitively not linear. And, don’t put a kitten on your figure just because you have some white space — it really doesn’t help.

———————

[1] Actually, the game doesn’t have to be finite horizon. Suppose the upper limit that the colleges know they can charge is A, and tuition in year t is B_t. Then, at each stage, they could set B_t = B_{t-1} + 0.5*(A - B_{t-1}). But as tuition approaches A, the increases become smaller and smaller until they pretty much vanish, which is effectively the same as stopping, because at some point tuition stops affecting rank (a college isn’t going to improve its rank by charging each student an extra cent).
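Iterating that rule makes the point concrete: tuition keeps rising forever in a formal sense, but the increases shrink geometrically and the level converges to the cap A. The starting tuition and the cap below are arbitrary illustrative numbers.

```python
# Iterate the footnote's tuition rule B_t = B_{t-1} + 0.5 * (A - B_{t-1}).
A = 60000.0      # assumed upper limit on what colleges can charge
B = 40000.0      # assumed current tuition

for t in range(1, 11):
    increase = 0.5 * (A - B)
    B += increase
    print(f"year {t}: tuition = {B:,.2f}, increase = {increase:,.2f}")

# The increases halve each year, so tuition approaches A without ever exceeding it;
# after about 20 years the annual increase is already below a cent.
```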

