The veto donation paradox

We think of the veto as a very powerful (perhaps even unfairly powerful) bargaining chip, but this is not always the case. Sometimes having a veto is not as good as giving it away.

In this example, you want to select a juror. Candidates arrive randomly — most are acceptable but mediocre for both sides, a few are great for one side and terrible for the other, and a few are pretty good for both.

This is a variation of the Secretary Game. The central question in secretary games is, “Since a rejected candidate does not apply again, when should we stop interviewing?”

For simplicity, we assume that there are only three types:

Type, by utility to (x, y)        Probability of arrival
(b, b), where 1/2 < b < 1         1 - 2\epsilon
(1, 0)                            \epsilon
(1 - \epsilon, 1 - \epsilon)      \epsilon


Each round, the selection rule is:

1. If both players reject the candidate, he is rejected.
2. If both players accept the candidate, he is accepted.
3. If one player accepts the candidate and one player rejects, then the candidate is accepted unless the rejecting player spends a veto.

This is a sequential game, so it is a game of perfect information. It is also Markovian: if a candidate is rejected, we return to exactly the state in which we started, so the continuation value after any rejection equals the value of the whole game.

Suppose neither side has any vetoes. Then X always accepts (1, 0) and Y always rejects it, so it is accepted (since there are no vetoes). Y always accepts (1 - \epsilon, 1 - \epsilon), since it is Y’s best outcome, so it is also accepted. X rejects (b, b), but Y accepts it because b > 1/2, so it improves Y’s average. (If Y also rejected it, acceptance could only ever happen on (1, 0) or (1 - \epsilon, 1 - \epsilon), so Y’s payoff would be the average of 0 and 1 - \epsilon, which is less than b.)

Therefore, the expected utility is:

U(X) = (1- \epsilon)(\epsilon) + (b)(1- 2\epsilon) + (1)(\epsilon)
U(Y) = (1- \epsilon)(\epsilon) + (b)(1- 2\epsilon)
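As a sanity check, here is a short Python sketch that evaluates these formulas and compares them against a Monte Carlo simulation of the no-veto game. The values b = 0.7 and \epsilon = 0.01 are illustrative choices, not fixed by the setup (the text only requires 1/2 < b < 1 and small \epsilon > 0).

```python
import random

# Illustrative parameters (any 1/2 < b < 1 and small eps > 0 work).
b, eps = 0.7, 0.01

# Closed-form expectations from the formulas above.
U_X = (1 - eps) * eps + b * (1 - 2 * eps) + 1 * eps
U_Y = (1 - eps) * eps + b * (1 - 2 * eps)

def first_candidate(rng):
    """With no vetoes, every candidate type is accepted, so the payoff
    is just the first arrival: (1, 0) w.p. eps, (1-eps, 1-eps) w.p. eps,
    and (b, b) otherwise."""
    r = rng.random()
    if r < eps:
        return (1.0, 0.0)
    if r < 2 * eps:
        return (1 - eps, 1 - eps)
    return (b, b)

rng = random.Random(0)
n = 200_000
draws = [first_candidate(rng) for _ in range(n)]
sim_X = sum(x for x, _ in draws) / n
sim_Y = sum(y for _, y in draws) / n
# sim_X and sim_Y should match U_X and U_Y to within Monte Carlo error
```

The simulation is trivial here precisely because, with no vetoes, the very first candidate is always hired.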

If X has exactly one veto and Y doesn’t, then X uses the veto the first time Y accepts (b, b), and we return to the starting point with no vetoes left. This makes X slightly better off and Y slightly worse off:

U^*(X) = (1- \epsilon)(\epsilon) + (U(X))(1- 2\epsilon) + (1)(\epsilon)
U^*(Y) = (1- \epsilon)(\epsilon) + (U(Y))(1- 2\epsilon)
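These are just the no-veto formulas with the (b, b) term replaced by the continuation values, since vetoing (b, b) restarts the game without vetoes. A small sketch (again with illustrative b = 0.7, \epsilon = 0.01) confirms the direction of the change:

```python
b, eps = 0.7, 0.01  # illustrative values, not fixed by the text

# No-veto expectations, from earlier.
U_X = (1 - eps) * eps + b * (1 - 2 * eps) + eps
U_Y = (1 - eps) * eps + b * (1 - 2 * eps)

# X vetoes the first (b, b); afterwards play is the no-veto game.
U_star_X = (1 - eps) * eps + U_X * (1 - 2 * eps) + eps
U_star_Y = (1 - eps) * eps + U_Y * (1 - 2 * eps)

assert U_star_X > U_X   # X is slightly better off with the veto...
assert U_star_Y < U_Y   # ...and Y slightly worse off
```

The inequalities follow because U(X) > b > U(Y) for small \epsilon, so substituting the continuation values into the (b, b) slot moves X up and Y down.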

However, if Y has one veto and X doesn’t, then X should reject (1, 0): if X accepted, Y would veto it, and X would be back in the no-veto game with expectation U(X), which is close to b, instead of the 1 - \epsilon he gets by waiting. If (b, b) arrives, X and Y both reject it (Y’s continuation value while holding the veto is 1 - \epsilon > b), and if (1 - \epsilon, 1 - \epsilon) arrives, both accept. So, Y’s having the veto is actually better for both players than X’s having the veto (and, if X could, he should give the veto to Y). Since this arrangement gives both sides a higher expected payoff, handing one side a veto can even be Pareto-improving.
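Putting the three arrangements side by side makes the paradox concrete. This sketch reuses the illustrative values b = 0.7 and \epsilon = 0.01:

```python
b, eps = 0.7, 0.01  # illustrative values; the text only needs 1/2 < b < 1

# No vetoes.
U_X = (1 - eps) * eps + b * (1 - 2 * eps) + eps
U_Y = (1 - eps) * eps + b * (1 - 2 * eps)

# X holds the single veto.
U_star_X = (1 - eps) * eps + U_X * (1 - 2 * eps) + eps
U_star_Y = (1 - eps) * eps + U_Y * (1 - 2 * eps)

# Y holds the single veto: X rejects (1, 0), both reject (b, b), and
# play continues until (1-eps, 1-eps) arrives, which both accept.
V_X = V_Y = 1 - eps

assert V_X > U_star_X and V_X > U_X   # X prefers giving Y the veto
assert V_Y > U_star_Y and V_Y > U_Y   # Y is better off too: Pareto improvement
```

With these numbers, both players jump from roughly 0.7 to 0.99 when Y holds the veto, which is why X should donate it.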

Extension: What happens if both sides have a positive, finite number of vetoes?

It’s easy for X to guarantee himself a payoff of 1 - \epsilon. He simply accepts (1, 0) every time it appears (forcing Y to spend a veto on it), accepts (1 - \epsilon, 1 - \epsilon), and rejects (b, b), until Y has only one veto left; then he plays as in the previous case. He can’t do any better: there is no line of play in which Y still has vetoes and (1, 0) is accepted, and none in which Y has no vetoes and X’s expectation exceeds 1 - \epsilon.
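The burn-down strategy above can be sketched directly. This is a hedged illustration, assuming Y plays the rejections described in the text; the function name and \epsilon = 0.01 are my own choices:

```python
import random

def burn_down(k_y, rng, eps=0.01):
    """X's burn-down strategy against Y holding k_y vetoes.

    X accepts (1, 0) so that Y must spend a veto, until Y's last veto;
    (b, b) is rejected by both sides, since Y's continuation while
    holding a veto exceeds b; (1-eps, 1-eps) is accepted on arrival.
    """
    while True:
        r = rng.random()
        if r < eps:                      # (1, 0) arrives
            if k_y > 1:
                k_y -= 1                 # X accepts; Y spends a veto
            # with one veto left, X rejects (1, 0) instead
        elif r < 2 * eps:                # (1-eps, 1-eps) arrives
            return (1 - eps, 1 - eps)    # both accept
        # otherwise (b, b) arrives: both reject, play continues

rng = random.Random(1)
x_pay, y_pay = burn_down(k_y=3, rng=rng)
# every run of this strategy ends with both players receiving 1 - eps
```

Note that X’s own vetoes are never needed in this line of play, because Y never accepts (b, b) while holding a veto.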

[1] Mathematicians and other quantitative people talk about \epsilon (epsilon, pronounced either “EP-si-lon” in the US or “ep-SIGH-len” in the UK) a lot. We’ve certainly used it often. You can think of it as an arbitrarily small positive number, or “a number as small as you need it to be, but not 0.”

This example is based on Shmuel Gal, Steve Alpern, and Eilon Solan, “A Sequential Selection Game with Vetoes” (2008).