## Notes on Arrow’s Impossibility Theorem

Someone ran across my CV and asked me if I could send anything I’d written on game theory. So I dug up my class notes (for an undergrad class at Hillsdale) on Kenneth Arrow’s famous “Impossibility Theorem” regarding social choice. I haven’t looked at these in 7 years, so I hope they’re right:

NOTES ON ARROW’S

IMPOSSIBILITY THEOREM

Economics 356

History of Economic Thought II

Spring 2005

**Individual Preferences**

Following the methodological revolution described in Hicks, by the 1940s most economists no longer believed in cardinal utility. At the very least, most economists considered it much safer to assume that people had merely *ordinal* preferences, rather than to take the stronger view that people actually received units of psychic happiness (“utils” or “wantabs”) from various goods and services.

Economists developed formal techniques to rigorously develop this line of thought. The first step is to define the set of all possible items to be valued. Depending on the context, this set consists of different types of things. At the most abstract level, it could be viewed as “the set of all possible universes.” In a much more specific example, it could merely refer to “the set of all possible pizza orders the class could phone in to Hungry Howie’s.”

Once we have defined the appropriate set, we then can talk about how each individual ranks each element in the set. Since we are not going to assume cardinal utility, we can only discuss an individual’s *ordinal* rankings; that is, we can only take two elements at a time, and ask the individual, “Which of these do you prefer, or are you indifferent between them?” We can never ask—indeed, it doesn’t make *sense* to ask—the individual, “How *much more* do you like this element over this other element?”

Formally, we can summarize an individual’s answers to these questions by use of a *preference relation*. Normally we indicate this by a symbol that looks like a curvy greater-than-or-equal-to sign, but here I’ll just use the symbol @. If we take two elements, let’s call them *x* and *y*, from the set of all possible things to be valued, then the statement *x*@*y* means that the individual thinks that *x* is at least as good as *y*. If it were *not* true (at the same time) that *y*@*x*, then we would conclude that *x* is *better* than *y* (not merely just as good), because we know *x* is at least as good as *y*, but *y* is not at least as good as *x*. And if we knew that *x*@*y* and *y*@*x* at the same time, then we would conclude that this individual is indifferent between *x* and *y*.

NOTE: In order to construct a coherent ranking (from best to worst) of an individual’s preferences, it is necessary that his or her @ be *complete* and *transitive*. If @ is complete, that means it can be applied to any two elements from the set. I.e., for any elements (call them *a* and *b*), the individual could report either *a*@*b*, *b*@*a*, or both. If @ were incomplete, then the individual might say, “I really don’t know how I feel about those two elements; I can’t tell you which is at least as good as the other.”

If @ is transitive, then whenever *x*@*y* and *y*@*z*, it must also be the case that *x*@*z*.
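The two rationality conditions above are easy to check mechanically. Here is a minimal sketch (not from the notes; the representation of @ as a set of ordered pairs is my own illustration) that tests completeness and transitivity of a small preference relation:

```python
# Represent the relation @ as a set of ordered pairs (a, b),
# where (a, b) in the set means "a is at least as good as b".

def is_complete(relation, alternatives):
    """Complete: for every pair (a, b), a@b or b@a (or both)."""
    return all((a, b) in relation or (b, a) in relation
               for a in alternatives for b in alternatives)

def is_transitive(relation):
    """Transitive: whenever x@y and y@z, it must be that x@z."""
    return all((x, z) in relation
               for (x, y1) in relation
               for (y2, z) in relation
               if y1 == y2)

alts = {"x", "y", "z"}
# x at least as good as y, y at least as good as z, x at least as good as z
prefers = {(a, a) for a in alts} | {("x", "y"), ("y", "z"), ("x", "z")}
print(is_complete(prefers, alts))  # True
print(is_transitive(prefers))      # True
```

Dropping any of the pairs (say, removing ("x", "z")) would break transitivity, which is exactly the failure mode the majority-rule example later in the notes exhibits.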

**“Social” Preferences**

Economists often want to use their science in order to make policy recommendations, or at least to make “objective” statements about various social arrangements. By analogy with an individual preference ranking, we can ask how “society” does (or should) value the different possible elements in the set of all valued things.

Because society is ultimately composed of individuals, most economists think that “social” preferences ought to be constructed from the preferences of the individuals in society. But at this point, a problem emerges: If people do not agree on how to rank, say, *x* with *y*—i.e. some people feel that *x*@*y* while other people do not—then how can we say how “society” should rank these two possible outcomes?

Economists thus began a search for plausible *social welfare orderings*. These are functions that take the @ for each individual—and we could keep them distinct by putting a superscript on them, so that @^{1} is the preference relation of person #1, etc.—and then use this information to generate a preference relation for society, which we will label @^{S}.

So now the question is, what types of social welfare orderings are appealing, both on logical and moral grounds? In principle, there are billions of different rules we could invent, in order to generate a @^{S} out of the individual @^{i} of each member *i* in the society.

**Arrow’s Theorem**

Kenneth Arrow intended to weed out the “silly” or obviously distasteful social welfare orderings (henceforth SWO). So he came up with a quite reasonable list of criteria that any decent SWO would need to satisfy.

One basic requirement is that it should be complete and transitive. That is, whenever the individual @^{i} of each person in society is complete and transitive, whatever our rule is that generates the @^{S}, that list of social preferences should *also* end up being complete and transitive.

Another criterion is that the SWO should obey *weak Pareto optimality*. In the present context, this means that if *x*@^{i}*y* for every single person *i*, then it should also be the case that *x*@^{S}*y*. In other words, if every single person in society thinks that *x* is at least as good as *y*, it would be ridiculous if our SWO then ended up saying that “society” should value *y* more than *x*.

A third criterion is the *independence of irrelevant alternatives*. This is the least intuitive of the criteria. What it requires is that the determination of the social ranking of *x* and *y* should depend *only* on how each individual ranks *x* and *y*.

The final criterion is *no dictatorship*. This means that there cannot be some individual *j* such that @^{S} = @^{j} no matter what every other person’s preferences are. Note that this is a very weak requirement. For any *particular* group of individual preferences @^{1}, @^{2}, @^{3}, …, it’s perfectly acceptable if our SWO constructs a @^{S} that happens to be identical to some individual @^{j}; this alone would not christen individual *j* as a dictator. What *would* qualify him as a dictator is if @^{S} = @^{j} for *any possible* group of individual preferences @^{1}, @^{2}, @^{3}, …

What Arrow proved is that *there does not exist* any SWO that satisfies all four of the above conditions (if we have at least a few people and a few different elements in the set of valued things). Specifically, Arrow proved that if we assume we are dealing with an SWO that meets the first three criteria, then that SWO necessarily must work by picking some individual *j* and then simply setting @^{S}=@^{j}.

**Examples**

Since Arrow’s Impossibility Theorem is a negative result, it’s best to illustrate it by showing SWOs that *do not* satisfy his criteria. For simplicity, we’ll assume there are only three people, Joe, Billy, and Martha, and only three possible states of the world, *x*, *y*, and *z*. We thus are looking for a set of rules to take @^{J}, @^{B}, and @^{M} in order to construct a “social” ranking of the possible outcomes *x*, *y*, and *z*.

Suppose we have the very simple SWO that says, “No matter how Joe, Billy, and Martha rank the alternatives, @^{S} should always be defined so that *x*@^{S}*y*, *y*@^{S}*z*, and *x*@^{S}*z*, and so that the reverse is not true, e.g. that it is not the case that *y*@^{S}*x*, etc.”

Which of Arrow’s criteria does this suggested SWO violate? Well, it’s complete and transitive, so it’s okay on those grounds. It doesn’t have a dictator, either (it’s always possible that any person will have preferences that differ from those indicated by @^{S}). Although it’s not as easy to see, I’m pretty sure that this hypothetical SWO also obeys the independence criterion. (E.g. the social ranking of *x* and *y* will never be affected by changing the individuals’ rankings of, say, *y* and *z*.)

What this SWO *does* (obviously) violate is the weak Pareto condition. For example, if Joe, Billy, and Martha all strictly prefer *z* to *y*, then our suggested SWO will still say that “society” prefers *y* to *z*. Thus our suggested SWO does not meet Arrow’s criteria, and we must keep looking.

What about majority rule? That is, suppose we define @^{S} such that *x*@^{S}*y* only if at least two people feel this way, etc.

Majority rule violates the criterion of transitivity. That is, there are *possible* preferences that Joe, Billy, and Martha could have, such that a @^{S} constructed on the basis of majority rule would violate transitivity. (To see this, consider the case where Joe ranks the alternatives in the order *x*, *y*, *z*, Billy ranks them *y*, *z*, *x*, and Martha ranks them *z*, *x*, *y*.) Note that Arrow requires the SWO to be transitive for *any* possible list of individual preference relations; it’s not enough that the SWO might satisfy all four criteria for some particular list of individuals’ preferences.
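The cycle in the parenthetical example can be sketched directly. This is a small illustration of the notes’ Condorcet-style counterexample (the list-of-names representation, listed best to worst, is my own):

```python
# Each voter's strict ranking, listed from best to worst.
rankings = {
    "Joe":    ["x", "y", "z"],
    "Billy":  ["y", "z", "x"],
    "Martha": ["z", "x", "y"],
}

def majority_prefers(a, b):
    """Majority rule: society strictly prefers a to b iff at
    least two of the three voters rank a above b."""
    votes = sum(r.index(a) < r.index(b) for r in rankings.values())
    return votes >= 2

for a, b in [("x", "y"), ("y", "z"), ("z", "x")]:
    print(f"majority prefers {a} to {b}: {majority_prefers(a, b)}")
# All three print True: x beats y, y beats z, yet z beats x,
# so the "social" relation cycles and cannot be transitive.
```

The rule is perfectly sensible for any single pairwise vote; intransitivity only shows up when the three pairwise results are stitched together.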

Finally, let’s consider the SWO that proceeds like this: “We will say that *x*@^{S}*y* only if Joe, Billy, and Martha all agree that *x* is at least as good as *y*. If at least one of them disagrees, though, we will say that it is *not* the case that *x*@^{S}*y*. Etc.”

Which criterion does this rule violate? Well, it’s transitive (so long as the individual relations are); if everybody thinks *x* is better than *y*, and that *y* is better than *z*, then that means everybody thinks *x* is better than *z*, and thus so will “society.” There is also no dictator with this proposed SWO, and it is also true (I think) that there is no violation of independence. And of course this SWO obeys the weak Pareto condition.

But this proposed SWO is, unfortunately, incomplete. That is, the rule we defined will not always tell us how “society” should compare, say, *x* and *z*. For suppose that Joe thinks *x* is strictly better than *z*, but that Martha thinks that *z* is strictly better than *x*. Then according to our rule, it can neither be true that *x*@^{S}*z* nor *z*@^{S}*x*. And completeness requires that our preference relation be able to tell us that one (or both) of these items is at least as good as the other. Hence this proposed SWO too fails to satisfy Arrow’s criteria.

Wow, nice undergrad class notes.

OH, teaching notes. Wow, that blew my mind for a minute there, I was like “What undergrad takes notes like that??”

Wait, oh, hahaha, I thought the same thing! So they’re teaching notes? Whew, I was like damn, Murphy was already a superstar then.

I was going to say that my undergrad notes look, well, like they were written by someone half baked…and with Parkinson’s…

I was thinking holy cow, am I way out of my league or what?

That’s why I made the compliment.

Now I feel a little awkward, but not too much, because I just let the cat out of the bag on believing Murphy wrote like that back when he was listening to euro dance mix 95.

Disregard, Murphy just said below that he wrote these notes. Our original interpretations were correct.

Now, where’s that rock I call home? I should go back under it.

Not sure what is going on here. I wrote these notes and then handed them out to my students. I wasn’t in undergrad in Spring of 2005, if that is a context clue.

HAHAHAHAHA

I am an idiot.

Dude I listened to Hall & Oates back then. And Survivor. I was Old Skool before it was cool.

If we’re talkin’ Old School I was listening to Machaut then.

Cool story

> We can never ask—indeed, it doesn’t make sense to ask—the individual, “How much more do you like this element over this other element?”

Sure it does: brain computation time needed to return an answer, personal estimate of probability of regret …

> Sure it does: brain computation time needed to return an answer…

But by that criterion I really liked your comment.

You (probably) really liked giving an answer rather than not giving one.

Brilliant.

I know this is kind of a tangent of the central issue here, but I’ve never understood this obsession with abolishing relative strengths of preferences. I just showed you two easy ways to operationalize the concept in an observable way. (Paul Birch has an implicit one in his essay on Ethics as Entropy of choice, where your “probability” of taking/liking a choice is indicative of its relative, cardinal goodness.)

And in the last thread, a lot of folks — *thinking* they were taking the Austrian position — offered other ways to operationalize interpersonal comparisons of “preference strength” in the parable of the poor woman altruist. For example, they said that you could count up her wealth and say it was a “bigger sacrifice” by giving up a higher fraction of her possessions, weighted by market (i.e. social) value.

Even your joke here, Bob_Murphy, shows your failure to appreciate the point, since you confused “time to answer” with “time to decide whether to answer”.

On top of that, your canonical example of “heh — you can’t have like, a friend that ‘two-times’ as good a friend” is far from convincing to most people, who can indeed meaningfully describe what it would be like for all of their friends to simultaneously become better friends while preserving relative friend ranking.

Time to stop grounding your econ in pure ordinality.

Elegance and logical purity.

I think I mostly agree with you Silas. I believe new understanding of the brain will transform a lot of debates, including this one. But you have to admit we have no clearly correct way to establish a single scale of utility applicable in all circumstances. So even with a pretty good one there will be situations where the ordinal-only model will be the best tool at hand.

But I agree a lot of folks push this to extremes. As we saw with the tale of the woman and her two mites.

> Sure it does: brain computation time needed to return an answer, personal estimate of probability of regret …

How are these two things proxies for, or representations of, or manifestations of, intensity of preference?

If it takes me all day to figure out a physics problem, but it takes me about 5 seconds to accept a billion dollars for free, or vice versa, how do these times of computation tell us the intensity of my preference?

Offer RPM a real live debate with Krugman. The reaction would be huge and instant.

Major_Freedom, you’re making the same mistake Bob_Murphy made: the comparison is supposed to be between the computation time needed *to make a choice*, not to perform an arbitrary computation.

So here, if you had some alternative B, you would need to compare:

1) the decision time needed to choose $1 billion over B, to

2) the decision time needed to choose to do the physics problem over B

If your decision time in 1) were significantly shorter than in 2), that would suggest, I think, that the intensity of your preference for the billion dollars relative to B is stronger than your preference for doing the physics problem relative to B.

And the same is true if the measure is for personal estimate of probability of regret. (Although I’d prefer to go with log-odds, since it’s robust and consistent across cases where you actually prefer B, and scales consistently with high or low probabilities.)
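For readers unfamiliar with the log-odds transform mentioned here, a quick sketch (this is just the standard logit formula, not anything specific to the thread): log-odds maps a probability p in (0, 1) to log(p / (1 − p)), is zero at p = 0.5, antisymmetric about that point, and stretches out probabilities near 0 and 1, which is the “scales consistently” property being invoked.

```python
import math

def log_odds(p):
    """Log-odds (logit) of a probability p in (0, 1)."""
    return math.log(p / (1 - p))

print(log_odds(0.5))  # 0.0
print(log_odds(0.9))  # about 2.197  (= ln 9)
print(log_odds(0.1))  # about -2.197 (mirror image of 0.9)
```

Working in log-odds rather than raw probabilities also makes differences additive, so comparing two regret estimates becomes a simple subtraction.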

Where do you disagree?

> the comparison is supposed to be between computation time needed *to make a choice*, not to perform an arbitrary computation.

What’s the difference? There are choices tied up with means and ends. I tend to call choices tied up with ends “constitutive choices”.

For example, I choose to use the means of reading a score to learn how to play the moonlight sonata. I choose to use a piano as the constitutive choice of actually performing the goal I intended to achieve. Here, the computation and the choice “overlap”.

> So here, if you had some alternative B, you would need to compare: 1) the decision time needed to choose $1 billion over B, to 2) the decision time needed to choose to do the physics problem over B. If your decision time in 1) were significantly shorter than in 2), that would suggest, I think, that the intensity of your preference for the billion dollars relative to B is stronger than your preference for doing the physics problem relative to B.

But that just means 1) is ranked higher than 2), which are both ranked higher than B by stipulation. What I mean is, to me you are using the word “intensity” (of a preference) to describe its relative location in a scale of ranked preferences.

I mean, you even stipulated that 1) and 2) are *ranked* higher than B! Saying the “intensity” of 1) relative to B is larger than the “intensity” of 2) relative to B cannot *eliminate* your statement being one of a ranking, namely, of 1) over 2), and 2) over B.

Then there is the blatant counter-example I have in mind. Suppose my computation time for choosing the billion dollars over B is “significantly longer” than the computation time for choosing the physics problem over B. Could I not still rank the billion dollars above the physics problem? I can think of a few reasons why I would take a longer time to decide I want the billion dollars over B, as compared to the physics problem over B, even though I prefer the billion over the physics problem.

Maybe the billion dollars would change my life so much that I need to take a longer time (than choosing the physics problem) to decide if I am willing to risk losing friends and family (which can happen when people come across that much money suddenly). I can do that whilst still ranking the billion over the physics problem.

You just kind of assumed that choosing the billion dollars takes a shorter time to decide as compared to the physics problem.

You still have not explained WHY varying computation times would suggest the existence of an “intensity” of preferences.

> And the same is true if the measure is for personal estimate of probability of regret. (Although I’d prefer to go with log-odds, since it’s robust and consistent across cases where you actually prefer B, and scales consistently with high or low probabilities.)

If your premise is the same for personal estimate of probability of regret, then the same questions/challenges I used for computation time above apply here too.

> Where do you disagree?

It’s more that I’m unconvinced, due to the (as of now) lack of an actual argumentative connection between the computation time of a decision and the intensity of a preference.

To me, computation time is insufficient, because I can think of scenarios where computation time would suggest an intensity metric whose implied ranking contradicts the actual ranking.

I think you’re not quite careful enough with your dictator. Under the other conditions, for any set of preferences there must be a j such that … But the value of j is not fixed beforehand by the rules alone. Change the ordered set of preferences of voters v1, v2, … and you can get a different dictator.

Ken B. your arrogance is astounding. But be a little more specific: What mistake did I make in my notes?

Not being arrogant Bob, just collegial (M_F? I only ask as I’m sure you won’t have learnt the word on FA …). I just think your wording is a little off because the rules do not need to specify a specific voter vj. I think this passage needs to be a bit clearer:

“it’s perfectly acceptable if our SWO constructs a @S that happens to be identical to some individual @j; this alone would not christen individual j as a dictator. What would qualify him as a dictator is if @S= @j for any possible group of individual preferences @1, @2, @3, …”

My point is j is not fixed and your wording, though perhaps not your intent, suggests otherwise. That’s why I said “not careful enough” and did not say “way wrong dude.”

So some example constitutions.

Constitution 1. Voters draw lots to determine a dictator.

Constitution 2. If any two voters have identical preference orderings v2 is dictator, otherwise v1 is dictator.

Plus, for some constitution where vj emerges as the Arrovian dictator, re-instantiate the example with voter j’s and voter k’s *preferences* swapped. For at least some rules I believe voter k will now be dictator.

> There is no voter i in {1, …, n} such that for every set of orderings in the domain of the constitution and every pair of social states x and y, x P(i) y implies x P y.

Your instances, in which the set of orderings is modified and the result changes, are not situations where there is an Arrovian dictator.

Ken I don’t know what you are saying. My original wording is exactly what Arrow’s Theorem requires.

OK, I see that I have misread what you said. I thought you were implying the rules had to (explicitly) christen a dictator. Example 1 shows why that’s not quite right. Re-reading, I see you did *not* say that. So I misread.

My bad, I retract.

Thanks, and I shouldn’t have been so ornery. In case you’re curious, what set me off is that you didn’t say, “Bob, that makes no sense to me. What about a case like…? Are you saying Arrow requires…?”

Instead of that you wrote, “I think you’re not quite careful enough with your dictator.” I.e. since you didn’t understand what I wrote, your first hypothesis was that I must have botched my explanation, even though I did a field in theory at NYU, and in my Game Theory class (not this History of Thought class) we were literally going over the symbolic proof of Arrow’s Theorem at the time I wrote these notes.

I think if you look at my comment, it reads like what one mathematical sophisticate would say to another to highlight what he believed was a small imprecision. That’s certainly how I intended it. I have had such exchanges with Landsburg for example, both correcting him and correcting my own mistakes. Rather than slighting your mathematical skills I think it assumes them. (In this case I was wrong about the point, but of course I wouldn’t comment if I thought that.)

Actually I knew collegial, Ken B.

Interestingly, I got it from Bob, but he wasn’t dealing with you at the time.

LOL, voting…in a dictatorship.

Nice.

“You can vote for any candidate you want, as long as it’s me.”

LOL dictating in a democracy. Nice.

🙂

Actually if you look up most discussions of Arrow’s theorem the word used is voter. And there is nothing odd about that. As Silas implies, votes need not have much effect. A voter is just someone who expresses a preference.

Sure, but your criticism is structured in a way that sounded funny.

It might. As I just explained to Bob, it was not written for the general reader. Right or wrong on substance it was written for a certain kind of reader. See my answer to Bob below.

That’s not uncommon I think. Many of your exchanges with LK assume a familiarity with Austrian theory for instance.

Like, all of them you mean.

I can’t even think of not thinking like an Austrian any more. It would be like asking me to assume my listener is not familiar with first order logic. I couldn’t even converse with them.

“I can’t even think of not thinking like an Austrian any more. It would be like asking me to assume my listener is not familiar with first order logic.”

They would need to be unfamiliar with first order logic to not be an Austrian rationalist. Seriously, I just got through reading Hoppe’s defense of rationalism, and cannot understand how one can logically take the stance of empiricism and not recognize the inherent contradiction.

I love that essay.

Self-referential logic FTW.

Seriously Bob, I have read this comment of mine several times now. The criticism seems quite mild to me, and politely expressed. I even think it indicates what I think is the problem, though I admit it would have been clearer had I included my examples. Your reaction is way over the top.

Considering your recent recanting, I think it’s safe to say your original demeanor was a little over the top.

Don’t you agree?

Nope. As explained at ponderous length elsewhere.

YOU’RE WRONG!!!!!

[Goes back to ex post rationalizing my wronginess]

I hope we’re copacetic now Bob. We owe it to ourselves to get along better and not burden the future by borrowing trouble …

I am trying really, really, really hard to see what’s in it for him, but I confess I am at a loss, but then again, I can’t observe his subjective value scale beyond his actions, but then again, uh, then again, I rarely see pleasant demeanors from either of you against each other.

Well, at least he’s replying to you. One of my nemeses Sumner pretends I don’t exist. His patience cannot hold a candle to Bob’s.

I hope you realize at the end of the day that he is doing you a favor. Don’t forget the title of this blog.

I think you missed the double entendres.

I got them, I was referring to the first sentence.

Ken B. I admit I overreacted if you admit you waltzed in and accused me of botching Arrow’s Theorem when really the problem was that you didn’t understand what I wrote. If we didn’t have such a history I wouldn’t have snapped like that.

With the proviso that ‘waltzed’ is tendentious — rhumba-ed? polka-ed? fox-trotted? — yes I thought I was criticizing a small error but actually misread your lapidary prose.

Heh OK I had to look up “lapidary” so we’re friends again. I am the intellectual equivalent of a Seattle doormat.

http://www.infowars.com/keynesian-hurricane-sandy-good-for-economy/

the blessings of destruction… according to Peter Morici