11 Oct 2017

Reactions to Thaler’s Nobel

Economics

Commentary from David R. Henderson, Alex Tabarrok, and Tyler Cowen.

Von Pepe also sends this feisty reaction from Mario Rizzo. Check this out:

Nevertheless, the emphasis on the limits of the standard rational paradigm, as pioneered by Thaler, has been a very refreshing and useful thing. And yet behavioral economics remains wedded to this narrow conception of rationality as a normative and prescriptive standard of evaluation. It drives the critique of many market outcomes and is the basis of policy prescriptions. It is precisely because people are not narrowly rational that their behavior must be fixed. Their behavior must be taxed, regulated or nudged in the direction of the behavior of the perfectly rational neoclassical man. For example, it is alleged that people are obese because they fail to take “full account” of the negative effects of their unhealthful eating habits. What is full account? They must reckon or discount these effects at the rational rate of discount – the long-run rate, the rate one would use if one were super-rational and calm in making a diet plan to be implemented in, say, six months or a year. But how the agent looks at things now, at the moment of deciding what to eat, is wrong. It is impetuous. It is “present biased.” The individual needs help. And, in practice, it is the government’s help.

Aside from the policy implications, there is an incredible irony here. Standard economics is mocked for its rationality assumptions and yet those assumptions are held up as an ideal for real human beings. It is as if there is a neoclassical man deep in each of us struggling to get out but he is continually bombarded by behavioral shocks. Behavioral policy is about nothing less than becoming the real you! All this despite your resistance. [Italics in original.]
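
To make the “present bias” point concrete, here is a minimal sketch in Python of the standard beta-delta (quasi-hyperbolic) story the behavioral economists have in mind. The numbers are made up purely for illustration:

    # A toy beta-delta (quasi-hyperbolic) discounter vs. a plain exponential
    # discounter. All numbers are made up for illustration only.

    DELTA = 0.999   # per-day long-run discount factor (the calm planner's rate)
    BETA = 0.7      # extra penalty the present-biased agent puts on anything not immediate

    def exponential_value(utility, days_away):
        """Value assigned by the textbook exponential discounter."""
        return utility * DELTA ** days_away

    def present_biased_value(utility, days_away):
        """Value assigned by a beta-delta (present-biased) discounter."""
        if days_away == 0:
            return utility
        return BETA * utility * DELTA ** days_away

    # Choice: dessert now (utility 10) vs. feeling healthier the next day (utility 12).
    for label, value in [("exponential", exponential_value),
                         ("present-biased", present_biased_value)]:
        in_the_plan = value(12, 181) > value(10, 180)   # viewed six months out
        at_the_table = value(12, 1) > value(10, 0)      # viewed at the moment of choice
        print(label, "| healthy choice in the plan:", in_the_plan, "| tonight:", at_the_table)

    # The exponential agent answers the same way both times; the beta-delta agent
    # flips once the dessert is immediate. That reversal is what gets labeled
    # "present biased," and the plan-time answer is the one held up as the
    # rational ideal Rizzo is describing.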

And also check out the big guns in the comments of Rizzo’s post.

13 Responses to “Reactions to Thaler’s Nobel”

  1. Tel says:

    They must reckon or discount these effects at the rational rate of discount – the long-run rate, the rate one would use if one were super-rational and calm in making a diet plan to be implemented in, say, six months or a year.

    The rate that is arbitrarily set by the central bank and can change at any time on a whim?

  2. Craw says:

    That’s a very insightful comment by Rizzo. I hadn’t thought of Nudge in quite those terms, but it’s exactly right: it assumes you’d be “rational” if you could. Which is ironic, since Thaler said he would spend the money as irrationally as possible.

    • Tel says:

      Anyone claiming to define “rational” in mathematically precise terms has also, by implication, claimed to be able to build a rational machine using that same definition.

      I’m yet to meet anyone who can do it.

      • Harold says:

        The mathematical claims are usually made about a specific thing – is that not the case? One can make a rational machine that would decide on the value of a specific future good in a time-consistent way. Economists do it all the time.

        • Tel says:

          Let’s look at a narrow domain then… maybe trading oil futures. We don’t expect this rational machine to do any difficult tasks like feeding and cleaning itself or making polite conversation at family dinners; just the very narrow and specific task of correctly pricing oil futures in the most rational way.

          The machine would make a steady profit of course, against those less rational human traders. Some human somewhere would be incentivized by that guaranteed profit and would build the mythical rational machine and collect plenty of free money.

          Well, actually, we went and tested this, and what was discovered is that in any medium- to long-term trade, humans beat robots. The only time robots win is on the millisecond timescale, when humans just can’t react fast enough and the robots are able to arbitrage fractions of a cent… and even then the robots require constant adjustment by humans.

          How you feel about high-frequency trading is another issue I’m happy to discuss, but hopefully we can agree that the definition of “rational” should not simply come down to who can push the “sell” button faster than the other guy. Speaking for myself, I find such a definition of rationality unacceptable.

          • Harold says:

            There is a vast difference between being rational and being a successful predictor. They are different things. Especially if asked to predict the outcome of the actions of irrational beings.

            “…what was discovered is that in any medium- to long-term trade, humans beat robots.”

            I am not sure about this. I have certainly read that on average tracker funds do as well as managed funds.

            • Tel says:

              Hang on a moment…

              One can make a rational machine that would decide on the value of a specific future good

              Is that a predictor or what?

              Especially if asked to predict the outcome of the actions of irrational beings.

              So it’s rational to make excuses now?!? Blame the other guy… I get it.

          • Harold says:

            Tel, you misquoted me by leaving off the second half of my sentence.

            I said “in a time-consistent way”, not “accurately”.

            I also said “value”, not “price”.

            In the interest of clarity, I meant that one could make a rational machine that would value goods in a time-consistent way. It might use exponential discounting or whatever, but it would be consistent. It would not sometimes use exponential discounting, sometimes hyperbolic, and at other times some other apparently arbitrary method for deciding how it would value things in the future compared to today.

            I was not intending to say that one could make a machine that could predict prices.

            I was getting at the idea that what is called irrationality is often inconsistency.

            I am suggesting that rationality is seen as following rules such as: if I prefer A to B and B to C, I should prefer A to C. We can make machines that follow these rules.
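
            Something like this minimal sketch (Python; the goods, utilities, and discount factor are all made up purely for illustration) is what I mean by a machine that values a future good consistently and follows those rules:

                # A toy "rational machine": one fixed exponential discounting rule,
                # applied the same way at every date, with preferences read off the
                # resulting values.

                from itertools import permutations

                DELTA = 0.95  # fixed per-period discount factor; it never changes

                def value(utility, periods_away):
                    """Time-consistent valuation: the same rule at every date."""
                    return utility * DELTA ** periods_away

                def prefers(a, b):
                    """a and b are (utility, periods_away) pairs."""
                    return value(*a) > value(*b)

                def is_transitive(goods):
                    """No A-over-B, B-over-C, yet not-A-over-C violations."""
                    return all(prefers(a, c)
                               for a, b, c in permutations(goods, 3)
                               if prefers(a, b) and prefers(b, c))

                goods = [(10, 0), (12, 1), (15, 5), (20, 12)]
                print(is_transitive(goods))  # True: one consistent valuation rule cannot cycle

                # Time consistency: rank a good 3 periods away against one 5 periods away,
                # then ask again after 2 periods have passed; the ranking does not flip.
                print(prefers((12, 5), (10, 3)), prefers((12, 3), (10, 1)))  # same answer twice

            It never predicts anything; it just never contradicts itself, which is the only sense of “rational” I am relying on here.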

            • Tel says:

              If you are not interested in accuracy, then how are you going to recognize the difference between rational and irrational agents?

              Consistency is nice to have, but suppose I make a machine that answers every question by saying “42”… it would be very consistent, but sitting at the low end of the rationality scale IMHO.

              Also, if you want to distinguish “value” from “price” (I’m guessing you mean “value” as the internal valuation within the agent), then any agent can say, “Well, that’s the value to me, it’s my preferences,” and even the highly consistent Agent-42 mentioned above could use this justification (he never says it out loud, but you can find the words etched on a small brass plate at the base).

              Based on that, all of behavioral economics can be thrown out the window, because there is literally no way to call anyone irrational after that. Even people committing suicide can be considered rational by an internal valuation, inasmuch as at the moment of death they did what they wanted to do.

              You have the further problem (for empiricists) that internal value states are not normally measurable at all, so you don’t even have the visibility to collect any basic starting data.

              Saying, “It’s just my personal preferences” becomes the ultimate answer to life, the universe and everything.

              My preference is not to do it that way.

              • Harold says:

                “If you are not interested in accuracy…”

                I did not say I was not interested in accuracy, but that I was not discussing accuracy.

                “… then how are you going to recognize the difference between rational and irrational agents?”

                Not by the accuracy of their predictions.

                How do we recognise rationality? Surely because it follows some rules of consistency. If I prefer A to B and B to C, is it rational to prefer C to A (all else being equal)?

                I am pretty sure that many have said it is not, and that has nothing to do with predicting the future.

      • Craw says:

        No rational machine would expose itself!

  3. Dan K says:
    • Tel says:

      Who nudges the nudgers?

      I guess we need a system to Audit each person, gradually remove their cognitive biases, and then they will be Clear. I trust Tom Cruise to do the job!
