30 Aug 2020

BMS ep. 142: Explaining John Nash’s Dissertation

Bob Murphy Show, Game Theory 17 Comments

I don’t mean to shock you, but Ron Howard / Russell Crowe got it wrong.

17 Responses to “BMS ep. 142: Explaining John Nash’s Dissertation”

  1. Harold says:

    The word you can’t make out is “acts”

    Nicely explained.

    For RPS, if player 1 plays each option with probability 1/3, player 2 has no preference between strategies. If player 1 is not playing 1/3 of each, player 2 will have a better strategy than 1/3. So both playing 1/3 is the only way that each player can be playing a best strategy against the other, so it is an equilibrium. Is that right?
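
    A quick Python sketch of that argument (the payoff encoding and the example mixes are my own, not from the episode): it computes player 2’s expected payoff for each throw against a given mix from player 1, showing indifference against the 1/3 mix and a strict best response against anything else.

```python
# Payoff to player 2 for (player 1's throw, player 2's throw): +1 win, 0 draw, -1 loss.
PAYOFF_TO_P2 = {
    ("rock", "rock"): 0, ("rock", "paper"): 1, ("rock", "scissors"): -1,
    ("paper", "rock"): -1, ("paper", "paper"): 0, ("paper", "scissors"): 1,
    ("scissors", "rock"): 1, ("scissors", "paper"): -1, ("scissors", "scissors"): 0,
}
THROWS = ["rock", "paper", "scissors"]

def expected_payoffs_for_p2(p1_mix):
    """Expected payoff of each pure throw for player 2 against player 1's mix."""
    return {t2: sum(p1_mix[t1] * PAYOFF_TO_P2[(t1, t2)] for t1 in THROWS)
            for t2 in THROWS}

# Against the 1/3 mix every throw pays 0: player 2 is indifferent.
print(expected_payoffs_for_p2({"rock": 1/3, "paper": 1/3, "scissors": 1/3}))

# Against a lopsided mix (too much rock), paper becomes a strict best response.
print(expected_payoffs_for_p2({"rock": 0.5, "paper": 0.25, "scissors": 0.25}))
```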

    The problem with the film is that the solution Nash proposes is not a Nash equilibrium. I think a Nash equilibrium is that exactly one guy goes for the blonde, which gives us four different Nash equilibria. I don’t think Nash can offer us anything about how to decide which guy goes for the blonde, that is, which Nash equilibrium to choose.

    I seem to remember that in the film he hallucinated a roommate – that seems pretty far out.

    A while ago Landsburg offered a Russian Roulette version of the VNM criteria. Michael Dekker in the comments came up with a different version. I wondered at the time how it would work, but by then all commenters had worked through the original problem, so I hope you don’t mind if I try it here (slightly amended).

    “You’re on a game show. You can’t leave with negative money. There are 6 doors, 4 with $6,000 cash, 2 with a goat (worth nothing…). You can pick a door, but first you can offer the host $x from your winnings (currently $0) to replace the two goats with prizes. What is the maximum you would offer?”

    The answer depends on your risk aversion etc., but pick a number before going on. My thought was that expected return is $4000, so maybe a reasonable figure would be up to $2000?

    Now, same show but there are 6 doors, 2 prizes, 4 goats. What is the maximum you offer from your winnings to replace 1 goat with a prize?

    I will give my answer if anyone is interested.

    • random person says:

      “Now, same show but there are 6 doors, 2 prizes, 4 goats. What is the maximum you offer from your winnings to replace 1 goat with a prize?”

      If “you can’t leave with negative money”, then I’m afraid of what they might do if I offered money to replace one goat with a prize, and then opened a door and still got a goat and nothing to replace the money I offered.

      Being allowed to actually leave is probably the most important thing.

    • random person says:

      “You’re on a game show. You can’t leave with negative money. There are 6 doors, 4 with $6,000 cash, 2 with a goat (worth nothing…). You can pick a door, but first you can offer the host $x from your winnings (currently $0) to replace the two goats with prizes. What is the maximum you would offer?”

      The answer depends on your risk aversion etc., but pick a number before going on. My thought was that expected return is $4000, so maybe a reasonable figure would be up to $2000?

      Expected return is $4000 initially, but it will be $6000 minus whatever you paid the host if he replaces the two goats. If you offer $2000, then it’s $6000 minus the $2000, which is still $4000. So $2000 is a reasonable maximum, to guarantee you get $4000 rather than having a two-thirds chance of $6000 and a one-third chance of a goat, but to get your expected return higher than $4000, you’d really want to try to haggle for a fee less than $2000.
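
      A quick arithmetic check of that in Python (the function names are just mine): since the offer comes out of the winnings, you only actually pay when you open a prize door – which in this first scenario is every time, once both goats are gone.

```python
PRIZE = 6000

def expected_return_scenario_1(offer, goats_removed=True):
    """6 doors, 4 prizes, 2 goats; the offer replaces both goats with prizes."""
    if goats_removed:
        # Every door now holds a prize, so you pay the offer for certain.
        return PRIZE - offer
    return (4 / 6) * PRIZE  # 2/3 chance of a prize, goats are worth nothing

print(expected_return_scenario_1(0, goats_removed=False))  # 4000.0: take your chances
print(expected_return_scenario_1(2000))                    # 4000: offer $2000, break even
print(expected_return_scenario_1(1500))                    # 4500: haggling below $2000 helps
```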

      • Harold says:

        How about the second one with 6 doors and 4 goats?

        • random person says:

          Quoting myself from earlier:

          If “you can’t leave with negative money”, then I’m afraid of what they might do if I offered money to replace one goat with a prize, and then opened a door and still got a goat and nothing to replace the money I offered.

          Being allowed to actually leave is probably the most important thing.

          Because I’d want to be allowed to leave, and they won’t let us leave with negative money, $0 would be my maximum.

          • Harold says:

            Maybe that was unclear – it means you cannot lose money you have outside the game. You are only gambling with the potential winnings. It is a normal game show; you can leave.

            You are allowed to offer money from the $6000 to remove one of the four goats.

            • random person says:

              So in essence, in this scenario, you are offering to pay the host part of your winnings *only* if you get one of the $6000 prizes. Otherwise, you get a goat and the host gets nothing other than whatever their normal wage is.

              Expected return is only $2000 initially. However, the expected return will rise to $3000, minus half of whatever you are offering the host (since there is only a 50% chance you will have to pay the host from the winnings), if one of the four goats is replaced with a third $6000 prize. Offering the host $2000 would keep the expected return at $2000, but give a 50% chance of getting $4000 rather than a one-third chance of getting $6000. Haggling for a fee less than $2000 would give an expected return higher than $2000.
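
              The same sort of check for the second scenario (again just a sketch with my own names): here the offer is only handed over in the 50% of cases where you open a prize door, and the break-even offer comes out at the same $2000 as in the first scenario.

```python
PRIZE = 6000

def expected_return_scenario_2(offer, goat_removed=True):
    """6 doors, 2 prizes, 4 goats; the offer replaces one goat with a third prize."""
    if goat_removed:
        # 3 prizes out of 6 doors; you pay the offer only when you actually win.
        return (3 / 6) * (PRIZE - offer)
    return (2 / 6) * PRIZE  # 1/3 chance of a prize

print(expected_return_scenario_2(0, goat_removed=False))  # 2000.0: take your chances
print(expected_return_scenario_2(2000))                    # 2000.0: offer $2000, break even
print(expected_return_scenario_2(1500))                    # 2250.0: haggling below $2000 helps
```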

              • Harold says:

                The original question from Steve Landsburg (you have been forced to participate in a game of Russian roulette):

                “Question 1: At the moment, there are two bullets in the six-shooter pointed at your head. How much would you pay to remove both bullets and play with an empty chamber?

                Question 2: At the moment, there are four bullets in the six-shooter. How much would you pay to remove one of them and play with a half-full chamber?

                The Big Question: Which would you pay more for — the right to remove two bullets out of two, or the right to remove one bullet out of four?

                The question is to be answered on the assumption that you have no heirs you care about, so money has no value to you after you’re dead.”

              • random person says:

                I assume the answer to the question would depend on how much money I have (perhaps all of the money I have, to remove any bullets they’ll let me remove). Unless of course I hate the kidnappers so much I’d rather die than give them anything.

              • Harold says:

                Most people get the Russian Roulette one “wrong” according to the von Neumann-Morgenstern axioms. The answer, as you found in the quiz show example, is that you should pay the same, whatever that would be. Most people do not say that.

                In Q2, three of the four bullets stay put whatever we do, and there is a 50% chance of landing on one of those, in which case we are dead anyway and the money does not matter. Conditional on avoiding those three, Q2 is effectively removing the one remaining bullet from 3 chambers, which is the same improvement – a 1-in-3 chance of death down to zero – as removing 2 out of 6 in Q1. So we should pay the same in both cases.

                I think it is because people do not grasp the idea that they do not care if they are dead. The situation in the quiz show is the same, but it is easier to get your head around the idea that you don’t care how much prize money was there if you don’t win it.

                So if people find the quiz show version easier to see through than the Russian roulette one, it can be argued that the problem is not the VNM axioms so much as a failure to understand the situation properly.
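
                Here is a rough numerical sketch of what the VNM answer looks like (the wealth figure and the particular utility function are purely illustrative assumptions, not anything from Landsburg): for any utility in which money is worthless to you once you are dead, the price that leaves you indifferent is the same in Q1 and Q2.

```python
def bisect(f, lo, hi, tol=1e-6):
    """Find x in [lo, hi] where f crosses zero, assuming f(lo) and f(hi) differ in sign."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

WEALTH = 10_000
U_DEAD = 0.0                        # money is worth nothing to you once you are dead

def u(money):
    """An arbitrary concave (risk-averse) utility of money while alive."""
    return (money + 1) ** 0.5 + 5   # the +5 just keeps being alive better than being dead

def eu_q1(pay_to_remove_both):
    """Q1: 2 bullets in 6 chambers. Paying removes both; not paying leaves 2."""
    if pay_to_remove_both is None:
        return (4 / 6) * u(WEALTH) + (2 / 6) * U_DEAD
    return u(WEALTH - pay_to_remove_both)   # empty cylinder, you survive for sure

def eu_q2(pay_to_remove_one):
    """Q2: 4 bullets in 6 chambers. Paying removes one; not paying leaves 4."""
    if pay_to_remove_one is None:
        return (2 / 6) * u(WEALTH) + (4 / 6) * U_DEAD
    return (3 / 6) * u(WEALTH - pay_to_remove_one) + (3 / 6) * U_DEAD

# Maximum willingness to pay = the price at which paying and not paying are equally good.
x1 = bisect(lambda x: eu_q1(x) - eu_q1(None), 0, WEALTH)
x2 = bisect(lambda x: eu_q2(x) - eu_q2(None), 0, WEALTH)
print(round(x1, 2), round(x2, 2))   # the two indifference prices come out identical
```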

                What do you think your reaction would have been to the Russian roulette version had you not seen the quiz show version?

              • random person says:

                Not sure. Maybe if you didn’t have me thinking about the game show first, I might’ve been more likely to just say, “I’d probably hate the kidnappers too much to pay them anything.” Then again, maybe not. Hard to say without knowing more about the kidnappers.

              • Harold says:

                The key question is: “Which would you pay more for — the right to remove two bullets out of two, or the right to remove one bullet out of four?”

                The kidnappers are the same in both cases.

                The answer following the VNM axioms is that you should pay the same, but many people do not see this.

                Is this because people are irrational, because the VNM axioms are wrong, or something else?

              • random person says:

                I think maybe it’s because there are psychological factors that are hard to capture in a what-if scenario.

                Like, strictly speaking, the kidnappers aren’t the same in both scenarios, if for no other reason than because each set of kidnappers is offering you a different choice. One group is offering you the opportunity to play with an empty chamber, and the other, only the opportunity to play with a half-full chamber.

                One might well hate the second group of kidnappers more, and, for psychological reasons, be less willing to cooperate with them / give them anything.

                On the other hand, if one is less motivated by hatred of kidnappers, and more by simply trying to stay alive, one might hope to haggle with the second group of kidnappers to try to get them to remove more than just one bullet. Obviously, you won’t have anything left to haggle with them to try to get them to remove a second or a third or a fourth bullet, if you spend all your money paying them to remove just one. You can say for the sake of a what-if scenario that the kidnappers are only willing to remove a single bullet, but in real life, you might not be so convinced that you couldn’t get them to remove more than one.

                Also, you could say for the sake of a hypothetical scenario that the kidnappers are being truthful about the number of bullets in the chamber, but in real life, they could be lying. For example, when CIA agent Larry Devlin was forced to play Russian roulette in the Congo, he was led to believe there was a bullet, but in fact, the chamber was empty. He dubbed this version of roulette — where there were no bullets, but his captors made him think there was a bullet — Congolese roulette.

                Source: Prologue of Chief of Station, Congo: Fighting the Cold War in a Hot Zone

                You can read the prologue for free here without having to buy the whole book.
                https://www.publicaffairsbooks.com/titles/lawrence-devlin/chief-of-station-congo/9780786732180/#module-whats-inside

  2. Harold says:

    Reading a bit more, many say that Nash’s insight was to extend the 2-person zero-sum game to multi-person games, not just zero-sum games. Von Neumann effectively reduced multi-player games to 2 players by proposing coalitions. The problem with many-player games is defining what “optimal strategy” means and whether it even exists. Nash did this by defining the Nash equilibrium as the notion of optimal strategy.

  3. Tel says:

    I must admit I did not quite grok the example with the various percentages of winnings and the choices offered.

    However, Bob missed the most obvious example that illustrates the point about randomness and expectations … and that is the entire insurance industry! We all know that insurance companies make a profit (although in most jurisdictions it’s a corporate profit limited by statute, but even then they have to pay their staff, contractors, etc.). That means the long-term monetary expectation of every person buying insurance must be less than zero … and yet plenty of people still buy insurance.

    https://www.npr.org/2020/08/26/906243873/summer-school-8-risk-disaster

    Just to show what a well-rounded person I am, I will post an NPR link, which in this case is fairly well explained. I have a quibble over the young lady who has difficulty with the question of morals, but I accept this is reasonably normal today, and I guess we are all to blame for that.
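
    To put a rough number on the insurance point above (the premium, loss size and log utility are purely illustrative assumptions): a policy with a negative expected monetary value can still raise expected utility for a risk-averse buyer, which is how the insurer can profit and people can still rationally buy.

```python
import math

WEALTH = 100_000
LOSS = 80_000        # size of the possible loss (say, a house fire)
P_LOSS = 0.01        # chance of the loss in a year
PREMIUM = 1_200      # above the actuarially fair price of 0.01 * 80_000 = 800

def u(money):
    """Log utility, a standard stand-in for risk aversion."""
    return math.log(money)

# In money terms the policy is a losing bet, which is how the insurer profits...
ev_uninsured = (1 - P_LOSS) * WEALTH + P_LOSS * (WEALTH - LOSS)
ev_insured = WEALTH - PREMIUM
print(ev_uninsured, ev_insured)     # 99200.0 vs 98800

# ...yet expected utility is higher with the policy for this risk-averse buyer.
eu_uninsured = (1 - P_LOSS) * u(WEALTH) + P_LOSS * u(WEALTH - LOSS)
eu_insured = u(WEALTH - PREMIUM)
print(eu_insured > eu_uninsured)    # True
```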

  4. Tel says:

    Regarding the game of Rock/Scissors/Paper, it’s interesting because it doesn’t fit the usual intuition about a Nash equilibrium.

    You would expect that, in a typical game, if both players are playing at the equilibrium position and either player moves away from equilibrium, then (at least in the short term) that player is penalized for making that move. In games such as Prisoner’s Dilemma there might be a long-term advantage in moving away from equilibrium if you can find a way to convince the other player to work with you, but at least in the short term you are penalized for moving away from equilibrium.

    However, in Rock/Scissors/Paper, if I know that the other guy will play the equilibrium position of a random “mixed strategy” with equal probabilities, then this imposes no penalty on me; in fact it offers me complete freedom. I could play Rock five times in a row and let the other guy play his random strategy and I’m no worse off (nor any better off). After you see me play Rock five times in a row, will you play Paper and try to beat me?

    I dare you.
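
    For what it’s worth, a quick Monte Carlo check of that claim (a sketch of mine, not from the episode): against an opponent throwing uniformly at random, “always Rock” averages out to zero, the same as any other strategy.

```python
import random

THROWS = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def score(mine, theirs):
    """+1 if I win, -1 if I lose, 0 for a draw."""
    if mine == theirs:
        return 0
    return 1 if BEATS[mine] == theirs else -1

def average_score(my_strategy, rounds=200_000):
    """My average score per round against an opponent throwing uniformly at random."""
    total = sum(score(my_strategy(), random.choice(THROWS)) for _ in range(rounds))
    return total / rounds

print(average_score(lambda: "rock"))                 # ~0.0: always Rock
print(average_score(lambda: random.choice(THROWS)))  # ~0.0: my own 1/3 mix
```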

    • Harold says:

      It is interesting that, given your opponent is playing the Nash equilibrium strategy, you are not penalized for employing any strategy at all. Using a random strategy does not preclude rock five times in a row. It seems that against a computer playing 1/3 your expected result is a draw, whatever your strategy, and whatever you do you cannot expect to do better or worse than this, which seems a bit odd. It reduces to pure chance each round.

      The real game does rely on strategies such as the one you describe – psyching out the other player. I think some players can consistently win against a novice, but the novice never plays the 1/3 strategy, which would be their best option. Given that real people are unlikely to be able or willing to play the 1/3 strategy, it is likely that your best strategy will be something else too.

      The Nash equilibrium is not the best way to play against (or with) real people.

      A bit of an aside: the game of Chicken has 3 Nash equilibria, two pure ones where one player swerves and the other continues straight, and a mixed strategy where the players randomize. Chicken differs from the prisoner’s dilemma because in PD the double-defect outcome is preferred to cooperating while your opponent defects, whereas in Chicken swerving while your opponent goes straight is preferred to the double “defect” of both going straight. That is, the worst outcome for you in Chicken is straight/straight, while the worst outcome for you in PD is cooperate/defect.
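
      A small sketch of that comparison (the specific payoff numbers are my own illustrative choices): it enumerates the pure-strategy Nash equilibria of Chicken and of the prisoner’s dilemma, giving the two swerve/straight equilibria in Chicken versus the single defect/defect equilibrium in PD; the mixed Chicken equilibrium is not enumerated here.

```python
def pure_nash_equilibria(payoffs):
    """Pure-strategy Nash equilibria of a 2-player game.

    payoffs maps (row_action, col_action) -> (row_payoff, col_payoff).
    """
    rows = {r for r, _ in payoffs}
    cols = {c for _, c in payoffs}
    equilibria = []
    for r, c in payoffs:
        row_best = all(payoffs[(r, c)][0] >= payoffs[(r2, c)][0] for r2 in rows)
        col_best = all(payoffs[(r, c)][1] >= payoffs[(r, c2)][1] for c2 in cols)
        if row_best and col_best:
            equilibria.append((r, c))
    return equilibria

# Chicken: crashing (straight/straight) is the worst outcome for both players.
chicken = {
    ("swerve", "swerve"): (0, 0),
    ("swerve", "straight"): (-1, 1),
    ("straight", "swerve"): (1, -1),
    ("straight", "straight"): (-10, -10),
}

# Prisoner's dilemma: cooperating while the other defects is the worst outcome for you.
pd = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"): (0, 5),
    ("defect", "cooperate"): (5, 0),
    ("defect", "defect"): (1, 1),
}

print(pure_nash_equilibria(chicken))  # [('swerve', 'straight'), ('straight', 'swerve')]
print(pure_nash_equilibria(pd))       # [('defect', 'defect')]
```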
