10 Sep 2015

Google and Game Theory


Steve Landsburg has a really neat series of posts with a game-theoretic brain teaser that has an unintentional flaw. If you have the time, I encourage you not to “cheat” by reading ahead; instead, try to solve them in order, just by reading Steve’s posts and not the comments: First, second, and third. Then, after you’ve read his posts and done however much thinking you’re going to do on your end, go ahead and read the comments, at least in the third one.

28 Responses to “Google and Game Theory”

  1. Nick Rowe says:

    That’s why game theorists invented https://en.wikipedia.org/wiki/Subgame_perfect_equilibrium , to rule out Steve’s other Nash equilibria.

    • Keshav Srinivasan says:

      Actually, Steve’s “weird” equilibria are subgame perfect, because you can continue the weird voting behavior in the later rounds, almost up until the end.

      • Harold says:

        Keshav, I don’t understand all this, but from the wiki article:
        “A common method for determining subgame perfect equilibria in the case of a finite game is backward induction.”

        How can you arrive at the weird pirate equilibria from backward induction? Is that bit in wiki wrong, or have I just got the wrong end of the stick?
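
        For what it’s worth, here is a minimal sketch of what backward induction computes in the pirate game (my own code, assuming Steve’s setup: 10 pirates ranked by ferocity, 100 coins, a proposal passes with half or more of the votes, and a pirate votes yes only if the offer strictly beats his continuation payoff):

        # Backward induction on the pirate game. Base case: a lone pirate
        # keeps everything. At each step the proposer bribes the cheapest
        # (worst-off-in-continuation) pirates with one coin more than they'd
        # get if he were thrown overboard. Ties in who gets bribed are broken
        # arbitrarily (here: lowest index first).
        def pirate_split(n_pirates=10, coins=100):
            outcome = [coins]  # allocation in the 1-pirate subgame
            for n in range(2, n_pirates + 1):
                prev = outcome                   # payoffs if this vote fails
                votes_needed = (n + 1) // 2      # half or more; proposer votes yes
                bribes_needed = votes_needed - 1
                cheapest = sorted(range(n - 1), key=lambda i: prev[i])
                alloc = [0] * (n - 1)
                spent = 0
                for i in cheapest[:bribes_needed]:
                    alloc[i] = prev[i] + 1       # strictly better than waiting
                    spent += alloc[i]
                outcome = [coins - spent] + alloc  # proposer keeps the rest
            return outcome

        print(pirate_split())  # -> [96, 0, 1, 0, 1, 0, 1, 0, 1, 0]

        Note that this recursion bakes in the “vote yes only if strictly better off” assumption, which is exactly what Keshav’s weird equilibria relax; it yields the standard 96-coin answer, not a proof of uniqueness.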

      • Nick Rowe says:

        Keshav; you may well be right. Are they trembling hand perfect? (I forget all the variants of Nash refinements).

        • Keshav Srinivasan says:

          No, only the standard solution is trembling-hand perfect. And only the standard solution is a weakly dominant strategy equilibrium. The latter fact is why I think that the standard solution is the only valid solution to the problem, but Steve disagrees.

          • Nick Rowe says:

            Keshav: what you say makes sense to me. In any(?) game with majority voting, there must be loads of equilibria that are Nash but not weakly dominant, because it doesn’t matter what I vote unless I’m the decisive voter. If everyone else votes to blow up the world, it doesn’t matter whether I vote yes or no, so everyone voting to blow up the world is a Nash equilibrium.
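
            A toy check of this, with made-up numbers (five voters, majority rule, everyone prefers the world intact):

            # "All vote yes" is Nash: no single deviation changes the outcome,
            # even though every voter would rather the world survive.
            N = 5

            def payoff(votes):
                blown_up = sum(votes) > N // 2   # strict majority of yes votes
                return 0 if blown_up else 1     # same payoff for every voter

            profile = [1] * N                   # everyone votes to blow it up
            for i in range(N):
                deviation = profile.copy()
                deviation[i] = 0                # voter i switches to "no"
                assert payoff(deviation) <= payoff(profile)
            print("All-yes survives every unilateral deviation.")

            Voting “no” is still weakly dominant for each voter, which is why the refinements Keshav mentions throw this equilibrium out.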

            • Bob Murphy says:

              Nick that’s a suave way of saying, “Ah, my initial comment was totally wrong.” (Just messing with you; like I said, I initially was quite certain Steve’s post hinged on Nash vs. subgame perfect. I was astonished when I realized it didn’t.)

            • Nick Rowe says:

              Bob: you rumbled me!

            • baconbacon says:

              I don’t think weak equilibria are a problem here. Acting in way x is a necessary, but insufficient, condition for getting what you want. If I want to go out with Sally, I have to ask her and she has to say yes. If I place no negative value on her saying no, I will ask her out even though there is “no reason to ask her out or not ask her out, since I don’t control the final outcome.”

              Likewise if you already knew that 99 people had voted to blow up the world and that your 1 vote didn’t matter then who knows what you will do, but if you don’t know the result you damn well better vote not to.

              The pirate’s yes vote may not be worth 1 coin, but it is worth more than 0 coins, and I don’t think this complaint works.
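
              A back-of-envelope version of that (the 100-voter setup matches the comment above; the tremble probability is my invention): if every other ballot has even a tiny chance of flipping, voting “no” has a strictly positive expected gain over “yes”, which is the trembling-hand logic from earlier in the thread.

              from math import comb

              N_OTHERS = 99  # 100 voters total; 51 yes votes blow up the world
              p = 0.999      # each other voter votes yes, trembling with prob 0.001

              # You are pivotal exactly when 50 of the other 99 vote yes:
              # your yes then blows up the world (payoff 0), your no saves it (payoff 1).
              p_pivotal = comb(N_OTHERS, 50) * p**50 * (1 - p)**49
              print(f"P(pivotal) = {p_pivotal:.3e} = expected gain from voting no")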

    • Bob Murphy says:

      Nick, I assumed all along that that was going to be the crux of things…but I don’t think it is. I think you should look more closely at the comments in the third post.

  2. Josiah says:

    The rational solution is to throw Steve overboard.

  3. Tel says:

    OK… possible spoiler alert… I have only read the first post so far… but anyone who can restrain themselves by not reading the comments on Steve’s blog, could also delay reading this one until they have had a go at it themselves.

    I claim that under perfect conditions, the Captain could form an extremely fragile quorum by bribing only four other pirates with one gold each, thus keeping 96 gold for himself. I’m guessing this is the answer that is also stated to be wrong, on the basis that it requires a number of presumptions not explicitly stated in the question but possibly implied by it. I’ll also point out that, if it were me, there’s no way I would bet my life on such a long shot, but one would guess these logical problems are not intended to be realistic, thus requiring suspension of disbelief.

    Or on the contrary, if they are intended to be realistic, the existence of such unlikely optima might be a hint as to why the “pure logic” style of rationality works so badly when applied to any real world analysis. Quite frankly, I think it is also unlikely to be useful in any programming problem, other than under quite amazing circumstances.

    • Tel says:

      Ha! I read through to the end.

      Secret ballot amongst pirates?!? Who counts the votes?

    • guest says:

      “… but anyone who can restrain themselves by not reading the comments on Steve’s blog, could also delay reading this one until they have had a go at it themselves.”

      “… thus requiring suspension of disbelief.”

      Ow! My restraint!

      😀

  4. Matthew Cory says:

    This is a traditional problem in game theory courses. Google tests regurgitated knowledge and not analytical intelligence. They’re overrated Johnny-come-latelies. This problem is actually a “late lie” for most people.

    Game theory is mostly quackery:
    http://backwardations.blogspot.com/2015/09/backward-induction-suction-and-pirates.html

    You can apply it to women’s badminton:
    http://backwardations.blogspot.com/2015/09/game-theory-of-womens-badminton.html

  5. Silas Barta says:

    Off-topic, but since the context is Google job interviews …

    They don’t do these brain teasers anymore and I’ve confirmed it myself, though I don’t want to be any more specific due to NDA.

  6. Major.Freedom says:

    I honestly loathe game theory. It is rife with arrogant and presumptuous principles, and every example is always overly restrictive and artificial.

    Nash himself disliked people in general, and his modelling essentially views people as fundamentally sociopathic.

    For the pirates example, who’s to say that I would accept being bought off for one coin? Some overly presumptuous anti-social model is what. Is such a price “optimal”? By what standard? Where is the human element in all of this?

    /rant

    • Keshav Srinivasan says:

      You’d be willing to be bought off for one coin if the alternative is receiving zero coins.

      • Major.Freedom says:

        Not necessarily true.

        It would only be true under certain limited assumptions, which is my fundamental gripe with these models.

        I could very well refuse a single coin in order to send a particular signal to the others, or I just don’t want to partake in these “trades”. In other words, there are an almost unlimited number of actual outcomes that I do not want to dismiss simply because “they aren’t a part of the game theorists’ model”.

      • Grane Peer says:

        Hi Keshav,
        From Steve’s second post, why doesn’t problem 2 result in a fifty-fifty split? I mean, you have two highly violent men; regardless of whatever ‘most’ is supposed to mean, Igor is taking a massive risk. Is it in Igor’s best interest to screw a violent man out of his booty?

        • Keshav Srinivasan says:

          We’re assuming that no violence is possible except as specified in the problem. Igor’s plan gets approved if he wins half or more of the vote, and obviously he will win because he’ll vote for himself.

          • Grane Peer says:

            Thanks. Unless I am mistaken, the ferocity issue has absolutely no bearing on the outcome.

            • Keshav Srinivasan says:

              Yeah, ferocity is just an arbitrary label. The important point is that the pirates have some ordering that’s clear to everyone. It might as well be alphabetical order or age.

    • Tel says:

      Is such a price “optimal”? By what standard?

      The standard is a bit weird: it presumes that everyone in the game plays the strategy that gives the best absolute return for themselves, and that everyone has fully analysed everyone else’s strategy on the same basis. Obviously no one ever does that in a real situation.

      Because of this, there are sometimes highly fragile optima that stack many layers deep, each layer depending on all the other players making the same analysis.

      Where is the human element in all of this?

      Humans can consistently beat computers at Poker, even though the computer has a better and faster statistical calculation system. The problem, you see, is that in a multi-player game, calculating your own optimum on the assumption that everyone else is highly predictable leaves you vulnerable to someone who just wants to screw you around.

      For the pirates example, who’s to say that I would accept being bought off for one coin? Some overly presumptuous anti-social model is what.

      Well, they are pirates, so you might expect a bit of anti-social behaviour. The problem for the optimiser is much worse… what if one pirate is even more anti-social than normal and just votes to throw someone overboard for the heck of it? One coin might not make a difference. It only takes one guy to mess up the fragile optimum, and mess it up badly.

      You end up with the problem of needing to judge the other people and estimate their reactions… as well as leaving margin for error in the place where you can afford to make the error (e.g. giving an extra coin or two “just in case” is not optimal, but it hurts you a lot less than getting thrown overboard).

      There’s a reason why dumb gradient followers are not the be-all and end-all of optimisation science.

      • Major.Freedom says:

        Even the concept of “equilibria” bugs me. One set of choices is called an “equilibrium” of one type or another…as if what, the people will repeat the exact same choices over and over indefinitely? Like robots?

        If one such “equilibrium” had a set of choices that had me choosing X, then if I were to do the same game over and over again, I might just choose what the game theory asserts is not an equilibrium, because “F it” I’m bored out of my wits.

        • Tel says:

          No, be careful with the concepts. The same word has multiple meanings depending on context.

          In a dynamic system, the same calculations are happening over and over, so the equilibrium is the long run settling point of the system (or possibly the statistical norm over a long run for systems that don’t really settle to a steady state). That’s one type of usage.

          In the pirate game it’s a one-off game, not a repeating game. The equilibrium means the point where no tweaking of strategy by any of the players can make that player better off. BTW, this also presumes that no group of players can assemble a contractually bound consortium and play a collective strategy (something missing from the original question, but game theorists just presume that pirates never sign contracts).

          There’s a separate part of game theory to handle games where you know you will be meeting the same players for many rounds, in which case cooperative strategies become more common, but they still don’t allow contracts!
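
          A minimal sketch of that repeated-game point (payoffs are the textbook prisoner’s dilemma values; the strategies are the usual suspects, not anything from Steve’s puzzle):

          # Iterated prisoner's dilemma: defection wins a single round, but a
          # conditional cooperator does far better against itself over many rounds.
          PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
                    ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

          def tit_for_tat(my_hist, their_hist):
              return their_hist[-1] if their_hist else "C"  # copy their last move

          def always_defect(my_hist, their_hist):
              return "D"

          def play(s1, s2, rounds=10):
              h1, h2, score1, score2 = [], [], 0, 0
              for _ in range(rounds):
                  m1, m2 = s1(h1, h2), s2(h2, h1)
                  p1, p2 = PAYOFF[(m1, m2)]
                  h1.append(m1); h2.append(m2)
                  score1 += p1; score2 += p2
              return score1, score2

          print(play(tit_for_tat, tit_for_tat))    # (30, 30): sustained cooperation
          print(play(tit_for_tat, always_defect))  # (9, 14): exploited once, then punished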

        • Tel says:

          And yeah, the “F U” aspect of humans is difficult to either justify or model in a machine intelligence, but demonstrably it works in many cases.

          You are right to put your finger on this.

          • Tel says:

            Hey MF, you might like this one:

            https://www.academia.edu/12754747/The_evolution_of_fairness_through_spite

            The pirate game is a more complex version of the ultimatum game, which has been studied quite a bit, both theoretically and experimentally. The observation is that humans do NOT behave according to what Nash game theory suggests is rational, and because of this, an attempt to optimise your strategy based on this theory will fail badly (because the optimum moves depending on how other players behave).

            You are not the only one who has questioned the validity of a traditional “rational” game theory optimiser. I’ll point out that game theory itself is just a theory of making choices and the outcomes of those choices… the true optimum choice is not simply obtainable from the starting theory, which is what makes it interesting. Some games (especially iterated games) quite possibly lead to fractal-style expansion of double-guessing, triple-guessing, etc. BTW, if you aren’t up on Axelrod and “The Evolution of Cooperation” you probably want to get that covered as well.
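
            To make the ultimatum-game point concrete (the rejection threshold below is an invented behavioural parameter, not anything measured): the textbook “rational” responder accepts any positive offer, so the proposer offers the minimum; but the proposer’s optimum tracks how the responder actually behaves.

            # Ultimatum game over 100 coins. The proposer keeps (pie - offer) if
            # the responder accepts, and 0 if the responder rejects.
            def best_offer(reject_below, pie=100):
                offers = range(pie + 1)
                return max(offers, key=lambda o: (pie - o) if o >= reject_below else 0)

            print(best_offer(reject_below=1))   # 1  -- the subgame-perfect prediction
            print(best_offer(reject_below=30))  # 30 -- a spiteful/fair-minded responder

            The optimum moves with the other player’s behaviour, not with the theory’s axioms, which is the point about optimisers failing against real humans.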
