18 Mar 2020

## Murphy Twin Spin

==> In the latest Lara-Murphy Show, Carlos and I talk about financial nuttiness and the coronavirus. We recorded this on Saturday, though, so we didn’t know about the Fed’s Sunday surprise.

==> In the latest Bob Murphy Show, I explain the “scientific” approach to Blackjack, and discuss the controversies among academics over the famous Kelly criterion for optimal bet sizing.

Doesn't the expected-value approach run up against “breaking the bank”? There is a maximum you can win. I suppose the expected return would still be highest if you let it ride up to the round where the bank cannot pay any more; the cap just limits the number of times you can play.

Kelly says, “If we compare the fates of two gamblers, however, playing a nonterminating game, the one which uses the value found above will, with probability one, eventually get ahead and stay ahead of one using any other.”
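For reference, here is the standard statement of the fraction Kelly's analysis prescribes for a simple even-money bet (my addition for context, not quoted from the comment): if the bet is won with probability $p$ and lost with probability $q = 1 - p$, the optimal fraction of current wealth to stake is

```latex
f^{*} = p - q = 2p - 1
```

so a coin biased 60/40 in your favour gives $f^{*} = 0.2$: bet 20% of current wealth each round.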

So if you have 1000 gamblers working for you, then with probability 1, each of those gamblers will eventually earn you more by using the Kelly criterion than gamblers using the expected-return strategy. If we could go on forever, you would still do best to play by Kelly.

I guess it depends on that “eventually.” Can we work out how long the eventually is likely to be? Say you have 1000 gamblers betting with even odds on a coin toss, with the coin biased 60/40 to heads (that only they know about). Obviously(?) if they can only play the game once, the expected return strategy of getting them all to bet it all will be best. I would imagine if they play 1000 times each the Kelly criterion would be best, as the chances of losing are pretty high each time, but how many times do they have to play before the Kelly criterion is better? Can we work it out? (By “we” I mean someone else.)
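Harold's question (“how many times do they have to play?”) can be probed numerically. Here is a minimal Monte Carlo sketch, my own construction rather than anything from the post: 1000 gamblers per strategy, a 60/40 even-money coin, comparing the team total for Kelly betting (stake 20% of current wealth) against betting everything each round.

```python
import random

def kelly_fraction(p, b=1.0):
    """Kelly fraction for a bet won with probability p at odds b-to-1."""
    return p - (1 - p) / b

def simulate(n_rounds, n_gamblers=1000, p=0.6, seed=0):
    """Each gambler starts with wealth 1; both strategies see the same
    coin flips. Returns (kelly_team_total, all_in_team_total)."""
    rng = random.Random(seed)
    f = kelly_fraction(p)  # 0.2 for the 60/40 even-money coin
    kelly_total = all_in_total = 0.0
    for _ in range(n_gamblers):
        w_kelly = w_all_in = 1.0
        for _ in range(n_rounds):
            win = rng.random() < p
            w_kelly += f * w_kelly if win else -f * w_kelly
            w_all_in = 2 * w_all_in if win else 0.0  # double or bust
        kelly_total += w_kelly
        all_in_total += w_all_in
    return kelly_total, all_in_total

for n in (1, 5, 20, 50):
    kelly, all_in = simulate(n)
    print(f"rounds={n:3d}  kelly team={kelly:12.1f}  all-in team={all_in:12.1f}")
```

In runs like this, the all-in team tends to lead for small numbers of rounds (its expected total grows as $1.2^n$), but since each all-in gambler survives $n$ rounds with probability only $0.6^n$, the whole team is almost surely bust well before 50 rounds, while the Kelly team compounds steadily and never goes to zero.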

The above example could not go to 1000 rounds due to the breaking the bank factor I mentioned above. Doubling each time, after 1000 rounds there would be more at stake than there is money available in the world, or indeed many, many times the number of atoms in the universe.

If there is $100 trillion in the world, that is $10^14, which takes about 47 rounds of doubling to win all the money in the world.

If we assumed each casino had $10 billion, that would only take 34 rounds, so the expected-return strategy would probably be best. The “eventually” would not have kicked in to put the Kelly criterion ahead. Or am I thinking about this the wrong way?
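The two doubling counts can be checked directly (a quick sketch; the $100 trillion and $10 billion figures are the comment's round numbers, and the $1 starting stake is an assumption):

```python
def rounds_to_break(bank, stake=1.0):
    """Rounds of doubling needed for the stake to reach the bank."""
    n = 0
    while stake < bank:
        stake *= 2
        n += 1
    return n

print(rounds_to_break(1e14))  # -> 47, world wealth of ~$100 trillion
print(rounds_to_break(1e10))  # -> 34, a $10 billion casino
```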

Harold, I have to be brief, but yes, even with respect to the original St. Petersburg paradox, one commonsense solution is to point out that in reality, no house can pay 50 quadrillion ducats and so the calculation (which yields infinite expected payout) can’t be right.
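Murphy's point can be made concrete with a small sketch (the $2^k$-ducat payout convention and the 200-flip truncation are my assumptions): once the prize is capped at what the house can actually pay, the expected payout becomes finite and quite modest.

```python
def capped_st_petersburg_ev(cap, max_flips=200):
    """Expected payout of the St. Petersburg game when the 2^k-ducat
    prize for a first head on flip k is capped at the house bankroll."""
    ev = 0.0
    for k in range(1, max_flips + 1):
        ev += min(2.0 ** k, cap) * 2.0 ** -k  # capped payout * probability
    return ev

# A house that can pay at most 50 quadrillion ducats:
print(capped_st_petersburg_ev(5e16))
```

With the 50-quadrillion cap the sum comes to roughly 56 ducats rather than infinity, which is why the “infinite expected payout” calculation can't be right for any real house.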

This effectively limits the number of rounds you can play, which presumably makes the “expected value” strategy a better bet, since the chance of losing everything falls from 1 to a more manageable number.

With a single round, we can intuitively see that the highest expected return is gained by placing all your stake on the bet. Most of us would not, therefore, place all our worldly wealth on that single bet. We decide how much we could afford to lose vs. how much we need or want to gain, and place that on the single bet. We separate our “gambling money” from the rest, but that is an artificial separation. When I was thinking about this, I was unconsciously reasoning about the amount I had chosen to gamble rather than my entire wealth. So do many others. Latane talks of a gambler who could bet $1 on a coin toss. Why $1? Why not everything? Ophir talks of a wager of $100. How was that figure arrived at? Again, why not the person's entire wealth?

Treating each round as a separate one, we simply make that same calculation based on our new wealth from the winnings of the previous round. If we do not think it reasonable for the initial bet to be based on entire wealth, then why the subsequent ones?

Thx. In the games department, this was useful.