Modified: March 24, 2025
kelly criterion
This page is from my personal notes, and has not been specifically reviewed for public consumption. It might be incomplete, wrong, outdated, or stupid. Caveat lector.

We are given the opportunity to bet some fraction of our wealth on a coin flip that we win with probability $p$. We can repeat this as many times as we like. How much should we bet?
This seems like a tricky question because it has the form of a sequential decision problem: we can imagine a branching tree of decisions for how much to bet at each step, depending on how the previous bets have gone. Just as finding the optimal first move in chess would require solving the entire game, in the worst case determining the optimal first bet might require gaming out its potential repercussions into the infinite future.
The answer also depends on our utility function wrt money. However, there are (at least) two natural choices for which the sequential decision problem simplifies dramatically: linear and logarithmic utility.
Formally, let $W_n$ be a random variable for our wealth after $n$ steps, and let $X_t \in \{-1, +1\}$ represent the coinflip at step $t$, with $P(X_t = +1) = p$. If at each step we bet fraction $f$ of our current wealth, then our wealth will be multiplied by $(1 + f X_t)$, that is, either $(1 + f)$ or $(1 - f)$ depending on the outcome of the coinflip. Then our wealth after $n$ steps will grow as
$$W_n = W_0 \prod_{t=1}^{n} (1 + f X_t).$$
In the case of linear utility, we try to maximize
$$\mathbb{E}[W_n] = W_0 \prod_{t=1}^{n} \mathbb{E}[1 + f X_t],$$
where we can move the expectation inside the product because each step is independent of the others. This shows that we can simply maximize $\mathbb{E}[1 + f X_t]$ independently at each step, which simply means taking $f = 1$ (betting the entire bankroll) at all steps, assuming that the $X_t$'s have positive expectation. This is a counterintuitive strategy, since in the vast majority of cases we will lose everything, but those losses are offset by the enormous compounding of wealth in the world where we win all bets (which happens with probability $p^n$).
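To make this concrete, here is a small Python sketch (with illustrative values $p = 0.6$ and $n = 20$, which are my choices, not from the text): expected wealth is strictly increasing in the bet fraction, even though the probability of surviving all $n$ bets is vanishingly small.

```python
# With linear utility, E[W_n] = W_0 * (1 + f*(2p - 1))**n, which is strictly
# increasing in f whenever p > 1/2 -- so "bet everything" (f = 1) is optimal,
# even though the chance of never losing a single bet is only p**n.
p, n = 0.6, 20  # illustrative values

def expected_wealth(f, w0=1.0):
    # E[1 + f*X_t] = 1 + f*(2p - 1) per step; steps are independent
    return w0 * (1 + f * (2 * p - 1)) ** n

print(expected_wealth(1.0))  # ~38.3: betting everything maximizes E[W_n]
print(expected_wealth(1.0) > expected_wealth(0.5) > expected_wealth(0.0))  # True
print(p ** n)  # ~3.7e-5: probability of actually winning all 20 bets
```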
The other (and debatably more natural) assumption is that of logarithmic utility, where we try to maximize
$$\mathbb{E}[\log W_n].$$
Taking logs converts compounding into a simple sum, where linearity of expectation works so that we need only consider the expectation at a single step, i.e., we need only maximize the expected growth rate
$$\mathbb{E}[\log(1 + f X_t)] = p \log(1 + f) + (1 - p) \log(1 - f).$$
This can be solved by straightforward calculus, setting the derivative with respect to $f$ equal to zero,
$$\frac{p}{1 + f} - \frac{1 - p}{1 - f} = 0,$$
and solving for the optimal bet (the Kelly criterion),
$$f^* = 2p - 1.$$
Let's plug in numbers for intuition. If $p = 0.6$, the Kelly criterion says that we should bet $20\%$ of our wealth at each round. On the other hand, if $p = 0.55$, we should bet $10\%$ of our wealth.
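As a quick sanity check, a few lines of Python confirm that $f^* = 2p - 1$ beats nearby fractions in expected log growth (the values are illustrative):

```python
import math

def kelly_fraction(p):
    """Optimal fraction to bet on an even-odds flip won with probability p."""
    return 2 * p - 1

def expected_log_growth(p, f):
    """Expected log growth per bet: p*log(1 + f) + (1 - p)*log(1 - f)."""
    return p * math.log(1 + f) + (1 - p) * math.log(1 - f)

p = 0.6
f_star = kelly_fraction(p)  # 0.2, up to float rounding
for f in (0.1, 0.2, 0.3):
    print(f, expected_log_growth(p, f))
# the middle row (the Kelly fraction) has the highest expected log growth
```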
Since Kelly betting is maximizing the expected logarithmic growth rate of your wealth, it follows that, over time, it gives you more money than any other betting procedure with probability approaching 1. More rigorously:
- Your overall growth is the product of individual growth rates, so the average growth rate is the geometric mean of the individual growth rates
- Thus your average log growth rate is the arithmetic mean of the individual log growth rates
- As $n$ goes to infinity, we can view this arithmetic mean as a sample average of i.i.d. terms, which converges to the expected value with high probability by the law of large numbers
- Any other procedure (one that doesn't optimize the expected log growth rate) will yield a lower expected log growth rate. But the actual log growth rate converges to its expectation over time! And this is uniquely true of the log growth rate, since that's what gets us a sum over timesteps that we can frame as an expectation.
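A quick simulation illustrates the convergence argument: on the same long sequence of flips, the Kelly fraction typically ends up ahead of both an under-bettor and an over-bettor (the parameters and seed here are my illustrative choices):

```python
import random

random.seed(0)
p, n = 0.6, 10_000  # illustrative parameters

def final_wealth(f, flips):
    """Wealth after betting fraction f of the bankroll on each flip, from 1."""
    w = 1.0
    for win in flips:
        w *= (1 + f) if win else (1 - f)
    return w

flips = [random.random() < p for _ in range(n)]
w_kelly = final_wealth(0.2, flips)  # the Kelly fraction for p = 0.6
# With overwhelming probability over this many flips:
print(w_kelly > final_wealth(0.05, flips))  # beats under-betting
print(w_kelly > final_wealth(0.5, flips))   # beats over-betting
```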
In this sense, the Kelly criterion is an instance of geometric rationality: maximizing a geometric expectation rather than an arithmetic expectation.
A more memorable formulation
In general the optimal fraction of our bankroll to bet can be expressed as
$$f^* = \frac{p - q}{1 - q},$$
where $p$ is the true probability of winning, assumed known to me, and $b$ are the betting odds ($b$-to-one) implying a book (or market) probability $q = \frac{1}{b + 1}$.
e.g., if we know there is a $60\%$ probability of winning but the bookie thinks it's $50\%$, they'll offer even odds of 1-to-1 ($b = 1$, $q = 0.5$), so our edge is $p - q = 0.1$ and we bet $f^* = \frac{0.6 - 0.5}{1 - 0.5} = 0.2$, recovering the $2p - 1$ rule.
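Under this formulation (with $q = 1/(b+1)$), a small helper makes the computation explicit; the second example is hypothetical:

```python
def kelly_bet(p, b):
    """Kelly fraction for a bet paying b-to-1 when the true win probability is p.

    Equivalent forms: (p - q) / (1 - q) with book probability q = 1/(b + 1),
    or p - (1 - p)/b.
    """
    q = 1 / (b + 1)
    return (p - q) / (1 - q)

# Even odds (b = 1, q = 0.5) with a true 60% win probability:
print(kelly_bet(0.6, 1))  # ~0.2, matching the 2p - 1 rule
# A hypothetical 3-to-1 longshot (q = 0.25) that actually hits 30% of the time:
print(kelly_bet(0.3, 3))  # ~0.0667
```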
Alternative arguments for the Kelly criterion
There are some alternative goals that also lead to the Kelly criterion: for example, maximizing wealth in the median outcome. See:
- https://www.lesswrong.com/posts/zmpYKwqfMkWtywkKZ/kelly-isn-t-just-about-logarithmic-utility
- https://www.lesswrong.com/posts/DfZtwtGD6ymFtXmdA/kelly-is-just-about-logarithmic-utility
- https://www.lesswrong.com/posts/BZ6XaCwN4QGgH9CxF/the-kelly-criterion
Another take is that it corresponds to proportional representation of subagents with differing beliefs about the future. https://www.lesswrong.com/posts/o3RLHYviTE4zMb9T9/tyranny-of-the-epistemic-majority
I have two future selves, one for a heads outcome and one for a tails outcome. Say $p = 0.6$: the heads-outcome self is more likely to exist than the tails-outcome self. If I have $100, and want to maximize expected utility, I bet it all on heads. But in a sense that's unfair. Maybe I prefer proportional representation. My heads-outcome self should get $60 (to bet on heads), and my tails-outcome self should get $40 (to bet on tails). The net effect is that I bet $20 on heads.
In general, I will bet a portion $2p - 1$ of my bankroll on heads. This is the Kelly criterion!
This aggregates nicely. If Kelly also has $100 but believes $p = 0.1$, then Mary (the combination of me and Kelly) has $200, believes $p = (0.6 + 0.1)/2 = 0.35$, and thus bets $200 * (0.35 - 0.65) = -$60 on heads, i.e., $60 on tails. This is equivalent to the net bet of me and Kelly!
The nice conclusion here is that we have an aggregation procedure where we let sub-agents bet independently, in proportion to how likely they are to be correct. And it doesn't matter where we draw the boundaries of the system!
It also turns out that the resulting evolution of the bankrolls is exactly Bayesian updating. That is: Mary started with a belief that my hypothesis ($p = 0.6$) and Kelly's hypothesis ($p = 0.1$) were equally likely to be correct (an odds ratio of 1:1). The actual outcome of heads was six times more likely under my hypothesis (0.6 vs 0.1), so she updates her odds ratio to 6:1 in favor of me (i.e., if these are the only two hypotheses, I have a 6/7 chance of being correct), and now believes $p = (6 \cdot 0.6 + 1 \cdot 0.1)/7 \approx 0.53$.
This is equivalent to noticing that Kelly and I started with equal bankrolls, but my bankroll increased by $20 (my $100 * (2p - 1), with p = 0.6), while Kelly's decreased by $80 (her $100 * (2p - 1), with p = 0.1). So my new bankroll is $120 and hers is $20, i.e., my bankroll is now six times as large as hers. This is exactly equivalent to Mary's Bayesian updating!
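The bankroll/Bayes correspondence can be checked numerically with the numbers above (I believe $p = 0.6$, Kelly believes $p = 0.1$, equal $100 bankrolls, and heads comes up):

```python
# Each of us bets the Kelly fraction 2p - 1 of our bankroll on heads
# (a negative value means betting on tails), and heads wins.
p_me, p_kelly = 0.6, 0.1
bank_me, bank_kelly = 100.0, 100.0

bank_me *= 1 + (2 * p_me - 1)        # $100 -> $120
bank_kelly *= 1 + (2 * p_kelly - 1)  # $100 -> $20

# Bayesian update from a 1:1 prior: posterior odds = likelihood ratio 0.6:0.1,
# so my hypothesis gets posterior weight 6/7.
posterior_me = p_me / (p_me + p_kelly)
bankroll_share_me = bank_me / (bank_me + bank_kelly)
print(abs(bankroll_share_me - posterior_me) < 1e-9)  # True: shares = posterior
```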
scratch for thinking!
This is all cool but I don't feel like I totally grok it.
one way to get at this is to start by saying I have a bunch of subagents with different beliefs. they start with equal 'influence' (probability of being right). how do I aggregate those beliefs?
let each subagent bet on its beliefs. it has a distribution over future states of the world. if the world is just a coinflip, this reduces to a probability $p$, and we can say that subagents bet influence using the Kelly criterion. then the aggregate belief is the influence-weighted average of their $p$'s, and the updates are Bayesian updates.
does this work for multi-valued outcomes? say we have $k$ possible future states and a distribution $(p_1, \ldots, p_k)$. How do you bet on this? I guess for each possible outcome you choose how much to bet on it, so we have bets $(f_1, \ldots, f_k)$. You might get even odds for each bet, but maybe there's some market rate where the bookie puts probability $q_i$ on each outcome. if that outcome comes up, you win a quantity proportional to your bet and the odds against that bet.
actually it seems equivalent to consider the proportional representation argument? you have $k$ different 'subselves', each of which controls resources proportional to its probability of existing. You let each of them bet on its preferred outcome. This is effectively spreading your bets: you take $f_i = p_i$, betting on each outcome in proportion to its probability. Then each one bets all its money ($p_i$ of the bankroll) and, if its outcome occurs, wins $p_i / q_i$ times the bankroll.
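A sketch of the multi-outcome case, under the assumption (mine, not stated above) that the whole bankroll is split across outcomes and outcome $i$ pays $1/q_i$: a brute-force grid check confirms that betting $f_i = p_i$ maximizes the expected log growth.

```python
import math

def log_growth(p, q, f):
    """Expected log growth when fraction f_i of the bankroll is bet on
    outcome i, which pays 1/q_i if it occurs: sum_i p_i * log(f_i / q_i)."""
    return sum(pi * math.log(fi / qi) for pi, qi, fi in zip(p, q, f))

p = (0.5, 0.3, 0.2)       # our beliefs (illustrative)
q = (1 / 3, 1 / 3, 1 / 3)  # book probabilities: even three-way odds

# Compare f = p against a grid of alternative ways to split the bankroll:
grid = [(a / 10, b / 10, (10 - a - b) / 10)
        for a in range(1, 9) for b in range(1, 10 - a)]
best = max(log_growth(p, q, f) for f in grid)
print(log_growth(p, q, p) >= best)  # True: no split on the grid beats f = p
```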
todo:
- work out the max log growth rate thing. it's gotta correspond to a cross-entropy maximization?