In probability theory and intertemporal portfolio choice, the Kelly criterion (also Kelly strategy, Kelly formula, or Kelly bet) is a formula used to determine the optimal size of a series of bets. In most gambling scenarios, and in some investing scenarios under simplifying assumptions, the Kelly strategy will do better than any essentially different strategy in the long run (that is, over a span of time in which the observed fraction of bets that are successful equals the probability that any given bet will be successful). It was described by J. L. Kelly Jr. in 1956.^{[1]} The practical use of the formula has been demonstrated.^{[2]}^{[3]}^{[4]}
Although the Kelly strategy's promise of doing better than any other strategy in the long run seems compelling, some economists have argued strenuously against it, mainly because an individual's specific investing constraints may override the desire for an optimal growth rate.^{[5]} The conventional alternative is expected utility theory, which says bets should be sized to maximize the expected utility of the outcome (to an individual with logarithmic utility, the Kelly bet maximizes expected utility, so there is no conflict; moreover, Kelly's original paper clearly states the need for a utility function in the case of gambling games which are played finitely many times^{[1]}). Even Kelly supporters usually argue for fractional Kelly (betting a fixed fraction of the amount recommended by Kelly) for a variety of practical reasons, such as wishing to reduce volatility, or protecting against non-deterministic errors in their advantage (edge) calculations.^{[6]}
In recent years, Kelly has become a part of mainstream investment theory^{[7]} and the claim has been made that well-known successful investors including Warren Buffett^{[8]} and Bill Gross^{[9]} use Kelly methods. William Poundstone wrote an extensive popular account of the history of Kelly betting.^{[5]}
Statement
For simple bets with two outcomes, one involving losing the entire amount bet, and the other involving winning the bet amount multiplied by the payoff odds, the Kelly bet is:

f^{*} = \frac{bp - q}{b} = \frac{p(b + 1) - 1}{b}, \!
where:

f* is the fraction of the current bankroll to wager, i.e. how much to bet;

b is the net odds received on the wager ("b to 1"); that is, you could win $b (on top of getting back your $1 wagered) for a $1 bet

p is the probability of winning;

q is the probability of losing, which is 1 − p.
As an example, if a gamble has a 60% chance of winning (p = 0.60, q = 0.40), and the gambler receives 1-to-1 odds on a winning bet (b = 1), then the gambler should bet 20% of his bankroll at each opportunity (f* = 0.20), in order to maximize the long-run growth rate of the bankroll.
If the gambler has zero edge, i.e. if b = q / p, then the criterion recommends the gambler bets nothing.
If the edge is negative (b < q / p) the formula gives a negative result, indicating that the gambler should take the other side of the bet. For example, in standard American roulette, the bettor is offered an even-money payoff (b = 1) on red, when there are 18 red numbers and 20 non-red numbers on the wheel (p = 18/38). The Kelly bet is −1/19, meaning the gambler should bet one-nineteenth of his bankroll that red will not come up. Unfortunately, the casino does not allow betting against something coming up, so a Kelly gambler cannot place such a bet.
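The two-outcome formula is easy to evaluate directly; the following is a minimal sketch (the function name is illustrative, not from the original) that reproduces both examples above:

```python
def kelly_fraction(p, b):
    """Kelly fraction f* = (b*p - q)/b for a bet paying b-to-1, won with probability p."""
    q = 1 - p
    return (b * p - q) / b

# 60% chance of winning at even money (b = 1): bet 20% of the bankroll.
print(kelly_fraction(0.60, 1))      # approximately 0.2

# American roulette red (p = 18/38, b = 1): a negative fraction, so no bet.
print(kelly_fraction(18 / 38, 1))   # approximately -1/19
```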
The numerator of the first fraction is the expected net winnings from a $1 bet, since the two outcomes are that you either win $b with probability p, or lose the $1 wagered, i.e. win −$1, with probability q. Hence:

f^{*} = \frac{\text{expected net winnings}}{\text{net winnings if you win}} \!
For evenmoney bets (i.e. when b = 1), the first formula can be simplified to:

f^{*} = p - q . \!
Since q = 1 − p, this simplifies further to

f^{*} = 2p - 1 . \!
A more general problem relevant for investment decisions is the following:
1. The probability of success is p.
2. If you succeed, the value of your investment increases from 1 to 1+b.
3. If you fail (for which the probability is q = 1 − p) the value of your investment decreases from 1 to 1 − a. (Note that the previous description above assumes that a is 1.)
In this case, the Kelly criterion turns out to be the relatively simple expression

f^{*} = p/a - q/b . \!
Note that this reduces to the original expression for the special case above (f^{*} = p − q) for b = a = 1.
Clearly, in order to decide in favor of investing at least a small amount (f^{*}>0), you must have

p b > q a . \!
which simply says that your expected profit must exceed your expected loss for the investment to make any sense.
The general result clarifies why leveraging (taking a loan to invest) decreases the optimal fraction to be invested, as in that case a>1. Obviously, no matter how large the probability of success, p, is, if a is sufficiently large, the optimal fraction to invest is zero. Thus, using too much margin is not a good investment strategy, no matter how good an investor you are.
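The effect of the loss fraction a is easy to see numerically; here is a minimal sketch of the general formula with illustrative numbers:

```python
def kelly_general(p, a, b):
    """General Kelly fraction f* = p/a - q/b: the investment loses a fraction a
    on failure and gains a fraction b on success, with win probability p."""
    q = 1 - p
    return p / a - q / b

# a = b = 1 recovers the simple case f* = p - q.
print(kelly_general(0.60, 1, 1))   # approximately 0.2
# Doubling the downside (a = 2) turns the optimal fraction negative,
# meaning do not invest at all, despite the favorable win probability.
print(kelly_general(0.60, 2, 1))   # approximately -0.1
```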
Proof
Heuristic proofs of the Kelly criterion are straightforward.^{[10]} For a symbolic verification with Python and SymPy one would set the derivative y'(x) of the expected value of the logarithmic bankroll y(x) to 0 and solve for x:
>>> from sympy import *
>>> x,b,p = symbols('x b p')
>>> y = p*log(1+b*x) + (1-p)*log(1-x)
>>> solve(diff(y,x), x)
[-(1 - p - b*p)/b]
For a rigorous and general proof, see Kelly's original paper^{[1]} or some of the other references listed below. Some corrections have been published.^{[11]}
We give the following non-rigorous argument for the case b = 1 (an "even money" bet) to show the general idea and provide some insights.^{[1]}
When b = 1, the Kelly bettor bets 2p − 1 times initial wealth, W, as shown above. If he wins, he has 2pW. If he loses, he has 2(1 − p)W. Suppose he makes N bets like this, and wins K of them. Since the order of the wins and losses does not matter, he will have:

2^N p^K (1-p)^{N-K} W \! .
Suppose another bettor bets a different amount, (2p - 1 + \Delta)W for some positive or negative \Delta. He will have (2p + \Delta)W after a win and [2(1 - p) - \Delta]W after a loss. After the same wins and losses as the Kelly bettor, he will have:

(2p+\Delta)^K[2(1-p)-\Delta]^{N-K}W \!
Take the derivative of this with respect to \Delta and get:

K(2p+\Delta)^{K-1}[2(1-p)-\Delta]^{N-K}W-(N-K)(2p+\Delta)^K[2(1-p)-\Delta]^{N-K-1}W\!
The turning point of the original function occurs when this derivative equals zero, which occurs at:

K[2(1-p)-\Delta]=(N-K)(2p+\Delta) \!
which implies:

\Delta=2\left(\frac{K}{N}-p\right) \!
but:

\lim_{N \to +\infty}\frac{K}{N}=p \!
so in the long run, final wealth is maximized by setting \Delta to zero, which means following the Kelly strategy.
This illustrates that Kelly has both a deterministic and a stochastic component. If one knows K and N and wishes to pick a constant fraction of wealth to bet each time (otherwise one could cheat and, for example, bet zero after the K^{th} win knowing that the rest of the bets will lose), one will end up with the most money if one bets:

\left(2\frac{K}{N}-1\right)W \!
each time. This is true whether N is small or large. The "long run" part of Kelly is necessary because K is not known in advance, just that as N gets large, K will approach pN. Someone who bets more than Kelly can do better if K > pN for a stretch; someone who bets less than Kelly can do better if K < pN for a stretch, but in the long run, Kelly always wins.
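The location of the turning point can also be checked numerically. A minimal sketch (the numbers for p, K, and N are illustrative) that grid-searches over \Delta:

```python
def final_wealth(delta, K, N, p, W=1.0):
    """Wealth after K wins and N - K losses when betting (2p - 1 + delta)
    of current wealth on each even-money (b = 1) bet."""
    return (2 * p + delta) ** K * (2 * (1 - p) - delta) ** (N - K) * W

p, K, N = 0.6, 55, 100
grid = [d / 1000 for d in range(-200, 201)]
best = max(grid, key=lambda d: final_wealth(d, K, N, p))
print(best)   # the maximum sits at delta = 2*(K/N - p) = -0.1
```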
The heuristic proof for the general case proceeds as follows.
In a single trial, if you invest the fraction f of your capital, then if the strategy succeeds, your capital at the end of the trial increases by the factor 1 - f + f(1+b) = 1 + fb, and, likewise, if the strategy fails, your capital decreases by the factor 1 - fa. Thus at the end of N trials (with pN successes and qN failures), the starting capital of $1 yields

C_N=(1+fb)^{pN}(1-fa)^{qN}.
Maximizing \log(C_N)/N, and consequently C_N, with respect to f leads to the desired result

f^{*}=p/a-q/b .
For a more detailed discussion of this formula for the general case, see Thorp.^{[12]} There, it can be seen that the substitution of p for the ratio of the number of "successes" to the number of trials implies that the number of trials must be very large, since p is defined as the limit of this ratio as the number of trials goes to infinity. In brief, betting f^{*} each time will likely maximize the wealth growth rate only in the case where the number of trials is very large, and p and b are the same for each trial. In practice, this is a matter of playing the same game over and over, where the probability of winning and the payoff odds are always the same. In the heuristic proof above, pN successes and qN failures are highly likely only for very large N.
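The general-case maximization can be verified symbolically in the same way as the b = 1 case earlier, assuming SymPy is available:

```python
from sympy import symbols, log, diff, solve

f, a, b, p = symbols('f a b p', positive=True)
q = 1 - p
# Expected log growth per trial with win factor 1 + f*b and loss factor 1 - f*a.
G = p * log(1 + f * b) + q * log(1 - f * a)
sol = solve(diff(G, f), f)
# The unique stationary point equals p/a - q/b.
print(sol[0])
print(sol[0].equals(p / a - q / b))   # True
```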
Bernoulli
In a 1738 article, Daniel Bernoulli suggested that, when one has a choice of bets or investments, one should choose that with the highest geometric mean of outcomes. This is mathematically equivalent to the Kelly criterion, although the motivation is entirely different (Bernoulli wanted to resolve the St. Petersburg paradox).
The Bernoulli article was not translated into English until 1956,^{[13]} but the work was well-known among mathematicians and economists.
Many horses
Kelly's criterion may be generalized^{[14]} to gambling on many mutually exclusive outcomes, such as in horse races. Suppose there are several mutually exclusive outcomes. The probability that the kth horse wins the race is p_k, the total amount of bets placed on the kth horse is B_k, and

\beta_k=\frac{B_k}{\sum_i B_i}=\frac{1}{1+Q_k} ,
where Q_k are the payoff odds. D = 1 − tt is the dividend rate, where tt is the track take or tax, and \frac{D}{\beta_k} is the revenue rate after deduction of the track take when the kth horse wins. The fraction of the bettor's funds to bet on the kth horse is f_k. Kelly's criterion for gambling with multiple mutually exclusive outcomes gives an algorithm for finding the optimal set S^o of outcomes on which it is reasonable to bet, and it gives an explicit formula for the optimal fractions f^o_k of the bettor's wealth to be bet on the outcomes included in the optimal set S^o. The algorithm for the optimal set of outcomes consists of four steps.^{[14]}
Step 1 Calculate the expected revenue rate for all possible (or only for several of the most promising) outcomes: er_k=\frac{D}{\beta_k}p_k=D(1+Q_k)p_k.
Step 2 Reorder the outcomes so that the new sequence er_k is non-increasing. Thus er_1 will be the best bet.
Step 3 Set S = \varnothing (the empty set), k = 1, R(S)=1. Thus the best bet er_k = er_1 will be considered first.
Step 4 Repeat:
If er_k=\frac{D}{\beta_k}p_k > R(S) then insert the kth outcome into the set: S = S \cup \{k\}, recalculate R(S) according to the formula R(S)=\frac{1-\sum_{i \in S}{p_i}}{1-\sum_{i \in S}\frac{\beta_i}{D}} and then set k = k+1,
Else set S^o=S and then stop the repetition.
If the optimal set S^o is empty then do not bet at all. If the set S^o of optimal outcomes is not empty then the optimal fraction f^o_k to bet on the kth outcome may be calculated from this formula: f^o_k=\frac{er_k - R(S^o)}{\frac{D}{\beta_k}}=p_k-\frac{R(S^o)}{\frac{D}{\beta_k}}.
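The four steps translate directly into code. A minimal sketch (the probabilities, pool fractions, and dividend rate in the example are illustrative, not from the article):

```python
def optimal_bets(p, beta, D):
    """Steps 1-4 above: optimal set S of outcomes and Kelly fractions f_k for
    mutually exclusive outcomes.  p[k] is the win probability of outcome k,
    beta[k] the fraction of the pool bet on it, and D the dividend rate."""
    # Step 1: expected revenue rates er_k = (D / beta_k) * p_k.
    er = [D * pk / bk for pk, bk in zip(p, beta)]
    # Step 2: visit outcomes in non-increasing order of er_k.
    order = sorted(range(len(p)), key=lambda k: -er[k])
    # Steps 3-4: grow S while er_k exceeds the reserve rate R(S).
    S, R = [], 1.0
    for k in order:
        if er[k] <= R:
            break
        S.append(k)
        R = (1 - sum(p[i] for i in S)) / (1 - sum(beta[i] / D for i in S))
    # Optimal fractions f_k = p_k - R(S) * beta_k / D for the outcomes in S.
    f = {k: p[k] - R * beta[k] / D for k in S}
    return S, R, f

S, R, f = optimal_bets(p=[0.5, 0.3, 0.2], beta=[0.3, 0.4, 0.3], D=0.85)
print(S, f)                                     # only horse 0 is worth a bet
print(abs(R - (1 - sum(f.values()))) < 1e-12)   # reserve-rate identity holds
```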
One may prove^{[14]} that

R(S^o)=1-\sum_{i \in S^o}{f^o_i}
where the right-hand side is the reserve rate. Therefore the requirement er_k=\frac{D}{\beta_k}p_k > R(S) may be interpreted^{[14]} as follows: the kth outcome is included in the set S^o of optimal outcomes if and only if its expected revenue rate is greater than the reserve rate. The formula for the optimal fraction f^o_k may be interpreted as the excess of the expected revenue rate of the kth horse over the reserve rate, divided by the revenue after deduction of the track take when the kth horse wins, or as the excess of the probability of the kth horse winning over the reserve rate, divided by the revenue after deduction of the track take when the kth horse wins. The binary growth exponent is

G^o=\sum_{i \in S^o}{p_i\log_2{(er_i)}}+\left(1-\sum_{i \in S^o}{p_i}\right)\log_2{(R(S^o))} ,
and the doubling time is

T_d=\frac{1}{G^o}.
This method of selection of optimal bets may be applied also when probabilities p_k are known only for the several most promising outcomes, while the remaining outcomes have no chance to win. In this case it must be that \sum_i{p_i} < 1 and \sum_i{\beta_i} < 1.
Application to the stock market
Consider a market with n correlated stocks S_k with stochastic returns r_k, k = 1, ..., n, and a riskless bond with return r. An investor puts a fraction u_k of his capital in S_k and the rest is invested in the bond. Without loss of generality, assume that the investor's starting capital is equal to 1. According to the Kelly criterion one should maximize \mathbb{E}\left[ \ln\left((1 + r) + \sum\limits_{k=1}^n u_k(r_k - r) \right) \right]
Expanding this in a Taylor series around \vec{u_0} = (0, \ldots, 0) we obtain
\mathbb{E} \left[ \ln(1+r) + \sum\limits_{k=1}^{n} \frac{u_k(r_k - r)}{1+r} - \frac{1}{2}\sum\limits_{k=1}^{n}\sum\limits_{j=1}^{n} u_k u_j \frac{(r_k - r)(r_j - r)}{(1+r)^2} \right]
Thus we reduce the optimization problem to quadratic programming, and the unconstrained solution is \vec{u^{\star}} = (1+r)\, \widehat{\Sigma}^{-1} ( \widehat{\vec{r}} - r )
where \widehat{\vec{r}} and \widehat{\Sigma} are the vector of means and the matrix of second mixed non-central moments of the excess returns.^{[15]} There are also numerical algorithms for fractional Kelly strategies and for the optimal solution under no-leverage and no-short-selling constraints.
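With estimates of the moments in hand, the unconstrained solution is a single linear solve; a minimal sketch with illustrative numbers (not from the article):

```python
import numpy as np

r = 0.02                                  # riskless rate
r_hat = np.array([0.08, 0.05])            # estimated mean stock returns
excess = r_hat - r                        # mean excess returns
Sigma = np.array([[0.010, 0.002],         # estimated matrix of second mixed
                  [0.002, 0.005]])        # non-central moments of excess returns

# Unconstrained Kelly weights: u* = (1 + r) * Sigma^{-1} (r_hat - r).
u_star = (1 + r) * np.linalg.solve(Sigma, excess)
print(u_star)
```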
References

^ ^{a} ^{b} ^{c} ^{d}

^

^

^

^ ^{a} ^{b}

^

^

^

^

^

^

^ Thorp, Edward O. (1997). "The Kelly Criterion in Blackjack, Sports Betting, and the Stock Market". Paper presented at the 10th International Conference on Gambling and Risk Taking, Montreal, June 1997.

^

^ ^{a} ^{b} ^{c} ^{d} Smoczynski, Peter; Tomkins, Dave (2010). "An explicit solution to the problem of optimizing the allocations of a bettor's wealth when wagering on horse races". Mathematical Scientist, 35 (1), 10–17.

^ Nekrasov, Vasily (2013). "Kelly Criterion for Multivariate Portfolios: A Model-Free Approach".