Abstract:
We prove that high simulated performance is easily achievable after backtesting a relatively small number of alternative strategy configurations, a practice we denote “backtest overfitting”. The higher the number of configurations tried, the greater the probability that the backtest is overfit. Because financial analysts and academics rarely report the number of configurations tried for a given backtest, investors cannot evaluate the degree of overfitting in most investment proposals.
The implication is that investors can be easily misled into allocating capital to strategies that appear to be mathematically sound and empirically supported by an outstanding backtest. In the presence of memory effects, backtest overfitting leads to negative expected returns out-of-sample, rather than merely zero performance. This may be one of several reasons why so many quantitative funds appear to fail.
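To make the selection effect concrete, here is a minimal simulation sketch (mine, not the paper's): every candidate “strategy” is pure noise with zero true edge, yet the best in-sample Sharpe ratio among the configurations tried grows with the number of trials, while that same strategy's out-of-sample Sharpe stays near zero. The Gaussian return model and the one-year sample sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

n_days = 252                     # one year of daily returns, in-sample
n_oos = 252                      # one year held out
n_configs = [1, 10, 100, 1000]   # number of strategy configurations tried

def sharpe(returns):
    """Annualized Sharpe ratio of a daily return series."""
    return np.sqrt(252) * returns.mean() / returns.std()

for k in n_configs:
    # k candidate "strategies": pure noise, zero true edge
    is_rets = rng.normal(0.0, 0.01, size=(k, n_days))
    oos_rets = rng.normal(0.0, 0.01, size=(k, n_oos))

    # pick the configuration with the best in-sample Sharpe
    best = np.argmax([sharpe(r) for r in is_rets])
    print(f"configs tried={k:5d}  "
          f"best in-sample SR={sharpe(is_rets[best]):5.2f}  "
          f"out-of-sample SR={sharpe(oos_rets[best]):5.2f}")
```

Running this typically shows the best in-sample Sharpe climbing past 2 or 3 once hundreds of configurations are tried, while the out-of-sample column hovers around zero; that gap is exactly the overfitting the abstract describes.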
The authors’ argument is that, by failing to apply mathematical rigour to their methods, many purveyors of quantitative investment strategies are, deliberately or negligently, misleading clients.
It is reasonable to want to test a promising investment strategy to see how it would have performed in the past. The trap comes when one keeps tweaking the strategy until it neatly fits the historical data. Intuitively, one might think one has finally hit upon the most successful investment strategy; in fact, one is likely to have hit only upon a statistical fluke, a false positive.
This is the problem of “overfitting”, and even safeguards against it, such as testing on a second, separate historical data set, will continue to throw up many false positives, the mathematicians argue.
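A toy illustration of why a hold-out check is not a cure-all (my sketch, assuming zero-edge Gaussian daily returns and an arbitrary Sharpe > 1 screening threshold, neither taken from the paper): if enough noise strategies are screened against both the original data and a second, separate data set, some clear both hurdles by chance alone.

```python
import numpy as np

rng = np.random.default_rng(0)

n_trials = 10_000   # candidate strategies, all pure noise
n_days = 252

def sharpe(r):
    """Annualized Sharpe ratio of a daily return series."""
    return np.sqrt(252) * r.mean() / r.std()

train = rng.normal(0.0, 0.01, size=(n_trials, n_days))
holdout = rng.normal(0.0, 0.01, size=(n_trials, n_days))

sr_train = np.array([sharpe(r) for r in train])
sr_hold = np.array([sharpe(r) for r in holdout])

# "validate" every strategy that looks good in-sample against the hold-out
passed = (sr_train > 1.0) & (sr_hold > 1.0)
print(f"strategies passing both screens: {passed.sum()} of {n_trials}")
```

With these parameters, roughly 2.5% of the zero-edge candidates pass both screens: the hold-out thins out the false positives but cannot eliminate them once many candidates are checked against it.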
Do not despair. The paper does not conclude that history is bunk, just that backtesting ought to require more statistical thought than investment managers need to display to make a sale to investors.
[...]