Thursday, October 25, 2012

A Crack in the Foundation of Economics -- More Readings

Last year I did a post on a mathematical error that has shaped the direction of important work in economics, and especially in finance. The discovery of this error, by the U.K. mathematician Ole Peters, has slowly gained recognition, though for some reason the journal that published the original paper has been unwilling to publish the correction.

At its root the error is obscure -- as it would have to be to have persisted for so long, and for its incorrect conclusion to be relied on by such luminaries as Paul Samuelson and Kenneth Arrow. But more has been published about it since my post, and these pieces do a better job of explaining the problem and its implications. So for those who are interested -- and you will be interested if you think about how a portfolio grows over time, how policy for a group relates to results for individuals, or the implications (correct and mistaken) of the St. Petersburg paradox -- I am providing links to them here:

The first is an interview with Ole by Michael Mauboussin, and the second is a paper by a group at Towers Watson. It is significant that the bulk of the notice is coming from industry rather than academia, and that the core group taking notice is affiliated with the interdisciplinary Santa Fe Institute.


  1. This is good stuff, but I'd say a lot of people in the trading/hedge-fund industry have learned this from experience.

    good blog btw

  2. Paul Davidson has been writing about ergodicity vs. non-ergodicity in economics for about 40 years. As he has noted with detailed citations, this distinction was also central to Keynes's critique of "classical" economics in his General Theory (but was also absent from the analytical framework developed by the "Keynesians"). Kudos to you and others pursuing this matter further (and I've been an admirer of your work and also the Santa Fe Institute for some time), though I hope due credit is given to past critics that the profession had previously been able to keep out of the journals and textbooks. That would be the scholarly thing to do.

  3. Rick;

    I am having trouble seeing this as an error.

    Peters has come up with an interesting view about the St. Petersburg paradox. There are basically three ways to explain why reasonable people don't pay an unlimited amount to play that lottery: there is a bound to how much money they need, there is a bound to how much time they have to play the game, and there is a bound on how much money the bank has to pay out. The first option is well studied, the second not nearly as well researched, and the third is usually simply dismissed.

    Peters is adding to the research on the second option. So, what has my brief summary missed? Thanks.
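    An aside on the third option above: with a bounded bank the paradox dissolves, because the expected payout becomes finite and small. The sketch below is a minimal simulation, assuming the usual pot-doubling rules and a hypothetical bank cap of $2^20 (about $1M); the function names and parameters are illustrative, not taken from any of the linked papers.

```python
import random

def play_st_petersburg(rng, bank_limit):
    """One play of the St. Petersburg lottery: the pot starts at $2
    and doubles on every head; the first tail ends the game and pays
    the pot -- but the bank can never pay out more than bank_limit."""
    pot = 2
    while rng.random() < 0.5:  # heads: the pot doubles, keep tossing
        pot *= 2
    return min(pot, bank_limit)

def mean_payout(n_plays, bank_limit, seed=0):
    """Average payout over many independent plays."""
    rng = random.Random(seed)
    return sum(play_st_petersburg(rng, bank_limit)
               for _ in range(n_plays)) / n_plays

# With a cap of 2**m dollars the exact expectation is m + 1:
# each of the m uncapped outcomes contributes $1, and the capped
# tail (total probability 2**-m, payout 2**m) contributes $1 more.
# So a roughly $1M bank justifies paying only about $21 to play.
print(mean_payout(100_000, bank_limit=2**20))
```

    The simulated mean fluctuates because the payout distribution is heavily skewed, but it hovers near the finite theoretical value rather than growing without bound.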

  4. Well, I guess even a stopped clock is correct twice a day. I don't know if this is original to Peters, but his expositions and those of the linked articles make it so obvious that it is like a breath of fresh air. It is along the lines of the logic I have used since the early 1990s in building stochastic models (with repeating code, in a language that always had branching capability despite some ridiculous claims), where one has to value and hedge over time and not over space -- logic for which I have been abused by the so-called elite of the quant world.

    This is the first thing of value I have extracted from your blog.

    Maybe quoting Marx was also just a random event.

  5. Isn't a fundamental problem the misrepresentation of uncertainty as "risk" - known probability?

  6. The people who wrote the paper at Towers Watson have no basic knowledge of probability theory. They write:

    "We have been advised that some people, anchored in ensemble averages, will find this result hard to accept and so warrants further explanation. Consider betting $1 on the coin toss and playing for two rounds. Because it is a fair coin, one round will be heads and the other tails. If we get heads first, our $1 becomes $1.10 (a 10% gain). In the second round we get tails, a 10% loss. 10% of $1.10 is $0.11, so we end up with $0.99. Alternatively, if we get tails first, our $1 falls to $0.90. The subsequent heads wins us $0.09 and we, again, end up with $0.99. In the real world we can get lots of heads in a row, but we can also get lots of tails in a row. If the coin is fair we will see half of each side and an expected loss of 1% per round. This illustrates the fundamental difference between an additive dynamic and multiplicative dynamic."

    No, dammit! A fair coin does not mean that in two tosses you get heads then tails, or tails then heads. That is ludicrous, to say the least. The probability space is {(h,h), (t,t), (h,t), (t,h)}, all with equal probability of 1/4.

    You people, do you read these papers before you publish them?
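    To put numbers on the distinction the commenter raises (a minimal sketch; the variable names are mine): over the full probability space {(h,h), (h,t), (t,h), (t,t)} the ensemble average of wealth is exactly 1, so there is no expected loss at all, while the time-average growth factor along a single trajectory is sqrt(1.1 × 0.9) ≈ 0.995, a loss of about 0.5% per round.

```python
import math
import random

P_UP, P_DOWN = 1.10, 0.90   # +10% on heads, -10% on tails

# Ensemble average: expectation over all four equally likely
# two-toss outcomes (h,h), (h,t), (t,h), (t,t).
outcomes = [(a, b) for a in (P_UP, P_DOWN) for b in (P_UP, P_DOWN)]
ensemble_mean = sum(a * b for a, b in outcomes) / 4   # exactly 1.0

# Time average: geometric-mean growth per round along one trajectory.
time_avg = math.sqrt(P_UP * P_DOWN)                   # sqrt(0.99) ≈ 0.99499

# A long simulated trajectory decays at the time-average rate:
rng = random.Random(0)
n = 100_000
log_growth = sum(math.log(P_UP if rng.random() < 0.5 else P_DOWN)
                 for _ in range(n)) / n               # ≈ log(0.99499)
```

    The single big winner (1.21 from two heads) exactly offsets the three losing outcomes in the ensemble mean, but the typical trajectory -- and, in the long run, almost every trajectory -- shrinks. That is the additive-versus-multiplicative point, recovered without collapsing the probability space.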

  7. Very happy to see you revisit this post from last year. I liked it then and I like it even more now. Very important stuff. Coming from a mathematician/economist, for what it's worth.

  8. I second what STF says. Paul Davidson has been highlighting this issue for years:

  9. About the only sure (or, one might say, ergodic) thing is that the payout won't be there for most people when it is needed. Yet those who run the games will see to it that their own pockets get filled. And those in power whose pockets are sinks, too, will do all right.

    There are more cracks than this one with which we can grapple as time goes on.