This post is an introduction to a __detective story about an error__. An error that passed undetected by some of the greatest minds of the twentieth century, and led economics down a path that now must be called into question. It is an error from 1934 Vienna that has lain hidden for the better part of a century, uncovered in 2011 in the academic halls of Imperial College London. It is an error that is both obvious and startling after the fact, the result of a calculation that is off, quite literally, by an order of infinity.
A few weeks ago I attended a conference
sponsored by the Santa Fe Institute, where I participated in a panel
with Henry Kaufman, Bill Miller and Marty Leibowitz. The conference
topic was Forecasting in the Face of Risk and Uncertainty. One of
the presentations was by Ole Peters, from the Department of
Mathematics at Imperial College London. His presentation
compared time series analysis with ensemble analysis. Time series
analysis takes one realization of a process, runs it over a very
long time period, and then looks at the distribution over the course
of that run, whereas ensemble analysis creates many copies of the
process, runs these over a shorter period, and then looks at the
distribution of those results. Time series analysis is what you see
over many years in one universe; ensemble analysis is what you see
when you take many universes and integrate across them to look at the
distributional properties.

Even if we use the same process for
generating the paths as we do for the time series, these two
approaches can lead to surprisingly different results for the
ultimate distribution. This will always be true if a process is not
ergodic, that is, if it does not have the property of creating a
defined and unique distribution and leading to that distribution
without regard to the starting point. Another way to think of an
ergodic process is that over time it visits every possible state in
proportion to its probability, and does so with the same proportions
no matter where you start the process off. And one of the keystone
processes analyzed in economics, the inter-temporal
compounding of wealth, is an example of a non-ergodic process.

__Peters presents__ a disarmingly simple example to show the difference between the time series and ensemble approaches for this process. Using a progression of simulated coin flips, he shows a case where the ensemble approach has a positive mean while the time series approach has one that is negative. On average people will make money, while for the individual wealth will follow a straight line (on a log scale, at least) toward zero.
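A minimal simulation makes the divergence concrete. This is a sketch, not Peters' actual figures: the multipliers below (heads grows wealth by 50%, tails shrinks it by 40%) are the standard illustrative values for this kind of multiplicative gamble, assumed here for demonstration.

```python
import math
import random

# Assumed illustrative multipliers, not taken from Peters' talk:
# heads multiplies wealth by 1.5, tails by 0.6.
UP, DOWN = 1.5, 0.6

# Ensemble (expectation) view: average wealth grows 5% per flip.
expected_multiplier = 0.5 * UP + 0.5 * DOWN                    # 1.05
# Time-series view: the growth rate one path actually experiences,
# which is the average of the log multipliers -- and it is negative.
time_average_rate = 0.5 * math.log(UP) + 0.5 * math.log(DOWN)  # ~ -0.053

def one_life(n_flips: int, rng: random.Random) -> float:
    """Wealth of a single individual after n_flips, starting from 1."""
    wealth = 1.0
    for _ in range(n_flips):
        wealth *= UP if rng.random() < 0.5 else DOWN
    return wealth

rng = random.Random(42)
finals = sorted(one_life(1000, rng) for _ in range(1000))
# The typical (median) individual is essentially wiped out, even though
# the expectation across the ensemble grows without bound.
median_wealth = finals[len(finals) // 2]
```

The gap between the two views is the non-ergodicity: the exact ensemble mean after $n$ flips is $1.05^n$, yet the median path decays like $e^{-0.053\,n}$, because the mean is carried by an ever-rarer set of lucky paths that no single individual should expect to inhabit.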
As __Peters recounts__ in his presentation, economics has been almost unwavering in applying the ensemble approach. The reason is that in 1934 the Austrian mathematician Karl Menger wrote a paper that rejected unbounded utility functions. These unbounded functions include, for example, logarithmic utility, a particularly useful one because it corresponds to exponential growth, and thus is a natural fit for many time series processes (like compounded returns). Because Menger's result is wrong, the motivation for focusing on the ensemble approach is ill-founded. And, to make matters worse, in many important cases it is the time series approach that makes the most sense. After all, we only live one life, and we care about what is dealt to us in that life. If we enjoyed (and recognized that we enjoyed) reincarnation, so that we could experience many alternative worlds (better yet, experiencing them all simultaneously), perhaps it would be a different matter.
What is fascinating is that Menger’s
paper has been cited widely by notable economists, including
Samuelson, Arrow and Markowitz, Nobel laureates all. Peters recounts
a number of these: In 1951, Arrow wrote a clearer version of Menger's
argument, but failed to uncover the error while doing so. Ironically,
by performing this service he helped propagate the development of
economic theory along the wrong track. Arrow more recently wrote that
"...a deeper understanding [of Bernoulli's St. Petersburg
paradox] was achieved only with Karl Menger’s paper”. Markowitz
accepted Menger's argument, stating that “we would have to assume
that U[tility] was bounded to avoid paradoxes such as those of
Bernoulli and Menger”. Samuelson waxed effusive regarding Menger's
1934 paper: “After 1738 nothing earthshaking was added to the
findings of Daniel Bernoulli and his contemporaries until the quantum
jump in analysis provided by Karl Menger”, and further that “Menger
1934 is a modern classic that stands above all criticism”. (And up
until Peters’ paper, it seems it did indeed stand above all
criticism, if there was any at all). That the paper was such a focus
for so stellar a group of economists gives you a hint of its
importance to the path economics has taken.

Not that the error is all that obscure,
at least with the benefit of hindsight and a clear exposition. It
boils down to Menger saying that the sum of a quantity A and a
quantity B tends to infinity in the limit. Menger shows that A tends
to infinity, and then argues that because of this it doesn't matter
what is going on with B, because infinity plus anything else is still
infinity. Unless, of course, it happens that B is tending toward
negative infinity even faster. Which, it turns out, is the case. So
the sum, rather than having infinity as its limit, has *negative* infinity as its limit!
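The shape of the mistake is easy to see with stand-in sequences. To be clear, these are purely schematic, not the actual quantities in Menger's argument; they only illustrate how "A tends to infinity, so A + B does too" fails:

```python
# Schematic illustration only: a(n) and b(n) are made-up stand-ins,
# not the terms in Menger's paper. The form of the error: a(n) does
# diverge to +infinity, but b(n) diverges to -infinity faster, so
# the sum a(n) + b(n) diverges to -infinity, not +infinity.
def a(n: int) -> int:
    return n            # tends to +infinity

def b(n: int) -> int:
    return -n * n       # tends to -infinity, and faster

sums = [a(n) + b(n) for n in (10, 100, 1000)]  # increasingly negative
```

Concluding that the sum is infinite after examining A alone is exactly the step that cannot be taken without first checking the behavior of B.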