Tuesday, February 1, 2011

Why are We “Irrational”: The Path from Neoclassical to Behavioral Economics 2.0

A few months ago I discussed the failings of econophysics and, more generally, of the economic paradigm that treats people like computers and views economic dynamics like physics. The natural follow-up question is, “What can you say that is constructive?” The answer is an emerging approach to behavioral economics.
Over the past few decades it has dawned on some researchers that we don't make decisions the way most economists think we should. As a result, behavioral economics has become a burgeoning field of study. Initially, the bulk of this field consisted of cataloging behavior deemed aberrant and anomalous. That is, the underlying assumption was that the economic view of decision making is the correct one, and the economist's job is to see where people get it wrong. Thus, we had descriptions of behavioral economics such as “exploring limited rationality” and developing models for the “systematic imperfections in human rationality.” When inconsistencies between behavior and theory were demonstrated, the most charitable response from the neoclassical school was that maybe there was a missing factor; the theory was correct but not well parametrized. Unlike in similar fields in psychology and biology, little time was spent on understanding how people think, why they think the way they do, and the ways the bedrock assumptions of economics based on mathematical methods and axioms of behavior might be off the mark.
And they probably are off the mark because, after all, neoclassical economics is missing half the story. It has left out any consideration of the context in which people make decisions, how that relates to people's varied experience, environment, and the uncertainty they harbor about how the world might change in unanticipated ways -- ways that cannot be captured through an enumeration of the probabilities of the possible states of nature. One field that does take this important (for humans) context into account is called behavioral ecology. It is not as well known in economics as it is in biological and psychological studies of behavior. Now, behavioral economics is incorporating this psychological realm.
This new approach is a quiet revolution that may transform the way we look at economic behavior. The era of mathematical, axiomatic views of human behavior will give way to approaches that start with how people look at decision making, understanding why they do that, and then understanding why that approach might have arisen evolutionarily and how it, rather than the utility maximization approach that has dominated the field for two generations, moves us closer to reality.
Following is a critique of the neoclassical approach, and the initial and perhaps still dominant approach of what might be called Behavioral Economics 1.0, within the context of behavioral ecology. A key proponent of behavioral ecology is Gerd Gigerenzer. I rely on his writings, including his book Rationality for Mortals, in much of the discussion below.

Assumption: We are Logicians

The seminal work on which behavioral economics 1.0 rests is that of Kahneman and Tversky. Using carefully posed questions, they plumb the ways people fail as rational beings, where rational means making decisions in a way consistent with the rules of logic. They find that the same question posed in different but logically equivalent ways leads to different results. They catalog these aberrations as demonstrating human tendencies toward heuristics, biases, frames, and other devices.
The notion here, which was then embraced by the first wave of behavioral economists, is that if nothing else, a rational human should act logically. The problem with this is that for humans logic cannot be considered apart from context, such as the usage and norms of language. For example, does anyone really think that when Mick Jagger sings “I can't get no satisfaction” he actually means he can get satisfaction? If you are parsing like a logician, that is what you think, because you are operating in the absence of context, namely how people use language. Language usage and the mode of conversation are among the clearest examples of how context and norms matter. If someone says “I'm not going to invite anyone but my friends and relatives,” does anyone really think that means he will only invite that subset of people who are both his friends and also his relatives? Again, that will be the takeaway for someone parsing like a logician. These two examples are simplistic, but they are fairly illustrative of the work used to establish failures of logic and inconsistencies based on framing and the like.
The bedrock of much of behavioral economics assumes that we should follow the rules of logic, and when we don't, that it is suggestive of a behavioral bias or anomaly; the axioms are right, and we are flawed. The objective is to uncover those flaws. A classic example of the problems that come from this assumption is shown by this question posed by Kahneman and Tversky, and critiqued by Gigerenzer:
Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student she was deeply concerned with issues of discrimination and social justice and also participated in anti-nuclear demonstrations.
Which of two alternatives is more probable:
A. Linda is a bank teller.
B. Linda is a bank teller and is active in the feminist movement.
The vast majority of U.S. college students who were given this question picked B, thus scoring an F for logical thinking. But consider the context. People are told in detail about Linda, and everything points to her being a feminist. In the real world, this provides the context for any follow up. We don't suddenly shift gears, going from normal discourse based on our day-to-day experience into the parsing of logic problems. Unless you are a logician or have Asperger's, the term “probable” is going to be taken as “given what I just described, what is your best guess of the sort of person Linda is”. Given the course of the question, the bank teller is extraneous information, and in the real world where we have a context to know what is extraneous, we filter that information out.
Demonstrating our failures to operate within the framework of formal logic is more a manifestation of logic not being reconciled to context than of people not being logical. Much of Kahneman and Tversky's work could just as well have been directed toward the failures of formal logic as a practical device as toward the failures of people to think with logical rationality.
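Whatever one makes of the framing critique, the probability claim itself is elementary: “bank teller and feminist” is a subset of “bank teller,” so under the standard reading it can never be more probable. A minimal sketch, with invented numbers that are not estimates about Linda:

```python
# Conjunction rule: P(A and B) <= P(A), since the event (A and B) is a
# subset of A. The numbers below are invented purely for illustration.

p_bank_teller = 0.05            # P(A): Linda is a bank teller
p_feminist_given_teller = 0.90  # P(F | A): feminist, given she is a teller

# P(A and F) = P(A) * P(F | A), always <= P(A) because P(F | A) <= 1.
p_teller_and_feminist = p_bank_teller * p_feminist_given_teller

assert p_teller_and_feminist <= p_bank_teller
print(p_bank_teller, round(p_teller_and_feminist, 3))  # 0.05 0.045
```

However large the conditional probability of “feminist” is made, option B can at best tie option A under this translation; that is the whole content of the conjunction rule.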

Assumption: We are Mathematicians

In going up against the neoclassical paradigm, behavioral economics sets itself against mathematical structure. A mathematician entering the world of economics begins with a set of axioms. That is just the way mathematics works. And one of those axioms is that people think like mathematicians. In starting this way, they fail to consider how people actually think, much less how that thinking is intertwined with their environment and the context of their decisions.
The mathematical approach is to assume that, absent constraints on cognitive ability, people will solve the same sort of problem a mathematician will solve in decision making: one of optimization. Then, recognizing that people cannot always do so, they step back to concede that people will solve the optimization problem subject to constraints, such as limited time, information and computational power. Of course, if computational power is an issue, then moving into a constrained optimization is moving in the wrong direction, because the new problem may be even more difficult than the unconstrained one. But given the axioms, what else can you do?
It doesn't take much familiarity with humans – even human mathematicians – to realize we don't actually solve these complex, and often unsolvable, problems. So the optimization school moves into “as if” mode. “We don't know how people really think (and we don't care to know) but we will adjust our axioms to assume they act 'as if' they are optimizing. So if we solve the problem, we will understand the way people behave, even if we don't know how people's mental processes operate in generating their behavior.”
Behavioral economics 1.0 does not fully get away from the gravitational pull of this mathematical paradigm. Decision making is compared to the constrained optimization, but then the deviations are deemed to be anomalies. Perhaps this was a necessity at the time, given the dominance of the neoclassical paradigm. But academic politics aside, it might be better to ask if the axioms that would fit for a mathematician are wrong for reality. After all, I could start a new field of economics where I assert as an axiom that people make decisions based on astrology, and then enumerate the ways they deviate from the astrological solution. Of course, people will throw stones at such an axiom, but I do have evidence that there are people who operate this way, which is more, as far as I can tell, than the optimization school has.
Behavioral economics of the 2.0 variety, patterned after the context-laden methods of behavioral ecology, does not take mathematical optimization as its frame, so to speak. And the more it delves into how people actually think – work that naturally originated in psychology rather than economics – the more we find that people employ heuristics: rules of thumb that do not look at all like optimization.

Assumption: We are Probability Theorists

Behavioral economics recognizes that we operate in an uncertain world, and so assumes people not only act “as if” they optimize, but do so under uncertainty. Things then get really complicated, because we have not only added constraints but also made the problem stochastic.
Heuristics take a different approach to this problem; they overcome the uncertainty by applying coarse and robust rules. They do not try to capture all of the nuances of the possible states and their probabilities. They operate in a different way, unrelated to optimization. They use simple approaches that are robust to changes in states that might randomly occur.
This turns out to be better because it recognizes an important aspect of our environment that cannot be captured even in a model of constrained optimization under uncertainty: There are things that can happen which we cannot anticipate, much less assign a probability to. In such an environment, the best solution is one that is coarse. And, being coarse and robust leads to another anomaly for those who are looking through the optimization lens. In a robust and coarse rule, we will ignore some information, even if it is costless to employ. (This is a point of a paper I co-authored years ago in the Journal of Theoretical Biology, one that, like much of the argument in this post, has been embraced in behavioral ecology while passed over in behavioral economics).
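The robustness point can be made with a toy calculation in the spirit of Gigerenzer's “tallying” (unit-weight) heuristic. This is not the model from the paper mentioned above; the cue weights below are invented for illustration. A rule finely tuned to the old environment is perfect there but degrades badly when the environment shifts, while the coarse equal-weight rule pays a small cost everywhere and is unaffected by the shift:

```python
# Expected squared error of a linear cue-combination rule with weights w,
# in an environment whose true weights are w_env, when cues are independent
# with unit variance:  E[(x.w - x.w_env)^2] = sum_i (w_i - w_env_i)^2.
# All weights below are invented for illustration.

def expected_error(w, w_env):
    return sum((a - b) ** 2 for a, b in zip(w, w_env))

w_tuned = [1.0, 0.8, 0.6, 0.4, 0.2]   # finely tuned to the old environment
w_tally = [0.6, 0.6, 0.6, 0.6, 0.6]   # coarse rule: weight every cue equally

w_old = [1.0, 0.8, 0.6, 0.4, 0.2]     # the environment the rule was tuned to
w_new = [0.2, 0.4, 0.6, 0.8, 1.0]     # an unanticipated shift

print(round(expected_error(w_tuned, w_old), 3))  # 0.0: optimal in the old world
print(round(expected_error(w_tally, w_old), 3))  # 0.4: coarse rule pays a small cost
print(round(expected_error(w_tuned, w_new), 3))  # 1.6: tuned rule fails after the shift
print(round(expected_error(w_tally, w_new), 3))  # 0.4: coarse rule is unaffected
```

The coarse rule deliberately throws away the (free) information about cue magnitudes, and that is exactly what buys its robustness when the world changes in ways the tuning never anticipated.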
Let's consider environmental context again to see why apparently rational appeals to probability theory might be off the mark. At Caltech, Antonio Rangel is looking at how the brain lights up when various problems are posed to subjects. It turns out that problems involving large losses engage different parts of the brain than problems that, from a probability standpoint, are simply their mirror image: problems involving the potential for large gains. This might provide physiological evidence to support the irrationality observed by many behavioral economists. Or it might demonstrate that these apparent biases were wired deep in our evolutionary past, and that they might be what is rational given that past.
Today it is not hard to envision a windfall gain that is similar in magnitude to a large loss. We can hit the lottery; we can build up wealth to last our lifetime. We can do that because of relatively new social and economic structures that allow us to save our wealth, and a legal structure backed up by a police force that gives us confidence that we and our possessions will be around long enough for us to enjoy them.
If we go back far enough, and not so far in terms of evolutionary time, the only good thing that could happen is capturing a large animal, or rebuffing the most recent tribal raids. Anything good was short-term and could easily be reversed. On the other hand, the negative tail was long and ominous. Even short of the not insubstantial risk of losing one's life or that of one's family (and with it one's future support), there was the risk of crippling injury, floods, and any number of other calamities. Include in these a gnawing realization that there were calamities that could not even be envisioned. In that world, it is not surprising that the brain circuitry would be wired differently for gains and losses. In that world, mapping gains and losses with any notion of symmetry is what would be irrational.
This use of robust and information-sparse heuristics again stems from context. We make our decisions in the context of our environment at the time, and our experience with how the world works. In that world, we have to ignore information because much of it is likely to be irrelevant.
Mathematical optimization can be correct in its purified world and we can be rational in our world, without optimization as the benchmark. It is a truism that if we inhabit a world that fully meets the assumptions of the mathematical problem, it is irrational to deviate from the solution of the mathematical optimization. So either we catalog our irrationality and biases, or we ask why the model is wrong. The invocations of information costs, limited computational ability, and missing risk factors all amount to continually shaving off the edges of the square peg to jam it into the round hole. Maybe the issue is not that we are almost there, and that with a little tweaking we can get the optimization approach to work. Rather, logical models may not be the right approach for studying and predicting human behavior.
It deserves repeating that the use of heuristics and the deliberate limits on the use of information as employed in the Gigerenzer worldview are not part of an attempt at optimization, real or “as if”. It is not a matter of starting with optimization and, in some way, determining how to achieve something close to the mathematically optimal solution. It is a different route toward decision making, one that, unfortunately for economists and mathematicians, is most likely the way people actually operate.
Logic, math and probability are all context independent. That is where their power lies; they will work as well on Mars as on Earth. But heuristics can take into account context and norms, an awareness of the environment, and our innate understanding that the world may shift in unanticipated ways. As with many new paradigms, the new route to behavioral economics adds a critical part of the world that the old one ignored. Perhaps it was ignored for the same reason physics assumes a perfect vacuum. Or perhaps because the field became overrun with mathematicians, and as Kuhn has said, a new paradigm such as this will only successfully assert itself once the older generation dies off.


  1. http://truthonthemarket.com/2010/12/06/henry-manne-on-behavioral-overreach/

  2. 1. This chronology of "behavioral economics" is importantly wrong.

Long before T&K's work on framing, many people were doing painstaking work on the variety of expected utility models. As most of this work was being done by mathematical psychologists and philosophers, most of it was unknown to economists.

    The goal was always to find more general versions of the standard VM calculus that could accommodate serious challenges to the VM axioms - especially the Allais paradox.

    No one had any doubt that VM axioms were not descriptive, the question was were they prescriptive?

    Many proponents of bounded rationality thought it was unlikely that the VM axioms were even prescriptive.

    The most sophisticated defence/use of a variety of the VM model is Smart Choices, Hammond, Keeney, and Raiffa. If you think that recommending optimizing is wrong, then this is the book that you should be attacking.

    (My own research was with the coherence of the notion of trade-offs and preference relations formed by pairwise comparisons - the latter being the canonical formulation of preference.)

2. The heuristics crowd will eventually come around to formulating their mathematical theory of the trade-offs that they think underlie the successful heuristics.

It is far too early to know whether these models will have a similar functional form to the VM models.

What T&K succeeded in showing was that some decisions, if based on first instinct, are wrong - and that you have to use a pencil and paper to get the right result. Gigerenzer, and others, have succeeded in showing that there are other decisions for which you get pretty good results eschewing pencil and paper.

3. Finally, to recommend decision rules based upon a romantic version of life on the savanna and how that affected brain development strikes me as wildly inappropriate for anyone with a scientific background.

  3. On point #3, I am making a conjecture that the physiological differences in how we treat bad and good uncertainty can be justified as a vestige of our evolution. I admit I am coming up with this out of thin air, but as far as I can tell, that is the common, and I think unavoidable, approach.

  4. Evidence in support of this conjecture might be the demonstrably shorter life spans of our ancestors. Mortally bad days appear to have come sooner in the lives of our ancestors than in ours. Is it reasonable to conclude that those days came sooner because mortally bad events were either greater in frequency or less likely to be identified and effectively defended against? Is it reasonable to extrapolate that over five to seven million years of evolution, natural selection might show a bias toward those who effectively perceive and respond to risk? What part of this formulation represents a romantic version of life on the savannah? Seems fairly Darwinian to me.

  5. Re: We are Logicians.

Let me deal with your observations about context, filtering out information, and formal models, because I believe you have drawn the wrong conclusion. Formal logic is just fine as a prescriptive tool, and nothing either T&K or G raises in the Linda example shows otherwise.

Philosophers of Logic have long since emphasized three distinct aspects of formal model building - using the toy example of translating the English "if, then" as material implication.

    1. Formal models are translations from the natural language, which
    2. Are truth functional, and
    3. Preserve pre-formal validity.

This is why T&K's translation of the Linda Bank Teller problem into the formal calculus of probability, with "and" translated as intersection, is suspect.

    First, they offer no justification for why all these people are wrong - apart from their model saying so. Their translation fails 3.

    Second, there are translations in the probability calculus which satisfy 3 and are consistent.

    For example, we could translate A. as BT & -F and the sentence B as BT & F. Given the evidence, there is nothing contrary in the prob(BT&F) > prob(BT&-F). Indeed, it would be an odd use of English if statement A was meant to include all the cases in which Linda was a feminist!

    It is, in fact, very difficult to show that people both fail to reason logically and that your formal model of their reasoning preserves pre-theoretic validity.

    People may be illogical, models may be wrong.

    But, it is very hard to show that a group of people are illogical because your formal model of their reasoning is not consistent with some axioms. If you don't know what they are saying, it is pretty unlikely that there exists a truth functional translation which preserves pre-theoretic validity - a formal model - from which to investigate the reasoning.

    In conclusion, the students who chose B as more probable than A are perfectly fine formal logicians - because they reasonably treated A as the sentence "Linda is a Bank Teller and is not a feminist."
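To put numbers on the commenter's point: once A is read as "bank teller and not feminist," there is nothing incoherent about ranking B above A. The figures here are invented for illustration:

```python
# The commenter's alternative reading of the two options:
#   A = "bank teller and NOT feminist"  (BT & -F)
#   B = "bank teller and feminist"      (BT & F)
# All probabilities below are invented for illustration.

p_bt = 0.05           # P(bank teller)
p_f_given_bt = 0.90   # P(feminist | bank teller): the description points this way

p_A = p_bt * (1 - p_f_given_bt)   # P(BT & -F)
p_B = p_bt * p_f_given_bt         # P(BT & F)

# Under this translation, the popular answer B is the coherent one.
assert p_B > p_A
print(round(p_A, 4), round(p_B, 4))  # 0.005 0.045
```

Note that the two readings exhaust the bank-teller event: p_A + p_B = P(BT), so nothing in the probability calculus is violated by preferring B under this translation.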

  6. I agree that the students who chose B are perfectly fine logicians because the fundamental assumption of behavioral economics, that we should follow the rules of logic, is absurd on its face. When you are a hammer everything is a nail. When you are a logician, everyone is inherently logical. A formal logician sees aberrant behavior as behavior of a formerly logical person.

To an earlier point, why would one expect that the heuristics crowd would formulate a mathematical theory of trade-offs underlying successful heuristics?

    It would seem that, in the context of academic expectations, it would be consistent for the heuristic crowd to formulate a mathematical theory of the trade-offs underlying successful heuristics; however, in the context of heuristic thinking it would be logically inconsistent to formulate a mathematical theory unless they first posit that the discipline of mathematics is in and of itself a highly sophisticated heuristic language--heuristics on steroids, as it were. The first case illustrative of this assertion would be the recognition that mathematics relies on symbolic language, that is, heuristic shorthand, to represent ideas.

    The core debate seems to be one of scope: where in our attempt to represent reality do we draw the distinction between that which is a purely heuristic representation and that which is a purely mathematical representation? It would seem that their purest states would neither represent reality because both are inherently symbolic, therefore inherently coarse, therefore inherently heuristic. Then, of course, the question becomes, Which is more useful? And our debate repeats in a pointless infinite loop.

    The act of formulating a mathematical theory of heuristics without the recognition that human language in all of its manifestations, mathematics included, is heuristic would seem to render the theory null at the outset. Heuristics are a relative and not absolute means of assessing problems.

    In the human economy of ideas, the academic marketplace is, of course, one of the most powerful when it comes to assigning value to ideas. Distilling ideas into mathematical theories is fundamental to assigning value; hence, in order to compete in this marketplace the heuristic crowd will, of necessity, formulate a mathematical theory that will be fundamentally flawed at its foundation. The theory will not be heuristic because there are generally accepted and different meanings to the words "heuristic" and "mathematical" and that which is heuristic cannot also be mathematical.

    Speaking of absurdity, to my way of thinking these representations devolve into a pointless interdisciplinary squabble over who has the most elegant and functional language for describing human behavior.

    I agree that logical models may not be the right tool for studying and predicting human behavior. Given that the function of the brain in all creatures appears to be to evoke movement toward food or reproduction and away from becoming food, and that these fundamental drives are at the foundation of all relationships between and within species, it would seem logical that biochemistry might, after all, be the more elegant and functional language for understanding economics. One compelling reason for believing this is so is that biochemistry is an agent having the potential to affect every molecule in nature, whereas mathematics and logic reside only in the human mind, so far as we know. It would seem logical that the richest source of evidence for understanding the economy of movement in nature would be nature itself and certainly not in semantic exercises in search of the meaning of meaning and the mathematical derivatives thereof.

    My money is on brain science and the chemical logic of neurotransmitters. I believe our most fruitful gains in understanding human economic behavior will be distilled from the neurochemistry driving hunger, sex, and addiction. Logic and mathematics will be necessary and useful tools in this quest but inevitably secondary.

  7. Re: - Michael Webster.
    I sense here a rather analytical approach to the problems/scenarios discussed (almost too logical) - a lack of understanding of the importance and significance of 'wholeness' within the reality of existence - Total Interdependence of Existence (TIE).

    While perhaps conjecture, I would suggest that recent implications arising out of epigenetics may in fact mean that our prevailing current selves are more a product of past experience passed on through genetics than we might currently like to admit.

    The process of mind, based as it is on the 'interpretation' of brain state as created both by sensory stimulation and mental process initiated by cellular modification due to response to condition caused by 'representational' stimulation to both body and brain, can only ever be based on a very limited but unique exposure to the totality of reality and consequentially can only ever produce a prevailing state of mind that is unique and highly prone to error. No two individuals will ever have the same state of mind i.e. will never think in exactly the same way - or in other words logic, in human terms, is itself unique. Hence the difficulty of achieving reliable economic modelling.

  8. To people trying to rationalise choosing B in the Linda the bank teller problem:

    I think there are some better examples of these behavioural irrationalities. One would be the famous "framing effect" also from Kahneman and Tversky:


    Another would be the "cognitive reflection test". Have a go at it (on average less than half of people get all three questions correct):


Can we simply say that people tend to be irrational and quite often do not follow the rules of logic? I would say yes. Then my point is to what extent we can "predict" this irrationality, and with which tools. However, the reality is that sometimes it's not irrationality but stupidity or similar behaviours. And THE BASIC LAWS OF HUMAN STUPIDITY are more difficult to frame than irrationality.

@ Garth, in an email exchange I had with Dan Ariely, he stated:

    "I think that part of the issue is that we have very general algorithms to make decisions and that we apply them broadly, and to cases that they were not designed for."

    I agree with this, and the conclusion is that sometimes we have to review these general algorithms by working things out with paper and pencil.

It is completely unlikely that any reductivist program -- reducing decisions to brain phenomena -- is going to work. This is crappy science.

    @JDE, how would you know if people did or did not have the same state of mind? If my state of mind is different from yours, then how would you know this? Presumably there would be some way to compare on an objective level these two states of minds to see that one was different from another. What do you propose, then?

    @pnewall, Nobody doubts the attractiveness and cleverness of many of the T&K problems. What is at issue is what follows for a normative theory of choice, given that the normative theory fails descriptively. Many argue, and I agree for the most part, that very little follows.

    For example, using a different field, if children don't do their arithmetic correctly, even if they make the same error predictably, we don't rush to a "behavioral theory" of adding. The kid is just wrong, and hopefully will learn how to correct their error.

    @MG, yes people can make mistakes. How should we correct them? With a good normative theory which is descriptively false, or some other way?

  11. But we can use the behavioural theory to help the kid do better!

For example, many investors in defined-contribution investment plans use "naive diversification" -- putting roughly equal amounts of dollars into each investment category.

Knowing this, we can set up the list of categories so that a naive diversifier will end up with a sensible portfolio, hopefully taking neither too much nor too little risk for the majority of people.
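The commenter's mechanism -- splitting contributions 1/n across whatever menu is offered -- is easy to sketch. The fund menus and dollar amounts below are hypothetical:

```python
# Naive 1/n diversification: the saver splits each contribution equally
# across whatever categories the plan menu happens to offer.
# Menu names and dollar amounts are hypothetical.

def naive_diversify(contribution, menu):
    share = contribution / len(menu)
    return {fund: share for fund in menu}

# A stock-heavy menu leaves the naive diversifier 75% in equities...
menu_a = ["us_stocks", "intl_stocks", "small_cap_stocks", "bonds"]
# ...while one stock fund plus three fixed-income funds yields 25% in equities.
menu_b = ["stocks", "govt_bonds", "corp_bonds", "money_market"]

print(naive_diversify(10000, menu_a))  # 2500.0 in each of the four funds
print(naive_diversify(10000, menu_b))
```

The point is that the heuristic itself never changes; the plan designer shapes the outcome entirely through the composition of the menu.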

  12. Rick,
    Nice post. You did a great job of distilling Gigerenzer. Regarding "coarse and robust" decision-making in the wild, you and your readers might be interested in and enjoy what I regard as the classic article on this subject, Ronald Heiner's "Origin of Predictable Behavior" (The American Economic Review, Vol. 73, No. 4, Sept., 1983). Heiner is superb in defining and quantifying the conditions in which coarse and robust heuristics and rules are superior to--and indeed MUST be superior to--optimization based decision making.
    A non-gated version is available here:

  13. @michael webster.
    It is not so much a case of knowing that two or more people have the same state of mind as realising that it is impossible for that to be the case.

The inability of two material objects to exist in the same place at the same time, space-time occupancy, ensures that our individual exposure to existence is only ever unique. Combine this with the uniqueness of state that is evolution and consider how the physical state of the brain, as it exists in the present, is a product of past exposure to existence, and its uniqueness becomes obvious. Since mind is a product of the physical state of the brain, the uniqueness of mind follows.

  14. To people debating over choosing A or B in the quick problem... Draw it as a Venn diagram. B is entirely engulfed by A. A is true for ANY case which B is true, and some which it is not. B is never true if A is not true.

There is a very good account of the history of behavioral econ in a YouTube video by an EU guy.

It offers evidence that BE was a very purposeful marketing campaign by a handful of foundation leaders to rescue classical econ, and that it was highly ideological and programmatic. It worked. Nobel Prizes count.

In fact, BE, like econ, is not an evidence-based body of knowledge, but primarily ideological theories with no ability to disconfirm. Yet a Nobel Prize every year!

    Here is a bit of reality testing -- http://bizbrain.tumblr.com/post/3870286757/real-science-skepticism-over-behavioral-econ

"Nudge" is just another pop fad and desperate stealing from psychology and cognitive neurosciences to keep credibility. No Nobels for those experimental disciplines and real sciences!