In the classical model of decision making we identify our preferences by systematically valuing each available alternative: multiplying the probability of each outcome by its ‘utility’ and summing the results. Such a hyper-rational approach is clearly unrealistic and fails to provide a descriptive account of how we actually behave. Whilst most of our judgements must, either implicitly or explicitly, combine some assessment of potential outcomes and their probabilities, our perception of both is dominated by our behavioural biases.
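As a purely illustrative sketch of that classical calculation – the outcomes, probabilities and square-root utility function below are arbitrary assumptions, not drawn from any source – expected utility is simply the probability-weighted sum of the utility of each outcome:

```python
# Illustrative sketch of the classical expected-utility calculation.
# The gamble and the square-root utility function are arbitrary choices.
from math import sqrt

def expected_utility(outcomes_and_probs, utility=sqrt):
    """Probability-weighted sum of the utility of each possible outcome."""
    return sum(p * utility(x) for x, p in outcomes_and_probs)

# A hypothetical gamble: 90% chance of 100, 10% chance of 2,500.
gamble = [(100, 0.9), (2500, 0.1)]
print(expected_utility(gamble))  # ≈ 14.0: 0.9*sqrt(100) + 0.1*sqrt(2500)
```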
Of particular interest for this post is our handling of probability. This encompasses both how we ascertain the probability of a given occurrence and how we use that probability to inform our decision making. Most of the choices we make are made under uncertainty – we are never aware of the true probabilities attached to a given decision – so our views are highly subjective and context dependent.
A number of anomalies have been identified in our treatment of probability; most prominent is Tversky and Kahneman’s Cumulative Prospect Theory (1992), which posits that individuals tend to overweight low probability events with extreme outcomes. This behaviour has been documented in a number of fields, such as lotteries (Clotfelter & Cook, 1990) and horse racing (Snowberg & Wolfers, 2010), where the favourite is ‘underbet’ and the longshot ‘overbet’.
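For reference, Tversky and Kahneman (1992) estimate a probability weighting function for gains of the form below. This minimal sketch – using their reported parameter of roughly 0.61, with the specific probabilities chosen purely for illustration – shows how a small probability is inflated while a large one is deflated:

```python
# Sketch of the Tversky & Kahneman (1992) probability weighting function
# for gains: w(p) = p^g / (p^g + (1-p)^g)^(1/g), with g ≈ 0.61.
def weight(p, gamma=0.61):
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

print(round(weight(0.01), 3))  # ≈ 0.055: a 1% chance is weighted like ~5.5%
print(round(weight(0.90), 3))  # ≈ 0.712: a 90% chance is underweighted
```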
Whilst there is evidence of an inclination to overweight low probability, extreme events, research on insurance somewhat contradicts this notion. Kunreuther & Pauly (2004) showed that individuals often failed to protect themselves from the consequences of catastrophic loss, even in situations where the cost is subsidised and the policy underpriced (Camerer & Kunreuther, 1989). Thus, in certain circumstances, rather than overstate the probability of an extreme event, we tend to disregard it completely.
McClelland, Schulze and Coursey (1993) studied insurance behaviour in a laboratory setting and found that responses to low probability risks followed a bimodal distribution – that is, participants were prone either to dismiss or to exaggerate the risk presented. Slovic, Fischhoff, Lichtenstein, Corrigan & Combs (1977) reported that individuals were reluctant to purchase insurance when the probability fell below some threshold – a threshold that appeared to be specific to each individual.
This bimodal treatment of low probabilities suggests that a probability threshold is inherent in decision making: the presumed probability of an occurrence must be above a particular level before it is deemed worthy of consideration. In the context of disaster insurance, Kunreuther & Pauly (2004) state that individuals need to perceive the probability of an occurrence as above a threshold level before they even begin to search for detailed information on the value of protection.
The existence of a probability threshold might be considered an effective adaptation, allowing us to filter ‘noise’, limit worry and focus our attention on areas we deem pertinent. Furthermore, given that it is often difficult to evaluate the probability of an event, a simple ‘relevance heuristic’ that allows us to discard risks seems prudent and efficient. However, this behaviour is highly problematic – not only is it injudicious to reject certain high consequence risks entirely, but our ability to accurately gauge probabilities is severely limited and impacted by a swathe of behavioural biases, including availability, recency and salience (we will cover these in more detail in later posts). Thus, we may be ignoring a particular risk that we subjectively perceive to carry a low probability, when in reality the likelihood is significantly greater.
The implications of this phenomenon are important for all investors – from individual stock decisions to macroeconomic postulations – as our ability to understand probabilities and appropriately consider risks is paramount in the evaluations we make. There is no simple solution to this issue, but there are a number of partial remedies:
– Give specific probabilities to risks: We should be more willing to ascribe probabilities to risk factors. This is not with any expectation of our estimates being right – we will almost certainly be wrong – but because it allows us to be open and explicit about our thinking, and to compare it to how our portfolios are positioned. If we are running scenario analysis we should not simply look at the potential magnitude, but be explicit about the perceived likelihood of such an event. Formally assigning probabilities to risks also allows us to actively engage with our views, track how they evolve and react to new information.
– Source opinions from across a team: Our perspectives on probabilities are highly subjective and will often vary significantly between individuals; comparing and combining probability expectations may serve to offset or counter some of our individual biases (a simple sketch of recording and comparing such estimates follows this list).
– Take an external view: The caveat to the above is that the cognitive diversity within a team can often be limited. Obtaining perspectives from people remote from your group, and with no vested interest in giving a particular view, should enhance the breadth of thought.
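As a minimal sketch of how the first two remedies might be combined in practice – the risk descriptions, analyst names and numbers are entirely hypothetical, and simple averaging is just one possible aggregation rule:

```python
# Minimal sketch: record each team member's subjective probability for a
# named risk, then compare the average and the spread of views.
from statistics import mean, stdev

# Hypothetical risks and subjective probability estimates from three team members.
estimates = {
    "large drawdown in holding X over 12 months": {"analyst_a": 0.02, "analyst_b": 0.10, "analyst_c": 0.05},
    "recession in region Y within 12 months": {"analyst_a": 0.25, "analyst_b": 0.40, "analyst_c": 0.30},
}

for risk, views in estimates.items():
    probs = sorted(views.values())
    print(f"{risk}: mean {mean(probs):.2f}, "
          f"range {probs[0]:.2f}-{probs[-1]:.2f}, spread {stdev(probs):.2f}")
```

A wide spread between individual estimates is itself useful information: it flags where the team’s subjective probabilities diverge most, and therefore where an external view may add the greatest value.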
Such activities are, of course, no panacea; we are poor at forecasting, and risks will often arise that were never previously considered, let alone discarded (black swans, unknown unknowns…). Furthermore, our commitment to our own beliefs is often so resolute as to be unshakeable, so the impact of the above approaches may be limited. These factors notwithstanding, the existence of a probability threshold and our propensity to ignore risks has profound ramifications for our investment decision making, and even small changes to our behaviour could yield material benefits.
Key reading:
Camerer, C. F., & Kunreuther, H. (1989). Decision processes for low probability events: Policy implications. Journal of Policy Analysis and Management, 8(4), 565-592.
Clotfelter, C. T., & Cook, P. J. (1990). On the economics of state lotteries. The Journal of Economic Perspectives, 4(4), 105-119.
Kunreuther, H., & Pauly, M. (2004). Neglecting disaster: Why don’t people insure against large losses?. Journal of Risk and Uncertainty, 28(1), 5-21.
McClelland, G. H., Schulze, W. D., & Coursey, D. L. (1993). Insurance for low-probability hazards: A bimodal response to unlikely events. In Making Decisions About Liability And Insurance (pp. 95-116). Springer Netherlands.
Slovic, P., Fischhoff, B., Lichtenstein, S., Corrigan, B., & Combs, B. (1977). Preference for insuring against probable small losses: Insurance implications. Journal of Risk and Insurance, 237-258.
Snowberg, E., & Wolfers, J. (2010). Explaining the Favorite–Long Shot Bias: Is it Risk-Love or Misperceptions?. Journal of Political Economy, 118(4), 723-746.
Tversky, A., & Kahneman, D. (1992). Advances in prospect theory: Cumulative representation of uncertainty. Journal of Risk and Uncertainty, 5(4), 297-323.