Owning Quant Funds is Not Easy

2018 was a horrendous year for many quantitative funds and their investors (I speak from personal experience).  Although I do not wish to add to the commentary on the drivers of this particularly difficult period, it has brought into sharp contrast how different owning a systematic strategy is from holding a fund with a more traditional, human-led investment approach. Whilst both are often rightly grouped under the active banner, this label obscures the specific behavioural challenges investors face when holding a quant strategy – particularly when performance is poor:

Nobody Else to Blame – I have written previously about active fund investors suffering from a form of reverse disposition effect, that is, a propensity to run winners and cut losers (unlike individual stock pickers).  This is because fund selectors benefit from an attractive form of optionality – if the fund we have chosen delivers outperformance then it is due to our superior selection skills, whereas if it struggles we can claim that the underlying fund manager is behaving in a manner that is inconsistent with our expectations (a healthy dose of outcome bias is also at play here).  This argument, however, does not hold for quant funds – in most cases we are investing in a defined system or process; if the strategy fails it is far more difficult to apportion responsibility elsewhere – the process hasn’t changed, you picked it and it didn’t work.  Unlike qualitatively driven funds, there is no get out of jail free card.

Curse of Consistency – Somewhat ironically, the majority of quant funds possess characteristics consistent with what most fund selectors say they seek in traditional active managers – a clear philosophy and a disciplined investment process / decision making structure that will be applied diligently through varying market conditions. Unfortunately, whilst prudent on paper, the stated preferences of most fund selectors do not really hold under stress. When active funds suffer marked underperformance the reaction of investors is typically not: ‘I’m glad you are remaining faithful to your process through this difficult time’, but rather: ‘things are going wrong, show me what you are doing about it’. This attitude is a major problem for quant funds, as in most circumstances their response to poor performance should be to keep applying the process consistently, on the basis that it will deliver over the long run. A strategy doing the same thing when it is not working for a sustained period is often unpalatable for investors, even if it is the right approach to adopt.

Does the Factor Still Work? – Perhaps the most significant problem for investors in quant funds pertains to factor-based strategies, which seek to exploit market anomalies to deliver a risk premium.  Owning such strategies requires a belief that the underlying factors exist (are robust) and will persist. It is this latter point that is the most challenging. Given that we can never be certain why a particular factor has delivered a premium (we can only opine), we can equally never be sure whether it will continue to work. Perfectly valid factors can struggle for long spells and it is difficult, if not impossible, to decipher whether these are the result of a structural shift extinguishing the factor premium, or a ‘temporary’ phenomenon. This uncertainty makes it particularly difficult for myopic investors to persist with such strategies. Even if we pick the right factors we will have to sit through long periods when everybody is telling us they are broken.

Good Decision / Bad Outcome – Most quant funds are structured around decision rules / algorithms that deliver on average, when applied over the long term.  By definition, this means that there will be phases when they do not and, with a liberal dose of leverage applied, these can be painful.  Even a strategy with a high Sharpe ratio, investing in proven factors, is prone to experience drawdowns that can be multiples of its long-term expected volatility.  Averages hide a multitude of sins, and sensible decisions can come to look anything but.
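To make the point about drawdowns concrete, here is a minimal Monte Carlo sketch. The parameters are my own assumptions for illustration – a Sharpe ratio of 1.0 from 10% annual return on 10% annual volatility, normally distributed daily returns – not figures from any specific fund:

```python
import numpy as np

# Stylized sketch: a strategy with an (enviable) Sharpe ratio of 1.0,
# simulated over 10 years, still produces deep peak-to-trough drawdowns.
rng = np.random.default_rng(42)
years, days = 10, 252
mu, sigma = 0.10 / days, 0.10 / np.sqrt(days)   # daily drift and volatility

def max_drawdown(path):
    """Largest peak-to-trough fall of a cumulative wealth path."""
    peaks = np.maximum.accumulate(path)
    return np.max(1.0 - path / peaks)

n_paths = 5_000
returns = rng.normal(mu, sigma, size=(n_paths, years * days))
wealth = np.cumprod(1.0 + returns, axis=1)
drawdowns = np.array([max_drawdown(p) for p in wealth])

print(f"median max drawdown: {np.median(drawdowns):.1%}")
print(f"95th percentile:     {np.percentile(drawdowns, 95):.1%}")
```

Even with these generous assumptions, a large share of the simulated decade-long paths contain a drawdown greater than a full year of the strategy’s expected volatility – a ‘good decision’ that will, at some point, look like a bad one.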

Black Box Stigma – Quant funds unquestionably carry a stigma. They are blamed for a variety of ills, including (simultaneously) subdued market volatility and extreme bouts of volatility (apparently severe short-term market declines only began occurring with the onset of algorithmic trading). Of course, we should never invest in something we don’t understand – but this applies to all types of strategies.  How much do we really know about the genuine drivers of decision making in a human-led investment process? Is the behaviour of a systematic trend following strategy more opaque than that of a discretionary global macro manager?

Treating quantitative funds as one homogenous group is not particularly helpful and obscures the sheer array of approaches that can be broadly classified in this cohort.  Each strategy should be assessed on its own merits – there are bad quant strategies just as there are poor qualitative strategies.  Investors, however, need to be acutely aware of the distinct behavioural challenges that arise from owning systematic strategies and be prepared to manage them if they are to invest successfully in such approaches.


Can More Information Lead to Worse Investment Decisions?

It is without question that investors now have easy access to more information than ever to guide decision making; optically, this surfeit of data appears to be a positive – who doesn’t want more ‘evidence’ to inform their judgements? Yet there are a number of potential drawbacks, most notably the challenge of disentangling signals from a blizzard of noise in order to make consistent decisions.  For this post, I want to specifically address the potential consequences of information growth and its impact on our precision and confidence levels.  Whilst we often believe that more information can improve our accuracy (the number of correct decisions we make), in certain situations all it may be doing is increasing our (unfounded) confidence.

There have been a number of studies in this area, the majority of which reach similar conclusions.  Tsai, Klayman and Hastie (2008)[i] tested the impact of additional information on an individual’s ability to predict the results of college football games, and their confidence in doing so correctly.  Participants in the study had to forecast a winner for a number of games based on anonymised statistical information.  The information came in blocks of 6 (so for the first round of predictions each participant had 6 pieces of data); after each round of predictions they were given another block of information, up to 5 blocks (or 30 data points), and had to update their views.  Participants were asked to predict the winner and to state their confidence in that judgement, between 50% and 100%. The aim of the experiment was to understand how increased information impacted both accuracy and confidence.  Here are the results (taken directly from the study):

[Chart from the study: prediction accuracy and confidence plotted against the amount of information provided.]
The contrasting impact of the additional information is stark – the accuracy of decision making is flat (decisions were little better with 30 statistics than with just 6), yet participant confidence in selecting the winner increased materially and consistently.  When we come into possession of more, seemingly relevant, information, our belief that we are making the right decision can be emboldened even if there is no justification for this shift in confidence levels.

For this research, the blocks of information were provided at random and the participants were amateurs – would the same relationship hold for professionals who were able to select the information they believed to be most pertinent?  An unpublished 1973 study by Paul Slovic (cited by the CIA[ii]) takes a similar approach, but in this case with experienced horse race handicappers. Unlike in the college football study, the handicappers were allowed to rank the available information by perceived importance (from a list of 88 variables) and then had to predict the winner of an anonymised race when in possession of 5 pieces of information, then 10, 20 and 40 (in order of their specified preference / validity).  The results were consistent with the aforementioned football study – accuracy was broadly flat as more information became available, but confidence increased as the number of available statistics rose.

There are two important issues for investors to consider when looking at this type of outcome: i) there are probably fewer relevant pieces of information than we think; ii) there are a number of negatives around the accumulation of too much information – one of which is overconfidence.

More information does not necessarily lead to better decisions: In the investment industry it can often feel as if it is the amount of information or evidence that matters, rather than its validity – as if, provided a research report is long enough, its conclusion must be sound.  I would contend, however, that for many investment decisions there are only a handful of information points that are relevant, distinct, and materially impact the probability of a positive outcome.  If this is the case, why is there such a desire for more and more information?

– We don’t know what that relevant information is, therefore we include everything we can find.

– We struggle to realise that many pieces of information are telling us the same thing.

– In random markets, noise can be mistaken for relevant information.

– If a decision goes wrong, we at least want to show that we did a lot of research to support it.

– It is difficult to sell our investment wares if we simplify our decision making to a select few variables.

– If we make simple decisions based on a narrow range of information we can look lazy, inept and unsophisticated.

– We feel more comfortable / confident in a decision if it is ‘supported’ by more evidence.

– It is possible that information that was once relevant ceases to be so because of some ‘regime shift’.

This combination of factors (and others I have failed to mention) means that it is incredibly difficult to resist accumulating information rather than seeking to identify the information that actually matters.

More information can lead to overconfidence: It is not simply that more information might fail to improve our decision making accuracy; it can also leave us more overconfident and poorly calibrated in our judgements.  Whilst we often believe that ‘new’ information bolsters the case supporting our choices, on many occasions this additional evidence may simply be a repetition of prior information (merely in a different guise) or be erroneous, with no predictive power (a major problem in an environment marked by uncertainty and randomness, where things that look like they matter actually do not).  As we receive more information, therefore, we are prone to believe that we are more accurate in our decisions, when there is often no justification for this. This can create an anomalous situation where behaviour consistent with being diligent and thorough actually results in worse investment decisions.
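The mechanism can be sketched in a few lines of code. This is purely my own stylised construction (not data from the studies cited above): one genuinely informative signal is padded with extra signals that mostly just echo it, and a naive updater that treats every signal as independent evidence grows ever more confident while its accuracy barely moves:

```python
import numpy as np

rng = np.random.default_rng(0)

def trial(n_signals, p_base=0.65, p_echo=0.9):
    """One forecast: a binary truth, one real signal, n-1 echoes of it."""
    truth = int(rng.integers(0, 2))
    core = truth if rng.random() < p_base else 1 - truth
    signals = [core] + [core if rng.random() < p_echo else 1 - core
                        for _ in range(n_signals - 1)]
    # Naive updater: treats every signal as an independent, 65%-accurate cue
    unit = np.log(p_base / (1 - p_base))
    log_odds = sum(unit if s == 1 else -unit for s in signals)
    guess = 1 if log_odds > 0 else 0
    confidence = 1 / (1 + np.exp(-abs(log_odds)))  # stated chance of being right
    return guess == truth, confidence

accuracy, confidence = {}, {}
for n in (1, 6, 18, 30):                 # mimic growing blocks of information
    outcomes = [trial(n) for _ in range(4_000)]
    accuracy[n] = np.mean([hit for hit, _ in outcomes])
    confidence[n] = np.mean([c for _, c in outcomes])
    print(f"{n:2d} signals: accuracy {accuracy[n]:.2f}, "
          f"average confidence {confidence[n]:.2f}")
```

Accuracy stays pinned near the 65% hit rate of the single real signal, while the naive confidence climbs towards certainty – the same shape as the football and horse-racing results.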

Judging the balance between carrying out sufficient research and becoming overly confident by collecting reams of superfluous data is fraught with difficulty; however, all investors should think more about what the most relevant information is, rather than concentrate simply on accumulating more.  For professional investors, a simple idea is to decide which pieces of information they would use if there were a restriction (of, say, only 5 or 10 items) and then monitor the outcomes of decisions made utilising only these select variables.  Such an approach forces us to think about what evidence really matters to us, whether it is effective, and what value we might add over and above such a basic method.
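One way such a restriction could be operationalised is a simple decision journal that pre-declares the permitted variables, refuses entries that stray beyond them, and tracks the hit rate of the pared-down process. All names and variables below are hypothetical, chosen purely for illustration:

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

# Hypothetical, illustrative checklist – in practice each selector would
# pre-commit to their own 5-10 most relevant variables.
CHECKLIST = ("valuation", "momentum", "fees", "tenure", "turnover")

@dataclass
class Decision:
    fund: str
    inputs: Dict[str, float]          # must cover exactly the checklist
    buy: bool
    outcome: Optional[bool] = None    # filled in later: was the call right?

    def __post_init__(self):
        if set(self.inputs) != set(CHECKLIST):
            raise ValueError("decision must use exactly the pre-declared variables")

def hit_rate(journal: List[Decision]) -> float:
    """Share of resolved decisions that turned out to be right."""
    resolved = [d for d in journal if d.outcome is not None]
    return sum(d.outcome for d in resolved) / len(resolved)

journal = [
    Decision("Fund A", dict.fromkeys(CHECKLIST, 0.0), buy=True, outcome=True),
    Decision("Fund B", dict.fromkeys(CHECKLIST, 0.0), buy=False, outcome=False),
    Decision("Fund C", dict.fromkeys(CHECKLIST, 0.0), buy=True),  # not yet resolved
]
print(f"hit rate so far: {hit_rate(journal):.0%}")   # 1 right out of 2 resolved
```

The point of the hard validation is behavioural rather than technical: it makes it awkward to smuggle in a thirty-first ‘supporting’ statistic after the fact.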

[i] Tsai, C. I., Klayman, J., & Hastie, R. (2008). Effects of amount of information on judgment accuracy and confidence. Organizational Behavior and Human Decision Processes, 107(2), 97-105.

[ii] https://www.cia.gov/library/center-for-the-study-of-intelligence/csi-publications/books-and-monographs/psychology-of-intelligence-analysis/art8.html