Active management is about delivering good returns within an acceptable level of risk. During the past few years there has been a considerable rise in the use of bespoke return objectives. Bespoke risk objectives are not yet as common, but consultants (and clients) are catching on fast. For example, a client wishing to set its fund manager a risk objective might require it to keep underperformance in any given year below, say, 3%. Alternatively, the risk objective could be framed in terms of keeping the ‘tracking error’ (usually the forward-looking tracking error) of the portfolio within some suitable range – for example, within the range 1–4% a year.
But how easy is it in practice to control risk? Is it as easy as turning a risk ‘dial’ up or down? And how reliable are attempts to impose risk parameters on fund managers?
It is in theory possible to increase or reduce your level of risk merely by scaling every active position within the portfolio up or down by a single constant multiplier – doing so scales the portfolio’s forward-looking tracking error by exactly the same factor, as the sketch below illustrates. But risk objectives do not exist in some kind of splendid isolation, divorced from other aspects of portfolio management – or, for that matter, personnel management. As soon as human judgement comes into play, differences will appear that may disrupt this simple linear scaling.
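A minimal numpy sketch of this linear scaling, using purely illustrative active weights and an arbitrary covariance matrix (none of these numbers come from any real portfolio or risk model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 5-stock example: active weights (summing to zero) and an
# arbitrary positive semi-definite covariance matrix of annual returns.
active = np.array([0.04, -0.03, 0.02, -0.02, -0.01])
A = rng.normal(size=(5, 5))
cov = A @ A.T / 5

def tracking_error(w, cov):
    """Ex-ante tracking error: standard deviation of the active return."""
    return np.sqrt(w @ cov @ w)

te = tracking_error(active, cov)
for k in (0.5, 1.0, 2.0):
    # Scaling every position by k scales the tracking error by k
    print(k, tracking_error(k * active, cov) / te)  # prints 0.5, 1.0, 2.0
```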
Furthermore, human judgement appears in several other guises. For example, the first sort of risk objective described above, a simple underperformance avoidance objective, is suspiciously like one of the objectives included in the investment management agreement between the Unilever pension scheme and Merrill Lynch Investment Management (MLIM). Given the reputed size of the out-of-court settlement eventually paid by MLIM to the scheme, lawyers working for fund managers will make every effort to ensure that risk objectives are ‘aspirational’ in nature rather than imposing strict legal liabilities.
You also need to be aware that simple underperformance objectives will almost inevitably be breached given enough time. For example, if I continuously run a forward-looking tracking error of, say, 3% a year then there should be a probability of about 16% of underperforming by more than 3% over any given calendar year. Here we make the usual assumptions that active returns are normally distributed around zero and that the model used to calculate the tracking error is appropriate. Simple probability theory indicates that, under the same assumptions, it is more likely than not that over the next five years there will be at least one calendar year when the underperformance is worse than 3%. And over the next 31 years, it is more likely than not that there will be at least one calendar year when the underperformance is worse than 6%.
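These figures follow directly from normal tail probabilities. A short Python check reproduces them, assuming (as the argument above does) that calendar years are independent:

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

sigma = 3.0  # tracking error, % per year; active return ~ N(0, sigma^2)

# P(underperform by more than 3% in one year) = P(Z < -1), about 16%
p3 = norm_cdf(-3.0 / sigma)
print(f"P(< -3% in a year):  {p3:.3f}")              # ~0.159

# Chance of at least one such year out of 5 exceeds one half
print(f"5-year breach prob:  {1 - (1 - p3)**5:.3f}")  # ~0.58

# P(worse than -6%) = P(Z < -2), about 2.3%; over 31 years the chance
# of at least one such breach also tops one half
p6 = norm_cdf(-6.0 / sigma)
print(f"31-year breach prob: {1 - (1 - p6)**31:.3f}") # ~0.51
```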
You might therefore conclude that there is a compelling case for expressing risk targets in the form of forward-looking tracking errors (or other similar risk measures) rather than simple underperformance triggers. And indeed the majority of explicit risk objectives do now seem to be framed in this fashion. But these sorts of objectives still do not eliminate human judgement. They just make the subjectivity less obvious.
All risk systems make assumptions about what might be the key drivers influencing future performance. The choice of these risk factors is heavily influenced, implicitly or explicitly, by which factors seem to have mattered most in the relatively recent past. This reliance on past data inherently limits the accuracy of any risk model that might be developed. This is not just because the past is not necessarily a good guide to the future. It is also because there is not enough data available from the past to identify all the risk factors accurately. This latter point is not one that you often hear about from suppliers of risk modelling systems.
What happens if a risk system does not include a factor that is actually quite important to how stocks behave? For example, in the past few years, our UK equity team have focused particularly strongly on companies that were well placed to benefit from change in all its various guises. Thus we tended to emphasise visibility of earnings, strong market positioning versus competitors, companies with relatively low debt, etc. Focusing on this sort of stock has been very helpful to our flagship UK equity pooled pension fund over the past three years. Figure 1 shows that the fund’s information ratio has been higher than that of nearly all of its competitors over this period, particularly those competitors running similar risk profiles.
This particular type of stock characteristic does not obviously fit well with the sorts of risk factors that, say, the Barra UK equity model uses. It is of course up to providers of risk systems to choose how they construct their risk models. However, if a risk model misses a style that is actually of importance to portfolio behaviour, then it will typically underestimate the potential volatility of any portfolio that expresses this style strongly. This perhaps explains why estimated tracking errors for our UK equity portfolios, as calculated using the Barra risk engine, seem generally to have understated the tracking errors actually observed over the past few years.
But is it realistic to assume that with sufficient care and attention all such potential weaknesses of risk models could be eliminated? Actually, the underlying mathematics means that no risk system is ever likely to capture all the main factors that might drive markets in the future in a totally robust fashion.
The cornerstone of most risk models is a large correlation or covariance matrix describing the interactions between all the different securities covered by the risk model. If, say, the model covers 1,000 securities, then this matrix contains 1,000 × 999/2 = 499,500 distinct off-diagonal elements. But suppose the risk model is implicitly or explicitly dependent on the behaviour of these securities over the past three years of monthly data. There are then only 36 monthly observations per security – 36,000 data points in total – on which the matrix depends. As a result, most of the 499,500 elements of the covariance matrix cannot be estimated independently of each other.
Indeed, with 36 months’ worth of data it is only mathematically possible to identify at most 35 distinct risk factors, however many securities are being analysed. This is because every possible time series that is 36 elements long (and averages zero) can be reconstructed as a linear combination of 35 suitably chosen base series. Longer data series (or possibly ones that involve more frequent sampling of data) do potentially allow you to identify more risk factors. But the older the data the less relevant it may be as a guide to how securities might behave in the future (particularly for new issues!), and the more frequent the sampling the more the risk system may be focusing on factors that wash out over longer time frames.
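The rank limit is easy to verify numerically. The sketch below, using illustrative simulated returns rather than any real risk model, builds a 1,000 × 1,000 sample covariance matrix from 36 months of data and confirms that its rank – the number of identifiable factors – is capped at 35:

```python
import numpy as np

rng = np.random.default_rng(1)

n_stocks, n_months = 1000, 36
returns = rng.normal(size=(n_months, n_stocks))  # simulated monthly returns

# Demean each stock's series, as a sample covariance estimate does implicitly
demeaned = returns - returns.mean(axis=0)
cov = demeaned.T @ demeaned / (n_months - 1)     # 1,000 x 1,000 matrix

# Despite its 499,500 distinct off-diagonal entries, the matrix has rank
# at most n_months - 1 = 35, however many securities are analysed.
print(np.linalg.matrix_rank(cov))  # 35
```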
And when you analyse, say, 36 months’ worth of monthly data further, you often find that most of the factors you can actually extract from the data are statistically indistinguishable from what might arise at random. This is illustrated in Figure 2. It shows the contribution to predictive power of different factors influencing stocks in the MSCI USA Index. The factors have been identified by a purely statistical analysis of the past data (and ordered so that the most important factors, for risk control purposes, are at the left-hand end). It then compares these contributions with what you would expect to see at random were every stock completely independent of the others. Only a relatively small number of factors seem, when added to the model, to contribute significantly more predictive power than you would expect purely by chance.
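A comparison in the same spirit as Figure 2 can be sketched with a so-called parallel analysis: compare the eigenvalue spectrum of a sample correlation matrix against spectra obtained from pure noise, and count how many factors beat the random benchmark. The data below is simulated (three planted factors buried in noise), purely to illustrate the idea, not to reproduce the MSCI analysis:

```python
import numpy as np

rng = np.random.default_rng(2)
n_stocks, n_months = 100, 36

# Simulated returns with three genuine common factors buried in noise
factors = rng.normal(size=(n_months, 3))
loadings = rng.normal(size=(3, n_stocks))
returns = factors @ loadings + 2.0 * rng.normal(size=(n_months, n_stocks))

def sorted_eigenvalues(data):
    """Eigenvalues of the sample correlation matrix, largest first."""
    corr = np.corrcoef(data, rowvar=False)
    return np.sort(np.linalg.eigvalsh(corr))[::-1]

observed = sorted_eigenvalues(returns)

# Random benchmark: average spectrum of pure-noise data sets of equal size
baseline = np.mean(
    [sorted_eigenvalues(rng.normal(size=(n_months, n_stocks)))
     for _ in range(50)],
    axis=0,
)

# Count factors whose explanatory power beats the random benchmark
print(np.sum(observed > baseline))  # typically ~3: only the planted factors
```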
So it is in principle possible to design investment processes that scale up or down risk pretty effectively, given careful attention to man management and other qualitative aspects of investment management. For example, we offer both ‘Core’ and ‘High alpha’ variants of several of our products. But risk measurement is an inherently uncertain activity, limited by an inescapable lack of data. Whilst the risk ‘dial’ can be turned up or down given a suitable investment process, there is no sure anchor point around which any such dial can be positioned.
Malcolm Kemp is a director of Threadneedle Pensions and head of quantitative research at Threadneedle in London