The days are long gone when fund managers simply got a pat on the back from clients for reporting they had miraculously caused their assets under management to grow yet again. Instead, their success or failure is now measured against a benchmark performance. Yet, investment consultants, who may well have introduced the client to the fund manager in the first place, have, until recently, escaped such close scrutiny.
Paul Myners, in his review of institutional investment conducted for the UK government, arguably recognised this imbalance. He said that trustees should not terminate manager appointments ‘before the expiry of the evaluation period for underperformance alone’, conscious that many trustees were at least measuring that performance. He went on to recommend that they monitor not only the performance of fund managers, but also the performance of their advisers (and the trustees’ own performance).
This has received a mixed response from the investment consultant community. Most have nodded in sympathy with the recommendation, but then drawn breath at how the subtle and difficult job of the consultant can possibly be measured. Common objections are that ‘I don’t take any decisions’, ‘I just give advice’, or ‘it’s not what I advise but how well I communicate the advice’.
These are valid concerns – the role of the consultant is rarely as well defined as that of the investment manager. Indeed, this is another area on which Myners focused, namely greater clarity in the decision-making process, which has led to the UK government objective for pension scheme trustees that ‘those responsible for decisions in law are those taking them in fact’. But such concerns can also be used as a smokescreen to avoid proper assessment of the consultant’s capabilities. This article therefore discusses how a consultant’s performance can be measured, and what the pitfalls are.
To begin to answer this question it is first necessary to establish what the consultant actually does. The ‘investment cycle’ described below covers the traditional areas of advice.
Typically, consultants will assist their clients in establishing objectives for investing in the assets – this might be to achieve as high a return as possible, or it might be to minimise a specific risk. They will then advise on a broad asset mix or investment strategy to best achieve these objectives (this generally involves compromise since the objectives often conflict). Finally, they will advise on suitable manager arrangements – both in terms of the types and combinations of managers and specific institutions. The arrangements will then be subject to monitoring on an ongoing basis. While these activities overlap each other, they are a helpful way of dividing up the role.
Of these steps, the manager’s performance is most susceptible to objective measurement. Each manager will be measured against a performance benchmark, and the consultant could, and arguably should, take responsibility for this performance. Success or failure could guide the trustees as to whether the consultant should be retained or replaced, or lead to an adjustment to the fees paid to the consultant.
There are complications and questions. Should consultants be measured on the basis of the managers they shortlisted, or on the specific manager chosen by the trustees (assuming this final decision rests with them)? Is it best to focus on individual managers or the overall arrangement (especially relevant if the overall arrangement deliberately combines two managers with complementary management styles)? Should the consultant be measured on the basis of the manager’s performance or on how efficiently results were achieved (ie, on the risk-reward trade-off or ‘information ratio’)? What if the trustees sack an underperforming manager against the advice of the consultant, or delay the implementation of a recommendation from the consultant? Over what timeframe should the performance be measured? Is it right to assess the consultant on the performance of a small number of managers when performance is down to a combination of skill and luck on the investment manager’s part?
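For readers unfamiliar with the term, the information ratio mentioned above is simply the manager’s benchmark-relative return divided by the variability of that relative return. The short Python sketch below illustrates the calculation; the monthly figures are invented for the purpose of the example and do not relate to any real manager.

```python
import numpy as np

def information_ratio(portfolio_returns, benchmark_returns, periods_per_year=12):
    """Annualised information ratio: mean active return divided by tracking error."""
    active = np.asarray(portfolio_returns) - np.asarray(benchmark_returns)
    mean_active = active.mean() * periods_per_year
    tracking_error = active.std(ddof=1) * np.sqrt(periods_per_year)
    return mean_active / tracking_error

# Illustrative monthly returns (hypothetical figures, not real manager data)
manager = [0.012, -0.010, 0.021, 0.008, -0.010, 0.015]
benchmark = [0.010, -0.006, 0.018, 0.007, -0.008, 0.011]
print(round(information_ratio(manager, benchmark), 2))
```

A higher ratio indicates that outperformance was achieved with less variability around the benchmark, which is why it is a natural candidate for judging how efficiently results were delivered.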
These are all important questions, many of which highlight just how vital it is to be clear about who is taking the decisions, especially if the consultant’s remuneration level depends on it. A good starting point, perhaps, is at least to know how well the investment consultant performs as a ‘house’. Put another way, how have the consultant’s ‘buy’ recommendations performed, has this performance been achieved with an acceptable level of risk, and has it been achieved without chopping and changing too often? This provides trustees not only with a track record, but also with a very clear incentive for all those involved in establishing the ‘buy’ recommendations to get the decisions right. A method used by our firm is to measure average performance over different investment categories in a year.
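As an illustration of that idea (the category names and figures below are invented, and equal-weighting within and across categories is only one possible convention), a house’s ‘buy’ list could be summarised as the average excess return of rated managers in each category, then averaged across categories for the year:

```python
# Hypothetical excess returns (vs benchmark, % for the year) of 'buy'-rated
# managers, grouped by investment category. Figures are illustrative only.
buy_list_excess = {
    "UK equities": [1.2, -0.4, 0.8],
    "Global bonds": [0.3, 0.5],
    "Property": [-0.2, 1.1, 0.6],
}

# Average within each category, then average the category scores
category_averages = {
    category: sum(returns) / len(returns)
    for category, returns in buy_list_excess.items()
}
house_score = sum(category_averages.values()) / len(category_averages)

for category, avg in category_averages.items():
    print(f"{category}: {avg:+.2f}%")
print(f"House average across categories: {house_score:+.2f}%")
```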
So far, so good. However, it is well known that choice of managers rarely has as much impact on overall asset performance as choice of asset classes. So how can advisers be measured in this area? This is a more difficult issue. Firstly, there are no obvious benchmarks. For example, if one pension scheme is pursuing a strategy geared towards good long-term returns, it is not appropriate to compare the performance achieved by that strategy against a different strategy geared to minimising a pension scheme’s funding level volatility.
A second problem is that, to a far greater extent than with manager choice, strategy is about probability. The investment consultant may advise that there is only a one in 20 chance that a particular strategy (for example, a particular ratio of equities to bonds) will fail to achieve a funding level in excess of a specific lower limit set by the trustees. But how will this be assessed? In 19 out of 20 cases, the consultant will be right (if his modelling was accurate), so should the consultant be patted on the back if the lower limit is not breached?
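To make the ‘one in 20’ concrete, the sketch below uses made-up numbers and assumes, purely for simplicity, that the projected one-year funding level is normally distributed; a real asset-liability model would be considerably more sophisticated.

```python
from statistics import NormalDist

# Hypothetical one-year projection for a given equity/bond mix
expected_funding_level = 110.0   # % of liabilities
funding_level_volatility = 9.0   # standard deviation, in percentage points
lower_limit = 95.0               # trustees' floor

# Probability that the funding level ends the year below the floor
prob_breach = NormalDist(expected_funding_level,
                         funding_level_volatility).cdf(lower_limit)
print(f"Chance of breaching the limit: {prob_breach:.1%}")  # roughly 1 in 20
```

The difficulty the article describes is that a single year’s outcome tells us almost nothing about whether that probability was well estimated: the limit will usually not be breached even if the modelling was poor.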
If the appetite for measurement grows, answers to this type of question are likely to evolve. One route may be to focus on whether the strategy has delivered the level of risk (ie, volatility) that was intended rather than a specific level of return (which the above example is based upon). The attraction of this is that the risk inherent in a particular strategy should be more consistent, and therefore more predictable, than the return. However, even this will cause problems if the measurement period is too short. The table illustrates that volatility has been reasonably consistent over the past 20 years, though by no means static, and that, looking further back, there are periods of extreme volatility.
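One simple way of making that comparison, sketched below with invented annual return figures and an assumed model volatility, is to set the volatility the strategy actually delivered over the measurement period against the volatility the consultant’s modelling anticipated:

```python
import statistics

# Hypothetical annual returns (%) delivered by the strategy, and the
# volatility assumed by the consultant's model when it was recommended.
realised_returns = [12.0, -3.5, 8.2, 15.1, -7.4, 9.8, 4.3, 11.0]
assumed_volatility = 10.0  # % per annum

# Over short periods this estimate is noisy, which is the caveat in the text
realised_volatility = statistics.stdev(realised_returns)
print(f"Realised volatility: {realised_volatility:.1f}% "
      f"vs assumed {assumed_volatility:.1f}%")
```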
In the meantime, there are other aspects of the strategic decision that may be more susceptible to measurement and reward. For example, the trustees of a pension scheme, working with the consultant, may establish which of the major asset classes of quoted equities and government bonds they are comfortable with in terms of the expected risk-return profile. Can the consultant generate extra return or a better risk-return trade-off than this strategy (eg, by holding corporate bonds as a substitute for government bonds, by establishing a more efficient mix of equities than the current arrangement, through exposure to alternative asset classes, or by removing unwanted risks such as currency)? If so, this can be used as a measure of the consultant’s success.
Strategy and manager choice are, then, the two key areas affecting asset performance and hence a legitimate focus for assessing consultant performance. Measurement is not straightforward in either area, but is certainly possible.
The danger is that in focusing on these aspects the ‘softer’ skills of consultancy get forgotten. For example, it’s no good complimenting a consultant for putting together a superb equity strategy if, with a little more probing by the consultant, the trustees would have concluded that equities were not for them.
Similarly, no one will thank the consultant who says “I told you so” when a client has acted against the consultant’s advice. The consultant’s role is not just to give good advice but to communicate it well, too. This has always been the case and is likely to remain so.
Paddy Hagan is a senior consultant at Mercer Investment Consulting in Leeds