Performance measurement continues to be an area of great interest and attention within the US investment community. We conducted a survey last year which showed that approximately 20% of those involved with performance hold the CFA designation.1 Even more hold a master’s in business administration.
The role has grown from simply providing annual rates of return to encompass compliance with presentation standards, analysis and interpretation of the results, monitoring the associated risks, and more. With this increased complexity, we’re finding salaries increasing as firms vie for the most talented resources.
Three areas within performance measurement are receiving the greatest attention:
• the performance presentation standards;
• performance attribution;
• risk measurement.
The presentation standards continue to be an area of importance, as we typically find large crowds attending Association for Investment Management and Research (AIMR)-sponsored conferences and training sessions. And, as a consulting firm, we are regularly called upon to help firms comply. The standards have evolved significantly since they went into effect eight years ago, with Global Investment Performance Standards (GIPS) being the latest contributor to change.
While the standards were originally targeted to pension fund managers, we’ve found mutual fund managers, insurance companies, banks and even brokerage firms wanting to comply.
Last year, we conducted our fourth survey on these standards. For the first time, in addition to querying about the AIMR-PPS, we included questions about the GIPS.
While over 91% of US managers either claimed compliance or planned to comply with the AIMR-PPS, fewer than 50% offered similar responses for GIPS. In fact, 25% said they had no plans to comply with GIPS, and a further 21% said they weren’t familiar with them. And why aren’t US managers complying with GIPS? Over 57% said they prefer the AIMR standards; 45% said they don’t believe the standards apply to them.2 This confirms what I have felt for some time: US managers have been confused about these new, “global” standards – and rightly so.
When the GIPS draft was circulated for comment, AIMR intentionally avoided distributing it within the US, to keep the standards from becoming another “North American” standard. As a result, many US managers felt the standards weren’t applicable to them. The word “global” also leads US domestic managers to believe the standards don’t apply to them.
On the client side, over 91% of US plan sponsors are familiar with the AIMR-PPS, while only 25% are familiar with GIPS, with almost 20% saying they hadn’t even heard of GIPS. It’s not surprising, therefore, that over 90% of the plan sponsors always or often inquire about compliance with the AIMR-PPS but less than 45% ask about GIPS.
Fortunately, the AIMR board of governors recently approved a redraft of the AIMR-PPS which transforms the AIMR standards into the country version of GIPS (CVG) for the US and Canada. As a CVG, the GIPS standards become the core of the AIMR-PPS. And so, when US managers claim compliance with the AIMR standards, they will, at the same time, claim compliance with GIPS. This step should add a great deal of clarity as to what GIPS means.
Attribution is one of the hottest topics, as everyone, it seems, wants to identify the sources of their returns. With the complete absence of standards, however, there is much confusion as to how attribution should be measured and interpreted.
On the equity side, we’re finding interest in daily and transaction-based attribution, in spite of the need for very accurate data to ensure the results are right. While we favour daily measurement to improve accuracy, daily reporting seems of minimal benefit.
There’s growing interest in geometric attribution (as opposed to arithmetic), although there seems to be much misunderstanding as to what the “geometric” designation means; some think it has to do with geometric linking, which is not the case.
In Europe, it’s been quite common to report excess return from a geometric perspective; this has not been the case in the US. And so, if geometric attribution is to succeed, education will be needed.
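To illustrate the distinction, here is a minimal sketch in Python contrasting the two definitions of excess return; the return figures are hypothetical, chosen purely for illustration:

```python
# Contrasting arithmetic and geometric excess return for one period.
# The return figures are hypothetical, purely for illustration.

portfolio_return = 0.12   # 12% portfolio return for the period
benchmark_return = 0.10   # 10% benchmark return for the period

# Arithmetic excess return: a simple difference.
arithmetic_excess = portfolio_return - benchmark_return  # 0.02, i.e. 2.00%

# Geometric excess return: the ratio of wealth relatives, minus one.
geometric_excess = (1 + portfolio_return) / (1 + benchmark_return) - 1  # ~1.82%

print(f"Arithmetic excess: {arithmetic_excess:.4%}")
print(f"Geometric excess:  {geometric_excess:.4%}")
```

The geometric figure is slightly smaller here, but it compounds cleanly across periods – linking the single-period geometric excess returns reproduces the excess of the linked portfolio return over the linked benchmark return – which is a large part of its appeal.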
With a plethora of models, which can actually produce conflicting results, searching for an attribution system is extremely complex. Whether or not we’re comfortable reporting an “interaction effect” adds to the challenge.
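For readers unfamiliar with the term, the interaction effect is the cross term that appears in arithmetic, Brinson-style decompositions. A minimal single-sector sketch, with hypothetical weights and returns:

```python
# Single-period, Brinson-style arithmetic attribution for one sector.
# All weights and returns are hypothetical.

wp, wb = 0.30, 0.20   # portfolio and benchmark weights in the sector
rp, rb = 0.08, 0.05   # portfolio and benchmark returns for the sector
Rb = 0.04             # total benchmark return

allocation  = (wp - wb) * (rb - Rb)   # overweighting a sector that beat the benchmark
selection   = wb * (rp - rb)          # picking securities that beat the sector benchmark
interaction = (wp - wb) * (rp - rb)   # cross term: over/underweight times out/underperformance

# Models that report only two effects typically fold the interaction term
# into selection by weighting it at wp instead of wb:
selection_combined = wp * (rp - rb)   # equals selection + interaction

print(allocation, selection, interaction)
```

That folding of the interaction term into selection is one reason different systems can produce different numbers from the same data.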
Fixed income attribution is attracting growing interest, with loads of unanswered questions. Many software vendors use their equity models to provide this type of analysis, which is generally understood to be inappropriate. Much analysis and discussion are occurring as to the most appropriate method(s) to use, with no clear answers yet.
Managers regularly provide attribution statistics to their clients, in spite of the fact that we don’t know what they do with the information. I’ve been surprised when managers seem quite interested in providing even daily attribution statistics, without having any idea as to what their clients do, or are supposed to do, with the information.
Attribution is a complex area that needs further refinement and clarification. While the notion of formal standards may not be appropriate, given that this is an analytical measurement, education is definitely warranted.
Even though the Bank Administration Institute first recommended reporting risk statistics back in 1968,3 the area of risk measurement continues to be one of confusion and a lack of clarity. First, what measure should we use? While the information ratio seems to be quite common, its interpretation is anyone’s guess. Of late, I’ve made it a habit to ask conference speakers what the numbers mean; I’m still waiting for a precise answer.
It’s not unusual to hear that a ratio of 0.5 is “good”, while 1.0 or more is “superior”. Yet it’s also not unusual to find no managers achieving even the 0.5 level, so how can these ratings apply?
Tracking error seems to be the most promising statistic, and it also lends itself well to risk monitoring and management. By establishing acceptable ranges of tracking error, managers can attempt to limit the risk they take relative to the benchmark.
The most commonly used (and often most criticised) statistic remains standard deviation. This will probably continue to be the case for some time. Its appeal stems from its simplicity and use in other fields. We believe it will always have value.
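For concreteness, here is a minimal sketch of how these three statistics relate, using hypothetical monthly returns (annualisation conventions, which vary in practice, are omitted):

```python
import statistics

# Hypothetical monthly returns, purely for illustration.
portfolio = [0.021, -0.010, 0.015, 0.008, -0.004, 0.018]
benchmark = [0.018, -0.012, 0.010, 0.009, -0.001, 0.014]

# Standard deviation of the portfolio's own returns (absolute risk).
std_dev = statistics.stdev(portfolio)

# Tracking error: standard deviation of the excess returns.
excess = [p - b for p, b in zip(portfolio, benchmark)]
tracking_error = statistics.stdev(excess)

# Information ratio: average excess return per unit of tracking error.
information_ratio = statistics.mean(excess) / tracking_error

print(f"Std deviation:     {std_dev:.4%}")
print(f"Tracking error:    {tracking_error:.4%}")
print(f"Information ratio: {information_ratio:.2f}")
```

Note that the information ratio is simply average excess return divided by tracking error, so the two statistics are inseparable: a tracking-error range set for risk control directly constrains the information ratio a manager can deliver.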
Ultimately, managers will provide clients with a host of statistics, as one measure doesn’t do the trick. Risk is a very complex area and requires analysis from multiple perspectives. As we progress, I’m hopeful that we’ll be able to explain what these statistics mean and how they’re to be interpreted. Otherwise, they’re of little value.
The demand for information has outpaced our understanding of what the numbers mean. Perhaps this isn’t so surprising in this information age. But being handed pages and pages of statistics without knowing what they mean, or what we’re supposed to do with them, only adds to the confusion. And, because we’re supposed to know, we’re often afraid to ask. When we do ask, it’s interesting that no one seems to know the answers! So, are some of these statistics actually meaningless? Let’s hope we find the answers soon.
David Spaulding is president of The Spaulding Group, based in New Jersey, and publisher of The Journal of Performance Measurement
1 Chartered Financial Analyst designation, awarded by the Association for Investment Management and Research
2 Survey respondents could choose more than one response
3 Measuring the Investment Performance of Pension Funds, Bank Administration Institute, 1968, p11