A test of faith: An open letter to client-facing investment advisers Article added by Ron Surz on December 15, 2010
Dear Consulting Colleagues,
I’m reaching out to you because I’m concerned that you may have misplaced your faith in those who are performing investment manager due diligence for you.
As many of you know, I’ve worked on consulting analytics for more than 30 years. I've seen significant advances in the tools that could and should be used to separate the winners from the losers, a process known as investment manager screening. The problem is that the finest tools are not being used by many investment manager researchers.
The active/passive, luck/skill debate rages on, largely because the old tools cannot accomplish the very basic task of accurately identifying winners. The crying need for real due diligence was dramatically demonstrated by the Madoff Mess [1], which revealed problems that extend well beyond hedge funds to plain old vanilla long-only managers.
For the most part, client-facing advisers trust others to screen investment managers, and this trust is being placed more and more in outsourced due diligence. I’ve approached these service providers and gotten almost nowhere, although some have updated their tools.
Because you need to focus on more important matters, outsourcing manager due diligence makes all the sense in the world. Some, like David Loeper, CEO of Financeware, argue that you shouldn’t even bother outsourcing because the value of investment manager due diligence is much less than what you pay for it. I particularly agree with Dave in those cases where the quality of research is poor, but Dave’s point is that the costs of identifying potential talent outweigh the benefits regardless of the quality of research.
The counter to the Loeper position is Meir Statman’s “What Do Investors Want?” [2], which started as a great article and has since been published as a book of the same title. Clients want the entertainment and bragging rights that come with active managers; they want the “utilitarian and expressive benefits.”
Anything worth doing is worth doing right. Presumably, you’re buying due diligence because it’s what your clients want, and you should get what you pay for.
In his article, Meir states:
Today’s money managers say they compete with other money managers by generating the highest alpha. They denigrate the role of marketing. Yet each money manager has read stories about other money managers with low alphas who snatched clients through clever marketing.
The marketing genie is out of the bottle. Do you believe in magic?
So, here’s the test of faith that will help you determine whether you and your clients can rely on the research you’re buying and using. This test also applies to “in-sourced” research provided by your firm. The test revolves around the two key questions real due diligence must answer in the best possible way:
1. What does this manager do?
2. Does this manager do it well?
You need to know how these critical questions are being addressed.
What does this manager do?
This question is usually answered by on-site visits to the investment firm, supported by analytics like style analysis. It sets the stage for the second question and provides a potential stopping point. If we don’t understand what the manager does, we pass and move on — we don’t invest.
The Madoff Mess [1] reinforces the importance of this discipline. Style analysis is only as good as the indexes used to identify style, and holdings-based analysis provides the most accurate perspective [3]. According to Dr. William F. Sharpe [4], the recommended indexes are mutually exclusive and exhaustive, which means no stock is in more than one index and the collection of indexes comprises the entire market.
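Returns-based style analysis, for example, infers a manager’s style as the passive blend of style indexes that best tracks the manager’s returns. Here is a minimal sketch of the idea, using two hypothetical style indexes and synthetic monthly returns; a real implementation would use a full mutually exclusive and exhaustive index family and a constrained quadratic optimizer.

```python
import numpy as np

# Synthetic monthly returns for two style indexes and a manager
# (hypothetical data for illustration only).
rng = np.random.default_rng(0)
growth = rng.normal(0.010, 0.040, 120)   # growth-index returns
value = rng.normal(0.008, 0.030, 120)    # value-index returns
manager = 0.6 * growth + 0.4 * value + rng.normal(0, 0.002, 120)

# Sharpe-style analysis: find the long-only blend w*growth + (1-w)*value
# that minimizes the variance of the tracking error. With two indexes a
# grid search suffices; more indexes call for quadratic programming.
weights = np.linspace(0.0, 1.0, 101)
te_var = [np.var(manager - (w * growth + (1 - w) * value)) for w in weights]
w_best = weights[int(np.argmin(te_var))]

print(f"Estimated growth weight: {w_best:.2f}")  # recovers roughly 0.60
```

The estimated blend then serves as the custom benchmark for the second question: whether the manager beats the passive implementation of his own style.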
The test: Does your due diligence provider use style analysis? Is it returns-based or holdings-based? Are the style definitions mutually exclusive and exhaustive (hint: there are only two index families with these properties)? Do you receive these reports?
If you’d like to see an example of holdings-based analysis, you can enter your own portfolio holdings into the portal at StyleScan; there’s no charge for this education.
This first question establishes the benchmark for the next question.
Does the manager do well at what it does?
Most due diligence fails to answer this question accurately. It’s not a matter of opinion, and it’s not an easy question to answer, but it is, after all, the main purpose of due diligence. The researcher should determine whether a passive alternative would do just as well as this manager; we can replicate just about any investment strategy with inexpensive passive blends of mutual funds or ETFs.
This second question is best addressed with hypothesis testing and modern holdings-based attribution analyses.
Peer groups and indexes do not work – never have, never will [5]. Hypothesis testing compares the manager’s actual performance to all of the possible outcomes of what he does [6]. Importantly, contemporary simulation technology enables the determination of statistical significance over short periods of time, whereas regression analyses require decades to reach similar conclusions.
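The simulation idea can be sketched as follows: generate thousands of random portfolios that obey the manager’s mandate, then locate the manager’s actual return within that distribution of possible outcomes. Everything below — the 200-stock universe, the 30-stock mandate, the returns — is hypothetical.

```python
import numpy as np

# Hypothetical one-year returns for a 200-stock eligible universe.
rng = np.random.default_rng(42)
universe = rng.normal(0.08, 0.25, 200)

# Simulate what the manager *could* have done: 5,000 random
# equal-weighted 30-stock portfolios drawn from the same universe.
n_sims, n_holdings = 5000, 30
sim_returns = np.array([
    rng.choice(universe, n_holdings, replace=False).mean()
    for _ in range(n_sims)
])

def percentile_rank(actual):
    """Share of simulated outcomes the actual return beat."""
    return 100.0 * (sim_returns < actual).mean()

# A manager return near the top of the distribution is evidence of skill;
# one in the middle is indistinguishable from luck.
print(f"Manager at 15%: beats {percentile_rank(0.15):.0f}% of simulations")
```

Because the simulated portfolios all obey the same mandate, significance can be judged from a single period, which is why this approach reaches conclusions far faster than regression-based measures.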
Beyond hypothesis testing, modern holdings-based attribution analysis explains why the manager has succeeded or failed, being very careful to get the benchmark right, because if the benchmark is wrong, all of the analytics are wrong [7].
Hypothesis testing determines the significance of success or failure. Attribution analysis examines the likelihood that success will continue. As in style analysis, both hypothesis testing and attribution are best implemented using indexes that are mutually exclusive and exhaustive, blended into custom benchmarks.
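Holdings-based attribution of this kind can be sketched with a Brinson-style decomposition against a custom benchmark, splitting active return into an allocation effect (sector over/underweights) and a selection effect (picks within sectors). The sector weights and returns below are hypothetical.

```python
# Brinson-style attribution: decompose active return into allocation
# (over/underweighting sectors) and selection (stock picks within sectors).
# All weights and returns are hypothetical illustrations.
sectors = ["Tech", "Health", "Energy"]
wp = [0.50, 0.30, 0.20]       # portfolio sector weights
wb = [0.40, 0.40, 0.20]       # custom-benchmark sector weights
rp = [0.12, 0.06, -0.02]      # portfolio sector returns
rb = [0.10, 0.05, 0.01]       # benchmark sector returns

rb_total = sum(w * r for w, r in zip(wb, rb))
rp_total = sum(w * r for w, r in zip(wp, rp))

allocation = {s: (wp[i] - wb[i]) * (rb[i] - rb_total)
              for i, s in enumerate(sectors)}
selection = {s: wp[i] * (rp[i] - rb[i])
             for i, s in enumerate(sectors)}

active = rp_total - rb_total
explained = sum(allocation.values()) + sum(selection.values())
# The two effects sum exactly to the active return, so nothing is left
# unexplained -- but only if the benchmark is right to begin with.
print(f"Active return {active:+.4f} = attributed effects {explained:+.4f}")
```

If the benchmark weights are mis-specified, every allocation and selection number shifts with them, which is why getting the custom benchmark right comes first.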
The test: Does your due diligence provider use peer groups and indexes to evaluate performance?
If so, you and your clients are relying on luck to screen managers, because it’s certainly not science. How confident is the provider in the talent of each approved manager? How much better have these approved managers performed above a passive implementation? Why have approved managers succeeded, and why are they likely to continue to do so?
If the answers sound like the benchmarks might be wrong (e.g. “this manager succeeds because he’s smaller-company oriented”), there’s a good chance the benchmarks are, in fact, wrong. It’s easy to get the benchmark wrong, but difficult to make good decisions when we do.
It won’t be long until we see that all-important end-of-year reporting. We all want to do right by our clients, so now is the time to do diligence on your due diligence provider. As Bob the Builder says, “Use the right tools for the job.” Of course, there are no guarantees, but better tools should lead to better decisions, and therefore better performance. There’s also the benefit of differentiation from your competition.
There are at least three outsourcing firms that use the better tools described here, but they’re not the most widely used — and no, I’m not shilling for these firms. In summary, these new superior tools are:
- Mutually exclusive and exhaustive indexes used for custom benchmarking and portfolio construction
- Hypothesis tests that simulate the possible outcomes of what the manager does, answering the question, “What could have happened?”
- Holdings-based attribution analysis that uses accurate benchmarks, because if the benchmark is wrong, all of the analytics are wrong
1. “Madoff Prescription.” White Paper. http://www.ppca-inc.com/pdf/Madoff-Prescription.pdf
2. Statman, Meir. “What Do Investors Want?” Journal of Portfolio Management, 30th Anniversary issue, 2004, pp. 153-160.
3. “Becoming Style Conscious.” Journal of Investing, October 2010, pp. 46-49. http://www.ppca-inc.com/pdf/Style-Conscious-20100708.pdf and http://www.ppca-inc.com/StyleScan/StyleScan.htm
4. Sharpe, William F. “Determining a Fund’s Effective Asset Mix.” Investment Management Review, December 1988.
5. The problems with peer groups and indexes:
Ankrim, Ernest M. “Peer-Relative Active Portfolio Performance: It’s Even Worse Than We Thought.” The Journal of Performance Measurement, Summer 1998, pp. 6-11.
Bailey, Jeffrey V. “Are Manager Universes Acceptable Performance Benchmarks?” Journal of Portfolio Management, Spring 1992, pp. 9-13.
Bleiberg, Steve. “The Nature of the Universe.” Financial Analysts Journal, March/April 1986, pp. 13-14.
6. Hypothesis testing:
“A Handicap of the Investment Performance Horserace.” White Paper. http://www.ppca-inc.com/pdf/Handicap-20090428.pdf
“A Fresh Look at Investment Performance Evaluation: Unifying Best Practices to Improve Timeliness and Accuracy.” The Journal of Portfolio Management, Summer 2006, pp. 54-65.
“Testing the Hypothesis ‘Hedge Fund Performance Is Good.’” The Journal of Wealth Management, Spring 2005, pp. 78-83.
7. “The New Trust but Verify.” Transitions, April 2010, pp. 7-12. http://www.ppca-inc.com/pdf/Trust-But-Verify-20091123.pdf
The views expressed here are those of the author and not necessarily those of ProducersWEB.