tjbresearch.com
Monday, November 30th, 2009
the performance parade

As any blogging manual will tell you, humans crave lists.  And what is more appealing than a ranked list?  How about a ranked list that has some scoring system or ranking methodology that justifies the ranking?  Such a list magically becomes a simple statement of truth about what or who is best or worst.

In this continuing series of postings on the Global Research Analyst Settlement, we have finally gotten to the obligatory posting on research performance.  Long-time readers will be familiar with my thoughts on the matter, since I have covered it many times in postings like “the research performance derby” and “to the precipice.”

In standard practice, a performance evaluation firm takes the ratings put on stocks by research firms and judges which firms have had the best performance.  What could go wrong?  To start, I’ll quote myself (from a piece I wrote for the users of independent research under the settlement):

Ratings, which are used in the performance calculations, do not encompass the breadth of research information contained in a report.

There are different criteria used by firms to arrive at a rating on a stock (for example, some ratings imply a prediction of a certain level of absolute performance, while some are relative to the performance of the general market).  Such differences are not reflected in the rankings.

Firms have different ratings schemes; to try to compare them, evaluation services must make the simplifying assumption of “mapping” each firm’s ratings to a common standard.  Such simplifications yield distortions.

Performance for an individual client should always be viewed in the context of the client’s objectives and tolerance for risk.  A “good” idea for one client may not be good for another.  No ratings system adequately deals with that reality.

So, a research analyst (or computer in the case of a quant firm) boils down all sorts of valuable information into one variable, which is adjusted further in the mapping process, and evaluated without regard to time horizon or risk.  That’s what we use to determine who is best?  If so, we get what we deserve.
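To make the mapping problem concrete, here is a minimal sketch in Python.  The firm names, rating scales, and the common standard are all invented for illustration; actual evaluation services use their own conventions, which are not fully public.

    # Hypothetical example: two firms' rating schemes collapsed onto a
    # common 1-to-5 standard, as a performance evaluation service might do.
    # Firm A's ratings predict absolute returns; Firm B's are relative to
    # its sector.  All names and scales here are invented.
    FIRM_A = {"strong buy": 1, "buy": 2, "hold": 3, "sell": 4, "strong sell": 5}
    FIRM_B = {"overweight": 1, "equal-weight": 3, "underweight": 5}

    def map_to_common(scale: dict, rating: str) -> int:
        """Collapse a firm-specific rating onto the common standard."""
        return scale[rating.lower()]

    # Both map to a 1, even though "strong buy" predicts absolute gains
    # while "overweight" only predicts beating a sector that may be falling.
    print(map_to_common(FIRM_A, "Strong Buy"))   # -> 1
    print(map_to_common(FIRM_B, "Overweight"))   # -> 1

Two ratings that mean different things become the same number, and the difference disappears from every calculation done downstream.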

In studying this area throughout the settlement, I saw nothing to make me believe that chasing research performance in that simple fashion is a better idea than chasing any other kind of performance.  It is one of the most common and most dangerous investment mistakes.

To be clear, there are things to be learned by analyzing performance, even in a basic way.  More worthwhile still is a complex evaluation of ratings behavior (and other variables, if available), using a variety of tools that can help to ascertain the strengths and weaknesses of a research analyst or firm.  Statistical analysis of what has happened can be helpful in judging what might happen, but not in isolation.  It should provide hints and clues to pursue and questions to ask.  The goal should be to understand how that performance might have come to be (remembering that much of it at any time is luck or statistical noise) and whether it’s reasonable to expect that it might be repeated, given what you know about how the analyst or firm goes about its business.
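As one small example of treating statistics as clues rather than verdicts, the sketch below asks how much of a “hit rate” plain luck could explain.  The record is hypothetical, and the baseline (independent coin flips) is deliberately crude, since it ignores correlation across calls:

    from math import comb

    def prob_at_least(hits: int, calls: int, p: float = 0.5) -> float:
        """Chance of at least `hits` successes in `calls` independent
        coin flips with success probability `p`.  A crude baseline for
        how much of a record luck alone could produce."""
        return sum(comb(calls, k) * p**k * (1 - p)**(calls - k)
                   for k in range(hits, calls + 1))

    # Hypothetical record: 14 correct calls out of 25 rated stocks.
    print(round(prob_at_least(14, 25), 3))  # ~0.345: luck covers this easily

A number like that settles nothing by itself; it simply suggests which records deserve a closer look and what to ask about them.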

Judging research by where it shows up in a ranked list is the classic process-versus-outcome situation, in which unsupportable observations about worth get made because of a performance outcome.  It is the same trap that plan sponsors often fall into:  While they say that past performance counts for only a small part of their evaluation process in selecting asset managers, anyone who has been “in the room” knows that in practice it usually overwhelms most everything else during the decision making.

My recommendation is to ignore most published lists of performance.  They don’t tell you what you need to know.  But to the extent that you have a chance to delve into a richer trove of statistical information, it can help you to figure out which research firms are right for you.  Use that information to answer some basic questions:

What kinds of things is the research process you are evaluating good at?  What is it not good at?  There are always trade-offs.

How would you expect a particular firm to perform in a certain market environment?  When will it produce its best performance (and why) and when its worst?  Many mistakes arise from not understanding that relative performance naturally fluctuates; one simple way to examine this is sketched after these questions.

If you think that you can switch from one provider to another in anticipation of future performance, have you properly judged the switching costs and the odds that you are making changes based on false leads from information on past performance?  Those rankings will often lead you in the wrong direction.
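On the market-environment question above, one simple way to pursue it is sketched below with made-up monthly numbers; the regime labels and returns are invented purely for illustration:

    # Hypothetical monthly relative returns (rated list minus benchmark),
    # tagged with an invented market-regime label.  Illustrative only.
    relative_returns = [
        ("rally", 0.8), ("rally", -1.5), ("rally", -0.9), ("rally", -1.1),
        ("selloff", 2.1), ("selloff", 1.4), ("selloff", 1.8), ("selloff", 0.6),
    ]

    by_regime = {}
    for regime, r in relative_returns:
        by_regime.setdefault(regime, []).append(r)

    for regime, returns in by_regime.items():
        avg = sum(returns) / len(returns)
        print(f"{regime:8s} average relative return: {avg:+.2f}%")

A firm that looks mediocre on a full-period average may be a consistent defensive performer (or the reverse); the single number in a ranked list hides exactly the pattern you need to understand.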

Performance evaluations are predicated on having accurate data.  That brings us to our next topic in this survey:  Operations and the nuts and bolts behind the scenes.