The Americas finals for the CFA Research Challenge were held in Seattle last week. Out of 53 teams that presented, two advanced to the world competition that will be held in Prague later this month. The winners were Seton Hall University and Barna Business School.
Now in its eleventh year, the Research Challenge has grown to include more than a thousand schools and five thousand student participants. At the local level, all teams analyze a company picked by the CFA society or group of societies that is sponsoring the competition. They prepare a written report and give an oral presentation. Each local event yields a winning team that goes on to the regional competition. (Our local entrant, the University of Minnesota Duluth, lost to Seton Hall in the semifinal round.)
I had the opportunity to serve as a judge for one of the semifinals this year, my first exposure to the Research Challenge. Separate graders rate the written reports (which count for fifty percent of the final score); I was one of three judges who evaluated the content and delivery of the presentations by five teams.
It is a demanding format, for the students and the judges. The presentations are limited to ten minutes, followed by a period for questions and answers of the same length. That is very little time for the students to convey – and the judges to digest – an incredible amount of information.
The judges were given the names of the subject companies in advance, so that we could do some research in preparation, but we had no knowledge of a team’s thesis or recommendation until the start of their presentation. We had a little time after each presentation to make notes and then finalized our scores (in the categories of financial analysis, valuation, presentation, Q&A, participation among the members of the team, and slides) for compilation.
I also watched the final round of one of the brackets, so between the two parts of the competition, I saw ten presentations. I was struck by the fact that much of the feedback that I would offer students preparing for the Research Challenge is the same that I give to professionals.
The most important factor is that ten minutes is very little time to tell an effective story. Given the depth and breadth of the work that has been put into the analysis, the tendency is to try to cram as much information as possible into that amount of time. I’d recommend a more minimalist approach, focusing on the key issues and just a few important charts, striving to impress upon the listeners the salient aspects of the analysis.
Along with that, slides should be clean and clear. Too much information on a slide (that goes by quickly because there are too many slides) is difficult to comprehend. It does give the impression that a lot of work has been done, but there are other ways to do that without sacrificing comprehension. Simpler is better in most communications settings. Even experienced professionals have trouble processing information when it is flying by at warp speed.
Like most professional analyst reports, the student evaluations come across as entirely too precise for the real world. Fair value estimates and/or target prices are quoted to the penny – in the slides, in the presentations, and in the responses to questions. Each of those is better presented as a fuzzy range than a single point estimate. The figures to the right of the decimal point are surely superfluous in the scheme of things.
Speaking of target prices, if they are used, a time frame should be attached, as well as an indication of how much of the projected price change is attributable to the mere passage of time. Unfortunately, that's often not done, which is why a current fair value range is preferable (at least to me) to a target price. But students are led astray in that regard by the wide use of target prices by professionals (often mostly for marketing purposes).
At the end of a presentation, students and professionals alike like to have a slide that says, “Questions” or “Q&A.” I think that’s a mistake. Everyone knows it’s that time. Instead, it’s better to leave a key concept or chart on the screen for people to ponder, not a slide that adds no value.
However, in most cases during the competition that question slide didn’t linger too long. The teams usually had one student running the computer, selecting exhibits to support their responses to the questions that were asked. The teams that clicked back and forth looking for them burned up a lot of time, but several of the teams had intricate trees of slides at the ready to support their case (sometimes close to a hundred of them).
The effectiveness of that strategy was mixed, depending on the quality of the particular slide and its pertinence to the question at hand. Many questions should be answered directly, without referencing a slide. Doing so is more efficient and shows a command of the material that can impress the judges even more than showing an exhibit. But sometimes the exhibits help. Knowing when to use them and when not to use them is an important part of becoming an effective communicator.
As is knowing when to say, “I don’t know” or “we did not look at that specific issue.” It is hard to do that, thinking that you are admitting weakness, but honesty builds trust. Answering a question with a tangential thread that happened to have been rehearsed but is off point is likely more damaging to your cause than supportive of it.
In a team format, you have to trust your teammates to provide a good response and not be too eager to pile on additional information unless it is very important. In some cases, it seemed like everyone added their two cents, but only some of the additional comments were necessary or helpful. Several detracted.
In addition, it’s good to remember that a short answer is best when that is what the question calls for.
In a competition, whether it’s during the Research Challenge or between asset management firms trying to win a piece of business, there is always a question of how much you should stand out from the crowd. Doing things like everyone else – in your analysis or your communication – is lower risk and provides a level of check-the-box comfort, but it probably lessens your chance of winning.
The presentations by students tend to have many things in common with each other, so I would suggest that teams look for opportunities to be different, picking their spots to surprise the judges with something fresh, even in little ways.
I think every presentation that I saw included a Monte Carlo analysis for the stock price of the company under review. The number of trials performed seemed to be mentioned every time (with some teams citing an extraordinary number of trials, as if that conveyed more power than it really does), but not one gave the kind of context that would have made the analysis more meaningful. For example, by citing the key variables and the ranges for them that were used (and why), a team could provide a window into their analytical process and differentiate themselves at the same time.
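To make the point concrete, here is a minimal sketch of the kind of simulation the teams described. Everything in it is a hypothetical assumption of mine (the geometric Brownian motion model, the drift, volatility, and horizon inputs) rather than any team's actual method; the point is that disclosing those inputs, and summarizing the output as a range, is what gives the trial count meaning.

```python
import numpy as np

# Hypothetical inputs -- the key variables and ranges worth disclosing:
np.random.seed(42)
s0 = 100.0        # current price (assumed)
mu = 0.06         # expected annual drift (assumed)
sigma = 0.25      # annual volatility (assumed)
years = 1.0       # forecast horizon
n_trials = 10_000 # ten thousand trials is plenty for a stable estimate

# Simulate terminal prices under geometric Brownian motion.
z = np.random.standard_normal(n_trials)
s_t = s0 * np.exp((mu - 0.5 * sigma**2) * years
                  + sigma * np.sqrt(years) * z)

# Report a fuzzy range, not a penny-precise point:
# here, the middle 80% of simulated outcomes plus the median.
low, mid, high = np.percentile(s_t, [10, 50, 90])
print(f"fair value range: {low:.0f} to {high:.0f} (median {mid:.0f})")
```

Note that the output is deliberately rounded to whole dollars: with assumptions this coarse, quoting the percentiles to the penny would imply a precision the inputs cannot support.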
To reiterate, these quibbles and concerns are just as applicable to professionals, so the students shouldn’t feel like they were particularly deficient in those areas. But future participants might benefit from stepping apart from the norms that tend to evolve in competitions like this, taking some chances aimed at communicating a story that doesn’t look like that of past winners or the other teams that they face.
I appreciated the opportunity to witness a group of impressive young people demonstrate their abilities. There is no doubt that the Research Challenge presented them with a unique opportunity to polish their skills. I always say that, at a high level, all investment roles can be evaluated via a simple formula: analysis plus communication. This competition helps students to develop each of those capabilities, which will aid them in any investment career that they might have or any other vocation that they choose to pursue.