One of the (many) philosophical divides in the analytics community is the extent to which perfect accuracy in statistics should be pursued.
On the surface, it seems logical to pursue maximum accuracy, but what about the additional cost in complexity? It's hard enough to get new people involved in this field as it is, and it will be even harder when the accessibility bar is raised too high. On the other hand, it will be even harder to get new people involved if many of the stats we use are hot garbage. What should we do?
Most recently, I got drawn into this discussion after my ESPN Insider article that identified Braden Holtby as the front-runner for the Vezina Trophy. It cited Holtby's leadership in four of the following five statistical categories: save percentage, home plate save percentage, quality starts, game stars, and goals versus threshold.
Of particular relevance to this discussion, the issue revolved around save percentage and how it can be influenced by a variety of factors. As I wrote in the article:
"This one number doesn't provide a complete picture, however. Considering only even-strength situations will remove the skewing effect of special teams, but a goalie's save percentage can still be affected by random variation, scorekeeper over-counting, score effects, the individual shooter's ability (in terms of speed and accuracy), average shot location, and hard-to-measure shot quality factors such as screened shots, rebounds, and scoring off the rush or a cross-ice pass."
What would you do to address this situation?
One school of thought is to use a version of save percentage that is adjusted for all these factors. One reader made a strong push to adjust it for score effects, in particular.
If this article had been for a site devoted to statistical analysis, and with readers who have an interest and background in the fundamentals, everyone would agree with that approach. Or, if this analysis had been written for a book, where time could be taken to explain each of the adjustments, then it would make absolute sense to adjust it for every factor possible.
But what if the article is meant for the mainstream fan, as it was? In that case, most readers won't have heard of any of the factors that can affect save percentage, won't be familiar with the studies that have measured them, and won't have the time (and often the interest) to learn how the adjustments are made.
To them, the accuracy and usefulness of an adjusted form of save percentage is something on which they would simply have to trust me. Given that sports fans are not exactly known for their great trust in statisticians, that's an easy way to lose an opportunity to bring your readers into the world of analytics.
But what can you do? There are several number-crunchers who believe that publishing a statistic with even slight inaccuracies is worse than publishing nothing at all. To them, even publishing a statistic that hasn't had its accuracy or persistence measured is a real no-no. After all, inaccuracy can turn off new readers just as handily as complexity.
In this case, I restricted myself to statistics that the readers would know, and those that were only one step removed, while specifically acknowledging their limitations. Those who are sufficiently intrigued can take a moment to dig further, and are likely to find one of my books, which take a far deeper dive into these stats.
Was that the right choice, or should I have gone with an adjusted form of save percentage? I don't know. I'm not sure that there really is a right answer.
Take team Corsi numbers and score effects, for example. Even now, there's a discussion about whether score effects should be addressed by using a score-adjusted Corsi, or by using each team's Corsi in close game situations only.
Statisticians know that higher accuracy can be achieved by using as much relevant data as you can, which is why we lean towards score-adjusted Corsi in our personal usage.
On the other hand, the average fan doesn't know how these adjustments are made, nor how to make them, so they might prefer and trust something they understand: a team's Corsi in close-game situations only. We may lose some accuracy if we do it this way, but not very much, and at least there are more people seeing the game from this perspective, and with a higher awareness of score effects.
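To make the trade-off concrete, here is a minimal sketch of the two approaches. The shot-attempt counts and the score-state weights are made-up placeholders for illustration; the real score-adjustment coefficients are derived from league-wide data, and real "close" definitions are usually period-dependent.

```python
# Hypothetical records for one team: (score_state, corsi_for, corsi_against),
# where score_state is the goal differential from the team's perspective,
# capped at +/-2. All numbers are invented for illustration.
events = [
    (-2, 40, 28), (-1, 55, 45), (0, 120, 110), (1, 48, 52), (2, 30, 41),
]

# Placeholder weights that down-weight attempts taken while trailing,
# since trailing teams tend to out-attempt their opponents.
WEIGHTS = {-2: 0.85, -1: 0.92, 0: 1.00, 1: 1.08, 2: 1.15}

def score_adjusted_corsi_pct(records):
    """Weight every attempt by score state, then compute CF%.
    Keeps all the data, at the cost of an adjustment the reader must trust."""
    cf = sum(WEIGHTS[s] * f for s, f, _ in records)
    # The opponent's attempts occur in the mirrored score state.
    ca = sum(WEIGHTS[-s] * a for s, _, a in records)
    return 100 * cf / (cf + ca)

def close_corsi_pct(records):
    """Simply discard attempts when the game isn't close (here: within one
    goal). Transparent, but throws away part of the sample."""
    close = [(f, a) for s, f, a in records if abs(s) <= 1]
    cf = sum(f for f, _ in close)
    ca = sum(a for _, a in close)
    return 100 * cf / (cf + ca)

print(round(score_adjusted_corsi_pct(events), 1))  # 51.5
print(round(close_corsi_pct(events), 1))           # 51.9
```

On this toy data the two numbers land within half a percentage point of each other, which is the point of the passage above: the simpler version costs a little accuracy (and some sample size), while the adjusted version costs some transparency.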
But, have we sold out? If there's a better, more accurate perspective, are we sending the wrong message by using the simpler version? In those few occasions where the simpler version actually leads the readers astray, will that do greater harm than using the more complex version in the first place?
I don't have the answers, but I always keep these questions in mind, and maintain respect for those who have chosen a different balance in the complexity of their statistics.
My Thoughts