Nino Niederreiter (Photo: Andrew430/Wikimedia/CC BY-SA 3.0)
When it comes to saying which players are the very best of the best, and which are the very worst of the worst, there’s often little disagreement between those who follow the game closely and place tremendous value on statistics, those who follow the game closely but aren’t interested in statistics, and even casual observers. But when we start talking about more philosophical questions, there’s frequently a great chasm between those groups.
One of the great defining issues in this regard is shot quality. If you believe that it’s a highly important, repeatable skill, you’re probably not a stathead; if you believe that it exists but that its impact over a long period of time is small, you probably are.
That’s obviously an oversimplification, but I think it’s also fairly accurate. There are, of course, reasons that the statistical community has come to this conclusion, and I thought that it might be helpful to talk about those a little bit. But before going there, it’s important to make a few clarifications.
First of all, everyone acknowledges that shot quality has an enormous impact on the level of an individual shot. A shot from the red line just does not have the same chance of entering the net as does a shot from the slot, and that shot from the slot is more dangerous if it’s a rebound than if it isn’t. Over a small sample — like a game, a playoff series, or even a season from a particular individual — the gap in shot quality could still easily be quite large.
Over large samples — like a team’s full season — this becomes less likely. The research done so far suggests that, at the team level, this can be a repeatable skill. Of course, in the middle of a season, there would be so many false positives (obligatory mention of the 2011-12 Minnesota Wild) that you wouldn’t want to bet on any one particular team sustaining their advantage in the percentages the rest of the way.
And what about the individual level? It’s fair to say that the consensus is that it’s very difficult to demonstrate shot-quality talent statistically. But as a close observer (alright… fan) of the train wreck that is the Edmonton Oilers, I was treated to sixty games of Corey Potter last season. Now, Corey Potter did some things well, but suppressing shot quality wasn’t one of them, especially when he was defending in an odd-man situation. I think it would be understandably difficult for someone to watch Potter all year and conclude that he doesn’t have much impact on shot quality.
If we take a quick look at Potter’s five-on-five PDO (on-ice shooting percentage + on-ice save percentage, which generally regresses toward 1,000), we find that it’s quite poor (976), and that the deficiency is coming at the defensive end (on-ice save percentage of 900).
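To make the arithmetic behind that number concrete, here is a minimal Python sketch of the PDO calculation as the post defines it. The function name and the shot/goal counts are illustrative placeholders, not Potter’s actual on-ice totals; they are chosen only so the result lands near his 976.

```python
def pdo(goals_for, shots_for, goals_against, shots_against):
    """Five-on-five PDO: on-ice shooting pct + on-ice save pct, x1000 scale.

    League average sits near 1000 because every goal scored against one
    team is a goal scored for another, which is why PDO regresses there.
    """
    shooting_pct = goals_for / shots_for
    save_pct = 1 - goals_against / shots_against
    return round(1000 * (shooting_pct + save_pct))

# Hypothetical on-ice counts: 30 goals on 400 shots for (7.5% shooting),
# 45 goals on 450 shots against (.900 on-ice save percentage).
print(pdo(30, 400, 45, 450))  # -> 975
```

A 7.5% on-ice shooting percentage plus a .900 on-ice save percentage gives 975, which shows how an ugly save percentage alone can drag a player’s PDO well below 1000 even with ordinary shooting luck.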
Now, Corey Potter simply doesn’t have enough games for us to be statistically confident that his play is poor. Furthermore, if he continues to play in the NHL, it’s reasonable to expect that Potter’s defensive results will improve as his on-ice save percentage regresses toward the mean. This led me to approach this question from a different direction: what if, instead of trying to discover specific individuals who do poorly by this measure, we try to identify groups of individuals that should do poorly over time?
For this exercise, I decided to look at the extremes, namely, individuals who played at least twenty games in a given season and had a PDO worse than 950 or better than 1050. I then identified a few groups who should do poorly by this measure: goons (players who had at least 1.5 times as many penalty minutes as games played in that season), young players or minor leaguers (players who had fewer than 200 NHL games before the start of the season), and players on the decline (in order to be consistent I labelled anyone who was at least thirty years old to start the season as being on the decline). I then used these criteria to identify players from the 2007-08 through 2011-12 seasons.
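The three screens above are mechanical enough to express directly in code. This is an illustrative Python sketch, not the author’s actual method: the function name, argument names, and thresholds-as-written are my own framing of the criteria in the paragraph, and real use would need per-season counting stats from a source like NHL.com.

```python
def suspect_flags(pim, games_played, career_nhl_games, age_at_season_start):
    """Return the 'suspect' labels a player earns under the post's criteria."""
    flags = []
    if pim >= 1.5 * games_played:
        flags.append("Goon")        # at least 1.5x as many PIM as games played
    if career_nhl_games < 200:
        flags.append("Minor")       # young player / minor leaguer
    if age_at_season_start >= 30:
        flags.append("Decline")     # labelled "on the decline" for consistency
    return flags

# A 22-year-old call-up with 40 career NHL games is flagged only as "Minor":
print(suspect_flags(pim=10, games_played=30, career_nhl_games=40,
                    age_at_season_start=22))  # -> ['Minor']
```

Note that a player can carry more than one flag (a 31-year-old enforcer would be both "Goon" and "Decline"), which is consistent with treating these as overlapping risk groups rather than exclusive categories.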
So what was the percentage of “suspect” players on each list? 83 of 92 players (90.2%) with a PDO worse than 950 were on the “suspect” list. And if we take a look at the list of players, even the non-suspects start looking pretty suspect:
| Season | Player | PDO | Suspect |
| --- | --- | --- | --- |
| 2011-12 | Stephane Da Costa | 894 | Minor |
There are a few very strong players on this list. We’ve got a young Paul Stastny, a young Alex Semin, and Brad Richards in his prime. But several of the players who aren’t marked as “suspects” escape the label only because the criteria didn’t catch them, not because they have a good reason for being there. Eric Fehr and Colby Armstrong were both returning from long injury layoffs, Tim Jackman really should be classified as a goon, and Jonathan Cheechoo got old at a young age.
On the flip side, 35 of the 48 players with a PDO better than 1050 (72.9%) were classified as “suspects”. That number is still very high, but it’s substantially lower than what we saw in the last group, and when we take a look at the list, the difference in player quality is obvious:
| Season | Player | PDO | Suspect |
| --- | --- | --- | --- |
| 2011-12 | David Van Der Gulik | 1063 | Minor |
This list does include its fair share of actual suspects, but we also see several of the best players in the game listed.
I don’t think that this proves anything conclusively, but I do think that it suggests that we shouldn’t always be quick to jump to the “luck” conclusion when a player is under- or over-performing (although with extreme results like these, luck is playing a part). Perhaps more importantly, I think that this kind of grouping of like players could be a useful tool for research going forward.