Earlier this week I wrote a piece about improvements that can be made at the margins of a hockey team. The basic premise was that fourth lines and third pairs move the needle much more than most think: by my model, a switch from Ben Smith, Roman Polak, and Matt Hunwick to Peter Holland, Martin Marincin, and Frank Corrado is the difference between 89 and 93 points.
Today, David Johnson, aka @hockeyanalysis on Twitter, offered a different perspective on the piece, citing the Leafs’ defensive issues as the team’s biggest problem and showing that removing the three players I singled out would further worsen the team’s defensive woes. That is an argument I can get behind; after all, the role of those three is to play defence, so they should be better at it.
But it is something I already discussed in the initial piece:
Traditionally speaking, we all know what role a fourth line and bottom pairing play. They don’t usually do much and their job is basically “don’t get scored on, we don’t care about offence with you guys out there, just shut the other team down.” And while they may be effective at slowing the game down, they’re ultimately ineffective because they’re only focusing on one area of the ice. The goal is to out-score the other team; both ends of the ice are valuable, and focusing on just one doesn’t make much sense. You hear the complaint about offensive-minded players needing to find a “200-foot game,” but rarely will you hear that about some fourth-line pluggers or shutdown d-men.
So while those three may be effective at playing defence, the game isn’t decided by how many goals the team allows, it’s decided by goal differential, and the three in question have too little offensive talent to make their defensive acumen worth it.
At the start of his piece, Johnson cites each player’s Corsi percentage this year (noting that the players in question were all at the bottom), perhaps under the impression that Corsi was the only reason I said they should spend some time in the press box. Interestingly enough, when it’s time to talk about actual goals, he uses only GA60, which is a little deceiving.
When you look at goals percentage, you get Smith and Polak at the bottom, with William Nylander being the only player lower. Funnily enough, Polak’s goals percentage is actually lower than his Corsi. Hunwick is near the top, but only on the strength of an on-ice shooting percentage that is unsustainably high for a defenseman. All the players in question rank highly by on-ice save percentage, and perhaps they do have an effect on that, but with a sample of just 20 games, that’s not something worth reading into right now.
And that’s a big issue with Johnson’s critique. It is incredibly disingenuous to treat goal metrics from a 20-game sample as anything remotely meaningful. I have time for larger samples of goal analysis, but 20 games? In 20 games Matt Martin has been on the ice for all of five goals against. Five. One extra bounce and, well, look at that: his GA60 jumps by 0.4 goals. Goals are incredibly noisy due to their rarity, and that’s the reason most people turn to Corsi: there are more events to analyze. That volume stops people from drawing sweeping conclusions from five goals in 20 games and forces them to look at the bigger picture. If the same trend continues over multiple seasons, perhaps there may be something there, but I’m not buying 20 games.
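To put numbers on that bounce: GA60 is just on-ice goals against scaled to a per-60-minutes rate, so at fourth-line ice time a single goal swings it dramatically. A minimal sketch; the five goals against in 20 games are from the article, but the ~7.5 five-on-five minutes per game is my assumption for a fourth-liner, not a figure Johnson or I published:

```python
def ga60(goals_against, toi_minutes):
    """Goals against per 60 minutes of on-ice time."""
    return goals_against / toi_minutes * 60

# Matt Martin, per the article: 5 on-ice goals against in 20 games.
# TOI is assumed: ~7.5 five-on-five minutes/game -> 150 minutes total.
toi = 20 * 7.5
base = ga60(5, toi)        # 5 GA in 150 minutes -> 2.0 GA60
one_bounce = ga60(6, toi)  # one extra bounce -> 2.4 GA60
print(round(one_bounce - base, 2))  # a single goal moves GA60 by 0.4
```

At that ice time, one lucky or unlucky bounce is worth 0.4 GA60, which is exactly the kind of swing described above.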
Johnson then goes into GA60 rates for Leafs d-men during their time with Mike Babcock, which shows Frank Corrado with the highest rate at 3.24. Terrible, right? Well, Corrado has only played 40 games with the Leafs. In that time his goalies have stopped just 89.4 percent of his shots against, three percent below the league average rate. Is part of that Corrado’s fault? Probably. It wouldn’t surprise me at all if the shots Corrado allowed were higher quality than most d-men’s because he isn’t very good, but I doubt the effect is that large. There’s simply way too much variance over such a small timeframe to be sure, and that inflated GA60 is probably what’s hindering his chances of getting back into the lineup.
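One way to see how much of that gap plain variance can explain: treat each shot as a coin flip at the league-average save probability and compute the sampling noise on an observed save percentage. The two percentages are from the article; the ~300 on-ice shots against is my rough assumption for 40 games of bottom-pairing minutes:

```python
import math

def save_pct_se(p, n_shots):
    """Binomial standard error of an observed save percentage over n shots."""
    return math.sqrt(p * (1 - p) / n_shots)

league_avg = 0.924   # roughly league-average save percentage
corrado_sv = 0.894   # on-ice save percentage cited in the article
shots = 300          # assumed on-ice shots against over 40 games

se = save_pct_se(league_avg, shots)
z = (league_avg - corrado_sv) / se
print(round(se, 3), round(z, 1))  # prints roughly 0.015 and 2.0
```

A gap of about two standard errors is noteworthy but well within what a small sample can produce, which is the point: over 40 games you cannot cleanly separate Corrado from his goaltending.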
At the opposite end of the spectrum is Roman Polak’s sparkling 1.97 GA60, which looks pretty great on the surface. This year his GA60 RelTM is -0.58, while last year it was -0.69. Clearly, he suppresses goals against, right? Well, what happened in the seven years before that, when he was at 0.76, 0.24, 0.17, 0.51, 0.21, 0.14, and 0.29? Did he suddenly learn how to negate shot quality and make his goalies save shots better than before (if so, kudos to Babcock, because we have a complete 180 on our hands)? I’m not sure, maybe there’s an argument for it, but I don’t see it.
And whether his GA60 is sparkling of late or not, he’s had a negative effect on goals percentage in seven of his 10 seasons. Does it really matter if goals against are down, if the team is a net negative with him on the ice?
You can do the same kind of analysis for all the players available, and what you’ll get is pretty much a mixed bag. Hunwick has five bad years by goals percentage and four good ones; this season happened to land heads, er, I mean, happened to be a good one. Ben Smith hasn’t played much in the NHL, so I’m not going to bother analyzing goals for him, but he’s been up and down too.
The point is that using goals for analysis, especially at this sample size, is incredibly unreliable. And if you’re going to go that route, I’m not sure why you’d ignore the other 50 percent of the equation.
That’s why most people make models to begin with: to account for what can be accounted for and to be aware of what can’t. When I wrote my original piece, about 25 percent of it was spent on my own perceived weaknesses of the model. There’s very little you can tell me about what’s wrong with my model that I do not already know. I have never once claimed it to be perfect; in fact, I literally wrote that it is not in that original piece. No model is perfect (and you will never find a model maker who thinks so), especially in a sport as dynamic as hockey, and especially one created by a guy whose last math class was grade 12 calculus.
But at the same time, this isn’t rocket science. We have a pretty good idea of what things matter and we have a pretty good grasp of things we should be accounting for. My model is a very simple one. Very simple. It assigns a linear weight to things we all believe to be important. Points and shots are there. On-ice Corsi too. But so are on-ice goals. I didn’t forget about them when I made my model. They’re just a small part – and I know how noisy they can be – but they’re there because they do matter.
That’s just the basics of it. For my team-level analysis (i.e., how I got the four-point difference in the initial piece), I use a weighted three-year average of every individual metric inside my model, with each year weighted based on a multivariate regression of the prior three seasons. Each stat is then regressed to the mean based on its variance (i.e., goals regress more than shots on goal). It’s a lot more complex than that, but at its heart, it’s basically measuring how good we can reasonably expect each player to be at each particular metric in the model, based on how good he’s been in the past and how large a sample we have on him.
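As a toy illustration of that recipe (recency weighting, then regression to the mean), here is a sketch with made-up numbers; the actual model’s weights and reliability values are not public, so every figure below is hypothetical:

```python
def project(rates, weights, league_mean, reliability):
    """Weight recent seasons more heavily, then pull the result toward the
    league mean according to how reliable the stat is (0 = pure noise,
    1 = fully trustworthy). All inputs here are illustrative only."""
    weighted = sum(r * w for r, w in zip(rates, weights)) / sum(weights)
    return reliability * weighted + (1 - reliability) * league_mean

# Hypothetical player: three seasons of on-ice goals-for percentage,
# most recent first. Goals are noisy, so their reliability is set low.
seasons = [0.55, 0.48, 0.51]
weights = [0.5, 0.3, 0.2]   # assumed recency weights, not the model's
print(round(project(seasons, weights, 0.50, 0.3), 3))
```

With a low reliability, even a strong recent goals percentage barely moves the projection off the league mean, which is exactly why noisy stats carry a small weight in the model.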
So to say “your model says this is what’s likely to happen (and no it’s not an “absolute” certainty), but have you considered that these guys have the lowest goals against” is a pretty condescending statement to make considering that it’s something already being considered.
And that’s where we come full circle. Goals against are being considered. But for these particular players they do not outweigh the rest of their negative qualities.
Inevitably, there are a few errors and blind spots in all of this. I won’t fight them, and I’m happy Johnson decided to critique my work here. When I first posted I was expecting more of it, but didn’t get much in terms of what could be missing, hopefully because I covered my bases in the original post. Critiques are a good thing. They provide alternate perspectives from an outsider who won’t be blinded by the attachment that comes from creating something. They give you chances to learn, grow, and improve.
On the surface, I think Johnson has a valid argument. I personally think defensive skill is largely underrated in my model. I see players like Marc-Edouard Vlasic and Niklas Hjalmarsson and Chris Tanev get rated lower than some might suspect. It’s difficult to assess defensive value using the data that is currently available.
At the same time, though, I do think that defensive acumen is a bit overstated and that a lot of what we see there is related to systems and goaltenders. While the model may be off on certain players, I think that as a whole, when all the players are combined into one team rating, it is a pretty accurate assessment of that team’s talent. If it weren’t, I would probably be losing money betting with it, but I am not. It would also struggle to pick which team is more likely to win on most nights, but it doesn’t.
I don’t think it’s the best model in existence (I would give that honour to DTM’s variant of WAR, personally), but I am reasonably confident in it and what it’s shown so far. Maybe it is a little too harsh on Hunwick, Smith, and Polak, all rated as below replacement level. Maybe it is too kind to Marincin, Holland, and Corrado, all rated as bottom-line or bottom-pairing guys. But based on the separation between the two groups, I’m much more confident that one group is better than the other. And no, that’s not absolute. I’m willing to keep an open mind that these players are better than my model says they are, and perhaps better data will one day show that. But I don’t think on-ice goal statistics are the answer.
You may be 100 percent certain that my model isn’t perfect, and in fact, I would 100 percent agree with you. But I am 100 percent certain it is a more accurate representation of player quality than goal differential.