Using Marcel Projections for Goalies in the 2016-17 season

Updated: September 14, 2016 at 4:02 pm by FooledbyGrittiness

As most of us already know, goalies are “voodoo”.

This makes projecting how our favorite team’s goalie will do next year (unless that goalie is Henrik Lundqvist) rather difficult. 

To help with that, we can make projections. It’s never easy to project a player’s performance for any upcoming season, but it’s also not a futile exercise. Specifically, I’ll make Marcel projections for goalies.

What’s Marcel?

It’s a forecasting model created by Tom Tango (some examples). He used it for baseball and we’ll adapt it to hockey. Of course, this has been done before for goalies here by Garik16. But while he used regular Sv%, I’ll attempt to project Low/Mid/High Sv% (courtesy of Corsica.Hockey), as that gives us a better look at a goalie’s performance.

So how does it work? 

There are three steps. First, we weight previous years’ performance: a player’s performance last year matters more than his performance two years ago, and so on. Traditionally only the past three years are used; here, following the suggestion of Eric Tulsky (as Garik16 did), I’ll use four. Goalies are hard to predict, so using more data makes sense. Second, we regress towards the mean (how much depends on which statistic we’re using). And lastly, we apply an aging curve.

I’ll only be using even-strength numbers here. Also, as I said before, all data is courtesy of Corsica.Hockey.

Weighting Past Performance

Traditionally the weights are 5/4/3, but we can do a little better. So I took the past 9 years of data (all we have) and recorded every instance where a goalie faced at least 400 total shots in each of the past four years and at least 800 shots in year 5. We can then run a multivariable regression with the past 4 years as the independent variables and year 5 (the year we’re trying to predict) as the dependent variable.
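
For the curious, that regression step looks something like this. This is a minimal sketch with synthetic data, since I’m not reproducing the Corsica dataset here; the “true” weights of .5/.3/.2, the spread of the adjusted rates, and the sample size are all made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the real dataset: each row holds a goalie's
# league-adjusted Sv% in years n-1, n-2, n-3, plus a year-n value that
# depends on them with known weights (.5/.3/.2) and some noise.
true_w = np.array([0.5, 0.3, 0.2])
past = rng.normal(1.0, 0.05, size=(2000, 3))          # years n-1, n-2, n-3
year_n = past @ true_w + rng.normal(0, 0.005, size=2000)

# Least-squares fit of year n on the three past years; the recovered
# coefficients are the season weights we're after.
coef, *_ = np.linalg.lstsq(past, year_n, rcond=None)
print(coef.round(3))
```

With the real data, coefficients like these are what get re-scaled into round-number weights.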

But we run into a problem (as some of you might have guessed): sample size. Because we only have nine years of danger-zone data, and because a goalie needs to have played in 5 straight years, we start running out of goalies, and therefore the weights (well, really only one of them, as we’ll see soon) don’t make a lot of sense. I tried tweaking the sample restrictions, but the weights still end up moving all over the place.

To account for this, I ran the numbers for goalies who faced at least 400 total shots in each of the last three seasons (not four). This gives us a larger sample and therefore a better chance of getting accurate numbers (thankfully, it worked). We’ll then take an educated guess at the weight for year four; it’s the best we can do here. So I did that, and here are the numbers it spits out for High-Danger Sv% (all numbers here are adjusted for that year’s league average):

Year n-1: .129

Year n-2: .1058

Year n-3: .0363

Re-weighting that, we get 10/8/3. A bit more aggressive than 5/4/3, but it seems fine to me. But how much should year 4 be worth? I don’t know. I’ll give it a weight of 2, which seems reasonable, and honestly, it won’t make much of a difference if the weights slightly differ. Either way, these weights indicate that the past two years of a player’s performance matter a lot for projecting next year’s HSv%.

The last step here is Mid and Low Sv%. This may not come as much of a surprise to some of you, but they don’t get weighted at all: each season counts the same. This is because Low/Mid Sv% contain a lot less skill than High Sv%, making one year of data a really bad judge of talent.

So here are the final weights:

Year    LSv%    MSv%    HSv%
n-1       1       1      10
n-2       1       1       8
n-3       1       1       3
n-4       1       1       2
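
Putting those weights to work looks something like this. A quick sketch (the goalie’s rates and shot counts are hypothetical), assuming the standard Marcel convention that each season counts in proportion to shots faced times the season weight:

```python
# Weighted average of a goalie's league-adjusted HSv% over four seasons,
# using the 10/8/3/2 weights from the table above. Each season is weighted
# by shots faced times the season weight, so a 20-shot season can't swing
# the result.
def weighted_rate(rates, shots, weights=(10, 8, 3, 2)):
    """rates/shots are ordered most recent first (n-1, n-2, n-3, n-4)."""
    num = sum(r * s * w for r, s, w in zip(rates, shots, weights))
    den = sum(s * w for s, w in zip(shots, weights))
    return num / den

# Hypothetical goalie: HSv% and high-danger shots faced, most recent first
print(round(weighted_rate([.82, .80, .84, .79], [300, 250, 280, 310]), 4))
```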

Regressing 

When we look at any metric, it’s important to remember that we’re looking at a combination of a few forces. The numbers we see are just the observed data, which is made up of talent, randomness, and so on. Our job is to isolate the talent component. There are multiple ways to do this, but I prefer the following method (Note: if you aren’t interested in the math, you can skip to the end of this section).

I took all goalies with at least 1000 total shots (cumulatively) from 2007 until now. I then calculated the standard deviation for each of Low/Mid/High Sv%:

Type    SD
Low     .0049
Mid     .0101
High    .0227

I’ll then use a technique commonly used by Tom Tango (the same guy who created Marcel). He notes that:

Observed variance = Talent variance + Luck variance

Of course, more things affect the observed distribution than just talent and luck. Another is team effects, which I wrote about here (Note: there are technically more effects, but this should do). So now we have:

Observed = Talent + Luck + Team Effects

Our job now is to solve for talent using the observed variance, luck, and team effects. To do so, we need to compute the latter two. Let’s first look at “luck”: the spread we would expect if no skill existed among goalies (i.e., all goalies have the same talent level). We therefore have to approximate the spread in results we’d expect among our population of players if Sv% were random. Since we are dealing with a binomial, the standard deviation should equal sqrt(p*q/N), where p is the league-average probability of a save, q is the league-average probability of a goal (or 1-p), and N is the number of shots. But what number should N be?

Since we are dealing with a bunch of different players with varying numbers of shots, it isn’t so clear cut. An easy guess would be the straight average among all players in the sample (and this will often be fine), but I tested it out and found that the harmonic mean was a better fit (I came across it here, and after running a simulation found it worked better). So to calculate N, I take the reciprocal of every player’s shot total (for Low, Mid, or High shots), average those, and then take the reciprocal of that average. Here are those numbers:
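
That calculation fits in a couple of lines. A sketch using the article’s HSv% figures (league average .8036, N = 580):

```python
import math

# Harmonic mean of the players' shot totals: reciprocal of the average
# of the reciprocals.
def harmonic_mean(shot_totals):
    return len(shot_totals) / sum(1 / s for s in shot_totals)

# "Luck" spread if every goalie had identical talent: binomial SD
# sqrt(p*q/N), with N set to the harmonic mean of shot totals.
def luck_sd(p, n):
    return math.sqrt(p * (1 - p) / n)

print(round(luck_sd(.8036, 580), 4))  # .0165
```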

              LSv%      MSv%     HSv%
Avg. Sv%      .978      .9254    .8036
N           1135.6      918.6    580
SD           .0043      .0086    .0165

I then estimated the spread in team effects in a similar fashion to the article I linked a few paragraphs back. It only really matters for HSv% (for Low and Mid it’s effectively 0), and even there it’s not a big deal. But here’s the SD in team effects for HSv%: .0036.

Using the observed spread and the spreads in luck and team effects, we can solve for the spread in talent. For example, for HSv%:

.0227^2 = Talent^2 + .0165^2 + .0036^2

Solving each zone gives:

Type    SD Talent
Low     .0024
Mid     .0053
High    .015

We then set talent equal to luck and solve for N (how many shots do we need before the luck spread shrinks to the talent spread?). That gives the following number of shots for each danger zone (rounded off):

Low= 3700

Mid= 2400

High= 685
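
The whole chain (solve for the talent spread, then for the constant) fits in a few lines. The small differences from the rounded numbers above come from rounding the SDs along the way:

```python
# Solve Observed^2 = Talent^2 + Luck^2 + Team^2 for the talent variance,
# then find the shot count c at which the luck SD equals the talent SD:
# sqrt(p*q/c) = talent_sd  =>  c = p*q / talent_sd^2.
def regression_constant(p, obs_sd, luck_sd, team_sd=0.0):
    talent_var = obs_sd**2 - luck_sd**2 - team_sd**2
    return p * (1 - p) / talent_var

# Inputs from the tables above; team effects only matter for HSv%.
zones = {
    'Low':  (.978,  .0049, .0043),
    'Mid':  (.9254, .0101, .0086),
    'High': (.8036, .0227, .0165, .0036),
}
for zone, args in zones.items():
    print(zone, round(regression_constant(*args)))
```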

What this means is that we need to add that many shots to each player’s total to regress his numbers. Or we can think of it as calculating a specific r (correlation coefficient) for each player and then figuring out how much to regress from there (regression amount = 1-r). For example, let’s say a player has 500 high-danger shots. We can then use the equation a/(a+c), where a is the number of player shots and c is the constant. So in this example we have 500/(500+685), giving r = .422 and (1-r) = .578. So we regress that player’s numbers 57.8% towards the mean.

Lastly, for Low and Mid shots, since the weights for each season are equal, the a in the equation a/(a+c) is just the sum of shots faced over the past four years. But for HSv% we know that year n-1 matters more than n-2, and so on: a shot faced a year ago matters more than one faced two years ago. So we need to weight high-danger shots faced. That’s easy: all we have to do is divide each weight by 10 and then multiply shots faced by the new corresponding weights (1/.8/.3/.2) and add them up. So if a player faced 300 shots last year and 200 the year before, we do: (300*1) + (200*.8) = 460 equivalent shots.
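
A sketch of that equivalent-shots and regression arithmetic, using the 685 HSv% constant from above:

```python
# Equivalent shots for HSv%: shots faced in each past season (most recent
# first) scaled by the normalized weights 1/.8/.3/.2.
def equivalent_shots(shots, weights=(1, .8, .3, .2)):
    return sum(s * w for s, w in zip(shots, weights))

# Reliability r = a/(a+c); regress (1 - r) of the way to the mean.
def reliability(a, c=685):
    return a / (a + c)

# The article's example: 300 high-danger shots last year, 200 the year before
a = equivalent_shots([300, 200])
print(round(a, 1))               # 460.0
print(round(1 - reliability(a), 3))  # fraction regressed to the mean
```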

Aging Curve

I’ll try to keep this section short. I looked at aging recently here, but because we only have danger-zone data for the past nine years, we can’t construct an aging curve with it; there isn’t enough data. So instead I built one for all-situations Sv% (using data from 1990 until now), and from there we can guess at the appropriate numbers for Low/Mid/High Sv%. That’s really the best we can do. I played around with the numbers and settled (arbitrarily, of course) on dividing the weights by 4 for LSv%, keeping them the same for MSv%, and multiplying by 2 for HSv%. High danger contains most of the skill, so we want most of the aging to be there; low danger has the least, so the aging there should be small; and mid danger is somewhere in the middle, so I gave it the weights I originally found. Here are the numbers I settled upon for each:

Age    Change-Low    Change-Mid    Change-High
21      .000075        .0003         .0006
22      .000025        .0001         .0002
23     -.000025       -.0001        -.0002
24     -.000075       -.0003        -.0006
25     -.000125       -.0005        -.001
26     -.000175       -.0007        -.0014
27     -.000225       -.0009        -.0018
28     -.000275       -.0011        -.0022
29     -.000325       -.0013        -.0026
30     -.000375       -.0015        -.003
31     -.000425       -.0017        -.0034
32     -.000475       -.0019        -.0038
33     -.000525       -.0021        -.0042
34     -.000575       -.0023        -.0046
35     -.000625       -.0025        -.005
36     -.000675       -.0027        -.0054
37     -.000725       -.0029        -.0058
38     -.000775       -.0031        -.0062
39     -.000825       -.0033        -.0066
40     -.000875       -.0035        -.007

What those numbers represent is how much a player is expected to increase or decrease relative to league average from last year. For example, say a player’s league-adjusted HSv% (player HSv% divided by league HSv%) is 1.02, and he’s turning 31 next year. According to the chart, we expect him to drop by .0034, so 1.02 - .0034 = 1.0166. If the league average HSv% is .81, we then just do .81 * 1.0166 = .8234.
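
That worked example in code:

```python
# Apply the aging adjustment from the table above: shift the player's
# league-adjusted rate by the age change, then convert back to a raw Sv%
# by multiplying by the league average.
def apply_aging(adj_rate, age_change, league_avg):
    return (adj_rate + age_change) * league_avg

# Age-31 HSv% change is -.0034; league average HSv% of .81, as in the text
print(round(apply_aging(1.02, -0.0034, 0.81), 4))  # .8234
```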

Conclusion 

Ok, so that’s really it. Here is a Google doc with the full projections. Everyone who faced at least one (even-strength) shot on goal last year is included. You may also notice that I added a fourth category to those numbers: Adj. Sv%. It’s calculated the same way War on Ice used to do it: the weighted average of Low, Mid, and High Sv%, each weighted by its frequency. This is what we would expect a player’s Sv% to be if he faced a league-average shot distribution.

But before I finish, I need to note a few things. First, by the nature of the model, the less a player has played, the closer he is to average; in the absence of any data, my best guess is league average. Ideally I’d regress towards specific priors, but this is meant to be a simple model. Second, HSv% is the most important component, because it’s where most of the spread in skill is and because it’s the fastest to reach a signal. Third, the model assumes the league Sv% in each zone next year will be the same as last year. The overall Sv% last year was .9249, and by danger zone .979/.9251/.8133. So keep those in mind when looking at the projections.

Lastly, some projections will look weird. For example, Joonas Korpisalo is projected to have a .9269 Adj. Sv% even though he has only played 31 games in his career (all last year). Why? Well, since he has barely played, his LSv% and MSv% are very close to league average. But he posted a pretty good HSv% at .8434, and because HSv% accumulates a signal the fastest, he still ends up a bit above average after regression. Also, since he’s 22, we still expect him to improve a little. Add all this up and you get his numbers.