2016 NHL Stanley Cup Predictions

For the last 3 seasons, I’ve been putting together statistical power rankings of teams with a focus on tracking the Oilers’ progress, or lack thereof. Then at the end of the season, I have used these rankings to make Stanley Cup playoff predictions. The statistical model began by examining shot attempt (a.k.a. Corsi) differentials, but I have since revised the model to also include goal differentials. The analytics community refers to this model as Weighted Shots.

The basic idea is simple: goals count more than shot attempts. In theory, Weighted Shots account for shot quality on offense and for goaltender performance on defense. In the version of the model I use, goals count for 5 points and shot attempts (which for simplicity I will refer to as shots) count for 1 point. Measuring Weighted Shots at even strength (5v5) is the most important part. Perhaps surprisingly, special teams (power-play and short-handed) count for little, accounting for only about 20% of scoring, although I do apply the model to ranking special teams as well. In my mind, a large difference in special teams between two teams with similar Weighted Shot differentials may help tip the scale, and there are a few matchups this playoffs in which special teams may be a factor.
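For readers who want the arithmetic spelled out, here is a minimal sketch of the calculation. The function names and the sample totals are purely illustrative, not my actual spreadsheet or real team data:

```python
def weighted_shots(goals, shot_attempts, goal_weight=5, shot_weight=1):
    """Weighted Shots: goals count 5 points, shot attempts count 1."""
    return goal_weight * goals + shot_weight * shot_attempts

def weighted_shots_pct(team_goals, team_shots, opp_goals, opp_shots):
    """WghSh%: a team's share of the total weighted shots in its games."""
    team = weighted_shots(team_goals, team_shots)
    opp = weighted_shots(opp_goals, opp_shots)
    return team / (team + opp)

# Hypothetical 5v5 season totals (not real data):
# team = 5*160 + 3500 = 4300, opponents = 5*140 + 3300 = 4000
print(round(weighted_shots_pct(160, 3500, 140, 3300), 4))  # 4300/8300 ≈ 0.5181
```

A WghSh% above 0.500 means a team is generating more weighted shots than it concedes, which is the basis of the rankings below.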

Testing the Weighted Shots Model over 9 Playoff Seasons

To figure out whether my model is useful, I applied it to the last 9 playoff seasons (2006-2015). I used all 82 regular season games in arriving at my 5v5 and special teams rankings. I also included rankings based on the last 25 games of the season because some have claimed that these are useful for making predictions. I have not thoroughly tested this 25-game model, but I do know that last season it was horrible at making predictions. Perhaps when I have more time, I'll revisit the 25-game model for the other 8 seasons. It is important to note that my model does not account for injuries, especially to key players. For instance, Tampa Bay has 2 key players out: their 2nd best defenseman, Anton Stralman, and elite sniper Steven Stamkos. I cannot help but think this will have a huge impact on their playoff performance, especially losing Stralman, whose shot differential relative to team is +5.7 per hour, 2nd on the team (Hedman is first) and 23rd among NHL defensemen. In any case, without further delay, here is how my model performed over 9 playoff seasons.

NHL_Playoff_Predictions_-_2006_to_2015_Summary


As we can see, the accuracy rate varies a lot from season to season, reaching as high as 87% (13 out of 15) and dipping as low as 53% (8 out of 15). The good news is that even 8 correct predictions is better than a coin flip. Even better, over the entire sample of 9 seasons, the accuracy rate is 70%.
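Those percentages are simply correct predictions divided by the 15 series played each playoff season; a quick sketch of the arithmetic:

```python
# Each NHL playoff season has 15 series; accuracy = correct predictions / 15
def accuracy(correct, total_series=15):
    return correct / total_series

print(f"Best seasons:  {accuracy(13):.0%}")  # 13/15 -> 87%
print(f"Worst seasons: {accuracy(8):.0%}")   # 8/15 -> 53%
```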

What happened in those seasons in which only 8 predictions came true? In 2008, you can blame many failed predictions on upsets by the Montreal Canadiens and Pittsburgh Penguins, who were both riding high on hot goaltending, as well as, in Pittsburgh's case, stellar offense from Sidney Crosby and Evgeni Malkin.

In 2011/12, goaltending was also a factor, but for different reasons. On the one hand, Henrik Lundqvist carried the overachieving Rangers to the Eastern Conference finals. On the other hand, the Penguins (ranked 3rd) had Marc-Andre Fleury losing his confidence and playing the worst hockey of his career. The Penguins lost to the Philadelphia Flyers in a memorable high-scoring and fight-filled first-round series.

Then last season (2014/15), Lundqvist once again helped carry the overachieving Rangers to the Conference Finals. Similarly, Carey Price helped the Montreal Canadiens upset the Ottawa Senators in the first round. Thus, despite the predictions, outstanding goaltending can change the outcome of a series. In the end, though, 14 of the 18 Stanley Cup finalists ranked in the top 8. Moreover, every champion ranked in the top 4, except for Pittsburgh (ranked 15th) in 2009. Taking all this into account gives me enough confidence to keep using the model. In closely matched series, though, I think it's important to pay attention to goaltending, as well as injuries and special teams. More on this below.

Next, I provide the overall rankings of the teams using 4 measures. As a reminder, the one I am using is the first green column (i.e., Weighted Shots using all 82 regular season games). The other rankings are there as secondary predictions. As I mentioned above, I would like to test the model on the last 25 games of the season, so I might as well include it below. For special teams, I took the difference between power-play and short-handed Weighted Shots. This ranking, I think, should only be used in a series in which the 5v5 numbers are very close. What counts as close? This season it's easy: the Anaheim Ducks and Nashville Predators have a WghSh% difference of only 0.01%! Then in the 2nd round, assuming the St. Louis Blues and Dallas Stars beat their respective opponents, the difference is only 0.1%.
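As I read it, the special teams number is power-play Weighted Shots generated minus short-handed Weighted Shots conceded. A rough sketch under that assumption (the interpretation and the season totals are mine, purely for illustration):

```python
def weighted_shots(goals, shot_attempts):
    # Goals are weighted 5x relative to shot attempts
    return 5 * goals + shot_attempts

def special_teams_score(pp_goals, pp_attempts, sh_goals_against, sh_attempts_against):
    """Power-play Weighted Shots for, minus short-handed Weighted Shots against.
    Note: this reading of 'the difference between Power Play and Short-Handed
    Weighted Shots' is an assumption, not confirmed by the post."""
    pp = weighted_shots(pp_goals, pp_attempts)
    sh = weighted_shots(sh_goals_against, sh_attempts_against)
    return pp - sh

# Hypothetical season totals for one team
print(special_teams_score(55, 900, 45, 800))  # (275+900) - (225+800) = 150
```

Ranking teams by this score, and comparing ranks only when the 5v5 WghSh% gap is tiny, matches how the tiebreaker is used below.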

2015_16_NHL_Fancy_Stats_Power_Rankings


What is obvious is that, regardless of the ranking system, the top 2 teams are Los Angeles and Pittsburgh. Next, I’ll show my predictions and explain predictions that go against my model. Teams in green are the predicted winners of each series. The value in parentheses is the even-strength Weighted Shot differential.

2015_16_Playoff_Prediction_Bracket
My main upset pick is the Detroit Red Wings over the Tampa Bay Lightning. Although Tampa Bay is ranked 3rd overall, recall that they have injuries to two key players: Stralman and Stamkos. Although I believe the Lightning are a pretty good team even without those two, in a 7-game series I do see Detroit being able to push past them. I don't expect it to be easy, though.

Another “upset” is the Anaheim Ducks over the Nashville Predators. Their respective WghSh% values are practically identical. Anaheim's special teams, which are ranked 1st, are much stronger than Nashville's, which are ranked 14th. Thus, I give the advantage to the Ducks.

The final upset is St. Louis over Dallas (assuming they both make it to the 2nd round). Their respective WghSh% values are also nearly identical. Although the Stars' special teams are slightly stronger (7th vs 11th), I don't think this difference is substantial. More importantly, I think goaltending and defense will be a factor. Dallas is splitting the net between veterans Kari Lehtonen (90.6 save%; all situations) and Antti Niemi (90.5%), and neither has performed like a #1 goaltender. Also, despite Dallas's strong offense (ranked 2nd), their defense is rather porous for a playoff team, ranking 17th. In contrast, St. Louis is solid both offensively and defensively, ranking 7th and 6th, respectively. Although save% tends to vary within a season, St. Louis's save% (93.2%; ranked 4th) exceeds Dallas's (91.8%; ranked 27th) by a large margin, and I think there is more than variance behind this difference. With the Blues having the advantage in defense and goaltending, I favour St. Louis in this series.

Speaking of St. Louis, what of Chicago, the defending Stanley Cup champions? Unfortunately, their WghSh% rank is not even top 10. Last season, they were 2nd and favoured to win the Cup because Los Angeles (ranked 1st) failed to make the playoffs. Will intangibles such as “playoff experience” and “knowing how to win” matter? Maybe. But what the model shows, over 9 seasons of data, is that it sure helps when a team is better at out-shooting its opposition.

For the finals, I have seen a few models predict Pittsburgh over Los Angeles, which I don't think is unreasonable. Pittsburgh has been the hottest team since January, improving from 20th to 2nd in Weighted Shots over the last 40 games. I have not seen such an improvement within a season since I started tracking these metrics. Plus, Pittsburgh is my 2nd favourite team. Then again, Pittsburgh is without their #1 goaltender, Fleury, who is injured (though listed as day-to-day), which could make for a rough road if the Penguins are to go deep into the playoffs.

There you go, folks! My predictions for the 2015/16 Stanley Cup playoffs. Please share your thoughts and predictions.

Walter

Written by: Walter Foddis
  • wfoddis

    In the first round, I was 4/8 in my predictions, whereas my 82-game model was 6/8. That’ll teach me not to “outsmart” my model. The 25-game model predicted 4/8.

    For the second round, my 82-game model predicts the series winners to be Nashville (ranked 7th), Dallas (5th), Pittsburgh (2nd), and Tampa Bay (3rd). The 25-game model, though, predicts St. Louis (ranked 3rd) over Dallas (12th) and San Jose (5th) over Nashville (9th). The other 2 predictions remain the same.

    The Stanley Cup finalists predicted by the 82-game model are Dallas and Pittsburgh with Pittsburgh being the favorite. The 25-game model predicts St. Louis vs. Pittsburgh again with Pittsburgh favored.

    It'll be interesting to see whether the 82-game or the 25-game model is better at predicting the 2nd and 3rd rounds.

    Have to say that San Jose quite impressed me against LA. They didn’t win because of luck. Each team’s 5v5 shooting metrics were very close. Adjusting for score effects, San Jose finished with a stronger weighted shot differential. The teams were even on the PP with 3 goals each and similar shot attempt differentials. As to goaltending, LA’s Jonathan Quick let in 2 more low & 3 more medium danger shots, whereas San Jose’s Martin Jones allowed 2 more high-danger shots. Have to give the advantage to Jones.

  • Walter Foddis

    For the 2nd round, the 25-game model outperformed the 82-game model: 4 for 4 (25-gm) vs 2 for 4 (82-gm).

    In my initial predictions, I chose St. Louis over Dallas (practically a coin flip) because of the Blues' superior goaltending. However, that advantage didn't materialize: Dallas actually had better goaltending, especially from the medium and high danger areas. St. Louis only outscored Dallas by 1 goal at even strength. Were special teams the difference? Yes.

    On the power-play, St. Louis outscored Dallas 5 to 2 and out-chanced them 27 to 13. Both teams had equal power-play time. Taking a look at my special teams rankings during the regular season, you wouldn’t have guessed that. Dallas was ranked 7th and St. Louis was ranked 11th, which for practical purposes is pretty close. Was there any way to figure that out before the series? Not that I know of. Perhaps it is worth studying a bit more.

    At this point, the Stanley Cup finalists predicted by the 82-game model are St. Louis and Pittsburgh with Pittsburgh being the favorite. The 25-game model also predicts St. Louis vs. Pittsburgh, again with Pittsburgh favored. Tampa Bay, though, keeps on surprising. They still don't have Stamkos or Stralman, yet they were able to grab an early lead on Pittsburgh in game 1 and held on without too much push back from Pittsburgh. I wonder if Murray, who has carried the Penguins this far, will be put back in net for game 2.

  • wfoddis

    For the 2nd round, the 25-game model (4 for 4) out-predicted the 82-game model (2 for 4) with Nashville over San Jose & Dallas over St. Louis as mistaken predictions. Both models predicted a Pittsburgh/Tampa Bay Eastern Conference final. As to who would advance to the Stanley Cup finals, both models predicted Pittsburgh & St. Louis in round 3, but St. Louis lost to San Jose. So both were 1 for 2.

    Finally, both models also predicted the Pittsburgh Penguins to win the Stanley Cup, which they did in deserving fashion by easily out-shooting the San Jose Sharks throughout the series and stifling their offense through a team effort. Sullivan had a well-oiled machine working for him, with scoring coming from up and down the line-up. Much to my surprise, and everyone else's, Pittsburgh's rookie goalie, Matt Murray, played outstandingly, which led the Penguins to use Fleury in only one game (after he recovered from his concussion).

    In total then, my 82-game model predicted 6/8 in round 1, 2/4 in round 2, 1/2 in round 3, and 1/1 in the final, for a total of 10/15, which is “average” for this model. The 25-game model predicted 4/8 in round 1, 4/4 in round 2, 1/2 in round 3, and 1/1 in the final, also for a total of 10/15. It's a tie, or is it? It depends on how you look at it. The 82-game model was better at predicting round 1, but the 25-game model was better in round 2. Both models performed the same in the last two rounds. I would give the slight advantage to the 25-game model because of its perfect 2nd-round predictions.

    Still, I’m not ready to give up the 82-game model just yet. I’d have to test the 25-game model across the previous 9 seasons, which I’m sure I’ll get around to just before next year’s playoffs.