Pac12 Pass Rating and W/L Sets

B1G Pass Rating in W/L sets



Figured I would build a similar chart for the Pac12 from 2016. It includes all conference matches, not just the big ones. As with the Big Ten numbers, there are certainly some teams where pass rating doesn’t necessarily differentiate won and lost sets. Including all sets, however, may introduce more noise than we’d like, as a good team may play down to the level of its opponent and still win the set.



Here’s the same descending correlation data from the Pac as well. Washington State clearly takes the cake, while both UCs seem to be unaffected by their ability to pass – interesting.



I’ve included the full Big Ten data – with all conference matches included as well. Overall there’s nothing super shocking, though I did have a chuckle at the Rutgers spike. You’re welcome to deduce why their curve has such slim deviation.

Here again is the correlation data for the Big Ten when you include everything.



Big Ten Pass Rating in W/L sets


I’m not a huge fan of using pass ratings – I don’t believe they accurately value different reception qualities. That being said, every single person ever uses pass ratings, so I decided to dive into it a little. In the above chart, pass ratings are valued on a 3-point scale (R# = 3, R+ = 2.5, R! = 2, R- = 1, R/ = 0.5, R= = 0). Set data was only collected from the top 9 Big Ten teams in 2016 in matches in which they played one another (“big matches”). The average pass rating for each team in each set they played is compiled in these distributions – with WON sets in teal and LOST sets in red.
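For concreteness, here’s a tiny sketch of that 3-point scale as code. This is illustrative Python (my actual work is in R), not the real pipeline – the reception codes follow the usual DataVolley-style notation used above.

```python
# Weights for each reception quality on the 3-point scale described above
PASS_VALUES = {"R#": 3.0, "R+": 2.5, "R!": 2.0, "R-": 1.0, "R/": 0.5, "R=": 0.0}

def set_pass_rating(receptions):
    """Average pass rating for a list of reception codes from one set."""
    if not receptions:
        return None
    return sum(PASS_VALUES[code] for code in receptions) / len(receptions)

# Example: four receptions in a set
print(set_pass_rating(["R#", "R+", "R-", "R="]))  # (3 + 2.5 + 1 + 0) / 4 = 1.625
```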

What you’ll notice is that for some teams in these big matches, their pass rating in the set really has no bearing on winning the set. Wisconsin is a pretty good example of this as their distributions basically overlap one another completely. To be fair, this may be an anomaly due to Carlini’s ability to turn 1’s into 2’s and 2’s into 3’s (by the transitive Carlini property, 1 passes equal 3 passes, boom!).

On a similar note, Michigan and Michigan State suffer from this same phenomenon in that if you handed Rosen or George their team’s pass rating for any given set, they would essentially be guessing whether they won or lost the set. On the other hand, if you looked at Minnesota or Nebraska, you’d have a much better chance of guessing correctly, given the pass rating in the set.

Above are the descending correlations between set pass rating and sets won/lost. Again, these are only from “big matches,” which may skew the results – yet at the end of the day, Hugh/John/Russ etc. are game-planning to beat the best. What you’ll see is that for some teams, pass rating is relevant to winning and losing sets. For others, there’s likely something else driving winning. My goal is to continue to poke around to find the unique fingerprint for each team and what drives or hinders their success.
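The correlation step itself is just a Pearson correlation between each team’s set pass ratings and a 1/0 won-lost indicator (a point-biserial correlation). A hedged Python sketch with invented numbers, not real team data:

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation; with a binary y this is the point-biserial r."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical team: pass rating per set, and whether the set was won
ratings = [2.1, 2.4, 1.8, 2.6, 1.9, 2.3]
won     = [1,   1,   0,   1,   0,   1]
print(round(pearson(ratings, won), 3))  # ≈ 0.844 for this toy data
```

A team like Washington State would show a large positive r; a team whose won/lost distributions overlap completely would sit near zero.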


Receive Eff in Big Ten


Similar idea – just messing around with the ggjoy package in R.

What you see above is the receive efficiency (basically passer ratings valued by Big Ten standards for FBSO eff). I filtered out a bunch of names that failed to consistently post high values – as well as those sets in which the passer received fewer than 4 serves. Players are ordered from top to bottom by increasing average receive eff overall (yes, I did this backwards – Kelsey Wicinski is our top performer here).

Similar to PS% performances, you’ll notice that the better passers (towards the bottom) have shorter and shorter tails off to the left indicating fewer sets of poor passing (as you’d expect). Nothing crazy to report here, just cool to visualize in this fashion.
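The filtering and ordering behind the ridgeline plot can be sketched like this – Python standing in for the R/ggjoy workflow, with invented passer names and numbers:

```python
from collections import defaultdict
from statistics import mean

# Per-set records: (player, receptions in set, receive eff in set)
sets_data = [
    ("Passer A", 6, 0.25), ("Passer A", 3, 0.40),   # the 3-reception set gets dropped
    ("Passer B", 8, 0.10), ("Passer B", 5, 0.30),
]

# Keep only sets where the passer received at least 4 serves
kept = [(p, eff) for p, n, eff in sets_data if n >= 4]

# Order players by their overall average receive eff (ascending, as in the viz)
by_player = defaultdict(list)
for p, eff in kept:
    by_player[p].append(eff)
order = sorted(by_player, key=lambda p: mean(by_player[p]))
print(order)  # Passer B (0.20 avg) comes before Passer A (0.25 avg)
```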

Final Four Passer Trends

So here are the Final Four teams from this past season. What you see are their primary passers and their “expected FBSO eff” based on how many points their team currently has. These expected FBSO numbers are built by combining Big 10 and Pac 12 conference and preseason/some postseason matches – meaning that I looked at the FBSO eff of ALL teams in the dataset on each type of reception rating (R#, R+, R!, R-, R/, R=). What this does not show is how well these four teams actually did in FBSO during the season – this is just what our expectations were, per passer. This is listed below:

[Table: expected FBSO eff by reception quality]

In the Final Four viz, the numbers above each bar are the number of receptions that occurred for that passer within that range of scores. So if you look at Cat McCoy in the Texas chart, she passed 22 times when Texas had 20-24 points. Again, I only have seven Texas matches since I don’t have access to the Big 12, so it is what it is.
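The “expected FBSO eff” construction reduces to: map each reception to the league-average FBSO eff for its quality, then average within a score bucket. A minimal Python sketch – the league values here are placeholders, not the actual figures from the table above:

```python
# Assumed league-average FBSO eff per reception quality (placeholder values)
EXPECTED_FBSO = {"R#": 0.35, "R+": 0.30, "R!": 0.25, "R-": 0.10, "R/": 0.05, "R=": -1.00}

def expected_fbso_by_bucket(receptions):
    """receptions: list of (team_score, reception_code).
    Returns {score_bucket: (n_receptions, expected FBSO eff)}."""
    buckets = {}
    for score, code in receptions:
        lo = (score // 5) * 5                      # 0-4, 5-9, ..., 20-24
        buckets.setdefault(f"{lo}-{lo + 4}", []).append(EXPECTED_FBSO[code])
    return {b: (len(v), round(sum(v) / len(v), 3)) for b, v in buckets.items()}

print(expected_fbso_by_bucket([(21, "R#"), (22, "R="), (3, "R+")]))
```

Each bar in the viz is one of these buckets; the count above the bar is the first element of the tuple.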

What this type of analysis might spark is the idea of when to serve specific passers. As you can see from these four teams, some players like Goehner get better as Minnesota gets closer to 25, while some players like Wilhite and Albrecht decline as their teams approach 25. Others peak in the middle of sets, like JWO and Micaya White – while others, like Rounsaville from Texas, perform best at the beginnings and ends of sets.

And then there’s Morgan Hentz. Dude. What?! This kid passes for a .220 FBSO Eff – AT ALL TIMES. She’s a true freshman who is apparently always really good, regardless of where in the set Stanford currently is. Unreal.

A fair criticism of this analysis is that these FBSO efficiencies were calculated with league data and don’t really reflect the appropriate value of each pass for each specific team. Minnesota and Texas don’t hit the same on poor passes, therefore we shouldn’t reward their passers equally. Fair point. Another good point would be that since we’re using the entire seasons for 3 of the 4 teams, if Minnesota is constantly blowing out teams, their passers may be in that 20-24 range while the opponent is only at 10 or 15 points, possibly reducing the pressure on Minnesota’s passers. This argument maybe sticks better for Texas, who is likely to blow out Big 12 teams more often than Stanford might blow out Pac 12 teams, but the idea is the same – the logic isn’t 100% bulletproof.

With all this being said, this is still an interesting way to target passers as the match evolves. You could definitely add score differentials to drill deeper into how passers perform when winning/losing big vs. small or whatever. But for now, maybe you want to attack Sarah Wilhite at the ends of matches rather than Rosado or Goehner. And stop serving Morgan Hentz in general!!

Top Passers in the Big Ten


Here’s the same type of viz I made earlier for servers.

I was just curious who the best passers were, relative to the league-average FBSO value provided by each reception quality. What you may immediately notice is that Nebraska has 3 of the top 4 passers in Albrecht, JWO, and Rolfzen. So for those who unabashedly declare Kelly Hunter the best setter in the Big Ten, just realize she has great raw materials to work with.

Another insight that isn’t super insightful: getting aced correlates strongly (and negatively) with your overall effectiveness as a passer, yet anywhere from 1 to 5% reception error can still leave you in the upper echelon of passers in the Big Ten. Reception error has the strongest relationship to overall effectiveness of all the pass qualities, which isn’t surprising: relative to the average FBSO eff, getting aced is the farthest from the mean. Even if you were expected to hit zero in FBSO, a perfect pass only raises you about .300 points above that – but getting aced drops you more than 3x that distance, all the way to -1.00.

And this is where coaches lose valuable insight. On a 3-point passing scale, the distances from 3 to 2, 2 to 1, and 1 to 0 appear equal, but in reality a 3 and a 2 are very similar in terms of the value they provide your offense. On the other side, 1 and 0 point passes seem similar, but even on a 1-point pass you’re still getting a swing and hitting above .000, whereas on a 0 pass you lose the point outright, every time. This is why we use FBSO eff and break down passers in this fashion.
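To put that nonlinear spacing in numbers, here’s a tiny Python sketch. The FBSO values per 3-point grade are assumed placeholders in the spirit of the league averages, not the actual figures:

```python
# Assumed FBSO eff behind each grade on the 3-point passing scale
fbso_by_grade = {3: 0.330, 2: 0.260, 1: 0.050, 0: -1.000}

# Each step looks like "one point" on the 3-point scale, but the value lost differs wildly
steps = {f"{a}->{b}": round(fbso_by_grade[a] - fbso_by_grade[b], 3)
         for a, b in [(3, 2), (2, 1), (1, 0)]}
print(steps)  # the 1->0 step dwarfs the 3->2 step
```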

The distinction between passers who exhibit similar effectiveness yet different reception error is of course the breakdown of each quality. Below is this breakdown for the top 5 passers in the Big Ten in 2016.


Because JWO passes so many balls perfectly, she can get away with a slightly higher error rate. Albrecht has the second-lowest perfect-pass rate of the 5…but is at 38% R+. Much like serving, each player has their own unique footprint, and the underlying factor is the value each type of pass brings to the table.

[Table: Big Ten 2016 average FBSO eff by reception quality]

Working from the perfect pass at the top, to the reception error at the bottom, these are the FBSO efficiencies for the Big Ten in 2016 on average.

The idea that is often overlooked by coaches is the insignificant difference between good and great when it comes to serve receive. The league as a whole is hitting .258 on passes with only 2 options. R! boils down to setting the ball from the 10-15 foot line to the Go or the Red. So while coaches may freak out that they’re losing their quick hitter option, it’s really not a huge deal when you glance at the numbers. Of course these relationships between reception qualities are unique to each team and the setter in your arsenal, but this trend is not uncommon.

This is why players who may not pass perfectly, but always get your team a swing, are invaluable. These are the consistently medium players – really hard, dead-medium passers. They don’t pass nails, but they don’t get aced. That’s a big deal – and it’s currently underutilized because it’s undervalued because it’s misunderstood.

For those who are interested, here is how the teams shake out overall in the passing ranks from this season: Team – expected FBSO eff (reception error %)

  • Nebraska – 0.211 (3.56%)
  • Wisconsin – 0.194 (3.41%)
  • Maryland – 0.188 (4.63%)
  • Penn State – 0.186 (4.64%)
  • Michigan – 0.177 (4.68%)
  • Minnesota – 0.170 (5.19%)
  • Michigan State – 0.157 (5.35%)
  • Indiana – 0.156 (5.86%)
  • Ohio State – 0.154 (6.26%)
  • Iowa – 0.150 (5.87%)
  • Illinois – 0.143 (6.50%)
  • Purdue – 0.136 (6.59%)
  • Northwestern – 0.112 (7.38%)
  • Rutgers – 0.072 (10.1%)

Again, these look at each team’s passes in the context of what they are worth to the league as a whole, not just how each team actually hit in FBSO… (bravo, Maryland)

FBSO eff by Player / Serve EndZone


So I was watching the UCLA/BYU match tonight and was frustrated by the number of errors made from the service line by non-jump-servers. If Micha, JT, or Ben Patch want to go back and rip it, I get it – they’re going to miss at a higher rate. But if you’re a Joe Grosh or a Hagen Smith who’s just popping in floaters, you gotta make 99% of those (I think).

This got me thinking about Jake Langlois and his subpar passing over the last 2 nights. Obviously he’s the serving target if you’re UCLA, but are there specific rotations in which to attack him? Are there specific areas in his passing lane that you want to attack? Or are you better off just attacking a specific zone on the floor in each rotation, regardless of the passer (i.e. serving zone 2 to make the setter play the ball over his shoulder, which is of course tougher to set and makes it tougher to glance at the opposing MB)? Since I don’t have MPSF data (looking at you, BK), I decided to run this experiment on Nebraska, arguably one of the top receiving teams in the Big Ten this season.

What you see above is a breakdown of the specific passers and the relative locations they took the ball (left, middle, right, low, overhead). The viz only shows the result that caused the worst FBSO eff for Nebraska per rotation. As you can see, there are different individuals and different positioning that lead to this effect in each rotation. You might not think that JWO would appear on this list at all, but in Ro1 and Ro6, if you can get her to take the pass low, Nebraska struggles.

Now you might be saying: 6 rotations, 5 different ways to take the ball, 3 to 4 possible passers in each rotation – is there enough data to really extrapolate from here? Totally fair. The top viz shows the worst-performing combination among passers who took the ball in the specified position a minimum of 5 times. But as we can see for Malloy in Ro2, Nebraska probably went 0 kills, 2 errors on 5 attempts when she passed a low ball. That’s not a great sample size. Touché, insightful reader, touché.


Here’s what that same viz looks like if you only look at those with at least 10 passes in the specified rotation and positioning. Again, these are the lowest FBSO eff’s per rotation, but this time, the increased minimum helps provide a more useful insight.
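The minimum-attempts filter and “worst combo per rotation” logic looks roughly like this – a Python sketch with invented touch data (the passer names are borrowed from the post only for flavor):

```python
from collections import defaultdict

# Invented data: (rotation, passer, ball position, FBSO outcome on that reception: 1/0/-1)
touches = [
    (1, "JWO", "low", -1), (1, "JWO", "low", 0), (1, "JWO", "low", 0),
    (1, "Foecke", "left", 1), (1, "Foecke", "left", 1), (1, "Foecke", "left", 0),
]

MIN_ATTEMPTS = 3  # raise this to trade coverage for reliability, as in the second viz
groups = defaultdict(list)
for rot, passer, pos, eff in touches:
    groups[(rot, passer, pos)].append(eff)

# Keep the worst-performing (passer, position) combo per rotation, above the minimum
worst_by_rotation = {}
for (rot, passer, pos), effs in groups.items():
    if len(effs) < MIN_ATTEMPTS:
        continue
    avg = round(sum(effs) / len(effs), 3)
    if rot not in worst_by_rotation or avg < worst_by_rotation[rot][1]:
        worst_by_rotation[rot] = ((passer, pos), avg)

print(worst_by_rotation)  # {1: (('JWO', 'low'), -0.333)}
```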

Another way to view serving is to pick zones to attack – for reasons like those stated earlier. Sometimes you go after the pass-hitter, sometimes you try to make it tough on the setter, sometimes you try to “block” the middle’s route to the slide by serving into zone 3 and creating traffic. However, if you look at Nebraska’s FBSO eff by where they receive serve, an interesting trend emerges (the number above the FBSO eff is the zone).


This isn’t the sexiest viz. It’s 3 bars and 3 rotations that happen to be at .000 – yet zone 9 steps forward as the best way to serve at Nebraska in 4 of 6 rotations. For those who aren’t sure, zone 9 is between 1 and 2. This discovery seemingly plays into the “make it tough on the setter” theory of serving. Zone 8 in Ro2 doesn’t make a ton of sense to me seeing how that’s the dead center of the court, but maybe those serves dropped on the middle back passer or something? Zone 5 in Ro5 wouldn’t typically be an odd outcome – attacking the pass-hitter isn’t uncommon, especially in 2 hitter situations. But Nebraska pushes their pass-hitter, Foecke, out of the passing rotation here (or they did in the semis w/ Texas) and it’s actually JWO who passes in zone 5 with Rolfzen in the middle and Albrecht in zone 1. Not sure I can explain why this has the lowest FBSO eff for Nebraska.

The progression from this question is, of course: does the start zone of the serve matter? Serving 5 to 5 is not the same angle for the passer as serving 1 to 5 – and objectively speaking, one is likely better than the other. Unfortunately, I scrap the serve start zone when I parse all the files – so I need to go back through my code and fix all that. That’s why that train of thought wasn’t expanded upon in this post. But I’ll get around to it.

Anyway, I think this type of viewpoint gives rise to an interesting question. Is there a methodology to serving such that you could actually “serve optimally?” Think about it.

Why getting aced hurts more than you think


We’ve used Efficiency Change in the majority of posts to identify where value is added or lost relative to expectations, but here I thought I would try to shave down the complexity of that metric.

The above viz looks at FBSO efficiency – (won minus lost on first ball) / (all receive attempts) – in the left column, the percentage of each receive quality per team in the middle, and the product of the two in the right column, to show how FBSO eff weights the distribution of qualities for each team. Teams are again ordered by Big Ten finish, and these numbers were built solely from conference matches.
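That right-hand column is simple arithmetic: a quality’s contribution to overall FBSO eff is (eff on that quality) × (share of receptions of that quality). A quick Python check using Illinois’ published numbers – note that the rounded inputs give ≈0.075 for R#, slightly off the 0.076 quoted later, presumably because the original used unrounded percentages:

```python
def contribution(eff, pct):
    """Contribution of one reception quality to overall FBSO eff."""
    return round(eff * pct, 3)

# Illinois, per the post: R# on 22.8% of passes at 0.331 eff; R= on 6.5% at -1.00
print(contribution(0.331, 0.228))   # 0.075 — perfect passes
print(contribution(-1.00, 0.065))   # -0.065 — reception errors
```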

Again, there’s a lot of cool stuff to look at within the viz, but I’d like to draw your focus to the righthand column: FBSO Eff * Percentage. This column sheds light on an impact of service aces that you may not have looked at before. As we’ve addressed before, using pass ratings is a terrible way to evaluate passing. In a pass rating, a reception error is a 0. This diminishes its true value and can make passers who get aced often look far better than they are. And thus, FBSO eff is a better way to evaluate passing.

But what you may not have considered is that because getting aced carries an FBSO eff of -1.00, every single time, even a small percentage of aces can drastically impact your team’s receive efficiency. Illinois is unfortunately a good team to examine in this situation. They pass perfectly (R#) 22.8% of the time, which for them leads to an FBSO eff of 0.331 – pretty decent when compared to the likes of Nebraska (0.354). On the other hand, Illinois is the second most frequently aced team in the Big Ten at 6.5%.

So while Illinois passes perfectly 3.5x more frequently than they get aced, these two receive qualities essentially cancel one another out in terms of FBSO. Perfect passes account for 0.076 of Illinois’ overall FBSO eff, while reception errors account for -0.065. That’s the gut-punch of getting aced. Aces carry a heavy potency per occurrence, with an ability to offset the value of much more frequent outcomes.

This is less of an issue if you’re Nebraska/Minnesota/Wisconsin because you spend so much time in good passing situations – and you hit for a great clip in those good situations. But if you’re a middle of the pack Big Ten team, you cannot afford to give up errors in serve receive – because you’re not making up for it in other areas like the top 3 are.

While this is likely just the beginning of the conversation about the value of contacts on this blog, take a second to really internalize the reward and cost of terminal contacts. If you jump to the Intro to Eff Change post, you’ll see how these terminal touches have big impacts, sometimes valued at more than a single point (your middle blocker acing JWO, a setting error on a perfect pass, netting on the block when your opponent is hitting out-of-system sets, etc.). Just more food for thought to nibble on.