Question from VolleyTalk

So I got a message a couple months ago and 100% failed to respond, so I’m trying to do it some justice now. The question is about FBSO Efficiency, Passer Ratings, and Winning.

*Because the question was in reference to passing, I eliminated missed serves from the analysis.


So here are the 2017 numbers for the Big10 and Pac12 (only looking at matches in which both teams were from one of those two conferences). They’re sorted by their opponents’ average passer rating. So you’ll notice Maryland, Northwestern, and Indiana all get their opponents in a little trouble, but because these teams struggle to capitalize on these advantageous situations, their opponents still FBSO at a pretty high efficiency, dropping them deep into the red on the right side chart.

UCLA is on the other side of the coin – they don’t serve particularly tough, but their block/defense prowess holds opponents to a pretty decent 0.136 FBSO Eff. Nebraska and Stanford both serve tough, but also slow opponents with large blocks and scrappy defense, allowing them to rise into the top 2 of both statistics.

I would partially disagree with our VT friend that there is bias in the evaluation of passer ratings. Typically yes, if you were to compare between multiple coaches who are charting the same 20 passers, you’d likely end up with some different numbers – but because I’m using the VolleyMetrics codes for all these matches, we can assume a decent level of consistency.

The inherent reason to dislike passer rating is that, as previously stated in many many posts here, the change in how likely your team is to win the point as you move from a 3 pass to a 2 pass to a 1 pass is not equidistant. Let’s just take FBSO. You might kill the ball at a 50% rate off a 3 pass, but only 38% off a 2 pass, and only 10% off a 1 pass. If we assume the 3 to 2 to 1 relationship is linear, we reward sporadic passers who may average a 2.0, but with passes of 3, 1, 3, 1, 3, 1, 3, 1. We’d much prefer a passer who always just passes a dead 2. Never 3s. Never 1s. Once you look at the value of each pass in terms of winning the point, the equation becomes much clearer.
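To make the sporadic-vs-steady point concrete, here’s a quick Python sketch. The 50/38/10 kill rates are the hypothetical numbers from above, not measured values:

```python
# Hypothetical FBSO kill rates by pass quality (from the example above)
kill_rate = {3: 0.50, 2: 0.38, 1: 0.10}

def expected_kill(passes):
    """Average FBSO kill rate across a sequence of pass qualities."""
    return sum(kill_rate[p] for p in passes) / len(passes)

sporadic = [3, 1, 3, 1, 3, 1, 3, 1]   # averages a 2.0 passer rating
steady   = [2] * 8                    # also averages a 2.0 passer rating

print(round(expected_kill(sporadic), 2))  # 0.3
print(round(expected_kill(steady), 2))    # 0.38
```

Same 2.0 passer rating, but the dead-2 passer is worth 8 more kill percentage points in FBSO terms.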

Just a quick rundown on VolleyMetrics codes: R# is a perfect pass, R+ is very good (within the 10 foot line), R! is a 2 pass, R- is a 1 pass, R/ means you didn’t get a swing (either you freeballed it or overpassed it), and R= means you got aced.

VT_DM (men)

Here’s the equivalent chart for the 2018 men (using only matches when two teams in this chart played each other).

Very similar to UCLA on the women’s side, Long Beach isn’t even the toughest serving team in terms of getting opposing passers in trouble, yet they easily limit opponents to the lowest FBSO Efficiency out of the Top 10 teams – likely due to their block & defense.

If you were to run correlations on a per-set and per-match basis, you’d see strikingly similar results for both the 2017 women and 2018 men. The correlation between FBSO Eff in your team’s serve receive and your team’s likelihood of winning the set is 0.49. This rises to 0.59 when you look at the likelihood of winning the match.
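For anyone who wants to reproduce this kind of number: correlating a continuous stat with a 0/1 win outcome is just a Pearson (point-biserial) correlation. A minimal Python sketch with made-up per-set data:

```python
from statistics import mean

def pearson(xs, ys):
    """Plain Pearson correlation; with a 0/1 variable this is point-biserial."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Toy per-set data (invented numbers): serve-receive FBSO eff, and set won (1) / lost (0)
fbso_eff = [0.31, 0.12, 0.25, 0.05, 0.28, 0.18, 0.33, 0.09]
won_set  = [1, 0, 1, 0, 1, 0, 1, 0]

r = pearson(fbso_eff, won_set)
print(round(r, 2))
```

With real data you’d have one row per set (or per match) across the whole league, not eight hand-picked sets, so expect something much closer to the 0.49/0.59 figures above.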

While you may think the goal numbers are inherently different between the genders, for women, the top 5 teams in terms of FBSO Eff were Nebraska (0.274), Stanford (0.266), Penn State (0.265), Minnesota (0.255), and Wisconsin (0.238) – and on the men’s side it was Long Beach (0.278), Ohio State (0.257), UCLA (0.248), BYU (0.242), and Loyola (0.238). Pretty consistent between the two.


Pac12 Pass Rating and W/L Sets

B1G Pass Rating in W/L sets



Figured I would build a similar chart for the Pac12 from 2016. It includes all conference matches, not just the big ones. As with the Big 10 numbers, there are certainly some teams where pass rating doesn’t necessarily differentiate won/lost sets. By including all sets, however, we may introduce more noise than we’d like – as a good team may lower their performance to the level of their opponent and still win the set.



Here’s the same descending correlation data from the Pac as well. Washington State clearly takes the cake – while, interestingly, both UCs seem to be unaffected by their ability to pass.



I’ve included the full Big Ten data – with all conference matches included as well. Overall there’s nothing super shocking, though I did have a chuckle at the Rutgers spike. You’re welcome to deduce why their curve has such slim deviation.

Here again is the correlation data for the Big Ten when you include everything.


Big Ten Pass Rating in W/L sets


I’m not a huge fan of using pass ratings – I don’t believe they accurately value different reception qualities. That being said, every single person ever uses pass ratings, so I decided to dive into it a little. In the above chart, pass ratings are valued on a 3 point scale (R# = 3, R+ = 2.5, R! = 2, R- = 1, R/ = 0.5, R= = 0). Set data was only collected from the top 9 Big Ten teams in 2016 in matches in which they played one another (“big matches”). The average pass rating for the team in each set they played is compiled in these distributions – with WON sets in teal and LOST sets in red.
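As a sanity check on the arithmetic, here’s that 3 point scale as a tiny Python helper (the example receptions are made up):

```python
# The 3 point scale for VolleyMetrics reception codes described above
RATING = {"R#": 3.0, "R+": 2.5, "R!": 2.0, "R-": 1.0, "R/": 0.5, "R=": 0.0}

def set_pass_rating(receptions):
    """Mean pass rating across a list of reception codes from one set."""
    return sum(RATING[c] for c in receptions) / len(receptions)

# e.g. a set with two perfect passes, a 2 pass, a 1 pass, and an ace
print(set_pass_rating(["R#", "R#", "R!", "R-", "R="]))  # 1.8
```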

What you’ll notice is that for some teams in these big matches, their pass rating in the set really has no bearing on winning the set. Wisconsin is a pretty good example of this as their distributions basically overlap one another completely. To be fair, this may be an anomaly due to Carlini’s ability to turn 1’s into 2’s and 2’s into 3’s (by the transitive Carlini property, 1 passes equal 3 passes, boom!).

On a similar note, Michigan and Michigan State suffer from this same phenomenon in that if you handed Rosen or George their team’s pass rating for any given set, they would essentially be guessing whether they won or lost the set. On the other hand, if you looked at Minnesota or Nebraska, you’d have a much better chance of guessing correctly, given the pass rating in the set.

Above are the descending correlations between set pass rating and sets won/lost. Again, these are only from “big matches,” which may skew the results – yet at the end of the day, Hugh/John/Russ etc. are game-planning to beat the best. What you’ll see is that for some teams, the statistic of pass rating is relevant to winning and losing sets. For others, there’s likely something else driving winning. My goal is to continue to poke around to find the unique fingerprint for each team and what drives or hinders their success.


Receive Eff in Big Ten


Similar idea – just messing around with the ggjoy package in R.

What you see above is the receive efficiency (basically passer ratings valued by Big Ten standards for FBSO eff). I filtered out a bunch of names that failed to consistently post high values – as well as those sets in which the passer received fewer than 4 serves. Players are ordered from top to bottom by increasing average receive eff overall (yes, I did this backwards – Kelsey Wicinski is our top performer here).
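The filtering step is simple enough to sketch. Here’s a pure-Python version with fabricated reception values (the real pipeline presumably lives in R; the names and numbers here are placeholders):

```python
from collections import defaultdict

# Hypothetical reception-level rows: (passer, set_id, FBSO value of the reception)
receptions = [
    ("A", 1, 0.30), ("A", 1, 0.21), ("A", 1, 0.30), ("A", 1, -1.0),
    ("A", 2, 0.30),                                   # only 1 serve received in set 2
    ("B", 1, 0.21), ("B", 1, 0.08), ("B", 1, 0.30),   # only 3 in this set
]

MIN_SERVES = 4  # drop sets in which a passer received fewer than 4 serves

by_passer_set = defaultdict(list)
for passer, set_id, val in receptions:
    by_passer_set[(passer, set_id)].append(val)

# Average receive eff per passer per set, keeping only qualifying sets
receive_eff = {k: sum(v) / len(v)
               for k, v in by_passer_set.items() if len(v) >= MIN_SERVES}
print(receive_eff)  # only ('A', 1) survives the filter
```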

Similar to PS% performances, you’ll notice that the better passers (towards the bottom) have shorter and shorter tails off to the left indicating fewer sets of poor passing (as you’d expect). Nothing crazy to report here, just cool to visualize in this fashion.

Final Four Passer Trends

So here are the Final Four teams from this past season. What you see are their primary passers and their “expected FBSO eff” based on how many points their team currently has. These expected FBSO numbers are built by combining Big 10 and Pac 12 conference and preseason/some postseason matches – meaning that I looked at the FBSO eff of ALL teams in the dataset on each type of reception rating (R#, R+, R!, R-, R/, R=). What this does not show is how well these four teams actually did in FBSO during the season – this is just what our expectations were, per passer. This is listed below:
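Mechanically, each passer’s expected FBSO eff is just the average of the league-wide FBSO values attached to their reception codes. A Python sketch – the league values here are illustrative placeholders, not the actual Big 10 / Pac 12 numbers:

```python
# Hypothetical league-average FBSO eff per reception code (illustrative only)
LEAGUE_FBSO = {"R#": 0.30, "R+": 0.26, "R!": 0.21, "R-": 0.08, "R/": -0.15, "R=": -1.0}

def expected_fbso(receptions):
    """Expected FBSO eff for a passer, given their reception codes."""
    return sum(LEAGUE_FBSO[c] for c in receptions) / len(receptions)

codes = ["R#", "R#", "R+", "R!", "R-", "R#", "R+", "R!"]
print(round(expected_fbso(codes), 3))  # 0.24
```

To get the per-score-bucket bars in the viz, you’d run this same calculation on only the receptions that occurred while the team was within a given point range.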

(Table: league-wide FBSO eff for each reception rating)

In the Final Four viz, the numbers above each bar are the number of receptions that occurred for that passer within that range of scores. So if you look at Cat McCoy in the Texas chart, she passed 22 times when Texas had 20-24 points. Again, I only have seven Texas matches since I don’t have access to the Big 12, so it is what it is.

What this type of analysis might spark is the idea of when to serve specific passers. As you can see from these four teams, some players like Goehner get better as Minnesota gets closer to 25, while some players like Wilhite and Albrecht decline as their teams approach 25. Others, like JWO and Micaya White, peak in the middle of sets – while others, like Rounsaville from Texas, perform best at the beginnings and ends of sets.

And then there’s Morgan Hentz. Dude. What?! This kid passes for a .220 FBSO Eff – AT ALL TIMES. She’s a true freshman who is apparently always really good, regardless of where in the set Stanford currently is. Unreal.

A fair criticism of this analysis is that these FBSO efficiencies were calculated with league data and really don’t reflect the appropriate value of each pass for each specific team. Minnesota and Texas don’t hit the same on poor passes, therefore we shouldn’t reward their passers equally. Fair point. Another good point would be that since we’re using the entire seasons for 3 of the 4 teams, if Minnesota is constantly blowing out teams, their passers may be in that 20-24 range while the opponent is only at 10 or 15 points, possibly reducing the pressure on Minnesota’s passers. This argument maybe sticks better for Texas, who is likely to blow out Big 12 teams more than Stanford might blow out Pac 12 teams, but the idea is the same – the logic isn’t 100% bulletproof.

With all this being said, this is still an interesting way to target passers as the match evolves. You could definitely add score differentials to drill deeper into how passers perform when winning/losing big vs. small or whatever. But for now, maybe you want to attack Sarah Wilhite at the ends of matches rather than Rosado or Goehner. And stop serving Morgan Hentz in general!!

Top Passers in the Big Ten


Here’s the same type of viz I made earlier for servers.

Just because I was curious who the best passers were, relative to the league average for FBSO value provided by each reception quality. What you may immediately notice is that Nebraska has 3 of the top 4 passers in Albrecht, JWO, and Rolfzen. So for those who unabashedly declare Kelly Hunter the best setter in the Big Ten, just realize she’s working with great raw materials.

Another insight that isn’t super insightful is that getting aced strongly correlates with your overall effectiveness as a passer – yet anywhere from 1 to 5% reception error can still land you in the upper echelon of passers in the Big Ten. Reception error has the strongest relationship to overall effectiveness out of all the pass qualities, which isn’t necessarily surprising: relative to the average FBSO eff, getting aced is the farthest from the mean. Even if you were expected to hit zero in FBSO, a perfect pass only raises you about .300 points above that – but getting aced drops you more than 3x that distance, all the way to -1.00.

And this is where coaches lose valuable insight. On a 3 point passing scale, the distances from 3 to 2, 2 to 1, and 1 to 0 appear equal, but in reality a 3 and a 2 are very, very similar in terms of the value they provide your offense. On the other side, 1 and 0 point passes seem similar, but even on a 1 point pass you’re still getting a swing and hitting above .000, whereas on a 0 pass you lose the point every time. This is why we use FBSO eff and break down passers in this fashion.
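The unequal step sizes are easy to demonstrate. Using hypothetical FBSO values consistent with the argument above (3s and 2s close in value; a 0 pass loses the point outright):

```python
# Illustrative FBSO values per pass grade (hypothetical numbers)
fbso_by_grade = {3: 0.30, 2: 0.21, 1: 0.08, 0: -1.00}

# On a 3 point scale the steps 3->2, 2->1, 1->0 all look identical (1.0 each),
# but in FBSO value the steps are wildly uneven:
steps = {f"{a}->{b}": round(fbso_by_grade[a] - fbso_by_grade[b], 2)
         for a, b in [(3, 2), (2, 1), (1, 0)]}
print(steps)  # {'3->2': 0.09, '2->1': 0.13, '1->0': 1.08}
```

The last step is an order of magnitude larger than the first, which is exactly the information a linear 3 point scale throws away.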

The distinction between passers who exhibit similar effectiveness yet different reception error is of course the breakdown of each quality. Below is this breakdown for the top 5 passers in the Big Ten in 2016.


Because JWO passes so many balls perfectly, she can get away with a slightly higher error rate. Albrecht has the second-lowest rate of perfect passes of the 5…but is at 38% R+. Much like serving, each player has their own unique footprint, and the underlying factor is the value each type of pass brings to the table.

(Table: league-average FBSO eff by reception quality, Big Ten 2016)

Working from the perfect pass at the top, to the reception error at the bottom, these are the FBSO efficiencies for the Big Ten in 2016 on average.

The idea that is often overlooked by coaches is the insignificant difference between good and great when it comes to serve receive. The league as a whole is hitting .258 on passes with only 2 options. R! boils down to setting the ball from the 10-15 foot line to the Go or the Red. So while coaches may freak out that they’re losing their quick hitter option, it’s really not a huge deal when you glance at the numbers. Of course these relationships between reception qualities are unique to each team and the setter in your arsenal, but this trend is not uncommon.

This is why players who may not pass perfectly, but always get your team a swing, are invaluable. These are the consistently medium players – the really hard, dead-medium passers. They don’t pass nails, but they don’t get aced. That’s a big deal – and it’s currently underutilized because it’s undervalued because it’s misunderstood.

For those who are interested, here are how the teams overall shake out in the passing ranks from this season: Team – expected FBSO eff (rec. error%)

  • Nebraska – 0.211 (3.56%)
  • Wisconsin – 0.194 (3.41%)
  • Maryland – 0.188 (4.63%)
  • Penn State – 0.186 (4.64%)
  • Michigan – 0.177 (4.68%)
  • Minnesota – 0.170 (5.19%)
  • Michigan State – 0.157 (5.35%)
  • Indiana – 0.156 (5.86%)
  • Ohio State – 0.154 (6.26%)
  • Iowa – 0.150 (5.87%)
  • Illinois – 0.143 (6.50%)
  • Purdue – 0.136 (6.59%)
  • Northwestern – 0.112 (7.38%)
  • Rutgers – 0.072 (10.1%)

Again these are looking at each team’s passes in the context of what they are worth to the league as a whole, not just how each team hit in FBSO… (bravo, Maryland)

FBSO eff by Player / Serve EndZone


So I was watching the UCLA / BYU match tonight and was frustrated by the number of errors made from the service line by non-jumpservers. If Micha, JT, or Ben Patch want to go back and rip it, I get it, they’re going to miss at a higher rate. But if you’re a Joe Grosh or Hagen Smith who’s just popping in floaters, you gotta make 99% of those (I think).

This got me thinking about Jake Langlois and his subpar passing over the last 2 nights. Obviously he’s the serving target if you’re UCLA, but are there specific rotations to attack him in? Are there specific areas in his passing lane that you want to attack? Or are you better off just attacking a specific zone on the floor in each rotation, regardless of the passer (i.e. serving zone 2 to make the setter play the ball over his shoulder which is of course tougher to set and tougher to glance at the opposing MB). Since I don’t have MPSF data (looking at you BK), I decided to run this experiment on Nebraska, arguably one of the top receiving teams in the Big Ten this season.

What you see above is a breakdown of the specific passers and the relative locations they took the ball (left, middle, right, low, overhead). The viz only shows the result that caused the worst FBSO eff for Nebraska per rotation. As you can see, there are different individuals and different positioning that lead to this effect in each rotation. You might not think that JWO would appear on this list at all, but in Ro1 and Ro6, if you can get her to take the pass low, Nebraska struggles.

Now you might be saying: 6 rotations, 5 different ways to take the ball, 3 to 4 possible passers in each rotation – is there enough data to really extrapolate from here? Totally fair. The top viz shows the worst performing combination among passers who took the ball in the specified position a minimum of 5 times. But as we can see for Malloy in Ro2, Nebraska probably went for 0 kills, 2 errors, on 5 attempts when she passed a low ball. That’s not a great sample size. Touché, insightful reader, touché.


Here’s what that same viz looks like if you only look at those with at least 10 passes in the specified rotation and positioning. Again, these are the lowest FBSO eff’s per rotation, but this time, the increased minimum helps provide a more useful insight.
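If you keep the raw pass-level data in a table, the minimum-attempts filter is a one-liner. A pandas sketch – the column names, player names, and values are all invented for illustration:

```python
import pandas as pd

# Hypothetical pass-level rows: rotation, passer, contact position, FBSO result
df = pd.DataFrame({
    "rotation": [2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1],
    "passer":   ["Malloy"] * 5 + ["JWO"] * 6,
    "position": ["low"] * 11,
    "fbso":     [0, -1, 0, -1, 0,  1, 0, 1, 0, -1, 1],
})

MIN_ATTEMPTS = 5  # raise this to 10 for more trustworthy numbers

# FBSO eff per rotation/passer/position combo, keeping only well-sampled combos,
# sorted so the worst combos (the serving targets) come first
summary = (df.groupby(["rotation", "passer", "position"])["fbso"]
             .agg(fbso_eff="mean", attempts="size")
             .query("attempts >= @MIN_ATTEMPTS")
             .sort_values("fbso_eff"))
print(summary)
```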

Another way to view serving is to pick zones to attack – for reasons like what was stated earlier. Sometimes you go after the pass-hitter, sometimes you try to make it tough on the setter, sometimes you try to “block” the middle’s route to the slide by serving into zone 3 and creating traffic. However, if you look at Nebraska’s FBSO eff by where they receive serve, an interesting trend emerges (the number above the FBSO eff is the zone).


This isn’t the sexiest viz. It’s 3 bars and 3 rotations that happen to be at .000 – yet zone 9 steps forward as the best way to serve at Nebraska in 4 of 6 rotations. For those who aren’t sure, zone 9 is between 1 and 2. This discovery seemingly plays into the “make it tough on the setter” theory of serving. Zone 8 in Ro2 doesn’t make a ton of sense to me seeing how that’s the dead center of the court, but maybe those serves dropped on the middle back passer or something? Zone 5 in Ro5 wouldn’t typically be an odd outcome – attacking the pass-hitter isn’t uncommon, especially in 2 hitter situations. But Nebraska pushes their pass-hitter, Foecke, out of the passing rotation here (or they did in the semis w/ Texas) and it’s actually JWO who passes in zone 5 with Rolfzen in the middle and Albrecht in zone 1. Not sure I can explain why this has the lowest FBSO eff for Nebraska.

The progression to this question is, of course: does the start zone of the serve matter? Serving 5 to 5 is not the same angle for the passer as serving 1 to 5 – and objectively speaking, one is likely better than the other. Unfortunately, I scrap the serve start zone when I parse all the files – so I need to go back through my code and fix all that. That’s why that train of thought wasn’t expanded upon in this post. But I’ll get around to it.

Anyway, I think this type of viewpoint gives rise to an interesting question. Is there a methodology to serving such that you could actually “serve optimally?” Think about it.