Pac12 Pass Rating and W/L Sets

B1G Pass Rating in W/L sets

 

[Image: pacpassrate]

Figured I would build a similar chart for the Pac-12 from 2016. It includes all conference matches, not just the big ones. As with the Big Ten numbers, there are certainly some teams where pass rating doesn't necessarily differentiate won and lost sets. Including all sets, however, may introduce more noise than we'd like, since a good team may lower its performance to the level of its opponent and still win the set.

[Image: pacpass]

 

Here's the same descending correlation data for the Pac-12 as well. Washington State clearly takes the cake, while both UCs seem unaffected by their ability to pass – interesting.

[Image: bigpassrate2]

 

I've included the full Big Ten data – with all conference matches included as well. Overall there's nothing super shocking, though I did have a chuckle at the Rutgers spike. You're welcome to deduce why their curve has such slim deviation.

Here again is the correlation data for the Big Ten when you include everything.

[Image: bigpassrate]


Big Ten Pass Rating in W/L sets

[Image: pass rating (won vs. lost set distributions)]

I'm not a huge fan of using pass ratings – I don't believe they accurately value different reception qualities. That being said, every single person ever uses pass ratings, so I decided to dive into it a little. In the above chart, pass ratings are valued on a 3-point scale (R# 3, R+ 2.5, R! 2, R- 1, R/ 0.5, R= 0). Set data was only collected from the top 9 Big Ten teams in 2016, in matches in which they played one another ("big matches"). The average pass rating for each team in each set they played is compiled into these distributions, with WON sets in teal and LOST sets in red.
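To make that pipeline concrete, here's a minimal sketch of how set-level pass ratings and the won/lost distributions could be built in R. The reception-level frame receptions and its columns (team, set_id, grade, set_won) are hypothetical names for illustration, not anything from the original workflow.

```r
library(dplyr)
library(ggplot2)

# Hypothetical reception-level data: one row per serve receive, with columns
# team, set_id, grade ("#", "+", "!", "-", "/", "="), and set_won (TRUE/FALSE)
grade_values <- c("#" = 3, "+" = 2.5, "!" = 2, "-" = 1, "/" = 0.5, "=" = 0)

set_ratings <- receptions %>%
  mutate(value = grade_values[grade]) %>%          # 3-point-scale value of each reception
  group_by(team, set_id, set_won) %>%
  summarise(pass_rating = mean(value), .groups = "drop")

ggplot(set_ratings, aes(x = pass_rating, fill = set_won)) +
  geom_density(alpha = 0.5) +
  facet_wrap(~ team) +
  scale_fill_manual(values = c("FALSE" = "red", "TRUE" = "turquoise"),
                    labels = c("Lost", "Won"), name = NULL) +
  labs(x = "Average pass rating in set", y = "Density")
```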

What you’ll notice is that for some teams in these big matches, their pass rating in the set really has no bearing on winning the set. Wisconsin is a pretty good example of this as their distributions basically overlap one another completely. To be fair, this may be an anomaly due to Carlini’s ability to turn 1’s into 2’s and 2’s into 3’s (by the transitive Carlini property, 1 passes equal 3 passes, boom!).

On a similar note, Michigan and Michigan State suffer from this same phenomenon in that if you handed Rosen or George their team’s pass rating for any given set, they would essentially be guessing whether they won or lost the set. On the other hand, if you looked at Minnesota or Nebraska, you’d have a much better chance of guessing correctly, given the pass rating in the set.

[Image: corplot]

Above are the descending correlations between set pass rating and set won/lost. Again, these are only from "big matches," which may skew the results – yet at the end of the day, Hugh/John/Russ etc. are game-planning to beat the best. But what you'll see is that for some teams, pass rating is relevant to winning and losing sets. For others, there's likely something else driving winning. My goal is to continue to poke around to find the unique fingerprint for each team and what drives or hinders their success.
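For the correlation chart itself, a point-biserial correlation per team (plain cor() between the set rating and a 0/1 win indicator) would produce numbers in this spirit, reusing the hypothetical set_ratings frame sketched above:

```r
# Descending correlation between set pass rating and winning the set, by team
pass_win_cor <- set_ratings %>%
  group_by(team) %>%
  summarise(r = cor(pass_rating, as.numeric(set_won)), .groups = "drop") %>%
  arrange(desc(r))
```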

 

Receive Eff in Big Ten

[Image: receiveeff (receive efficiency by set for Big Ten passers)]

Similar idea – just messing around with the ggjoy package in R.

What you see above is the receive efficiency (basically passer ratings valued by Big Ten standards for FBSO eff). I filtered out a bunch of names that failed to consistently post high values – as well as those sets in which the passer received fewer than 4 serves. Players are ordered from top to bottom by increasing average receive eff overall (yes, I did this backwards – Kelsey Wicinski is our top performer here).

Similar to PS% performances, you’ll notice that the better passers (towards the bottom) have shorter and shorter tails off to the left indicating fewer sets of poor passing (as you’d expect). Nothing crazy to report here, just cool to visualize in this fashion.
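If you want to play with the same kind of plot, here's a minimal ggjoy sketch. The passer_sets frame (one hypothetical row per passer per set, with passer, n_receptions, and receive_eff columns) is an assumption for illustration, and ggjoy has since been superseded by ggridges:

```r
library(dplyr)
library(ggjoy)  # geom_joy(); the package lives on as ggridges / geom_density_ridges()

plot_data <- passer_sets %>%
  filter(n_receptions >= 4) %>%                        # drop sets with fewer than 4 serves received
  mutate(passer = reorder(passer, receive_eff, mean))  # order ridges by average receive eff

ggplot(plot_data, aes(x = receive_eff, y = passer)) +
  geom_joy(alpha = 0.8) +
  labs(x = "Receive efficiency in set", y = NULL)
```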

Point Scoring% in the Big Ten

[Image: pspercentage (point scoring % by set for Big Ten teams)]

Not a sexy topic, but I just figured out how to do these ‘joy division’ charts in R so I’m kinda pumped to share.

What you see is a histogram of each team’s point scoring % in every individual set they played (only against the teams you see listed, so Purdue v. OSU but not Purdue v. Rutgers).

They’re ordered in ascending fashion by their average PS% in these sets. Something which interested me was the shape of top vs. medium teams. Nebraska and Minnesota seem pretty consistent set to set in how they PS – yet as you work down the chart, you’ll notice some teams flatten out or even have multiple peaks. The latter is especially comical because teams in the middle of the Big Ten could often be described as “dangerous” – sometimes they’re red hot and other times they’re pretty self-destructive. Multiple peaks would certainly play into this narrative and I would be interested to see if other metrics manifest in these patterns, specifically amongst the middle teams in the conference.

And to answer the question nobody asked: yes, Nebraska had a single set where they point scored at 0% (OSU, set 4) and one where they PS'd at 73% (PSU, set 5) – that's why those outliers give the Nebraska chart wings.
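For anyone curious how the underlying numbers could be tallied, here's a minimal sketch. The rally-level frame rallies (columns serving_team, match_id, set_number, point_winner) is hypothetical, and it treats PS% as points won while serving divided by total serves in the set, which is my reading of the stat rather than a definition from the post:

```r
library(dplyr)

ps_by_set <- rallies %>%
  group_by(serving_team, match_id, set_number) %>%
  summarise(ps_pct = mean(point_winner == serving_team),  # share of serves that ended in a point
            .groups = "drop")
```

Those per-set values would then feed the same geom_joy layout as above, one ridge per team.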

Heatmap Intro

[Image: big (Big Ten hitting efficiency by set origin)]

*The net is at the top; the endline is closest to this text.

In the land of cones and zones…and subzones, it’s easy to forget that these are merely representations of locations on the court. Equipped with that logic – and sparked by our friends at VolleyMetrics – I converted the zones to xy coordinates.

Yes, it’s annoying that the x-axis is above the heatmap, but I wanted to throw this up here anyway. It’s the hitting efficiency of attackers in the Big Ten in 2016 based on where the set came from – and it’s what you’d expect. Perfect passes lead to good things, moving your setter forward isn’t terrible, but moving your setter backwards is horrible. Below the hitting efficiencies in each grid are the frequencies.

One thing I think gets overlooked is being content with medium passes. It goes back to the FBSO posts and why getting aced hurts. Yeah, you're hitting .100 instead of .300, but at least you aren't losing the point 100% of the time like you do when you get aced or fail to dig a ball. That's why keeping digs on your side is huge. As long as they don't result in a set from the endline, you'll be hitting positively, rather than defending against a perfect pass after you overpass.

[Image: pac (Pac-12 hitting efficiency by set origin)]

Here’s the same thing for the Pac 12 from 2016. I’ve set the cutoff for both graphs between green and red at 0.150. But again, the frequencies are below the efficiencies so I would definitely not devise an offensive system that runs through zone 1D. Just thought I’d add it for those who were curious.
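Here's roughly how such a grid could be built with ggplot2's geom_tile, as a sketch under stated assumptions: a hypothetical attack-level frame attacks with set-origin coordinates set_x/set_y (already converted from zone/subzone) and a result column, with the green/red midpoint set at the .150 cutoff used above.

```r
library(dplyr)
library(ggplot2)

# Hypothetical attack-level data: set_x / set_y are court coordinates of the set origin,
# result is "kill", "error", or "in_play"
cells <- attacks %>%
  mutate(eff = case_when(result == "kill"  ~  1,
                         result == "error" ~ -1,
                         TRUE              ~  0)) %>%
  group_by(set_x, set_y) %>%
  summarise(hit_eff = mean(eff), n = n(), .groups = "drop")

ggplot(cells, aes(x = set_x, y = set_y, fill = hit_eff)) +
  geom_tile(color = "white") +
  geom_text(aes(label = sprintf("%.3f\n(%d)", hit_eff, n)), size = 3) +  # efficiency with frequency below
  scale_fill_gradient2(low = "red", mid = "white", high = "green", midpoint = 0.150) +
  labs(x = NULL, y = NULL, fill = "Hitting eff")
```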

Anyway, looking forward to messing around with heatmap-type stuff to answer a variety of questions. The inherently great thing about heatmaps is that they convey ideas incredibly quickly. You glance at one and understand that you'd want to pass into the green rather than the red. Stuff like this for serving targets, setter tendencies, attacker tendencies, etc. can all be addressed in this fashion. Heatmaps aren't a novel concept for scouting by any means – both VM and Oppia have employed them (I believe) – but I got curious whether I could build something similar, and it looks like it's pretty easy to do with either R + ggplot2 or Tableau. If you have suggestions for ways to employ this, I'm all ears.

Top Passers in the Big Ten

[Image: Book1 (top Big Ten passers relative to league FBSO value)]

Here’s the same type of viz I made earlier for servers.

I was just curious who the best passers were, relative to the league-average FBSO value provided by each reception quality. What you may immediately notice is that Nebraska has 3 of the top 4 passers in Albrecht, JWO, and Rolfzen. So for those who unabashedly declare Kelly Hunter the best setter in the Big Ten, just realize she has great raw materials to work with.

Another insight that isn't super insightful: getting aced strongly correlates with your overall effectiveness as a passer, yet anywhere from 1 to 5% reception error can still put you in the upper echelon of passers in the Big Ten. Of all the pass qualities, reception error has the strongest relationship to overall effectiveness, which isn't necessarily surprising: relative to the average FBSO eff, getting aced is the farthest from the mean. Even if you were expected to hit zero in FBSO, a perfect pass only raises you .300 points above that, but getting aced drops you more than 3x that distance, all the way to -1.00.

And this is where coaches lose valuable insight. On a 3-point passing scale, the distances from 3 to 2, 2 to 1, and 1 to 0 appear equal, but in reality a 3 and a 2 are very, very similar in terms of the value they provide your offense. On the other side, 1- and 0-point passes seem similar, but even on a 1-point pass you're still getting a swing and hitting above .000, whereas on a 0-point pass you lose the point 100% of the time. This is why we use FBSO eff and break down passers in this fashion.
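As a concrete sketch of that weighting, expected FBSO eff per passer is just each reception graded at the league's FBSO value for its quality, then averaged. The league_fbso numbers below are illustrative placeholders (only the roughly .300 perfect-pass and -1.00 ace figures echo the post), and the bigten_receptions frame, with one row per serve receive and passer, team, and quality columns, is hypothetical:

```r
library(dplyr)

# Illustrative league-average FBSO eff by reception quality (placeholders, not the real table)
league_fbso <- c("R#" = 0.30, "R+" = 0.28, "R!" = 0.26, "R-" = 0.10, "R/" = 0.00, "R=" = -1.00)

passer_value <- bigten_receptions %>%
  mutate(value = league_fbso[quality]) %>%          # what each reception is "worth" to the league
  group_by(passer) %>%
  summarise(expected_fbso = mean(value),            # expected FBSO eff, given league values
            rec_error_pct = mean(quality == "R="),  # reception error %
            n = n(), .groups = "drop") %>%
  arrange(desc(expected_fbso))
```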

What distinguishes passers who exhibit similar effectiveness yet different reception error is, of course, the breakdown of each reception quality. Below is this breakdown for the top 5 passers in the Big Ten in 2016.

[Image: Book12.jpg (reception breakdown for the top 5 passers)]

Because JWO passes so many balls perfectly, she can get away with a slightly higher error rate. Albrecht has the second-lowest perfect-pass rate of the 5…but is at 38% R+. Much like serving, each player has a unique footprint, and the underlying factor is the value each type of pass brings to the table.

[Image: Screen Shot 2017-03-11 at 10.24.02 PM (Big Ten average FBSO eff by reception quality)]

Working from the perfect pass at the top, to the reception error at the bottom, these are the FBSO efficiencies for the Big Ten in 2016 on average.

The idea that often gets overlooked by coaches is the insignificant difference between good and great when it comes to serve receive. The league as a whole is hitting .258 on passes that leave only 2 options: an R! basically boils down to setting the ball from the 10-15 foot line to the Go or the Red. So while coaches may freak out that they're losing their quick-hitter option, it's really not a huge deal when you glance at the numbers. Of course these relationships between reception qualities are unique to each team and the setter in your arsenal, but this trend is not uncommon.

This is why players who may not pass perfectly but always get your team a swing are invaluable. These are the consistently medium players – really hard, dead-medium passers. They don't pass nails, but they don't get aced. That's a big deal – and it's currently underutilized because it's undervalued because it's misunderstood.

For those who are interested, here is how the teams overall shake out in the passing ranks from this season: Team – expected FBSO eff (rec. error %)

  • Nebraska – 0.211 (3.56%)
  • Wisconsin – 0.194 (3.41%)
  • Maryland – 0.188 (4.63%)
  • Penn State – 0.186 (4.64%)
  • Michigan – 0.177 (4.68%)
  • Minnesota – 0.170 (5.19%)
  • Michigan State – 0.157 (5.35%)
  • Indiana – 0.156 (5.86%)
  • Ohio State – 0.154 (6.26%)
  • Iowa – 0.150 (5.87%)
  • Illinois – 0.143 (6.50%)
  • Purdue – 0.136 (6.59%)
  • Northwestern – 0.112 (7.38%)
  • Rutgers – 0.072 (10.1%)

Again, these look at each team's passes in the context of what they are worth to the league as a whole, not just how each team actually hit in FBSO… (bravo, Maryland)
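The team version of the calculation is the same weighting, just grouped by team instead of passer (again using the hypothetical bigten_receptions frame and placeholder league values from the sketch above):

```r
team_value <- bigten_receptions %>%
  mutate(value = league_fbso[quality]) %>%
  group_by(team) %>%
  summarise(expected_fbso = mean(value),
            rec_error_pct = mean(quality == "R="), .groups = "drop") %>%
  arrange(desc(expected_fbso))
```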

Top Servers in the Big Ten

[Image: servetop.jpg (top Big Ten servers relative to league expectancy)]

Coming back from the SSAC in Boston this past week, I've been putting more thought into evaluating players against the market they're situated in, much like how baseball uses WAR (wins above replacement) to compare a player's value against that of a readily available replacement-level player. That stat has of course evolved over the years, with different measurements for position players and pitchers, but the underlying principle has remained constant.

Looking at volleyball, there are 6 discrete skills (7 if you count freeball passing), so a single WAR-style metric makes a little less sense, but the general philosophy can still be applied as a way to compare performance against league expectancies.

So in the above viz, I've used the league-average PS eff for each of the receive qualities, which looks like the table below:

[Image: Screen Shot 2017-03-11 at 4.12.43 PM (league-average PS eff by serve outcome)]

Service ace is at the top, working down to service error at the bottom. And yes, a service ace should absolutely be worth 1.0 and I'm not sure why it isn't, but .998 and 1.0 are pretty darn close for our purposes at the moment.

Using these numbers, we then look at how frequently each player's serves produced each of these specific outcomes. Multiply the frequencies by the efficiencies, add them up, divide by the number of serve attempts, and voilà!
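As a worked example of that arithmetic (every number here is made up for illustration except the ~.998 ace value quoted above, and the outcome labels are just stand-ins):

```r
# Placeholder league-average PS eff by serve outcome, ace at the top down to error
league_ps <- c(ace = 0.998, poor_pass = 0.55, ok_pass = 0.45, good_pass = 0.35,
               perfect_pass = 0.25, error = 0.00)

# One hypothetical server's season: how many serves ended in each outcome
counts <- c(ace = 12, poor_pass = 40, ok_pass = 55, good_pass = 60,
            perfect_pass = 30, error = 13)

# Multiply frequencies by efficiencies, sum, divide by attempts
server_value <- sum(counts * league_ps) / sum(counts)
server_value  # expected point-score value per serve, relative to league outcomes
```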

I've built this viz to again look at the relationship with service error percentage (while highlighting the top servers). You'll notice there's a slight negative relationship between serving effectiveness and service error, but it's not definitive. Especially when looking at the servers who bring the most value, there's certainly a range of error in that group – and almost a correlation of 0 if you draw a box around SSS, Davis, Swackenberg, and Kranda.

However in a general sense, you can assess the value each server brings to the table based on what her results are worth against the league average. In this case, Kranda comes out on top as giving you the best shot to point score.

Clever folks might be wondering what her breakdown looks like in terms of percentages of each outcome. So voilà again!

[Image: Book1 (outcome breakdown for the top 4 servers)]

^ Here are the top 4 servers' breakdowns by each of the outcomes.

What you'll notice is that they all have a unique footprint. Kranda makes her money by serving aces (around 14%), whereas SSS lives in the consistently-good realm. SSS only misses 2% of her serves. That's a huge deal. She keeps consistent pressure, and even though the sum total of ok + good + perfect passes she allows is higher than the others', she doesn't give up free points, which makes her the 3rd best server in the Big Ten in 2016.

I’m just starting to look at the data from a “what’s it worth relative to the league” type of standpoint, so I’ll likely have more posts like this soon. Previously, I’ve focused more on “what’s a player worth to her team” and specifically “what’s a player worth to her team, in this context, when playing opponent X.” I think the way I’ve approached this previously has merit, especially since we don’t have a marketplace for trading players like professional teams do – but you could easily evaluate All-Americans and other interesting things by comparing players to league data.