MPSF Service Error (2017)

[Figure: service error % vs. opponent FBSO efficiency, by set (MPSF 2017)]

Been waiting to dive into this for a while now, so let’s get right to it.

What you see above is the Service Error % in a given set plotted against the opponent’s FBSO efficiency: (opp. won in FBSO – opp. lost in FBSO) / total serves. The teams you see as labels are the serving teams, so the bottommost point (USC) means that USC missed around 4% of their serves in that set and held their opponent to roughly -0.115 in FBSO. Pretty impressive.
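
For the curious, here’s a minimal sketch of how those two per-set numbers could be computed in R. The serves data frame and its columns (set_id, serving_team, outcome, fbso) are hypothetical stand-ins, not the actual data structure:

```r
library(dplyr)

# One row per serve: set_id, serving_team, outcome ("error" if the serve was
# missed), and fbso ("won"/"lost"/NA for the opponent's first-ball outcome).
set_summary <- serves %>%
  group_by(set_id, serving_team) %>%
  summarise(
    serve_err_pct = mean(outcome == "error"),
    # FBSO eff: (opp. won in FBSO - opp. lost in FBSO) / total serves
    opp_fbso_eff  = (sum(fbso == "won", na.rm = TRUE) -
                       sum(fbso == "lost", na.rm = TRUE)) / n()
  )
```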

As you’ll see, the blob of data certainly trends positively, indicating that higher service error is associated with, but does not necessarily cause, higher opponent FBSO eff. The R-squared of this trend line is only around 0.13, which is pretty mild. This suggests you can be successful at limiting your opponent’s ability to FBSO even at a higher level of error (say 20-25%): there are teams like UCLA (lowest UCLA dot) who missed around 24% of their serves in the set but still held their opponent to a negative FBSO eff.
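
That trend line and its R-squared are just a simple linear fit over the set-level summary sketched above:

```r
# Regress opponent FBSO eff on service error % across all team-sets
fit <- lm(opp_fbso_eff ~ serve_err_pct, data = set_summary)
summary(fit)$r.squared  # ~0.13 for the chart above
```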

[Figure: service error % distributions in won vs. lost sets, by MPSF team]

So the next question for me was: if service error doesn’t have a strong league-wide trend, does it help/hurt some teams more than others? That’s what the above graphic helps drill into. As in previous charts, blue/teal indicates the team won the set and red means they lost. The curves are frequency distributions – meaning that for UC Irvine, the bulk of won sets occurred in the narrow range of 18-23% service error, with fewer won sets outside that range – whereas the bulk of Stanford’s won sets occurred in a wider range from 5-20%, with only a few sets won outside it.
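
A bare-bones ggplot2 version of this kind of chart, assuming the hypothetical set_summary from above also carries a logical won_set column:

```r
library(ggplot2)

# Overlapping frequency distributions of service error %, split by set result
ggplot(set_summary, aes(x = serve_err_pct, fill = won_set)) +
  geom_density(alpha = 0.5) +
  facet_wrap(~ serving_team) +
  scale_fill_manual(values = c(`TRUE` = "turquoise", `FALSE` = "red"))
```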

The hypothesis of the casual men’s volleyball observer might be that higher levels of service error would of course show up more frequently in lost sets, yet what we see is that for most teams, it doesn’t make a difference in terms of winning/losing the set. The fact that these mounds essentially overlap one another for the majority of teams indicates that they miss approximately the same percentage of serves in won and lost sets.

[Table: Cohen’s d of service error % between won and lost sets, by MPSF team]

There are of course a couple of outliers. Cal Baptist and Hawai’i both show large effect sizes for service error in won/lost sets. A negative Cohen’s d here indicates an inverse relationship between service error % and winning the set: as one rises, the other falls. UCSD shows a medium-strength relationship between the variables, but you’ll notice that all the other teams, including top teams such as Long Beach, BYU, UCI, and UCLA, show small to negligible effect sizes for service error %.
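
For anyone who wants to reproduce the effect sizes, Cohen’s d is easy enough to hand-roll – a sketch, again leaning on the hypothetical set_summary (and its won_set column) from above:

```r
# Cohen's d: difference in means divided by the pooled standard deviation
cohens_d <- function(x, y) {
  nx <- length(x); ny <- length(y)
  sp <- sqrt(((nx - 1) * var(x) + (ny - 1) * var(y)) / (nx + ny - 2))
  (mean(x) - mean(y)) / sp
}

# d of service error % in won vs. lost sets, per serving team
sapply(split(set_summary, set_summary$serving_team), function(d)
  cohens_d(d$serve_err_pct[d$won_set], d$serve_err_pct[!d$won_set]))
```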

So moving forward, unless you’re a fan of Cal Baptist (…unlikely) or Hawai’i (much more likely), don’t let those missed serves ruffle you. In the grand scheme of the set, remember that they’re likely negligible in terms of winning and losing.


Long Beach State In-System Atk (Men’s)

[Figure: LBSU in-system hitting efficiency distributions in won vs. lost sets, by player]

Just got access to the men’s side of the data so I’m still playing around with a few things.

What you see above is a data-driven visualization of what coaches might term “the key guy to stop.” In recent years with a team like BYU, the common phrase was “Taylor Sander is going to get his kills, let’s focus on the other guys.” So what I’ve built is essentially a histogram of what each player hit in any set he appeared in – with lost sets color-coded in red and won sets in…turquoise? *Only sets in which a player has 3 or more in-system hitting attempts count – and players must appear in a minimum of 30 sets during the MPSF conference season to be counted.
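
Those two filters are easy to express in dplyr. A sketch, assuming a hypothetical attacks data frame with one row per in-system attempt and 0/1 kill and error columns:

```r
library(dplyr)

player_sets <- attacks %>%
  group_by(player, set_id, won_set) %>%
  summarise(
    att = n(),
    eff = (sum(kill) - sum(error)) / n()  # hitting eff: (kills - errors) / attempts
  ) %>%
  filter(att >= 3) %>%   # only sets with 3+ in-system attempts count
  group_by(player) %>%
  filter(n() >= 30)      # only players with 30+ qualifying sets count
```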

[Figure: Cohen’s d of in-system hitting efficiency between won and lost sets, LBSU attackers]

What you’ll notice from the visuals is that while yes, TJ DeFalco certainly has an impact between won/lost sets, it’s actually Amir Lugo Rodriquez whose hitting efficiency carries the most weight. The likely reason is that if Amir gets going, LBSU can get its pin hitters 1-on-1s much more easily as opponents move to front the quicks. Another possibility is that this doesn’t actually prove causation – Amir may simply hit better when LBSU passes better. That’s also fair, but again, a set only counts if Amir has at least 3 attempts in it.

I like this visual because it makes sense to look at – coaches can see the shift between won and lost sets – but including the actual Cohen’s d and magnitude levels supplies additional statistical weight to the problem. I’d like to use this approach more frequently moving forward, in both the men’s data from the spring and the women’s data coming in from the Big Ten and Pac-12 this fall once conference play kicks off.

Pac12 Pass Rating and W/L Sets

[Figure: pass rating distributions in won vs. lost sets, Pac-12, all conference matches]

Figured I would build a similar chart for the Pac-12 from 2016. It includes all conference matches, not just the big ones. As with the Big Ten numbers, there are certainly some teams where pass rating doesn’t necessarily differentiate won/lost sets. Including all sets, however, may introduce more noise than we’d like – a good team may lower its performance to the level of its opponent and still win the set.

[Figure: descending correlations between set pass rating and won/lost sets, Pac-12]

Here’s the same descending correlation data for the Pac-12 as well. Washington State clearly takes the cake – while both UCs seem to be unaffected by their ability to pass. Interesting.

[Figure: pass rating distributions in won vs. lost sets, Big Ten, all conference matches]

I’ve included the full Big Ten data as well – with all conference matches included. Overall there’s nothing super shocking, though I did have a chuckle at the Rutgers spike. You’re welcome to deduce why their curve has such slim deviation.

Here again is the correlation data for the Big Ten when you include everything.

[Figure: descending correlations between set pass rating and won/lost sets, Big Ten, all conference matches]

Big Ten Pass Rating in W/L sets

[Figure: average set pass rating distributions in won vs. lost sets, top 9 Big Ten teams]

I’m not a huge fan of using pass ratings – I don’t believe they accurately value different reception qualities. That being said, every single person ever uses pass ratings, so I decided to dive in a little. In the above chart, pass ratings are valued on a 3-point scale (R# = 3, R+ = 2.5, R! = 2, R– = 1, R/ = 0.5, R= = 0). Set data was only collected from the top 9 Big Ten teams in 2016, in matches in which they played one another (“big matches”). The average pass rating for the team in each set they played is compiled in these distributions – with WON sets in teal and LOST sets in red.
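
That scale is just a lookup table. A sketch using the weights from the parenthetical above (the sample grades vector is made up):

```r
# Reception grades stored as bare symbols (#, +, !, -, /, =)
pass_weights <- c(`#` = 3, `+` = 2.5, `!` = 2, `-` = 1, `/` = 0.5, `=` = 0)

# Average pass rating for a set, given its vector of reception grades
set_pass_rating <- function(grades) mean(pass_weights[grades])

set_pass_rating(c("#", "!", "-", "#"))  # (3 + 2 + 1 + 3) / 4 = 2.25
```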

What you’ll notice is that for some teams in these big matches, their pass rating in a set really has no bearing on winning it. Wisconsin is a pretty good example of this, as their two distributions basically overlap one another completely. To be fair, this may be an anomaly due to Carlini’s ability to turn 1’s into 2’s and 2’s into 3’s (by the transitive Carlini property, 1 passes equal 3 passes, boom!).

On a similar note, Michigan and Michigan State suffer from this same phenomenon: if you handed Rosen or George their team’s pass rating for any given set, they would essentially be guessing whether they had won or lost it. On the other hand, if you looked at Minnesota or Nebraska, you’d have a much better chance of guessing correctly, given the pass rating in the set.

[Figure: descending correlations between set pass rating and winning the set]

Above are the descending correlations between set pass rating and whether the set was won or lost. Again, these are only from “big matches,” which may skew the results – yet at the end of the day, Hugh/John/Russ etc. are game-planning to beat the best. What you’ll see is that for some teams, the statistic of pass rating is relevant to winning and losing sets. For others, there’s likely something else driving winning. My goal is to continue to poke around to find the unique fingerprint for each team and what drives or hinders their success.
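
For what it’s worth, these are just Pearson correlations between each team’s set pass ratings and a won/lost indicator (a point-biserial r), sorted. A sketch, assuming a hypothetical sets frame with team, pass_rating, and a logical won_set per set:

```r
# Correlation between set pass rating and winning the set, per team, descending
sort(sapply(split(sets, sets$team), function(d)
  cor(d$pass_rating, as.numeric(d$won_set))), decreasing = TRUE)
```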

 

Receive Eff in Big Ten

[Figure: receive efficiency distributions by passer, Big Ten]

Similar idea – just messing around with the ggjoy package in R.

What you see above is receive efficiency (basically a passer rating where each reception quality is valued at the Big Ten’s FBSO efficiency off that quality). I filtered out a bunch of names that failed to consistently post high values, as well as sets in which the passer received fewer than 4 serves. Players are ordered from top to bottom by increasing average receive eff overall (yes, I did this backwards – Kelsey Wicinski is our top performer here).

Similar to the PS% performances, you’ll notice that the better passers (towards the bottom) have shorter and shorter tails off to the left, indicating fewer sets of poor passing (as you’d expect). Nothing crazy to report here, just cool to visualize in this fashion.

Point Scoring % in the Big Ten

[Figure: point scoring % distributions by set, Big Ten teams]

Not a sexy topic, but I just figured out how to do these ‘joy division’ charts in R so I’m kinda pumped to share.

What you see is a histogram of each team’s point scoring % – the percentage of their serves on which they scored – in every individual set they played (only against the teams you see listed, so Purdue v. OSU counts but Purdue v. Rutgers does not).

They’re ordered in ascending fashion by their average PS% in these sets. Something that interested me was the shape of the top vs. middle teams. Nebraska and Minnesota seem pretty consistent set to set in how they PS – yet as you work down the chart, you’ll notice some teams flatten out or even show multiple peaks. The latter is especially comical because teams in the middle of the Big Ten could often be described as “dangerous” – sometimes they’re red hot and other times they’re pretty self-destructive. Multiple peaks would certainly play into this narrative, and I’d be interested to see whether other metrics manifest in these patterns, specifically among the middle teams in the conference.
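
If you want to make these yourself, the ggjoy version is only a few lines – assuming a hypothetical sets frame with one row per team-set and a ps_pct column:

```r
library(ggplot2)
library(ggjoy)  # the joy-plot package mentioned earlier; ggridges is its successor

# One density ridge per team, ordered by average PS% across their sets
ggplot(sets, aes(x = ps_pct, y = reorder(team, ps_pct))) +
  geom_joy()
```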

And to answer the question nobody asked: yes, Nebraska had a single set where they point scored at 0% (OSU, set 4) and one where they PS’d at 73% (PSU, set 5) – that’s why those outliers give the Nebraska chart wings.

Quick thoughts: serving

Was just messing around with some numbers this afternoon and wanted to share.

I looked at a few things related to serving, specifically serve error %, point score %, and serve output efficiency. I ran correlations among these stats, as well as between each of them and winning the set overall.

As with my last post, I'm only using data from the top 9 teams in the Big Ten from 2016, so the calculated efficiencies are based on those matches alone.
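
For reference, here’s how these pairwise numbers might be pulled, assuming a hypothetical sets frame with one row per set and numeric serve_err_pct, ps_pct, serve_out_eff, and 0/1 set_won columns:

```r
# Full correlation matrix between the serving stats and winning the set
round(cor(sets[, c("serve_err_pct", "ps_pct", "serve_out_eff", "set_won")]), 3)
```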

Serve error % and winning the set came out to -0.150 – pretty weak, and a disappointment to parents and fans everywhere who'd like nothing more than for you to quit missing your damn serves.

Winning the set and serve output eff (like pass rating, but weighting each possible serve outcome by the actual efficiency that follows it) clocked in at 0.323.

And serve error % and serve output eff correlated at -0.546, the highest result I found. This seems to reiterate that terminal contacts skew performance ratings. So quit missing your damn serves! But at the same time, missed serves alone are unlikely to be what loses you a set.

Point score % and serve output eff came in at 0.474, which makes a lot of sense – it would be interesting to see whether serve output eff is the largest factor in whether you point score or not.

Finally, because everyone likes service errors, I ran SE% and point score %, which came in at -0.220. Again, pretty mild – suggesting that while the association is negative, as we'd expect, teams can still point score well even if they're missing some serves.

Anyway, just wanted to jot these numbers down before they get lost in a notebook somewhere.