MPSF Service Error (2017)

[Figure: Service Error % vs. opponent FBSO efficiency, per set, MPSF 2017]

Been waiting to dive into this for a while now, so let’s get right to it.

What you see above is the Service Error % in a given set plotted against the opponent’s FBSO efficiency: (opp. won in FBSO – opp. lost in FBSO) / total serves. The team labels are the serving teams, so the bottommost point (USC) means that USC missed around 4% of their serves in that set and held their opponent to roughly -0.115 in FBSO. Pretty impressive.
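If you want to compute this yourself, the formula is a one-liner in R; the numbers below are made up to roughly match the USC point described above.

```r
# FBSO efficiency as defined above:
# (opp. won in FBSO - opp. lost in FBSO) / total serves
fbso_eff <- function(won, lost, total_serves) (won - lost) / total_serves

fbso_eff(won = 5, lost = 8, total_serves = 26)  # about -0.115
```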

As you’ll see, the blob of data certainly trends positively, indicating that higher service error is associated with, but does not necessarily cause, higher opponent FBSO eff. The R-squared of this trend line is only around 0.13, which is pretty mild. This suggests you can still be successful at limiting your opponent’s ability to FBSO even at a higher level of error (say 20-25%); there are teams like UCLA (the lowest UCLA dot) that missed around 24% of their serves in a set yet still held their opponent to a negative FBSO eff.
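To reproduce that trend line and R-squared on your own data, a simple linear model does it; sets and its column names here are hypothetical stand-ins for a one-row-per-team-set data frame.

```r
# Regress opponent FBSO eff on service error % across all sets
fit <- lm(opp_fbso_eff ~ se_pct, data = sets)
summary(fit)$r.squared  # ~0.13 for this MPSF data
```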

[Figure: distribution of service error % in won vs. lost sets, by MPSF team]

So the next question for me was: if service error has only a weak league-wide trend, does it help or hurt some teams more than others? That’s what the above graphic helps drill into. As in previous charts, blue/teal indicates the team won the set and red means they lost. The curves are frequency distributions: for UC Irvine, won sets cluster in the narrow range of 18-23% service error, with fewer won sets outside that range, whereas the bulk of Stanford’s won sets spread across a wider range, from 5% to 20%, with only a few sets won outside it.
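For anyone who wants to build a chart like this, here’s a minimal ggplot2 sketch, assuming a hypothetical sets data frame with a logical won_set column:

```r
library(ggplot2)

# Overlapping frequency curves of service error % in won vs. lost sets,
# one panel per serving team
ggplot(sets, aes(x = se_pct, fill = won_set)) +
  geom_density(alpha = 0.5) +
  facet_wrap(~ team) +
  labs(x = "Service Error %", fill = "Won set")
```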

The hypothesis of the casual men’s volleyball observer might be that higher levels of service error would, of course, show up more frequently in lost sets, yet what we see is that for most teams it doesn’t make a difference in terms of winning or losing the set. The fact that these mounds essentially overlap one another for the majority of teams indicates that teams miss approximately the same percentage of their serves in won and lost sets.

[Figure: Cohen’s d effect sizes for service error % in won vs. lost sets, by team]

There are of course a couple of outliers. Cal Baptist and Hawai’i both show large effect sizes for service error in won vs. lost sets. A negative Cohen’s d indicates an inverse relationship between service error % and winning the set; as one rises, the other falls. UCSD shows a medium-strength relationship between the variables, but you’ll notice that all the other teams, including top teams such as Long Beach, BYU, UCI, and UCLA, show small to negligible effect sizes for service error %.
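For reference, Cohen’s d is just the difference between two group means divided by the pooled standard deviation. Here’s a hand-rolled version (the effsize package’s cohen.d() does the same if you’d rather not roll your own); sets and its columns are hypothetical:

```r
# Cohen's d: standardized difference between two group means
cohens_d <- function(x, y) {
  nx <- length(x); ny <- length(y)
  pooled_sd <- sqrt(((nx - 1) * var(x) + (ny - 1) * var(y)) / (nx + ny - 2))
  (mean(x) - mean(y)) / pooled_sd
}

# d for one team's service error % in won vs. lost sets
with(sets, cohens_d(se_pct[won_set], se_pct[!won_set]))
```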

So moving forward, unless you’re a fan of Cal Baptist (…unlikely) or Hawai’i (much more likely), don’t let those missed serves ruffle you. In the grand scheme of the set, remember that they’re likely negligible in terms of winning and losing.


Point Scoring % in the Big Ten

[Figure: ridgeline chart of point scoring % per set, by Big Ten team]

Not a sexy topic, but I just figured out how to do these ‘joy division’ charts in R so I’m kinda pumped to share.

What you see is a histogram of each team’s point scoring % in every individual set they played (only against the teams you see listed, so Purdue v. OSU but not Purdue v. Rutgers).

They’re ordered in ascending fashion by their average PS% in these sets. Something that interested me was the shape of the top vs. middle teams. Nebraska and Minnesota seem pretty consistent set to set in how they PS, yet as you work down the chart, you’ll notice some teams flatten out or even show multiple peaks. The latter is especially comical because teams in the middle of the Big Ten could often be described as “dangerous” – sometimes they’re red hot and other times they’re pretty self-destructive. Multiple peaks would certainly play into this narrative, and I’d be interested to see whether other metrics show these patterns, specifically among the middle teams in the conference.

And to answer the question nobody asked, yes, Nebraska had a single set where they point scored at 0% (OSU set4) and one where they PS’d at 73% (PSU set5) – that’s why those outliers give the Nebraska chart wings.
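For anyone who wants to try these, here’s a minimal sketch of how a ridgeline (“joy division”) chart can be built; I’m using the ggridges package here, and the sets data frame and its columns are hypothetical names:

```r
library(ggplot2)
library(ggridges)  # provides geom_density_ridges()

# One curve per team, ordered ascending by average PS%
# (reorder() defaults to ordering by the mean)
ggplot(sets, aes(x = ps_pct, y = reorder(team, ps_pct))) +
  geom_density_ridges() +
  labs(x = "Point Scoring %", y = NULL)
```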

Quick thoughts: serving

Was just messing around with some numbers this afternoon and wanted to share.

I looked at a few things related to serving, specifically serve error %, point score %, and serve output efficiency. I ran correlations among these stats, as well as between each of them and winning the set overall.

As with my last post, I'm only using data from the top 9 teams in the Big Ten from 2016, so the calculated efficiencies are based on these matches alone.
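The correlations themselves are a single call in R, assuming a hypothetical one-row-per-team-set data frame with the set result coded 0/1:

```r
# Pairwise correlations among serving stats and winning the set
vars <- sets[, c("won_set", "se_pct", "ps_pct", "serve_output_eff")]
round(cor(vars), 3)
```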

Serve error% and winning the set came out to -0.150, pretty weak – and a disappointment to parents and fans everywhere who'd like nothing more than for you to quit missing your damn serves.

Winning the set and serve output eff (like pass rating, but using the actual efficiencies off each possible serve outcome) clocked in at 0.323.

And serve error % and serve output eff correlated at -0.546, the strongest result I found. This seems to reiterate that terminal contacts skew performance ratings. So quit missing your damn serves! But at the same time, it's unlikely that missed serves alone are to blame for losing a set.

Point score% and serve output eff came in at 0.474, which makes a lot of sense – it would be interesting to see if serve output eff is the largest factor in whether you point score or not.

Finally, because everyone likes service errors, I did SE% and point score% which resulted in -0.220. Again, pretty mild – suggesting that while the association is negative, as we'd expect, teams can still point score well even if they're missing some serves.

Anyway, just wanted to jot these numbers down before they get lost in a notebook somewhere.

Attackers’ Trends + Visualizing Development

[Figure: output efficiency trends for four key Big Ten outside hitters over the season]

Here is how four of the key outsides on the top teams in the Big Ten looked from the start of conference play until their respective seasons ended. Output Efficiencies are calculated using data from both the 2016 Big Ten and Pac-12 seasons and look not only at kills/errors/attempts, but also at the value of non-terminal swings. In this case OutputEff differentiates between a perfect dig by the opponent and a perfect cover by the attacking team – or a poor dig versus a great block touch by the opponent – etc. In this sense it’s better than traditional “true efficiency” in that it’s not just about how well your opponent attacks back after you attack – it also appropriately weights different block touch, dig, and cover qualities by their league-average value.

What you see above are the trends of these outsides over the course of the season. Foecke continuously improves as the season progresses, as does Haggerty for Wisconsin. Frantti is interesting in that she actually declines up until early November, then turns it on as PSU approaches tournament time. Classic Penn State. If Wilhite hadn’t hit over .600 early in the season, she wouldn’t look like she’s trending down – but keep in mind that her average (just north of .300) kinda blows people out of the water when you consider her consistency.

Personally, while I think this type of stuff is mildly interesting and you can definitely spin a story out of it, it’s not actionable in the sense that it’s going to help a coach make a better decision. However, this same principle could and probably should be applied on an in-season basis to look deeper at the development of players and specific skills. For example, high ball attacking:

[Figure: high ball attacking output efficiency by practice day]

You could build something like this for every day in practice. If your goal is to pass better, cool: let’s take your data from practice, graph it for the week, and see if whatever changes we’re trying to implement have had their desired effect. Or let’s see if the team is improving as a whole as we make these specific changes:

[Figure: Minnesota average passing output efficiency by match date]

*The asterisk on 10/29 is there because VolleyMetrics coded both MN matches from that week on the same day, so the date on the file for both says 10/29. That’s why we use Avg. Output Eff.

Anyway, there are thousands of ways to implement something like this – and then turn it into something digestible and actionable for the coaching staff.

Which type of serve is best?

Bears. Beets. Battlestar Galactica.

[Figure: distribution of per-match serving performances, by serve type]

What you see above is the distribution of serving performances per player per match, broken down by type of serve. This chart is built using Big Ten and Pac-12 conference matches; serving performances with fewer than 5 serves in a match were excluded. 1st ball point score efficiency is the serving team’s wins minus losses when defending the reception-plus-attack of their opponent. It’s basically FBSO eff from the standpoint of the serving team, which is why most of the efficiencies are negative: the serving team is more likely to lose on that first attack after they serve.

You’ll see from the viz that the natural midpoint for all types of serves is around -.250. So the argument becomes: if you’re going to average about the same result regardless of which serve you hit, what does it matter? What matters here is the deviation from the mean. Jump floats look like the classic bell-shaped normal distribution, and if you searched for specific players, you could see how their performances shake out relative to the average of the two leagues. If a player consistently fell below this average, maybe it’s time to develop a new serve or dive deeper into her poor performance.

Jump serving, as you might expect, definitely has a good percentage of players with performances above the mean. However, there’s also a wider distribution in general, and because of this (likely due to increased service error when jump serving) many performances fall far short of league averages. The takeaway is that while jump serving can be beneficial, the larger standard deviation means you might only want to jump serve if you need to take a chance against a stronger team.

Standing floats are interesting. Close and far just indicate where the server starts relative to the endline: Molly Haggerty of Wisconsin hits a “far” standing float while Kathryn Plummer of Stanford hits her standing float just inches from the endline. Not only is the average for standing floats far from the endline a little higher (-.243) than for standing floats close to the endline (-.257), but as you can see from the chart, these far-away floats are more narrowly distributed, indicating more consistent performance.

While jump floats have the highest average (-.229) and jump serves (-.264) may provide the appropriate risk-reward for some servers, it may actually be these standing float serves from long distance that provide a great alternative if you have a player lacking a nicely developed, above-average serve.

False. Black bear.

Top Servers in the Big Ten

[Figure: serve value vs. service error %, with top Big Ten servers highlighted]

Coming back from the SSAC in Boston this past week, I’ve been putting more thought into evaluating players against the market they’re situated in – much like how baseball uses WAR (wins above replacement) to compare a player’s value against that of a replacement-level MLB player. That stat has of course evolved over the years, with different measurements for position players and pitchers, but the underlying principle has remained constant.

Looking at volleyball, there are 6 discrete skills (7 if you count freeball passing), so a single WAR-style metric makes a little less sense, but the general philosophy can be applied as a way to compare performance against league expectancies.

So in the above viz, I’ve used the league-average PS Eff for each of the receive qualities, which looks like the table below:

[Table: league-average PS Eff by serve outcome, from service ace (~.998) at the top down to service error at the bottom]

Service ace is on top, working down to service error at the bottom. And yes, a service ace should absolutely be worth 1.0 and I’m not sure why it isn’t, but .998 and 1.0 are pretty darn close for our purposes at the moment.

Using these numbers, we then look at how frequently a player’s serves produced each of these specific outcomes. Multiply frequencies by efficiencies, add them up, divide by the number of serve attempts, and voilà!
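In code, that recipe is a short weighted average. Here, outcome_eff would be a named vector of the league-average PS Eff values from the table above (only the ace value, ~.998, is quoted in this post) and outcomes a vector of outcome codes for one player’s serves; both names are hypothetical:

```r
# Serve value = sum(frequency of each outcome x its league-average PS Eff),
# divided by the number of serve attempts
serve_value <- function(outcomes, outcome_eff) {
  freqs <- table(outcomes)  # count of each serve outcome
  sum(outcome_eff[names(freqs)] * as.numeric(freqs)) / length(outcomes)
}
```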

I’ve built this viz to again look at the relationship with service error percentage (while highlighting the top servers). You’ll notice a slight negative relationship between serve value and service error, but it’s not definitive. Especially among the servers who bring the most value, there’s certainly a range of error in that group – and almost a correlation of 0 if you draw a box around SSS, Davis, Swackenberg, and Kranda.

However in a general sense, you can assess the value each server brings to the table based on what her results are worth against the league average. In this case, Kranda comes out on top as giving you the best shot to point score.

Clever folks might be wondering what her breakdown looks like in terms of percentages of each outcome. So voilà again!

[Table: serve outcome percentage breakdown for the top four servers]

^ Here is the breakdown for the top 4 servers by each of the outcomes.

What you’ll notice is that they each have a unique footprint. Kranda makes her money by serving aces (around 14%) whereas SSS lives in the consistently-good realm. SSS misses only 2% of her serves. That’s a huge deal. She keeps constant pressure on, and even though her sum total of ok + good + perfect passes allowed is higher than the others’, she doesn’t give up free points – which makes her the 3rd-best server in the Big Ten in 2016.

I’m just starting to look at the data from a “what’s it worth relative to the league” type of standpoint, so I’ll likely have more posts like this soon. Previously, I’ve focused more on “what’s a player worth to her team” and specifically “what’s a player worth to her team, in this context, when playing opponent X.” I think the way I’ve approached this previously has merit, especially since we don’t have a marketplace for trading players like professional teams do – but you could easily evaluate All-Americans and other interesting things by comparing players to league data.

 

Pac 12 – FBSO & Service Error

[Figure: opponent FBSO efficiency vs. service error %, per Pac-12 match]

Welcome to the Pac 12.

So here’s the same FBSO vs. SE% graphic I built for the Big Ten. It looks pretty similar, with a loose, positive correlation between the two metrics. If you don’t remember from the original post, these are the individual match performances of each team – meaning that UCLA, in the bottom left there, missed only 1.37% of their serves in a match where they also held their opponent (sorry Cal) to an FBSO efficiency of -0.027. FBSO Eff again refers to the receiving team’s points won minus points lost on that first receive-to-attack possession, divided by the number of times they’re served to (which accounts for SEs).

An interesting observation when comparing leagues is the correlation coefficient between the metrics in question. For the Big Ten, r = 0.45. For the Pac-12, r = 0.28. Translated: service error percentage has a stronger relationship with opponent FBSO Eff in the Big Ten than in the Pac-12, which suggests that if you play in the Big Ten, missing your serves hurts you more than it does in the Pac-12.
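Computing that per league is straightforward with dplyr, assuming a hypothetical one-row-per-team-match data frame with a league column:

```r
library(dplyr)

# Correlation between service error % and opponent FBSO eff, by league
matches %>%
  group_by(league) %>%
  summarise(r = cor(se_pct, opp_fbso_eff))
# Big Ten: r ~ 0.45; Pac 12: r ~ 0.28
```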

But the next thought I had, which I think I’ve alluded to in most posts, is that context matters – who you’re playing matters. Teams have different strengths and weaknesses: maybe you just need to push them off the net to 10 feet so they can’t set the quick, maybe you need to push them to 20 feet to really pressure them, or maybe you just need to serve in to pressure them. So we need to break this down by opponent.

[Figure: opponent FBSO eff vs. team SE% per match, with a regression line for each Pac-12 opponent]

This is still Opponent FBSO Eff vs. Team SE% on a per-match basis, but the opponents are listed on the left. The linear regression lines going through each set of dots are a visual representation of how related the two metrics are. If you look at Cal or Washington State, you’ll see pretty strong positive relationships, meaning that if you miss fewer serves, you’re likely to lower your opponent’s FBSO Eff.
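A chart like this is a natural fit for ggplot2’s facets; matches and its columns are again hypothetical names for a one-row-per-team-match data frame:

```r
library(ggplot2)

# One panel per opponent, with a linear regression line through each panel
ggplot(matches, aes(x = se_pct, y = opp_fbso_eff)) +
  geom_point() +
  geom_smooth(method = "lm", se = FALSE) +
  facet_wrap(~ opponent) +
  labs(x = "Team Service Error %", y = "Opponent FBSO Eff")
```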

But the surprising thing here is that for the majority of teams there is little to no relationship between the two. Take UCLA, for example: the almost horizontal regression line means that increasing your SE% does not produce a corresponding increase or decrease in their FBSO Eff – it’s effectively a wash. The way I interpret this is that while you might be serving “tougher,” the additional errors are offsetting the overall outcome. You might be reducing FBSO Eff when your serve is in (since you’re serving tougher), but you’re also missing more often, essentially counteracting any benefit.

Colorado is the only opponent where an increase in SE% actually pays off the way aggressive-serving coaches hope – more errors coinciding with a lower opponent FBSO Eff – and even then it’s a crazy weak relationship (r = -0.13).

So against most opponents, the takeaway is that missing serves doesn’t really matter – they’re going to have about the same FBSO Eff either way. But against others, like Stanford (r = 0.49) or Cal (r = 0.68), you might want to tone down the aggressive serving: the more errors you make, the more their FBSO Eff tends to rise.

[Figure: the same per-opponent breakdown using Big Ten data]

Just for fun – and because I needed a break from my thesis – I ran this same type of thing using the Big Ten data again.

Playing against Ohio State has the strongest relationship between SE% and FBSO Eff at r = 0.68. Again, this means that you’re very likely to raise OSU’s FBSO Eff if you raise your SE% – so keep your freaking serves in play.

The Illinois chart is kind of funny. We had a running joke in the office that we shouldn’t pass perfectly if we could help it, because our overall efficiency was better off a medium pass than off a perfect one. This is likely because Jocelynn Birks was usually the best in-system choice, but if the pass was perfect, our setter could make the mistake of involving someone else in the offense… Granted, Birks graduated last year and this data is from this season, but I’m glad to see the team sticking with tradition.

As you can see from the breakdown by opponent, it makes sense that the overall Big Ten relationship between FBSO Eff and SE% is stronger, as the majority of opponents show a weak to moderate relationship between the variables.

A complaint about this viewpoint might be that better teams serve better, and that teams who miss a bunch of serves are less likely to stop an opponent in general. I think that’s fair: Minnesota is a low-error team while Michigan State is often the opposite, and one of these teams is also objectively better than the other, so should we compare them the same way? That’s where Efficiency Change comes into play. My next post will likely be Service Error % against Eff Change, broken down by team. This should help clarify the real value of serving against a specific opponent and account for the overall context of the situation much better than FBSO Eff alone can.

Anddddd back to my thesis work…