For the RMD version: https://rpubs.com/chadgordon09/step7

1. Above are the possible outcomes that result from any given attack, using VolleyMetrics’ coding. The *Expected Value* is given for each outcome, along with its count in parentheses (you’ll notice we’re not working with a small sample size…)

2. Block touches – if you’re wondering where they are: we skipped to the next touch whenever there was a block touch. So if the ball was tooled off the block, we just called it a kill for simplicity’s sake. If a good block touch resulted in a perfect dig, we called the “output” a perfect dig.

3. So you can think of adding these extra outputs as an “*expanded attack efficiency*” – one that relies not only on the terminal kills, errors, and blocks – but **gives credit** for about 1/4 of a point on an attack that leads to a *poor dig* (+0.256) by the defense – and *subtracts* about 1/5 of a point if you attack into a *perfect dig* (-0.187). This way, we can better account for attackers who might be described as *having great volleyball IQ* – the ones who don’t simply chip the ball to the libero anytime they don’t have a fantastic swing. We want to give credit where it’s due.
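To make the weighting concrete, here is a toy sketch of the idea (in Python rather than the post’s R; the +0.256 and -0.187 weights come from the table above, but the five-attack log is made up for illustration):

```python
# Expanded attack efficiency: score EVERY attack by its outcome's Expected
# Value, not just the terminal kills/errors/blocks.
# Weights for the non-terminal digs are the post's figures; attacks are toy data.
ev = {"Kill": 1.0, "Error": -1.0, "Blocked": -1.0,
      "Poor Dig": 0.256, "Perfect Dig": -0.187}

attacks = ["Kill", "Kill", "Poor Dig", "Perfect Dig", "Error"]

# Traditional efficiency counts only the terminal outcomes: (2 - 1) / 5
traditional = sum({"Kill": 1, "Error": -1, "Blocked": -1}.get(a, 0)
                  for a in attacks) / len(attacks)

# Expanded efficiency also credits/debits the non-terminal attacks.
expanded = sum(ev[a] for a in attacks) / len(attacks)
```

Same five swings, but the attacker who earned a poor dig gets a slightly better number than plain hitting efficiency would show.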

4. So this is cool – using the language of **Expected Value** to describe 100% of attacking, rather than the 50% that traditional attack efficiency accounts for – but now what?

**Does this help predict the future?**

5. First row: we create our column “output_situation” – this column will eventually contain each attack outcome that we designate.

5.1 Second row: if the Skill is an Attack and the next row is a Block, then look two touches into the future – take whatever is at Attack + 2 rows. Third row: if the next row is not a Block, take whatever comes next, at Attack + 1 rows.

5.2 If the specific Attack is coded #, =, or /, then we know it’s a Kill, Error, or Block respectively, and we label it accordingly.

5.3 Finally, because these outputs account for 99.1% of what follows an attack (VM code is not perfect, believe it or not), we only keep these and drop the other 0.9% – that’s the last row: keep it if it’s in the list of outcomes we want, otherwise call it NA.
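The steps above can be sketched roughly as follows – in pandas rather than the post’s R, and with column names and toy rows that are my assumptions, not VolleyMetrics’ actual schema:

```python
import numpy as np
import pandas as pd

# Toy touch sequence: an attack dug poorly, an attack off a block touch
# that gets dug perfectly, and a terminal kill.
df = pd.DataFrame({
    "skill":           ["Attack", "Dig", "Attack", "Block", "Dig", "Attack"],
    "evaluation_code": ["+",      "-",   "!",      "+",     "#",   "#"],
})

is_attack = df["skill"] == "Attack"
one_ahead = df["skill"].shift(-1).str.cat(df["evaluation_code"].shift(-1), sep=" ")
two_ahead = df["skill"].shift(-2).str.cat(df["evaluation_code"].shift(-2), sep=" ")

# Step 5.1: if the next touch is a block, skip past it (Attack + 2 rows);
# otherwise take the very next touch (Attack + 1 rows).
df["output_situation"] = np.where(
    is_attack & (df["skill"].shift(-1) == "Block"), two_ahead,
    np.where(is_attack, one_ahead, None),
)

# Step 5.2: terminal attacks are labeled straight from their own code.
terminal = {"#": "Kill", "=": "Error", "/": "Blocked"}
df.loc[is_attack & df["evaluation_code"].isin(terminal),
       "output_situation"] = df["evaluation_code"].map(terminal)

# Step 5.3: keep only the outcomes we designated; everything else becomes NA.
keep = {"Kill", "Error", "Blocked", "Dig #", "Dig +", "Dig -"}
df["output_situation"] = df["output_situation"].where(df["output_situation"].isin(keep))
```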

6. Now that we have our “output_situation” variable, we need to see how Expected Value differs between each of the outcomes. In this case, “rally_eff” is just saying: if a specific touch happens and you won that rally at any point, +1; if you lost that rally at any point, -1. Then we aggregate all those attacks and divide by the count to determine the average rally_eff, or average *Expected Value*. We then label this **eV_output**.

6a. A common question is: should the attacker receive credit even if the rally ended 10 seconds or another 5 possessions after the attack? I wrestled with this question for a long time and looked at the correlation between using a “possession efficiency” vs. a “rally efficiency” – giving credit “within the appropriate possession” vs. “within the whole rally” – and surprisingly, the outcomes were very, very similar, so we stuck with “rally efficiency” for its ease of calculation, use, and explanation.
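The aggregation in step 6 amounts to a group-by mean, sketched here in pandas with made-up rows (the real averages come from ~1.9 million attacks):

```python
import pandas as pd

# rally_eff: +1 if the attacking team eventually won the rally, -1 if it lost.
# An outcome's Expected Value is the mean rally_eff over all attacks with that label.
touches = pd.DataFrame({
    "output_situation": ["Kill", "Kill", "Perfect Dig", "Perfect Dig",
                         "Perfect Dig", "Error"],
    "rally_eff":        [1, 1, -1, 1, -1, -1],
})

eV_output = (touches.groupby("output_situation")["rally_eff"]
                    .mean()
                    .rename("eV_output"))
```

With enough data, a Kill averages to +1 by construction, an Error to -1, and every non-terminal outcome lands somewhere in between.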

7. Now we put the pieces together. We have the attack label (“output_situation”) as well as the Expected Value (“eV_output”), but we need to merge this information w/ our master dataframe of all rallies and touches. This merge matches output_situation in our master data w/ output_situation in our temporary dataframe (a), and when it finds a match, it spits out the eV_output value associated with that specific output_situation. (Basically, when it finds “Kill” in the dataframe, it adds 1.0 to the eV_output column, and it does this for each of the different output_situation outcomes.)

7.1 Because the VM data isn’t 100% clean, the eV_output values for Kill and Error come out as something like 0.99981 and -0.99982 respectively – so we manually clean those up to an even +1 and -1, as they should be.
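Steps 7 and 7.1 together look something like this pandas sketch (toy rows; the real merge runs against the full master dataframe of touches):

```python
import pandas as pd

# Master data holds one row per attack; the temporary dataframe (a) holds
# one row per outcome with its learned Expected Value.
master = pd.DataFrame({"output_situation": ["Kill", "Poor Dig", "Error", "Kill"]})
a = pd.DataFrame({"output_situation": ["Kill", "Error", "Poor Dig"],
                  "eV_output":        [0.99981, -0.99982, 0.256]})

# Left-merge: every attack keeps its row and picks up the matching eV_output.
master = master.merge(a, on="output_situation", how="left")

# Step 7.1: force the terminal outcomes to exactly +1 / -1.
master.loc[master["output_situation"] == "Kill",  "eV_output"] = 1.0
master.loc[master["output_situation"] == "Error", "eV_output"] = -1.0
```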

Now to the fun stuff…

8. Does Expected Value **predict the future**?

8.1 So we use similar code to the other posts. We grab *historical* Expected Value numbers for the two competing teams. We look at the difference between these values – and then see if, using the historical difference, we can predict the winner of a “future” set.

8.2 Let’s take the 2019 National Championship. Stanford vs. Wisconsin. A difference of 0.00396.

**Stanford** Attacking Expected Value: 0.285

**Wisconsin** Attacking Expected Value: 0.281

8.3 Basically, the question we are asking is: if I knew the two values (0.285 and 0.281) for the teams in question, could I predict the future?

8.4 So we run the same logistic regression modeling to take continuous inputs and predict binary outputs (input = eV difference, output = won or lost).
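For readers who haven’t seen the earlier posts, the modeling step is roughly this – a one-feature logistic regression, here fit by plain gradient descent in numpy rather than the R `glm` the post presumably uses, on made-up data (so the accuracy below only illustrates the mechanics, not the post’s 32% result):

```python
import numpy as np

# Toy data: historical eV difference between the two teams, and whether the
# "future" set was won (1) or lost (0).
ev_diff = np.array([-0.20, -0.10, -0.05, 0.05, 0.10, 0.20])
won     = np.array([0.0,    0.0,   1.0,  0.0,  1.0,  1.0])

w, b = 0.0, 0.0
lr = 1.0
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-(w * ev_diff + b)))  # predicted win probability
    w -= lr * np.mean((p - won) * ev_diff)         # log-loss gradient wrt w
    b -= lr * np.mean(p - won)                     # log-loss gradient wrt b

pred = (1.0 / (1.0 + np.exp(-(w * ev_diff + b)))) > 0.5
accuracy = np.mean(pred == won.astype(bool))
```

The continuous input is squashed through a sigmoid into a win probability, and anything above 0.5 counts as a predicted win.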

8.5 The model predicts with 32% accuracy. I would argue this is not great. Historical Expected Value averages for attacking do not seem to reliably predict the future. This is identical to just using regular, old-fashioned hitting efficiency to predict the future.

9. Well, dang. Does Expected Value **explain the past**? Yes, yes it does – to the same extent as Hitting Efficiency: about 65% of the variance in the result can be attributed to the difference in Attacking Expected Value.

10. Final question – does historical Expected Value better predict future Hitting Efficiency?

10.1 So we know that regular, basic hitting efficiency is important. But we also know that historical hitting efficiency doesn’t always predict future hitting efficiency due largely to variability (teams might hit 0.300 +/- 0.200 for a range of 0.100 to 0.500).

10.2 I’m a big fan of stealing good ideas: https://hockey-graphs.com/2015/10/01/expected-goals-are-a-better-predictor-of-future-scoring-than-corsi-goals/

10.3 So…is Expected Value a better predictor of Future Attacking? Unfortunately, at the team level, *doesn’t look like it*. We see R² values around 0.12, meaning only 12% of the variance in actual attack efficiency for a team in a set can be accounted for by historical attack efficiency or historical Expected Value numbers.
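The R² check in 10.3 is just the squared correlation between the historical number and the actual set-level result, as in this numpy sketch (values made up; the post reports R² ≈ 0.12 on the real data):

```python
import numpy as np

# Historical attack efficiency (or eV) per team vs. actual efficiency in a
# later set. R^2 is the share of variance in "actual" explained by "historical".
historical = np.array([0.10, 0.20, 0.30, 0.40])
actual     = np.array([0.20, 0.10, 0.40, 0.30])

r = np.corrcoef(historical, actual)[0, 1]
r_squared = r ** 2
```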

10.4 Even at the individual player level, it might not be great either. The correlations are higher, near 0.39, but still not super high. This might be worth digging into in the future, but for the moment I’m going to leave it alone.

11. So, Expected Value sounds good in principle. It is a more accurate way to measure attacking outputs. It accounts for the positive and negative non-terminal attacks to better represent the outcomes an attacker has created. But in the aggregate of 8,000,000 touches, ~1.9 million of which are attacks, using single efficiencies for output situations just might not work. We have about a 7:1 split of data between women’s and men’s matches that may be playing a role. We have Big Ten and Pac 12 data in there alongside lower-level conferences around the US. Future work here might require a more focused, drilled-down viewpoint rather than taking everything in the aggregate, as there are likely a few confounding variables – but for now, the concept is really the important piece.

12. But wait! We’re only looking at **outputs** from attacking. Hitting an in-system quick attack must be easier than hitting a high ball against a triple block, right? Shouldn’t we account for this? Yes. Yes, we should. Welcome to the other half of the equation: **the input situation**.
