Most of the people reading this blog are pretty familiar with the stuff I’m doing in terms of athlete monitoring and specifically with my thesis here at Illinois. You’ve likely read articles such as these (ESPN, Sports Illustrated, or even BJSM if you’re a complete nerd). “Sports Science” is the buzzword you hear and companies are popping up all over to monitor sleep, fatigue, stress, HRV, GPS, training loads, recovery, hydration, you name it…
The issue I have with this approach is that they’re aiming at the wrong target in my opinion (which, admittedly, is not backed by several PhDs or a lifetime of experience). Most people are aiming to do one of the following: avoid overtraining syndrome, avoid rapid spikes in training load, assess mental wellness, or assess physical strength/competency. But the common thread underneath it all: Performance. We want to avoid overtraining syndrome because it cripples performance. We want to avoid injuries because injuries hurt performance. We want to keep our athletes’ mental wellbeing optimal because unstable athletes have unstable performances. We want to increase athlete strength/speed/etc because we want to increase performance.
So performance is the name of the game. Cool. Done. So why aren’t we really looking at it? Running faster isn’t the performance outcome we ultimately care about. Neither is lifting heavier. What we really care about is how well you played – not how fast you ran.
Anna Saw published some interesting research last year looking at the effectiveness of using athlete self-reported subjective measures for tracking changes in wellness in response to training load. What she found was that these subjective measures outperformed objective measures (blood markers, heart rate data, etc) in their sensitivity to changes in load. And trying to account for contextual stressors makes a lot of sense – running a mile when you feel great isn’t terrible; running a mile when you’re exhausted from 2 midterms, a fight with your girlfriend, and a nagging phone call from your parents about getting your life together…that mile might feel longer. So while you might feel more or less ready, biomarkers might fail to register the effects of these stressors in a meaningful way. That’s why these subjective measures are important – because they may be able to cut through the noise and get at the source of increased or decreased performance.
This fall, we took Saw’s work a step further and asked the Illinois volleyball team to subjectively rate their performances in practices and matches on a spectrum from their own worst to best performances ever. We made the assumption that the athlete knows when she has a terrible match or a great practice according to her own standards and expectations. It’s the same way you know when your mind and body are at optimal levels before a practice: we have a pretty good sense about these things (as evidenced by Saw’s research).
The viz you see at the top of this post is one of our starters. I looked through her data and built a regression model to explain and predict her performance for nine matches in the fall. You might ask why so few matches are included:
1. The data collection process was transitioned to our DOVO in November as a more permanent solution, so I didn’t use any data collected after that handoff.
2. We eliminated preseason matches.
3. The athletes took 3 surveys on practice/match days (AM, Pre, Post). If an athlete skipped a survey and the regression model uses a variable from that survey, that day gets dropped; the model can only generate an output from days with 100% data collection (see the sketch below).
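To make that last point concrete, here’s a minimal pandas sketch of the kind of filter involved. The column names, dates, and ratings are placeholders I made up for illustration, not the actual survey schema.

```python
import pandas as pd

# Hypothetical survey log: one row per survey response per day.
# Column names and values are made up for illustration.
surveys = pd.DataFrame({
    "date":   ["2016-10-01", "2016-10-01", "2016-10-01", "2016-10-02", "2016-10-02"],
    "survey": ["AM", "Pre", "Post", "AM", "Pre"],   # Oct 2 is missing the Post survey
    "rating": [3, 4, 7, 2, 5],
})

# Keep only days where all three surveys (AM, Pre, Post) were completed.
counts = surveys.groupby("date")["survey"].nunique()
complete_days = counts[counts == 3].index
usable = surveys[surveys["date"].isin(complete_days)]

print(usable)   # only the 2016-10-01 rows survive
```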
So for this athlete, her regression model is as follows:
Performance = 0.33 (physical fatigue) + 0.84 (3-day rolling average: control of your day) + 2.2 (Acute:Chronic workload) + 0.44 (PhysFatigue*Control of your day) – 2.55
That’s 3 unique variables: her pre-match physical fatigue rating, a 3-day rolling average of her “who’s in control of your day: you or others” rating, and her acute:chronic workload ratio (the fourth term is just the interaction between the first two, and the last number is the intercept).
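Just to show how the equation gets applied, here’s a tiny Python function that plugs pre-match values into those coefficients. The example inputs are invented for illustration; they’re not from her actual data.

```python
def predict_performance(phys_fatigue, control_3day_avg, acute_chronic):
    """Plug pre-match values into the fitted equation above.

    Coefficients are copied from the equation in the post; the input
    scales are whatever the surveys use (example values below are made up).
    """
    return (
        0.33 * phys_fatigue
        + 0.84 * control_3day_avg
        + 2.2 * acute_chronic
        + 0.44 * (phys_fatigue * control_3day_avg)
        - 2.55
    )

# Hypothetical pre-match inputs, for illustration only.
print(predict_performance(phys_fatigue=3.0, control_3day_avg=4.0, acute_chronic=1.1))
```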
Using these variables, we account for 90% of the variance in her performance (adjusted R-squared = 0.90). This model has a p-value of 0.007 and a LOOCV value of 0.16 (leave-one-out cross-validation, a way to estimate how well a model predicts data it wasn’t fit on). If this is over your head a little, the translation is that this model is pretty decent.
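If you want to run this kind of check on your own data, here’s a sketch of one way to get an adjusted R-squared, model p-value, and leave-one-out error in Python with statsmodels and scikit-learn. The file name and column names are placeholders, and I’m not claiming this reproduces my exact numbers, which came from my own dataset and workflow.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.model_selection import LeaveOneOut

# Assumed column names; the real dataset has one row per match for the athlete.
df = pd.read_csv("athlete_matches.csv")  # hypothetical file

formula = "performance ~ phys_fatigue * control_3day + acute_chronic"
model = smf.ols(formula, data=df).fit()
print(model.rsquared_adj, model.f_pvalue)

# Leave-one-out cross-validation: refit on n-1 matches, predict the held-out one.
errors = []
for train_idx, test_idx in LeaveOneOut().split(df):
    fit = smf.ols(formula, data=df.iloc[train_idx]).fit()
    pred = fit.predict(df.iloc[test_idx])
    errors.append((df["performance"].iloc[test_idx].values - pred.values) ** 2)

print(float(np.mean(errors)))  # mean squared leave-one-out prediction error
```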
If you look at the viz, the x-axis is what the model spits out, graphed against the y-axis of what the athlete actually reported. Ideally, every point would fall right on the line where predicted equals reported, but we get a pretty good estimate here.
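For anyone recreating that kind of viz, it’s just predicted vs. reported with a reference line where the two would be equal. The numbers below are placeholders, not her actual nine matches.

```python
import matplotlib.pyplot as plt

# Placeholder predicted/reported pairs; the real viz uses the athlete's nine matches.
predicted = [4.1, 5.8, 6.5, 7.2, 7.9, 8.3, 8.8, 9.1, 9.6]
reported  = [4.5, 5.5, 6.8, 7.0, 8.1, 8.0, 9.0, 9.3, 9.5]

plt.scatter(predicted, reported)
plt.plot([0, 10], [0, 10], linestyle="--")   # line where predicted == reported
plt.xlabel("Model-predicted performance")
plt.ylabel("Athlete-reported performance")
plt.show()
```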
To translate: if we know these 3 values for this player going into a match, we can predict with pretty good accuracy how this player will self-report her performance afterwards. That’s kind of insane. That’s what my thesis is about.
The cool thing is that self-rated performance can be universal. Runners know when they were in the zone, quarterbacks know when they’re seeing the field and throwing with accuracy, golfers know when they’re driving the ball like they want to. All we need to do is ask the right questions.
Now this isn’t to say that all our athletes had models at this standard. The lowest adjusted R-squared was 0.23, which is pretty useless at the end of the day. Some athletes took these surveys with varying levels of seriousness, which is fine; we’re asking a lot of them. But for those with good data, we got pretty cool results.
A final, interesting aspect of using self-rated performance is that it could help drive athlete education and behavioral changes. A player might be more inclined to change her behavior if she sees the direct relationship to how she is rating her performance, rather than just having an athletic trainer spit out cookie-cutter lines about getting more sleep, etc. If I can show her that specific actions will lead to her reporting better performance, I think that could impact how influential athlete monitoring systems can be.
When my thesis is actually finalized I’ll share it here, but that’s not likely to be until April/May. I’m excited to see how it progresses.