Step 14.1 Computer Vision (update)

Quick update on our original post. We've mostly solved the "players switching sides while jumping" problem and have also gone down the rabbit hole of ball tracking. I'm not going to get into the technical details here; this is more of a glimpse into what we've been working on and where we are today.

Player Tracking

There's still some jitter, which we think we can smooth out with a moving average. Honestly, not too shabby compared with the original version from the first post. As we get further along, I'd like to post something about cool use cases for this type of data. I'm not sure whether there's more value in adding extra positional data to what a .dvw / .vs file already has (think: the regular file plus 24 extra columns for the X and Y coordinates of each player), or whether the full tracking dataset (including movement between touches) would yield cooler insights. Probably a combination of both at some point?
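For the curious, here's a minimal sketch of what that smoothing could look like. This isn't our actual pipeline; it assumes the tracker spits out one row per player per frame with pixel-space centers, and the column names and sample numbers are made up for illustration.

```python
import pandas as pd

# Hypothetical per-frame tracker output: one row per player per frame,
# with pixel-space center coordinates. The spike at frame 3 is the kind
# of single-frame jitter we want to knock down.
tracks = pd.DataFrame({
    "frame":     [0, 1, 2, 3, 4, 0, 1, 2, 3, 4],
    "player_id": [7, 7, 7, 7, 7, 12, 12, 12, 12, 12],
    "x": [410, 414, 409, 551, 416, 120, 122, 125, 127, 130],
    "y": [233, 230, 236, 232, 231, 480, 478, 477, 475, 474],
})

# Smooth each player's track independently with a centered rolling mean.
# A 5-frame window damps one-frame spikes without lagging real movement much.
window = 5
smoothed = (
    tracks.groupby("player_id")[["x", "y"]]
    .transform(lambda s: s.rolling(window, center=True, min_periods=1).mean())
)
tracks["x_smooth"] = smoothed["x"]
tracks["y_smooth"] = smoothed["y"]

print(tracks)
```

The window size is the knob to play with: too small and the jitter stays, too large and fast movements get washed out.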

Ball Tracking

Now, to be fair, this video is cheating: it's pre-labeled (I went in and drew that box around the volleyball myself). But I like having the bounding box around the ball; it's a cool visual. As you can see, we haven't merged the two models yet, but in theory we'd get to the point where you run things once and get both player and ball tracking simultaneously.

Here is what the output from the model actually looks like. It shrinks the bounding box down to its center point and uses that to track the ball as it zips around the court. In theory, with a little math (distance, time, and how parabolas work), we should be able to calculate not only the ball's X and Y coordinates but also its Z coordinate (height above the floor). Fully baked ball tracking is still down the road for us, but it's cool to see what's possible in a short amount of time.
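To make the parabola idea concrete, here's a toy sketch, not our actual implementation. It assumes we've already turned pixel measurements into heights in meters and have timestamps for each frame; the sample numbers are invented. Between touches the ball is in free flight, so a quadratic fit to a handful of frames recovers the whole arc.

```python
import numpy as np

def box_center(x1, y1, x2, y2):
    """Collapse a bounding box to its center point, which is what gets tracked."""
    return (x1 + x2) / 2.0, (y1 + y2) / 2.0

# Hypothetical ball observations between two touches: timestamps (s) and
# estimated heights above the floor (m), already converted from pixels.
t = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6])
z = np.array([2.00, 2.35, 2.60, 2.76, 2.82, 2.78, 2.64])

# Free flight follows z(t) = z0 + v0*t - 0.5*g*t^2, i.e. a parabola,
# so a least-squares quadratic fit reconstructs the arc from a few frames.
a, b, c = np.polyfit(t, z, deg=2)

apex_t = -b / (2 * a)                    # time of peak height
apex_z = np.polyval([a, b, c], apex_t)   # peak height in meters
g_est = -2 * a                           # implied gravity; sanity check (~9.8 m/s^2)

print(f"apex at t={apex_t:.2f}s, z={apex_z:.2f}m, implied g={g_est:.1f} m/s^2")
```

If the implied gravity comes out wildly different from 9.8 m/s^2, that's a hint the pixel-to-meter conversion (or the touch segmentation) is off, which makes it a handy built-in sanity check.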

As per usual, tons of credit to Andrew Tao for his work on the player tracking side – and to Steve Aronson for helping stand up the first iteration of ball tracking.