It seems like everyone is talking about fitness in VR. From the well-known training sequence of Wade Watts in Ready Player One, to the reams of comments piling up on r/Vive, to people designing their own fitness programs with playlists of Holopoint and Racket: Nx, it's easy to see that this is going to be a pretty big deal.

When conceptualizing how to design a fitness experience that would work not only with the Vive controllers, but also with future tracked peripherals and an integrated resistance subsystem, I had to break the problem down into manageable chunks. There were, and still are, many key questions that need to be answered.

From a spatial and mathematical standpoint, how do you track and grade exercise form while allowing for individual movement patterns? How do you define right and wrong, and convey that to the user without inspiring a rage-quit? Perhaps most importantly, how do you accomplish this with so few data points?

While it may seem obvious in retrospect, the current solution took many iterative forms. I started with simple colliders and had the user move their arms through them. Imagine a punching motion described by five colliders in a row. These colliders corresponded to a fixed-length array, and each collision would populate the array with the identifier of the collider being hit. When the array read { 1, 2, 3, 4, 5 }, the exercise would be completed. This worked pretty well, but was clearly not an extensible or robust solution.
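Here's a rough sketch of that collider-sequence idea (illustrative Python, not our actual project code; all names here are mine):

```python
# Minimal sketch of the first iteration: a fixed sequence of collider IDs
# that must be hit in order for a rep to count.

EXPECTED = [1, 2, 3, 4, 5]  # collider IDs along the punch path, in order

class ColliderSequence:
    def __init__(self, expected):
        self.expected = expected
        self.hits = []  # IDs recorded as the hand passes through each collider

    def on_collision(self, collider_id):
        """Record a hit; ignore out-of-order or repeated colliders."""
        next_index = len(self.hits)
        if next_index < len(self.expected) and collider_id == self.expected[next_index]:
            self.hits.append(collider_id)
        return self.hits == self.expected  # True once the rep is complete

# Example: a clean punch passes through all five colliders in order.
rep = ColliderSequence(EXPECTED)
for cid in [1, 2, 3, 4, 5]:
    done = rep.on_collision(cid)
print("rep complete:", done)  # rep complete: True
```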

The second iteration was to create an array of voxels as both a data-visualization tool and a capture device. In essence, a three-dimensional array of vectors would be created dynamically, and a cube collider would be instantiated at each position, forming a grid of voxels, or 3D pixels. As the user moved through the space, which now appeared as a three-dimensional grid, the voxel around each controller would light up and record its position in a list. We could then draw a line through the positions the user had moved through and visualize the exercise. While this solution was pretty, and quite fun to interact with, it ended up being a short stepping stone to the third iteration.
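In sketch form, the capture side of that voxel grid boils down to quantizing positions into cells and recording the distinct cells visited (the cell size here is invented for illustration):

```python
# Sketch of the voxel-grid capture idea: snap controller positions to
# cells of a regular 3D grid and record which cells were visited, in order.

import numpy as np

CELL = 0.1  # 10 cm voxels (illustrative)

def to_voxel(position, cell=CELL):
    """Map a world-space position (x, y, z) to integer voxel coordinates.
    The small epsilon guards against float edge cases at cell boundaries."""
    return tuple(np.floor(np.asarray(position) / cell + 1e-9).astype(int))

def capture_path(samples):
    """Return the ordered list of distinct voxels the controller passed through."""
    path = []
    for p in samples:
        v = to_voxel(p)
        if not path or path[-1] != v:  # skip repeats while lingering in one cell
            path.append(v)
    return path

# Example: a short sweep of controller positions.
samples = [(0.00, 1.20, 0.30), (0.04, 1.22, 0.35), (0.12, 1.25, 0.42)]
print(capture_path(samples))  # [(0, 12, 3), (1, 12, 4)], ready to draw as a line
```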

I quickly realized that the voxels simply represented slices in time, and that we could forgo mapping them entirely in favor of capturing each controller's position in 100 millisecond slices.
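A minimal sketch of that sampling loop, with a stand-in position source since the real controller API isn't shown here:

```python
# Sketch of the time-slice idea: sample each controller's position on a
# fixed 100 ms clock instead of quantizing space into voxels.

import time

SAMPLE_INTERVAL = 0.1  # 100 ms per slice, as described above

def record_curve(get_position, duration=2.0):
    """Poll a controller position source every 100 ms and return the curve."""
    curve = []
    end = time.monotonic() + duration
    while time.monotonic() < end:
        curve.append(get_position())   # (x, y, z) of the controller
        time.sleep(SAMPLE_INTERVAL)
    return curve

# Example with a fake position source (a real one would query the tracking API).
fake = iter([(0.0, 1.2, 0.3)] * 100)
curve = record_curve(lambda: next(fake), duration=0.5)
print(len(curve), "samples")  # roughly 5 samples at 100 ms spacing
```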

This allowed much greater resolution than the voxel matrix and was far more flexible in its application. The sampled positions trace out an ideal curve for the exercise; the question was how to compare the user's current motion to that curve.

At Intel, I helped design an algorithm that determined position by comparing electromagnetic waves reflected from a passive radio-frequency identification (RFID) reflector. The antenna received only a phase delta, which was the single data point available for predicting the distance to the target object. After filtering out signal noise, we found that the number of full wavelengths traversed could be determined algorithmically from the size of the phase delta. Smoothing and tracking the phase over time was done with a curve-fitting algorithm that minimized the sum of squared errors.
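The sum-of-squares idea is easy to demonstrate in a few lines. Here's an illustrative least-squares fit over synthetic noisy phase samples (the data and the linear model are invented for the example):

```python
# Sketch of sum-of-squares curve fitting over noisy phase samples.
# numpy's polyfit minimizes the sum of squared residuals, the same
# statistical-error criterion described above.

import numpy as np

t = np.linspace(0.0, 1.0, 50)                              # sample times
true_phase = 2.0 * t + 0.5                                 # underlying phase trend
noisy = true_phase + np.random.default_rng(0).normal(0, 0.05, 50)

coeffs = np.polyfit(t, noisy, deg=1)     # least-squares fit of a line
smoothed = np.polyval(coeffs, t)

residual = np.sum((noisy - smoothed) ** 2)  # the quantity the fit minimizes
print(f"fitted slope={coeffs[0]:.3f}, sum of squared error={residual:.4f}")
```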


All this is to say that when I walked around the ideal curves drawn in the virtual space, everything clicked. The comparison of those curves could be done with simple math, and we could accurately determine how closely the curve was followed in any given repetition of the exercise.
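As an illustration of that "simple math", here is one way to score a repetition: the mean squared distance between corresponding 100 ms samples of the ideal and recorded curves (the threshold is a made-up placeholder):

```python
# Sketch of grading a rep by comparing the recorded curve to the ideal one.

import numpy as np

def rep_error(ideal, actual):
    """Mean squared distance between corresponding samples of two curves."""
    ideal, actual = np.asarray(ideal), np.asarray(actual)
    n = min(len(ideal), len(actual))        # compare the overlapping samples
    diffs = ideal[:n] - actual[:n]
    return float(np.mean(np.sum(diffs ** 2, axis=1)))

ideal  = [(0.0, 1.2, 0.3), (0.10, 1.30, 0.40), (0.20, 1.40, 0.50)]
actual = [(0.0, 1.2, 0.3), (0.12, 1.28, 0.41), (0.22, 1.38, 0.52)]

error = rep_error(ideal, actual)
print("good rep" if error < 0.01 else "form needs work", f"(error={error:.5f})")
```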

The method continues to evolve. The curve tracking concept started us on what I believe to be the correct path, but was only part of the ultimate solution. The curves need to be dynamically manipulated to fit the user’s body in real time.

If you face east and do a bicep curl, it is exactly the same motion as if you were facing north. The static curve solution, while accurate in its own right, did not capture that distinction. It seems the curve really shouldn't be computed in relation to world space at all, but rather in local space, relative to the user's joints. Additionally, bodies vary in size and proportion as well as position, so finding ways to dynamically fit curves to individual bodies will likely be key in the future. This should give a fairly accurate representation of muscle recruitment, which is ultimately what we are trying to describe.
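As a sketch of what that local-space transform might look like, here is a rotation of world-space curve points into a frame centered on the user and aligned with their heading (this handles only yaw; a full solution would also account for limb length and scale):

```python
# Sketch: remove the user's heading so the same motion matches regardless
# of which way they face. Assumes a y-up frame where yaw = 0 faces +z.

import numpy as np

def world_to_local(points, root, yaw):
    """Rotate world-space curve points by -yaw about the up (y) axis,
    centered on the user's root position."""
    a = -yaw
    rot = np.array([[np.cos(a),  0.0, np.sin(a)],
                    [0.0,        1.0, 0.0      ],
                    [-np.sin(a), 0.0, np.cos(a)]])
    return (np.asarray(points, float) - np.asarray(root, float)) @ rot.T

# The same curl recorded while facing two different directions lands on
# (approximately) the same local-space curve once heading is removed.
curl_north = [(0.0, 1.0, 0.4), (0.0, 1.2, 0.2)]   # facing +z
curl_east  = [(0.4, 1.0, 0.0), (0.2, 1.2, 0.0)]   # facing +x
print(world_to_local(curl_north, (0, 0, 0), yaw=0.0))
print(world_to_local(curl_east,  (0, 0, 0), yaw=np.pi / 2))
```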

Due to the present limitation of having only three data points (two controllers and the HMD), we will need to use a custom inverse kinematics (IK) solution to predict where the elbows, shoulders, body and legs are positioned. This will evolve greatly with the introduction of tracking modules for those body parts.
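To give a flavor of the kind of inference involved (a toy guess, not a real IK solver, and the offsets are rough anthropometric assumptions), shoulders could be placed at fixed offsets from the HMD, oriented by the head's yaw:

```python
# Toy sketch of predicting untracked joints from the three tracked points:
# place the shoulders below and to the sides of the HMD along the body's
# right axis. A real IK solution would do far more than this.

import numpy as np

NECK_DROP = 0.25      # meters from HMD down to shoulder height (assumed)
SHOULDER_HALF = 0.20  # half the shoulder width (assumed)

def estimate_shoulders(hmd_pos, hmd_yaw):
    """Guess left/right shoulder positions from HMD position and yaw
    (y-up frame where yaw = 0 faces +z and the body's right is +x)."""
    right = np.array([np.cos(hmd_yaw), 0.0, -np.sin(hmd_yaw)])
    base = np.asarray(hmd_pos, float) - np.array([0.0, NECK_DROP, 0.0])
    return base - SHOULDER_HALF * right, base + SHOULDER_HALF * right

left, right = estimate_shoulders(hmd_pos=(0.0, 1.7, 0.0), hmd_yaw=0.0)
print("left shoulder:", left, "right shoulder:", right)
```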

Developing a solution to track and compare physical movements in the virtual world is a major milestone in the frontier of VR fitness. If anyone reading this article has worked on similar projects or has any insights you’d like to share, I would love to hear them. Feel free to reach out to duane@blackbox-vr.com.
