Uplift's 2-camera capture provides a reliable method of motion capture that can be easily set up at any location to record a wide range of sports movements. Because Uplift's solution is portable and flexible, the environment, positioning, lighting, and other variables can change from capture to capture. To make sure you're always capturing the highest-quality movements no matter the situation, follow the steps below.
General Troubleshooting Steps
Ensure the cameras are set up at 90-degree angles to one another and at the recommended distance.
This refers to the actual camera orientations, not the tripods themselves; even if the tripods are set up at 90-degree angles, the quality of the data may be impacted if the cameras' views are not orthogonal (a quick way to check this is sketched below).
If the cameras deviate from this general setup, we cannot guarantee that our calibration matrix used for creating 3D data will work as intended.
Also, please ensure the cameras are set up at least 7 feet from the athlete.
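If you have access to the cameras' orientations from a calibration tool, you can sanity-check the 90-degree requirement by measuring the angle between their optical axes. The snippet below is only an illustrative sketch; the rotation-matrix inputs and the function name are assumptions made for the example and are not part of Uplift's software.

```python
import numpy as np

def angle_between_cameras(R1: np.ndarray, R2: np.ndarray) -> float:
    """Angle (degrees) between two cameras' optical axes.

    R1 and R2 are 3x3 rotation matrices mapping world coordinates into each
    camera's frame; the optical axis is each camera's local +Z direction
    expressed in world coordinates (the third row of R).
    """
    z1, z2 = R1[2, :], R2[2, :]
    cos_angle = np.dot(z1, z2) / (np.linalg.norm(z1) * np.linalg.norm(z2))
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))

# Example: two cameras rotated 90 degrees apart about the vertical axis.
R_a = np.eye(3)
R_b = np.array([[0.0, 0.0, -1.0],
                [0.0, 1.0, 0.0],
                [1.0, 0.0, 0.0]])
print(f"Angle between optical axes: {angle_between_cameras(R_a, R_b):.1f} deg")  # 90.0 deg
```

A value close to 90 degrees indicates the views are roughly orthogonal; a large deviation suggests the cameras should be repositioned before recording.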
Position the athlete in the green frame.
Our models try to “zone in” on people within the green frame. Although they can still capture body parts outside of this frame, the likelihood of collecting high-quality data decreases.
Ensure that the capture contains the entire movement. If part of the movement is missing from the videos, it can negatively impact the quality of our recorded data.
Although we've designed our event detection algorithms to be as robust as possible, without the start or end of a trial, we cannot guarantee they will work as intended. A helpful tip is to have the athlete move through the entire motion once to ensure they stay within the frame the whole time, and move the cameras back if needed.
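If you record a warm-up rep, one simple automated version of that check is to confirm that every detected keypoint stays inside the capture region for every frame. The sketch below is purely illustrative; the keypoint array layout and the frame-region bounds are assumptions, not Uplift's actual data format.

```python
import numpy as np

def trial_stays_in_frame(keypoints: np.ndarray, region: tuple) -> bool:
    """Check that every detected keypoint stays inside the capture region.

    keypoints: array of shape (n_frames, n_joints, 2) holding (x, y) pixel
               coordinates for each joint in each frame.
    region:    (x_min, y_min, x_max, y_max) bounds of the green frame.
    """
    x_min, y_min, x_max, y_max = region
    xs, ys = keypoints[..., 0], keypoints[..., 1]
    inside = (xs >= x_min) & (xs <= x_max) & (ys >= y_min) & (ys <= y_max)
    bad_frames = np.where(~inside.all(axis=1))[0]
    if bad_frames.size:
        print(f"Joints left the frame in {bad_frames.size} frame(s), "
              f"starting at frame {bad_frames[0]}; consider moving the cameras back.")
    return bad_frames.size == 0
```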
Avoid black or dark clothing when possible.
All-black clothing makes it more difficult for our models to capture your body positions. Like any other video-based computer vision model, the algorithm works by looking at edges, contours, and colors in the image (for an example of how these algorithms work, you can check out this article outlining how they may learn to recognize a bird). Because edges and contours appear as distinct lines in images, all-black clothing makes it much harder for the algorithm to detect the lines and contours it uses to identify joints.
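You can see this effect for yourself by running a generic edge detector over one of your capture frames: dark-on-dark regions produce far fewer edges for a model to latch onto. The snippet below uses OpenCV's Canny detector purely as a stand-in for this kind of low-level feature extraction; it is not the algorithm Uplift uses, and the file path is a placeholder.

```python
import cv2
import numpy as np

# Load a single frame exported from one of your capture videos
# (the path is a placeholder).
frame = cv2.imread("capture_frame.png", cv2.IMREAD_GRAYSCALE)
if frame is None:
    raise FileNotFoundError("Replace 'capture_frame.png' with a real frame")

# Canny flags pixels where local contrast is high; low-contrast regions
# (e.g., black clothing against a dark background) yield very few edges,
# which is roughly why they are harder to track.
edges = cv2.Canny(frame, threshold1=50, threshold2=150)

# Fraction of pixels flagged as edges -- a crude proxy for how much
# structure the detector can "see" in the frame.
edge_density = np.count_nonzero(edges) / edges.size
print(f"Edge pixels: {edge_density:.1%} of the frame")
```

Comparing this number for a frame with all-black clothing versus brighter, contrasting clothing usually makes the difference obvious.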
Ensure other people aren’t in the video recording.
When other people are in the video recording, the model can get confused and pick up keypoints on those individuals instead of the athlete.
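A common way pose pipelines handle crowded scenes is to select a single "primary" person per frame, for example the detection closest to the frame center. The sketch below illustrates that heuristic only; it is not how Uplift's pipeline resolves this, and the bounding-box format is an assumption.

```python
import numpy as np

def pick_primary_person(boxes: np.ndarray, frame_size: tuple) -> int:
    """Return the index of the detection closest to the frame center.

    boxes:      array of shape (n_people, 4) as (x_min, y_min, x_max, y_max).
    frame_size: (width, height) of the video frame in pixels.
    """
    centers = np.column_stack([(boxes[:, 0] + boxes[:, 2]) / 2,
                               (boxes[:, 1] + boxes[:, 3]) / 2])
    distances = np.linalg.norm(centers - np.array(frame_size) / 2, axis=1)
    return int(np.argmin(distances))

# Example: the athlete near the center is chosen over a bystander at the edge.
detections = np.array([[880.0, 300.0, 1040.0, 900.0],   # athlete, near center
                       [40.0, 350.0, 180.0, 820.0]])    # bystander at the edge
print(pick_primary_person(detections, frame_size=(1920, 1080)))  # -> 0
```

Heuristics like this can still fail when a bystander stands close to the athlete, which is why keeping other people out of the recording remains the safest option.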
Try a test viewing during some warm-ups to ensure there is minimal occlusion during key phases of the activity (e.g., if you can’t see someone’s arm for most of the trial, it will be hard for our models to see it too).
As with any other motion capture system collecting accurate 3D data, our system works best if two cameras can see each “point” it’s trying to capture. Although this may be impossible for tasks with a lot of rotation (such as a pitch), ensuring that key joints and segments are in view of both cameras during important moments of the trial (e.g., from pitch initiation up to ball release) will maximize the likelihood that the data you collect is of high quality.
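The two-camera requirement comes from triangulation: a point's 3D position is recovered by intersecting the rays from both views, so a joint seen by only one camera cannot be placed in 3D. The sketch below demonstrates standard two-view triangulation with OpenCV; the intrinsics, camera poses, and the projected joint are made-up illustrative values, not Uplift's calibration.

```python
import cv2
import numpy as np

# Shared intrinsics and two 3x4 projection matrices (illustrative values only).
K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])
P_a = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # camera A at the origin
R_b = np.array([[0.0, 0.0, -1.0],                    # camera B rotated ~90 degrees
                [0.0, 1.0, 0.0],
                [1.0, 0.0, 0.0]])
t_b = np.array([[3.0], [0.0], [3.0]])
P_b = K @ np.hstack([R_b, t_b])

# A known 3D "joint", projected into each view to simulate matched detections.
X = np.array([[0.5], [0.2], [3.0], [1.0]])           # homogeneous 3D point

def project(P, X):
    x = P @ X
    return x[:2] / x[2]

pts_a = project(P_a, X)   # 2x1 pixel coordinates seen by camera A
pts_b = project(P_b, X)   # 2x1 pixel coordinates seen by camera B

# Triangulate back to 3D; with both views available we recover the point exactly.
point_4d = cv2.triangulatePoints(P_a, P_b, pts_a, pts_b)
print((point_4d[:3] / point_4d[3]).ravel())          # -> [0.5, 0.2, 3.0]
```

If a joint is visible in only one view, there is no second ray to intersect, so its 3D position becomes ambiguous, which is why occlusion during key moments degrades data quality.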
If possible, set up a plain background in the video.
Although this isn’t necessary for high-quality data, having fewer objects in the background reduces the likelihood of errors in the motion tracking.
If you must deviate from our recommended camera setups, ensure that the cameras are still at 90-degree angles, that they are at the specified distance from one another, and that body parts are visible for as long as possible during a trial. Understandably, it may not always be possible to set up the cameras in our recommended configurations; so long as the principles above are adhered to, you should still be able to collect accurate data.
Putting Biomechanical Data Quality in Context
Even if you follow our recommended camera setups perfectly and have gone through all the steps above to ensure high-quality data, it is still possible that some data may be omitted from your report. This is because we run quality assurance checks on our motion data before including it in reports.
Unfortunately, the reality of collecting biomechanical data is that it is sometimes messy and doesn’t meet the stringent requirements we set for delivering actionable insights. Omission of data due to quality issues happens even in published research studies, where teams of watchful graduate students and professors use 8+ cameras (each of which may cost upwards of $20K) in relatively controlled settings. You’ll often see in the results sections of research papers that “X number of participants” or “Y number of trials” were excluded from the analysis due to data quality issues. Therefore, it’s almost inevitable that, at some point, trials will be omitted from a report you receive for the same reason.
Despite the messiness that sometimes comes with collecting biomechanical data, we understand it can be frustrating to have missing data. That’s why we are constantly refining our models and algorithms to reliably deliver high-quality data that passes our stringent quality assurance checks.