Tips to ensure a successful assessment

Ensure your assessment results and movement profile are as accurate as possible by following the instructions below.

Written by Matthew Kowalski
Updated over a week ago

Place your iPhone or iPad on a tripod for the most accurate results

Stabilize your device as much as possible to minimize shaking in the recording. Uplift works best when the recording is consistent between assessments, and a tripod keeps the camera in the same position between takes. Set up the camera so that your landing will not shake it. If you do not have access to a tripod, keep the device as steady as possible.

Set up your device so the athlete’s whole body (including feet and head) is visible in the video window through the whole jump motion

Although we've designed our event detection algorithms to be as robust as possible, we cannot guarantee they will work as intended if the start or end of a trial is missing. A helpful tip is to have the athlete move through the entire motion once to make sure they stay within the frame the whole time, and to move the camera back if needed. Run a complete test before you press record: if you cannot see any part of the participant’s body, neither can our analysis.
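
If you want to sanity-check framing programmatically, below is a minimal sketch that uses the open-source MediaPipe Pose library as a stand-in detector; it is not Uplift's own model, and the file name and 0.5 visibility threshold are illustrative assumptions. It flags frames where no body is found, or where any landmark falls outside the frame or is poorly visible.

    # Framing check: flag frames where the detected pose is not fully visible.
    # MediaPipe Pose is used as a stand-in detector (assumption); Uplift's
    # internal model is not public.
    import cv2
    import mediapipe as mp

    VIDEO_PATH = "test_take.mp4"    # hypothetical test recording
    VISIBILITY_THRESHOLD = 0.5      # assumed cutoff for "clearly visible"

    cap = cv2.VideoCapture(VIDEO_PATH)
    pose = mp.solutions.pose.Pose(static_image_mode=False)

    frame_idx, bad_frames = 0, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks is None:
            bad_frames.append(frame_idx)   # no body detected at all
        else:
            for lm in results.pose_landmarks.landmark:
                # Landmark coordinates are normalized to [0, 1]; values outside
                # that range (or low visibility) mean part of the body is cut off.
                if not (0.0 <= lm.x <= 1.0 and 0.0 <= lm.y <= 1.0) or lm.visibility < VISIBILITY_THRESHOLD:
                    bad_frames.append(frame_idx)
                    break
        frame_idx += 1

    cap.release()
    pose.close()
    print(f"{len(bad_frames)} of {frame_idx} frames had the body missing, cut off, or occluded")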

The athlete should land as close to the takeoff position as possible

If possible, aim to jump straight up and down. The closer you land to your takeoff position, the more accurately the model will measure your movement. Begin and end your movement in the same relaxed position.

Make sure you are in good lighting: Avoid back-lighting and silhouetting

The model relies on contrast in the video to detect the joint centers of the body. Like any other video-based computer vision model, the algorithm works by looking at edges, contours, and colors in the image (for an example of how these algorithms work, you can check out this article outlining how they may learn to recognize a bird). Because edges and contours appear as distinct lines in an image, poor lighting makes it much harder for the algorithm to detect the lines and contours it uses to identify the joints in our models.
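
As a rough illustration of why contrast matters, the sketch below counts how many edge pixels a standard edge detector finds in a well-lit frame versus a back-lit, silhouetted one. It uses OpenCV's Canny detector rather than Uplift's actual model, and the file names and thresholds are assumptions for illustration only.

    # Illustration only: a silhouetted frame typically yields far fewer internal
    # edges, leaving a joint-detection model little to latch onto.
    import cv2

    def edge_pixel_ratio(image_path):
        gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)   # thresholds chosen arbitrarily
        return (edges > 0).mean()          # fraction of pixels lying on an edge

    print("well lit:", edge_pixel_ratio("well_lit_frame.png"))
    print("back lit:", edge_pixel_ratio("back_lit_frame.png"))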

Perform each movement as described in the app and demonstrated in the example video

We are measuring specific data points depending on the movement you are assessing. To maximize the accuracy of your results we encourage you to focus on form and perform the movements as demonstrated.

Record 1 second before AND after the jump motion

Start each recording with the athlete standing still in a neutral position. Wait at least 1 second after pressing record to start the jumping motion. When the motion is complete, and the athlete has returned to rest, wait again for 1 second before stopping the recording. Our model works best when it can track your body through its full range of motion.
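
If you want to double-check that a recording has this quiet padding, a simple frame-differencing pass like the sketch below can flag clips that start or end mid-motion. The file name, frame-rate fallback, and motion threshold are illustrative assumptions, not part of Uplift.

    # Rough check that the first and last second of a clip are nearly still,
    # using the mean absolute pixel change between consecutive frames.
    import cv2
    import numpy as np

    VIDEO_PATH = "jump_take.mp4"
    MOTION_THRESHOLD = 2.0   # mean per-pixel change below this counts as "still"

    cap = cv2.VideoCapture(VIDEO_PATH)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0

    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32))
    cap.release()

    def mean_motion(section):
        # Average absolute pixel change between consecutive frames.
        diffs = [np.abs(b - a).mean() for a, b in zip(section, section[1:])]
        return float(np.mean(diffs)) if diffs else 0.0

    pad = int(fps)   # roughly one second of frames
    print("still at start:", mean_motion(frames[:pad]) < MOTION_THRESHOLD)
    print("still at end:  ", mean_motion(frames[-pad:]) < MOTION_THRESHOLD)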

Avoid baggy clothing and dark colors if possible

The software is trying to detect your joint centers, so excessively baggy clothing interferes with the processing: it throws off the model's edge detection and causes it to misread your joint centers and body position. Because the algorithm looks at edges, contours, and colors in the image, dark colors may not show enough contrast in the recording.
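
One crude way to see the contrast problem is to compare the grayscale standard deviation of two cropped frames, one with light clothing and one with dark clothing against a dark background. The sketch below is illustrative only; the file names are hypothetical, and this is not how Uplift itself measures anything.

    # Illustration only: a higher standard deviation means more contrast,
    # and therefore stronger edges for a joint-detection model to use.
    import cv2

    def contrast(image_path):
        gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
        return gray.std()

    print("light clothing:", contrast("light_clothing_crop.png"))
    print("dark clothing: ", contrast("dark_clothing_crop.png"))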
