Global Position Prediction for Interactive Motion Capture

Research output: Contribution to journal › Journal article › Research › peer-review


We present a method for reconstructing the global position of a motion capture subject when position sensing is poor or unavailable. Capture systems such as IMU suits provide excellent pose and orientation data, but otherwise require post-processing to estimate global position. We propose a solution that trains a neural network to predict, in real time, the height and body displacement given a short window of pose and orientation data. Our training dataset contains pre-recorded data with global positions from many different capture subjects performing a wide variety of activities, in order to train a network that generalizes to both similar and unseen activities. We compare training on two network architectures, a universal network (u-net) and a traditional convolutional neural network (CNN), and observe better error properties for the u-net in our results. We also evaluate our method on different classes of motion. We observe high-quality results for motions that are well represented in specialized datasets, while a more broadly sampled dataset performs better when input motions are far from the training examples.
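
The abstract describes regressing root height and body displacement from a short window of pose and orientation data. The sketch below is not the authors' implementation; it is a minimal illustration of such a window-to-displacement regressor using a plain 1D CNN in PyTorch, with window length, feature size, layer widths, and the three-value output (height plus 2D displacement) chosen as assumptions for demonstration.

# Minimal sketch (illustrative assumptions, not the paper's architecture):
# a 1D CNN maps a window of per-frame pose/orientation features to
# root height and horizontal displacement.
import torch
import torch.nn as nn

WINDOW = 60        # assumed window length in frames
FEATURES = 132     # assumed per-frame pose + orientation feature size

class DisplacementCNN(nn.Module):
    """1D CNN over the temporal window; outputs (height, dx, dz) for the root."""
    def __init__(self, in_features: int = FEATURES):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv1d(in_features, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                  # pool over the time axis
        )
        self.head = nn.Linear(128, 3)                 # height + 2D displacement

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, window, features) -> (batch, features, window) for Conv1d
        h = self.backbone(x.transpose(1, 2)).squeeze(-1)
        return self.head(h)

if __name__ == "__main__":
    model = DisplacementCNN()
    window = torch.randn(8, WINDOW, FEATURES)         # a batch of pose windows
    print(model(window).shape)                        # torch.Size([8, 3])

Running the network per frame over a sliding window is what makes the prediction interactive; the paper's u-net variant, which the authors report has better error properties, is not reproduced here.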

Original language: English
Article number: 3479985
Journal: Proceedings of the ACM on Computer Graphics and Interactive Techniques
Volume: 4
Issue number: 3
Pages (from-to): 1-16
DOIs
Publication status: Published - 2021

Research areas

• IMU, motion capture, neural networks
