Current digital human models typically either lack realism or require time-consuming manual tuning of physical simulation parameters. Our hypothesis is that better, more realistic models of humans and clothing can be learned directly by capturing real people with 4D scans, images, and depth and inertial sensors. Combining statistical machine learning with geometric optimization, we build realistic statistical models from the captured data.
To digitize people from low-cost, ubiquitous sensors (RGB cameras, depth sensors, or a small number of wearable inertial sensors), we leverage the learned statistical models, which are robust to noise and missing data.
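To illustrate why a learned statistical model helps when sensor data is noisy and incomplete, the sketch below fits a low-dimensional PCA shape model to an observation in which a large fraction of measurements is missing. The toy data, dimensions, and the plain least-squares fit are all hypothetical simplifications for illustration, not the actual capture or model-fitting pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: 200 registered "scans", each a flattened
# vector of 50 3D vertices (150 dims), lying in a 5-dim latent space.
n_train, n_dims, n_latent = 200, 150, 5
basis_true = rng.standard_normal((n_dims, n_latent))
train = rng.standard_normal((n_train, n_latent)) @ basis_true.T

# Learn the statistical model: mean shape plus principal components.
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
components = vt[:n_latent]                      # (n_latent, n_dims)

# A new subject, observed with sensor noise and 40% of the
# measurements missing (e.g. occluded or out of the camera view).
clean = rng.standard_normal(n_latent) @ basis_true.T
observation = clean + 0.05 * rng.standard_normal(n_dims)
observed = rng.random(n_dims) > 0.4             # boolean visibility mask

# Fit model coefficients using only the observed entries; the learned
# subspace constrains the reconstruction of the missing ones.
A = components[:, observed].T                   # (n_observed, n_latent)
b = observation[observed] - mean[observed]
coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)

# Reconstruct the full shape, including the unobserved vertices.
reconstruction = mean + coeffs @ components
rmse = np.sqrt(np.mean((reconstruction - clean) ** 2))
```

Because the model restricts solutions to plausible shapes, the few-parameter fit denoises the observed measurements and hallucinates the missing ones consistently, rather than leaving holes.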