Property / Value
?:abstract
  • Tracking body and hand motions in 3D space is essential for social and self-presence in augmented and virtual environments. Unlike the popular 3D pose estimation setting, the problem is often formulated as inside-out tracking based on embodied perception (e.g., egocentric cameras, handheld sensors). In this paper, we propose a new data-driven framework for inside-out body tracking, targeting the challenge of omnipresent occlusions in optimization-based methods (e.g., inverse kinematics solvers). We first collect a large-scale motion capture dataset with both body and finger motions, using optical markers and inertial sensors. This dataset focuses on social scenarios and captures ground-truth poses under self-occlusions and body-hand interactions. We then simulate the occlusion patterns in head-mounted camera views on the captured ground truth using a ray casting algorithm, and learn a deep neural network to infer the occluded body parts. In the experiments, we show that the proposed method generates high-fidelity embodied poses on the tasks of real-time inside-out body tracking, finger motion synthesis, and 3-point inverse kinematics.
  (Illustrative code sketches of the ray-casting occlusion simulation and the occlusion-infilling network follow this record.)
?:arxiv_id
  • 2012.03680
?:license
  • arxiv
?:pdf_json_files
  • document_parses/pdf_json/fd1a205b45e7bef69bb77a90555c13b6a18833d6.json
?:source
  • ArXiv
?:title
  • UNOC: Understanding Occlusion for Embodied Presence in Virtual Reality
?:year
  • 2020-11-12
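
The abstract describes simulating head-mounted camera occlusion patterns on ground-truth motion capture via ray casting. The following is a minimal sketch of that idea, not the paper's implementation: it assumes the body is crudely approximated by per-joint spheres and the camera sits at a fixed offset from the head joint, and it ignores the camera's field of view. All names, radii, and offsets are illustrative assumptions.

```python
import numpy as np

def simulate_occlusion(joints, head_idx, radii,
                       cam_offset=np.array([0.0, 0.05, 0.1]), eps=1e-3):
    """Return a boolean mask: True where a joint is occluded.

    joints   : (J, 3) joint positions in world space.
    head_idx : index of the head joint (camera is offset from it).
    radii    : (J,) per-joint sphere radii crudely approximating body volume.
    """
    cam = joints[head_idx] + cam_offset           # head-mounted camera position
    occluded = np.zeros(len(joints), dtype=bool)
    for j, p in enumerate(joints):
        if j == head_idx:
            continue
        ray = p - cam
        dist = np.linalg.norm(ray)
        d = ray / dist                            # unit ray toward the target joint
        for k, c in enumerate(joints):
            if k in (j, head_idx):
                continue
            t = np.dot(c - cam, d)                # projection of blocker onto the ray
            if not (eps < t < dist - eps):
                continue                          # blocker behind camera or past target
            closest_sq = np.dot(c - cam, c - cam) - t * t
            if closest_sq <= radii[k] ** 2:       # ray pierces the blocker's sphere
                occluded[j] = True
                break
    return occluded

# Example: a random 21-joint pose with the head at index 0 (illustrative sizes).
rng = np.random.default_rng(0)
joints = rng.normal(scale=0.4, size=(21, 3))
mask = simulate_occlusion(joints, head_idx=0, radii=np.full(21, 0.06))
print("occluded joints:", np.flatnonzero(mask))
```

A real head-mounted view would additionally cull joints outside the camera frustum; the sphere proxy here merely captures the self-occlusion effect the dataset is built around.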

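The abstract also mentions learning a deep neural network that infers the occluded body parts from what remains visible. The sketch below is an assumption-laden stand-in rather than the UNOC architecture, which this record does not detail: a plain PyTorch MLP consumes occlusion-masked joint positions plus the mask itself and regresses the full pose against motion-capture ground truth. All sizes and hyperparameters are arbitrary.

```python
import torch
import torch.nn as nn

NUM_JOINTS = 21  # illustrative; the UNOC skeleton differs

class OcclusionInfillMLP(nn.Module):
    """Regress the full pose from occlusion-masked joint positions."""

    def __init__(self, num_joints=NUM_JOINTS, hidden=256):
        super().__init__()
        in_dim = num_joints * 3 + num_joints      # masked positions + occlusion mask
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_joints * 3),    # full-body joint positions
        )

    def forward(self, joints, occluded):
        """joints: (B, J, 3) positions; occluded: (B, J) boolean mask."""
        visible = joints * (~occluded).float().unsqueeze(-1)  # zero occluded joints
        x = torch.cat([visible.flatten(1), occluded.float()], dim=1)
        return self.net(x).view(-1, joints.shape[1], 3)

# One training step against ground-truth mocap poses (stand-in data).
model = OcclusionInfillMLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
gt = torch.randn(8, NUM_JOINTS, 3)                # stand-in for mocap ground truth
mask = torch.rand(8, NUM_JOINTS) < 0.3            # stand-in for simulated occlusion
opt.zero_grad()
pred = model(gt, mask)
loss = ((pred - gt) ** 2).mean()                  # reconstruct the full pose
loss.backward()
opt.step()
```

In practice the training masks would come from the ray-casting simulation sketched above, so the network sees the same occlusion statistics during training as a head-mounted camera produces at test time.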