
Detailed Human Avatars from Monocular Video

We present a novel method for detail-preserving human avatar creation from monocular video. A parameterized body model is refined and optimized to maximally resemble the subject of a video that shows them from all sides. Our avatars feature a natural face, hairstyle, clothes with garment wrinkles, and high-resolution texture. Our paper contributes facial-landmark- and shading-based human body shape refinement, a semantic texture prior, and a novel texture stitching strategy, resulting in the most sophisticated-looking human avatars obtained from a single video to date. Numerous results demonstrate the robustness and versatility of our method. A user study illustrates its superiority over the state of the art in terms of identity preservation, level of detail, realism, and overall user preference.
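
The abstract outlines the core idea: a parameterized body model is optimized until its projection matches the image evidence (silhouettes, facial landmarks, shading). As a rough illustration of that kind of model fitting, the Python sketch below optimizes a toy linear shape space against 2D landmarks using a reprojection term and a shape prior. Everything in it (the toy template, shape_dirs, the orthographic project function, the synthetic landmarks) is an illustrative assumption, not the paper's actual SMPL-based pipeline.

# Hypothetical sketch: fitting a linear shape model to 2D landmarks, as a
# stand-in for the paper's body-model refinement. Names and data are
# illustrative only, not the authors' code.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy linear shape space: V(beta) = template + shape_dirs @ beta (SMPL-style, simplified).
N_VERTS, N_SHAPE = 50, 5
template = rng.normal(size=(N_VERTS, 3))
shape_dirs = 0.1 * rng.normal(size=(N_VERTS, 3, N_SHAPE))

def vertices(beta):
    # Vertices of the toy body model for shape coefficients beta.
    return template + shape_dirs @ beta

def project(points_3d):
    # Orthographic projection onto the image plane (placeholder camera model).
    return points_3d[:, :2]

# Synthetic "observed" 2D landmarks from a ground-truth shape, plus noise.
beta_true = rng.normal(size=N_SHAPE)
landmarks_2d = project(vertices(beta_true)) + 0.01 * rng.normal(size=(N_VERTS, 2))

def energy(beta):
    # Landmark reprojection error plus a regularizer that keeps the
    # shape coefficients plausible, as is typical in model fitting.
    residual = project(vertices(beta)) - landmarks_2d
    data_term = np.sum(residual ** 2)
    prior_term = 1e-3 * np.sum(beta ** 2)
    return data_term + prior_term

result = minimize(energy, x0=np.zeros(N_SHAPE), method="L-BFGS-B")
print("recovered shape coefficients:", np.round(result.x, 3))

In the actual method this simple landmark term would be one of several energies; the paper additionally exploits facial landmarks, silhouettes, and shading cues, and refines the texture with a semantic prior and a stitching strategy.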


Author(s): Thiemo Alldieck, Marcus Magnor, Weipeng Xu, Christian Theobalt, Gerard Pons-Moll
Published: September 2018
Type: Article in conference proceedings
Book: International Conference on 3D Vision (IEEE)
DOI: 10.1109/3DV.2018.00022
Presented at: International Conference on 3D Vision (3DV) 2018
Project(s): Comprehensive Human Performance Capture from Monocular Video Footage, Immersive Digital Reality


@inproceedings{alldieck2018detailed,
  title = {Detailed Human Avatars from Monocular Video},
  author = {Alldieck, Thiemo and Magnor, Marcus and Xu, Weipeng and Theobalt, Christian and Pons-Moll, Gerard},
  booktitle = {International Conference on 3D Vision},
  doi = {10.1109/3DV.2018.00022},
  pages = {98--109},
  month = {Sep},
  year = {2018}
}
