Computer Graphics
TU Braunschweig

Monocular Video Augmentation

Abstract

The goal of this project is to augment video data with high-quality 3D geometry, using only a single camera as input. As an application, we want to dress a person in a video with artificial clothing. We reconstruct a 3D human pose from the 2D input data. This pose can then drive a cloth simulation that creates a plausible 3D garment for the observed posture. Compositing the animated garment into the original video creates the illusion of the person wearing different clothing. We aim for real-time frame rates, enabling virtual mirror applications.
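The per-frame pipeline described above can be outlined in code. The following is a minimal, hypothetical sketch: all function names, the toy skeleton, and the placeholder "cloth simulation" are illustrative assumptions, not the actual implementation used in this project.

```python
# Hypothetical per-frame pipeline: 2D pose -> 3D pose -> cloth -> composite.
# All stages are stubs for illustration; a real system would use a trained
# pose estimator, a physical cloth solver, and a renderer.
import numpy as np

def estimate_pose_2d(frame: np.ndarray) -> np.ndarray:
    """Detect 2D joint positions in a video frame (stub: fixed skeleton)."""
    h, w = frame.shape[:2]
    return np.array([[0.5 * w, 0.2 * h],   # head
                     [0.5 * w, 0.5 * h],   # torso
                     [0.5 * w, 0.8 * h]])  # feet

def lift_pose_to_3d(joints_2d: np.ndarray) -> np.ndarray:
    """Lift 2D joints to 3D by appending a depth estimate (stub: z = 0)."""
    z = np.zeros((joints_2d.shape[0], 1))
    return np.hstack([joints_2d, z])

def simulate_cloth(pose_3d: np.ndarray, n_vertices: int = 16) -> np.ndarray:
    """Drape a toy garment around the torso joint (stub: ring of vertices)."""
    torso = pose_3d[1]
    angles = np.linspace(0, 2 * np.pi, n_vertices, endpoint=False)
    ring = np.stack([np.cos(angles), np.sin(angles),
                     np.zeros(n_vertices)], axis=1)
    return torso + 20.0 * ring  # garment vertices in scene units

def composite(frame: np.ndarray, garment: np.ndarray) -> np.ndarray:
    """Project garment vertices into the frame and mark them (stub)."""
    out = frame.copy()
    for x, y, _ in garment:
        xi, yi = int(round(x)), int(round(y))
        if 0 <= yi < out.shape[0] and 0 <= xi < out.shape[1]:
            out[yi, xi] = 255  # a real system would render shaded cloth here
    return out

def augment_frame(frame: np.ndarray) -> np.ndarray:
    """Run one pass of the pipeline on a single grayscale frame."""
    joints_2d = estimate_pose_2d(frame)
    pose_3d = lift_pose_to_3d(joints_2d)
    garment = simulate_cloth(pose_3d)
    return composite(frame, garment)
```

For real-time operation, each of these stages would need to run within a shared per-frame time budget; in practice the pose reconstruction and cloth simulation dominate the cost.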

Publications

Lorenz Rogge:
Augmenting People in Monocular Video Data
PhD thesis, TU Braunschweig, July 2015.


Lorenz Rogge, Pablo Bauszat, Marcus Magnor:
Monocular Albedo Reconstruction
in Proc. IEEE International Conference on Image Processing (ICIP), IEEE, pp. 1046-1050, October 2014.

Lorenz Rogge, Thomas Neumann, Markus Wacker, Marcus Magnor:
Monocular Pose Reconstruction for an Augmented Reality Clothing System
in Proc. Vision, Modeling and Visualization (VMV), pp. 339-346, September 2011.

Related Projects

Comprehensive Human Performance Capture from Monocular Video Footage

Photo-realistic modeling and digital editing of image sequences with human actors are common tasks in the movie and games industries. The processes are, however, still laborious, since existing tools only allow basic manipulations. In cooperation with the Institut für Informationsverarbeitung (TNT) of the University of Hannover (http://www.tnt.uni-hannover.de/), this project aims to solve this dilemma by providing algorithms and tools for automatic and semi-automatic digital editing of actors in monocular footage. To enable visually convincing renderings, a digital model of the human actor, detailed spatial scene information, and the scene illumination need to be reconstructed. A plausible look and motion of the digital model are crucial.

The research project is funded by the German Research Foundation, DFG MA2555/12-1.

Scene-Space Video Processing

The high degree of redundancy in video footage allows compensating for noisy depth estimates and achieving various high-quality processing effects such as denoising, deblurring, super-resolution, object removal, computational shutter functions, and scene-space camera effects.