Computer Graphics
TU Braunschweig


Virtual Video Camera

Image-Based Free Viewpoint Video


"Who Cares", a stereoscopic free-viewpoint music video by Symbiz Sound.



Project Summary

The Virtual Video Camera research project aims to provide algorithms for rendering free-viewpoint video from asynchronous camcorder captures. We want to record our multi-video data without the need for specialized hardware or intrusive setup procedures (e.g., waving calibration patterns).

While controlling the location and time of the viewpoint, the user should not be able to distinguish synthetically rendered images from the originally recorded ones. Our key idea is to employ an image interpolation scheme based on dense pixel correspondences. This sets our approach apart from methods that rely on depth/geometry reconstruction. We are convinced that strictly enforcing any geometric model will ultimately fail, since failure cases can easily be constructed for such approaches.
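To illustrate the basic idea (not our actual rendering pipeline), a minimal correspondence-based interpolation can be sketched in a few lines: given two images and a precomputed dense flow field between them, both images are warped toward an intermediate time t and cross-dissolved. The sketch uses nearest-neighbour sampling and ignores occlusion handling for brevity.

```python
import numpy as np

def interpolate_images(img_a, img_b, flow_ab, t):
    """Blend two images at fractional time t in [0, 1] using a dense
    correspondence field flow_ab ((H, W, 2) pixel offsets from img_a
    to img_b). Backward warping, nearest-neighbour sampling."""
    h, w = img_a.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # A pixel travels the full flow vector from A to B; at time t it
    # has covered fraction t of it, so sample A "t back" along the
    # flow and B "(1 - t) forward" along it (small-motion assumption).
    xa = np.clip(np.round(xs - t * flow_ab[..., 0]), 0, w - 1).astype(int)
    ya = np.clip(np.round(ys - t * flow_ab[..., 1]), 0, h - 1).astype(int)
    xb = np.clip(np.round(xs + (1 - t) * flow_ab[..., 0]), 0, w - 1).astype(int)
    yb = np.clip(np.round(ys + (1 - t) * flow_ab[..., 1]), 0, h - 1).astype(int)
    # Cross-dissolve the two warped images.
    return (1 - t) * img_a[ya, xa] + t * img_b[yb, xb]
```

With a correct flow field, a feature visible in both images appears at its linearly interpolated position in the output, which is exactly why correspondence quality, rather than geometric accuracy, governs the visual result.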

Our goal is to provide and continually improve a complete end-to-end system comprising algorithms for image correspondence estimation, (real-time) rendering, special-effects creation, camera calibration, and quality assessment.



Publications

Matthias Überheide, Felix Klose, Tilak Varisetty, Markus Fidler, Marcus Magnor:
Web-based Interactive Free-Viewpoint Streaming
in Proc. ACM Multimedia, pp. 1031-1034, October 2015.
Poster Presentation

Christian Lipski, Felix Klose, Marcus Magnor:
Correspondence and Depth-Image Based Rendering: a Hybrid Approach for Free-Viewpoint Video
in IEEE Trans. Circuits and Systems for Video Technology (T-CSVT), vol. 24, no. 6, pp. 942-951, June 2014.


Alexander Lerpe:
Detail Hallucinated Image Interpolation
Master's thesis, TU Braunschweig, May 2013.

Felix Klose, Christian Lipski, Marcus Magnor:
A Framework for Image-Based Stereoscopic View Synthesis from Asynchronous Multi-View Data
in Emerging Technologies for 3D Video: Creation, Coding, Transmission and Rendering, Wiley, ISBN 978-1-118-35511-4, pp. 249-270, May 2013.

Christian Lipski, Christian Linz, Thomas Neumann, Markus Wacker, Marcus Magnor:
High Resolution Image Correspondences for Video Post-Production
in Journal of Virtual Reality and Broadcasting (JVRB), vol. 9.2012, no. 8, pp. 1-12, December 2012.

Christian Lipski, Felix Klose, Kai Ruhl, Marcus Magnor:
Making of "Who Cares?" HD Stereoscopic Free Viewpoint Video
in Proc. European Conference on Visual Media Production (CVMP), vol. 8, pp. 1-10, November 2011.

Martin Eisemann, Felix Klose, Marcus Magnor:
Towards Plenoptic Raumzeit Reconstruction
in Cremers, D. and Magnor, M. and Oswald, M.R. and Zelnik-Manor, L. (Eds.): Video Processing and Computational Video, Springer, ISBN 978-3-642-24869-6, pp. 1-24, October 2011.

Marcus Magnor, Daniel Cremers, Lihi Zelnik-Manor, Martin Oswald (Eds.):
Video Processing and Computational Video
Springer, ISBN 978-3-642-24869-6, October 2011.

Kai Ruhl, Kai Berger, Christian Lipski, Felix Klose, Yannic Schröder, Alexander Scholz, Marcus Magnor:
Integrating multiple depth sensors into the virtual video camera
in Proc. SIGGRAPH, ACM, p. 1, August 2011.
SIGGRAPH '11: ACM SIGGRAPH 2011 Posters

Christian Linz, Christian Lipski, Marcus Magnor:
Multi-Image Interpolation based on Graph-Cuts and Symmetric Optical Flow
in Proc. Vision, Modeling and Visualization (VMV), Eurographics Association, pp. 115-122, November 2010.


Christian Lipski, Christian Linz, Thomas Neumann, Markus Wacker, Marcus Magnor:
High Resolution Image Correspondences for Video Post-Production
in Proc. European Conference on Visual Media Production (CVMP), vol. 7, IEEE Computer Society, pp. 33-39, November 2010.
http://doi.ieeecomputersociety.org/10.1109/CVMP.2010.12

Felix Klose, Christian Lipski, Marcus Magnor:
Reconstructing Shape and Motion from Asynchronous Cameras
in Proc. Vision, Modeling and Visualization (VMV), pp. 171-177, November 2010.

Christian Linz, Christian Lipski, Lorenz Rogge, Christian Theobalt, Marcus Magnor:
Space-Time Visual Effects as a Post-Production Process
in ACM Multimedia 2010 Workshop - 1st International Workshop on 3D Video Processing (3DVP), vol. 1, pp. 1-6, October 2010.

Anita Sellent, Christian Linz, Marcus Magnor:
Consistent Optical Flow for Stereo Video
in Proc. IEEE International Conference on Image Processing (ICIP), pp. 1-4, September 2010.

Christian Lipski, Christian Linz, Marcus Magnor:
Belief propagation optical flow for high-resolution image morphing
in Proc. SIGGRAPH, ACM, p. 1, August 2010.
SIGGRAPH '10: ACM SIGGRAPH 2010 Posters

Martin Eisemann, Timo Stich, Marcus Magnor:
3-D Cinematography with approximate and no geometry
in Rémi Ronfard and Gabriel Taubin (Eds.): Image and Geometry Processing for 3-D Cinematography, Springer, ISBN 978-3-642-12391-7, pp. 259-284, July 2010.


Benjamin Meyer, Christian Lipski, Björn Scholz, Marcus Magnor:
Multi-view Coding with Dense Correspondence Fields
in Proc. IEEE International Symposium on Consumer Electronics (ISCE), pp. 117-120, June 2010.

Benjamin Meyer, Christian Lipski, Björn Scholz, Marcus Magnor:
Real-time Free-Viewpoint Navigation from Compressed Multi-Video Recordings
in Proc. 3D Data Processing, Visualization and Transmission (3DPVT), pp. 1-6, May 2010.

Christian Lipski, Denis Bose, Martin Eisemann, Kai Berger, Marcus Magnor:
Sparse Bundle Adjustment Speedup Strategies
in WSCG Communication Papers Proceedings, pp. 85-88, February 2010.

Lorenz Rogge:
Integration of visual effects into the Virtual Video Camera system
Master's thesis, Institut für Computergraphik, TU Braunschweig, December 2009.

Kai Berger, Christian Lipski, Christian Linz, Anita Sellent, Marcus Magnor:
A ghosting artifact detector for interpolated image quality assessment
in Proc. ACM Applied Perception in Computer Graphics and Visualization (APGV), September 2009.

Christian Lipski, Georgia Albuquerque, Timo Stich, Marcus Magnor:
Spacetime Tetrahedra: Image-Based Viewpoint Navigation through Space and Time
Technical Report no. 12-9, Institut für Computergraphik, TU Braunschweig, December 2008.
http://www.digibib.tu-bs.de/?docid=00023968

Benjamin Meyer, Timo Stich, Marcus Magnor, Marc Pollefeys:
Subframe Temporal Alignment of Non-Stationary Cameras
in Proc. British Machine Vision Conference (BMVC), September 2008.

Timo Stich, Christian Linz, Georgia Albuquerque, Marcus Magnor:
View and Time Interpolation in Image Space
in Computer Graphics Forum (Proc. of Pacific Graphics PG), vol. 27, no. 7, pp. 1781-1787, February 2008.

Related Projects

Floating Textures

We present a novel multi-view, projective texture mapping technique. While previous multi-view texturing approaches lead to blurring and ghosting artefacts if 3D geometry and/or camera calibration are imprecise, we propose a texturing algorithm that warps ("floats") projected textures during run-time to preserve crisp, detailed texture appearance. Our GPU implementation achieves interactive to real-time frame rates. The method is very generally applicable and can be used in combination with many image-based rendering methods or projective texturing applications. By using Floating Textures in conjunction with, e.g., visual hull rendering, light field rendering, or free-viewpoint video, improved rendering results are obtained from fewer input images, less accurately calibrated cameras, and coarser 3D geometry proxies.

In a nutshell, the notion of Floating Textures is to correct for local texture misalignments by determining the optical flow between projected textures and warping the textures accordingly in the rendered image domain. Both steps, optical flow estimation and multi-texture warping, can be efficiently implemented on graphics hardware to achieve interactive to real-time performance.
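The warp-then-blend step can be sketched as follows. This is a simplified CPU illustration, not the GPU implementation: each projected texture is displaced by the weighted average of its pairwise flow fields to the other textures, then the warped textures are blended with the same weights. Nearest-neighbour sampling stands in for the bilinear filtering a real implementation would use.

```python
import numpy as np

def floating_textures(textures, flows, weights):
    """Blend N projected textures after "floating" each one.

    textures : list of N (H, W) arrays (projected textures)
    flows    : N x N nested list; flows[i][j] is the (H, W, 2) flow
               field mapping texture i onto texture j (flows[i][i]
               is all zeros)
    weights  : per-texture blending weights summing to 1
    """
    h, w = textures[0].shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    out = np.zeros_like(textures[0], dtype=float)
    for i, tex in enumerate(textures):
        # Consensus displacement for texture i: every other texture
        # "pulls" it toward its own position, in proportion to weight.
        disp = sum(weights[j] * flows[i][j] for j in range(len(textures)))
        # Backward-warp approximation: sample texture i "behind" the
        # displacement so its content lands at the consensus position.
        x_src = np.clip(np.round(xs - disp[..., 0]), 0, w - 1).astype(int)
        y_src = np.clip(np.round(ys - disp[..., 1]), 0, h - 1).astype(int)
        out += weights[i] * tex[y_src, x_src]
    return out
```

Because all textures are warped to a common consensus position before blending, features that project to slightly different places (due to calibration or geometry error) coincide in the output instead of producing ghosting.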

Image-space Editing of 3D Content

The goal of this project is to develop algorithms in image space that allow photo-realistic editing of dynamic 3D scenes. Traditional 2D editing tools cannot be applied to 3D video, since consistent editing requires spatial correspondences in addition to correspondences in time. In this project we analyze how to exploit the redundancy in multi-view stereoscopic videos to compute robust and dense correspondence fields. These space-time correspondences can then be used to propagate changes applied to one frame consistently to all other frames of the video. Besides porting classical video editing tools, we want to develop new tools specifically for 3D video content.
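As a toy illustration of the propagation idea (the function names and the RGBA edit-layer representation are our own choices here, not the project's actual tooling): an edit painted on a reference frame can be pulled onto any other frame by looking up, for each target pixel, its corresponding reference pixel through a dense correspondence field, and then alpha-compositing the warped edit over the target frame.

```python
import numpy as np

def propagate_edit(edit, flow_to_ref):
    """Pull an edit layer painted on the reference frame onto another
    frame. flow_to_ref holds (H, W, 2) offsets mapping each target
    pixel to its corresponding reference pixel."""
    h, w = flow_to_ref.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    xr = np.clip(np.round(xs + flow_to_ref[..., 0]), 0, w - 1).astype(int)
    yr = np.clip(np.round(ys + flow_to_ref[..., 1]), 0, h - 1).astype(int)
    return edit[yr, xr]  # gather edit content along correspondences

def composite(frame, edit_rgba):
    """Alpha-composite a (H, W, 4) RGBA edit layer over a (H, W, 3) frame."""
    a = edit_rgba[..., 3:4]
    return (1 - a) * frame + a * edit_rgba[..., :3]
```

Applying `propagate_edit` with the correspondence field of each frame in the sequence carries the one-time edit through the whole video; consistency then hinges entirely on the quality of the correspondence fields.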

This project has been funded by ERC Grant #256941 "Reality CG" and the German Science Foundation, DFG MA2555/4-2.

Immersive Digital Reality

Motivated by the advent of mass-market head-mounted immersive displays, we set out to pioneer the technology needed to experience recordings of the real world with the sense of full immersion as provided by VR goggles.

Multi-Image Correspondences

Multi-view video camera setups record many images that capture nearly the same scene at nearly the same instant in time. Neighboring images in a multi-video setup constrain the solution space for any one image pair: correspondences between one pair of images must be consistent with the correspondences to the neighboring images.

The concept of accordance or consistency for correspondences between three neighboring images can be employed in the estimation of dense optical flow and in the matching of sparse features between three images.
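For dense optical flow, this three-image consistency amounts to a transitivity constraint, which can be checked directly: following the flow from image A to B and then from B to C should land at the same point as the direct flow from A to C. A minimal sketch of the resulting per-pixel residual (nearest-neighbour sampling, hypothetical variable names):

```python
import numpy as np

def consistency_residual(flow_ab, flow_bc, flow_ac):
    """Per-pixel residual of the transitivity constraint between three
    images A, B, C, given dense (H, W, 2) flow fields. A residual of
    zero means the three flows are mutually consistent at that pixel."""
    h, w = flow_ab.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Where does each pixel of A land in B?
    xb = np.clip(np.round(xs + flow_ab[..., 0]), 0, w - 1).astype(int)
    yb = np.clip(np.round(ys + flow_ab[..., 1]), 0, h - 1).astype(int)
    # Compose A -> B with B -> C (sampled at the displaced position)
    # and compare against the direct flow A -> C.
    composed = flow_ab + flow_bc[yb, xb]
    return np.linalg.norm(composed - flow_ac, axis=-1)
```

In an estimation framework, this residual can serve as an extra data term that penalizes flow fields violating the three-view consistency, which is the intuition behind using image triplets rather than isolated pairs.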

This work has been funded in parts by the ERC Grant #256941 "Reality CG" and the German Science Foundation, DFG MA2555/4-2.

Perception-motivated Interpolation of Image Sequences

We present a method for image interpolation which is able to create high-quality, perceptually convincing transitions between recorded images. By implementing concepts derived from human vision, the problem of a physically correct image interpolation is relaxed to an image interpolation that is perceived as physically correct by human observers. We find that it suffices to focus on exact edge correspondences, homogeneous regions, and coherent motion to compute such solutions. In our user study we confirm the visual quality of the proposed image interpolation approach. We show how each aspect of our approach increases the perceived quality of the interpolation results, compare our results with those obtained by other methods, and investigate the achieved quality for different types of scenes.

Reality CG

The scope of "Reality CG" is to pioneer a novel approach to modelling, editing and rendering in computer graphics. Instead of manually creating digital models of virtual worlds, Reality CG will explore new ways to achieve visual realism from the kind of approximate models that can be derived from conventional, real-world imagery as input.

Who Cares?

Official music video "Who Cares" by Symbiz Sound; the first major production using our Virtual Video Camera.

Dubstep, spray cans, brush and paint join forces and unite with the latest digital production techniques. All imagery depicts live-action graffiti and performance; the camera motion was added in post-production using the Virtual Video Camera.