Computer Graphics
TU Braunschweig

Immersive Digital Reality



a DFG Reinhart Koselleck Project

Project Summary

Motivated by the advent of mass-market head-mounted immersive displays, we set out to pioneer the technology needed to experience recordings of the real world with the sense of full immersion as provided by VR goggles. To achieve this goal, a number of interdisciplinary, tightly interrelated challenges from video processing, computer graphics, computer vision, and applied visual perception need to be addressed concertedly. By importing the real world into immersive displays, we want to lay the foundations for the way we may watch movies in the future, leaving fixed-viewpoint, limited field-of-view screens behind for a completely immersive, collective experience.


Susana Castillo

Moritz Kappel

Moritz Mühlhausen

Jan-Philipp Tauscher

Steve Grogorick

Visiting Researchers

Tobias Bertel


Marc Kassubeck

Thiemo Alldieck

Leslie Woehler

Matthias Überheide

Michael Stengel

Job Openings

We are always looking for excellent researchers. Want to join the project?


October 10, 2019

Invited talk at DLR Braunschweig "What’s missing in Head-mounted VR Displays?"

November 30, 2018

Invited talk at FhG Heinrich Hertz Institut Berlin "Turning Reality into Virtual Reality"

January 12, 2018

Keynote presentation at VR Walkthrough Technology Day, TU Tampere, Finland (presentation video)

April 20, 2017

Invited talk at Stanford Computer Graphics Lab (GCafe), Stanford University, USA 

January 23, 2017

Invited talk at University of Konstanz/SFB TRR 161: "Visual Computing - Bridging Real and Digital Domain"


June 30-July 3, 2019

Real VR - Importing the Real World into Immersive VR and Optimizing the Perceptual Experience of Head-Mounted Displays, Dagstuhl Seminar 19272

June 7-8, 2017

Symposium on Visual Computing and Perception (SVCP), TU Braunschweig


In the News

March 23, 2018
Interview in the local newspaper Braunschweiger Zeitung (in German)
November 10, 2017
Article in local chamber of commerce magazine standort 38 (in German)
June 8, 2017
Radio FFN coverage (mp3)
May 4, 2016
Articles in the local newspaper Braunschweiger Zeitung and the TU Research Magazine (in German).



Marcus Magnor, Alexander Sorkine-Hornung (Eds.):
Real VR - Importing the Real World into Immersive VR and Optimizing the Perceptual Experience of Head-Mounted Displays
in Marcus Magnor, Alexander Sorkine-Hornung (Eds.): Dagstuhl Reports, Schloss Dagstuhl - Leibniz-Zentrum für Informatik, ISSN 2192-5283, pp. 143-156, November 2019.
Dagstuhl Seminar 19272

Marcus Magnor:
From Reality to Immersive VR: What’s missing in VR?
Dagstuhl Reports @ Dagstuhl Seminar 2019, p. 151, November 2019.
Dagstuhl Seminar 19272

Steve Grogorick, Georgia Albuquerque, Marcus Magnor:
Comparing Unobtrusive Gaze Guiding Stimuli in Head-mounted Displays
in Proc. IEEE International Conference on Image Processing (ICIP), IEEE, October 2018.

Jan-Philipp Tauscher, Fabian Wolf Schottky, Steve Grogorick, Marcus Magnor, Maryam Mustafa:
Analysis of Neural Correlates of Saccadic Eye Movements
in Proc. ACM Symposium on Applied Perception (SAP), no. 17, ACM, pp. 17:1-17:9, August 2018.

Tilak Varisetty, Markus Fidler, Matthias Überheide, Marcus Magnor:
On the Delay Performance of Browser-based Interactive TCP Free-viewpoint Streaming
in Proc. IFIP Networking 2018 Conference (NETWORKING 2018), IEEE, pp. 1-9, July 2018.

Thiemo Alldieck, Marcus Magnor, Weipeng Xu, Christian Theobalt, Gerard Pons-Moll:
Video Based Reconstruction of 3D People Models
in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, pp. 8387-8397, June 2018.
CVPR Spotlight Paper

Moritz Mühlhausen, Matthias Überheide, Leslie Wöhler, Marc Kassubeck, Marcus Magnor:
Automatic Upright Alignment of Multi-View Spherical Panoramas
Poster @ European Conference on Visual Media Production 2017, December 2017.
Best Student Poster Award

Steve Grogorick, Michael Stengel, Elmar Eisemann, Marcus Magnor:
Subtle Gaze Guidance for Immersive Environments
in Proc. ACM Symposium on Applied Perception (SAP), ACM, pp. 4:1-4:7, September 2017.

Thiemo Alldieck, Marc Kassubeck, Bastian Wandt, Bodo Rosenhahn, Marcus Magnor:
Optical Flow-based 3D Human Motion Estimation from Monocular Video
in Proc. German Conference on Pattern Recognition (GCPR), Springer, pp. 347-360, September 2017.

Martin Weier, Michael Stengel, Thorsten Roth, Piotr Didyk, Elmar Eisemann, Martin Eisemann, Steve Grogorick, André Hinkenjann, Ernst Kruijff, Marcus Magnor, Karol Myszkowski, Philipp Slusallek:
Perception-driven Accelerated Rendering
in Computer Graphics Forum (Proc. of Eurographics EG), vol. 36, no. 2, The Eurographics Association and John Wiley & Sons Ltd., pp. 611-643, April 2017.

Thomas Löwe, Michael Stengel, Emmy-Charlotte Förster, Steve Grogorick, Marcus Magnor:
Gaze Visualization for Immersive Video
in Burch, Michael and Chuang, Lewis and Fisher, Brian and Schmidt, Albrecht and Weiskopf, Daniel (Eds.): Eye Tracking and Visualization, Springer, ISBN 978-3319470238, pp. 57-71, March 2017.

Michael Stengel, Marcus Magnor:
Gaze-contingent Computational Displays: Boosting perceptual fidelity
in IEEE Signal Processing Magazine, vol. 33, no. 5, IEEE, pp. 139-148, September 2016.

Michael Stengel, Steve Grogorick, Martin Eisemann, Marcus Magnor:
Adaptive Image-Space Sampling for Gaze-Contingent Real-time Rendering
in Computer Graphics Forum (Proc. of Eurographics Symposium on Rendering EGSR), vol. 35, no. 4, pp. 129-139, July 2016.
EGSR'16 Best Paper Award

Related Projects

Comprehensive Human Performance Capture from Monocular Video Footage

Photo-realistic modeling and digital editing of image sequences with human actors are common tasks in the movie and games industries. However, these processes are still laborious, since current tools allow only basic manipulations. In cooperation with the Institut für Informationsverarbeitung (TNT) of the University of Hannover, this project aims to solve this dilemma by providing algorithms and tools for automatic and semi-automatic digital editing of actors in monocular footage. To enable visually convincing renderings, a digital model of the human actor, detailed spatial scene information, as well as the scene illumination need to be reconstructed. Plausible appearance and motion of the digital model are crucial here.

This research project is partially funded by the German Research Foundation (DFG).

Digital Representations of the Real World

The book presents the state-of-the-art of how to create photo-realistic digital models of the real world. It is the result of work by experts from around the world, offering a comprehensive overview of the entire pipeline from acquisition, data processing, and modelling to content editing, photo-realistic rendering, and user interaction.

Eye-tracking Head-mounted Display

Immersion is the ultimate goal of head-mounted displays (HMDs) for Virtual Reality (VR) in order to produce a convincing user experience. Two important aspects in this context are motion sickness, often due to imprecise calibration, and the integration of reliable eye tracking. We propose an affordable hardware and software solution for drift-free eye tracking and user-friendly lens calibration within an HMD. The use of dichroic mirrors leads to a lean design that provides the full field of view (FOV) while using commodity cameras for eye tracking.

ICG Dome

Featuring more than 10 million pixels at 120 Hertz refresh rate, full-body motion capture, as well as real-time gaze tracking, our 5-meter ICG Dome enables us to research peripheral visual perception, to devise comprehensive foveal-peripheral rendering strategies, and to explore multi-user immersive visualization and interaction.

Reality CG

The scope of "Reality CG" is to pioneer a novel approach to modelling, editing, and rendering in computer graphics. Instead of manually creating digital models of virtual worlds, Reality CG will explore new ways to achieve visual realism from the kind of approximate models that can be derived from conventional, real-world imagery as input.

Virtual Video Camera

The Virtual Video Camera research project aims to provide algorithms for rendering free-viewpoint video from asynchronous camcorder captures. We want to record our multi-video data without the need for specialized hardware or intrusive setup procedures (e.g., waving calibration patterns).