Computer Graphics
TU Braunschweig

Research Projects


ICG Dome

Featuring more than 10 million pixels at a 120 Hz refresh rate, full-body motion capture, and real-time gaze tracking, our 5-meter ICG Dome enables us to research peripheral visual perception, to devise comprehensive foveal-peripheral rendering strategies, and to explore multi-user immersive visualization and interaction.


Immersive Digital Reality

Motivated by the advent of mass-market head-mounted immersive displays, we set out to pioneer the technology needed to experience recordings of the real world with the sense of full immersion as provided by VR goggles.



Astrophysical Modeling and Visualization

Humans have been fascinated by astrophysical phenomena since prehistoric times. But while measurement and image acquisition devices have evolved enormously, many restrictions still apply when capturing astronomical data. The most notable limitation is our confined vantage point in the solar system, which prevents us from observing distant objects from different points of view.

In an interdisciplinary German-Mexican research project, partially funded by the German DFG (Deutsche Forschungsgemeinschaft, grants MA 2555/7-1 and 444 MEX-113/25/0-1) and the Mexican CONACyT (Consejo Nacional de Ciencia y Tecnología, grants 49447 and UNAM DGAPA-PAPIIT IN108506-2), we evaluate different approaches for the automatic reconstruction of plausible three-dimensional models of planetary nebulae. The team comprises astrophysicists working on planetary nebula morphology as well as computer scientists experienced in the reconstruction and visualization of astrophysical objects.


Radio Astronomy Synthesis Imaging

Radio interferometers sample an image of the sky in the spatial frequency domain. Reconstructing the image from a necessarily incomplete set of samples is an ill-posed inverse problem that we address with methods inspired by the theory of compressed sensing.
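
As a minimal illustration of this reconstruction style, the sketch below runs plain ISTA (iterative soft-thresholding) on gridded Fourier-domain samples; the sampling mask, step size, and sparsity weight are illustrative assumptions, not parameters of our actual algorithms.

```python
import numpy as np

def ista_reconstruct(vis, mask, n_iter=200, step=1.0, lam=0.01):
    """Recover a (sparse) sky image from masked Fourier samples.
    vis: 2D complex array of gridded visibilities, zero outside mask.
    mask: boolean sampling mask in the spatial frequency domain."""
    x = np.zeros(mask.shape)
    for _ in range(n_iter):
        # Gradient step on the data term 0.5 * ||mask * (F x - vis)||^2.
        resid = mask * (np.fft.fft2(x, norm="ortho") - vis)
        x = x - step * np.real(np.fft.ifft2(resid, norm="ortho"))
        # Soft-thresholding enforces the sparsity prior on the image.
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)
    return x
```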

During two research visits to the National Radio Astronomy Observatory (NRAO) and the University of New Mexico, both gratefully funded by the Alexander von Humboldt Foundation, we had the unique opportunity to work together with world-leading experts in radio astronomy synthesis imaging to develop new algorithms for the Very Large Array (VLA) and other radio telescope arrays.



Alternate Exposure Imaging

Traditional optical flow algorithms rely on consecutive short-exposure images. In contrast, long-exposure images contain integrated motion information directly in the form of motion blur. In this project, we use the additional information provided by a long-exposure image to improve the robustness and accuracy of motion field estimation. Furthermore, the long-exposure image can be used to determine the moment of occlusion for those pixels in any of the short-exposure images that become occluded or disoccluded.
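
The underlying constraint can be sketched in a few lines: for a candidate motion field, the long exposure should equal the time-average of the short exposure warped along that field. The bilinear warp and the [0, 1] exposure interval below are simplifying assumptions for illustration.

```python
import numpy as np

def warp(img, flow, t):
    """Bilinear lookup of img at x + t * flow(x) (backward warp)."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    xq = np.clip(xs + t * flow[..., 0], 0, w - 1)
    yq = np.clip(ys + t * flow[..., 1], 0, h - 1)
    x0, y0 = np.floor(xq).astype(int), np.floor(yq).astype(int)
    x1, y1 = np.minimum(x0 + 1, w - 1), np.minimum(y0 + 1, h - 1)
    fx, fy = xq - x0, yq - y0
    return ((1 - fx) * (1 - fy) * img[y0, x0] + fx * (1 - fy) * img[y0, x1]
            + (1 - fx) * fy * img[y1, x0] + fx * fy * img[y1, x1])

def predicted_long_exposure(short_img, flow, n_samples=16):
    """Time-average of the short exposure warped along the motion field."""
    ts = np.linspace(0.0, 1.0, n_samples)
    return np.mean([warp(short_img, flow, t) for t in ts], axis=0)

# A candidate flow can then be scored against the captured long exposure,
# e.g. via np.mean((predicted_long_exposure(s, flow) - long_img) ** 2).
```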

This work has been funded by the German Science Foundation, DFG MA2555/4-1.


Comprehensive Human Performance Capture from Monocular Video Footage

Photo-realistic modeling and digital editing of image sequences with human actors are common tasks in the movie and games industry. These processes are, however, still laborious, since current tools only allow basic manipulations. In cooperation with the Institut für Informationsverarbeitung (TNT) of the University of Hannover (http://www.tnt.uni-hannover.de/), this project aims to solve this dilemma by providing algorithms and tools for the automatic and semi-automatic digital editing of actors in monocular footage. To enable visually convincing renderings, a digital model of the human actor, detailed spatial scene information, and the scene illumination need to be reconstructed. A plausible look and motion of the digital model are crucial here.

The research project is funded by the German Science Foundation, DFG MA2555/12-1.


Monocular Video Augmentation

The goal of this project is to augment video data with high-quality 3D geometry while using only a single camera as input. As an application, we want to dress a person in a video with artificial clothing. We reconstruct the 3D human pose from 2D input data. This information can be used to drive a cloth simulation that creates a plausible 3D garment for the observed pose. Compositing this animated garment into the original video creates the illusion of the person wearing different clothing. We aim for real-time frame rates for this system, allowing for virtual mirror applications.
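
A hedged per-frame skeleton of this pipeline is sketched below; the three callables are placeholders for a pose estimator, cloth simulator, and garment renderer, not an actual API of this project.

```python
import numpy as np

def augment_frame(frame, lift_pose, simulate_cloth, render_garment):
    """One frame of the hypothetical pipeline; all three callables
    are placeholder components, not code from this project."""
    pose_3d = lift_pose(frame)              # 3D human pose from the 2D frame
    garment = simulate_cloth(pose_3d)       # plausible 3D garment for this pose
    rgb, alpha = render_garment(garment)    # garment color layer + coverage mask
    a = alpha[..., None].astype(np.float64) # broadcast alpha over color channels
    out = a * rgb + (1.0 - a) * frame       # composite garment over the video
    return out.astype(frame.dtype)
```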


Scene-Space Video Processing

The high degree of redundancy in video footage allows us to compensate for noisy depth estimates and to achieve various high-quality processing effects such as denoising, deblurring, super-resolution, object removal, computational shutter functions, and scene-space camera effects.
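
The following sketch illustrates the principle for denoising: for one output frame, the colors that the same scene point produces in neighboring frames are gathered and aggregated robustly. Precomputed integer correspondence maps stand in for the actual scene-space reprojection.

```python
import numpy as np

def scene_space_denoise(frames, corr_maps):
    """frames: list of HxWx3 arrays; corr_maps[k]: HxWx2 integer map sending
    each output pixel to its corresponding pixel (x, y) in frames[k]."""
    samples = [frame[cmap[..., 1], cmap[..., 0]]   # gather along correspondences
               for frame, cmap in zip(frames, corr_maps)]
    # The median is robust to outliers from occlusions or bad depth estimates.
    return np.median(samples, axis=0).astype(frames[0].dtype)
```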



Eye-tracking Head-mounted Display

Immersion is the ultimate goal of head-mounted displays (HMDs) for Virtual Reality (VR) in order to produce a convincing user experience. Two important aspects in this context are motion sickness, often caused by imprecise calibration, and the integration of reliable eye tracking. We propose an affordable hardware and software solution for drift-free eye tracking and user-friendly lens calibration within an HMD. The use of dichroic mirrors leads to a lean design that provides the full field of view (FOV) while using commodity cameras for eye tracking.



ElectroEncephaloGraphics

This project focuses on using electroencephalography (EEG) to analyze the human visual process. Human visual perception is becoming increasingly important in the analysis of rendering methods, animation results, interface design, and visualization techniques. Our work uses EEG data to provide concrete feedback on the perception of rendered videos and images, as opposed to user studies that merely capture the user's response. Our results so far are very promising: not only have we been able to detect a reaction to artifacts in the EEG data, but we have also been able to differentiate between artifacts based on the EEG response.


Floating Textures

We present a novel multi-view, projective texture mapping technique. While previous multi-view texturing approaches lead to blurring and ghosting artifacts if the 3D geometry and/or camera calibration are imprecise, we propose a texturing algorithm that warps ("floats") projected textures at run-time to preserve a crisp, detailed texture appearance. Our GPU implementation achieves interactive to real-time frame rates. The method is generally applicable and can be used in combination with many image-based rendering methods or projective texturing applications. By using Floating Textures in conjunction with, e.g., visual hull rendering, light field rendering, or free-viewpoint video, improved rendering results are obtained from fewer input images, less accurately calibrated cameras, and coarser 3D geometry proxies. In a nutshell, the notion of Floating Textures is to correct for local texture misalignments by determining the optical flow between projected textures and warping the textures accordingly in the rendered image domain. Both steps, optical flow estimation and multi-texture warping, can be implemented efficiently on graphics hardware to achieve interactive to real-time performance.
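
For two input textures, the core of the idea can be sketched as follows. cv2.calcOpticalFlowFarneback (with illustrative parameters) stands in for the GPU flow estimation, and warping texture B fully onto texture A simplifies the weighted multi-texture warp of the actual method.

```python
import cv2
import numpy as np

def float_and_blend(tex_a, tex_b, w_a=0.5, w_b=0.5):
    """Align projected texture B to texture A via optical flow, then blend."""
    gray_a = cv2.cvtColor(tex_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(tex_b, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                        0.5, 3, 21, 3, 5, 1.1, 0)
    h, w = gray_a.shape
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    # Backward-warp B onto A's alignment; the full method instead warps
    # every texture toward a weighted average alignment.
    warped_b = cv2.remap(tex_b, xs + flow[..., 0], ys + flow[..., 1],
                         cv2.INTER_LINEAR)
    return cv2.addWeighted(tex_a, w_a, warped_b, w_b, 0.0)
```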


Perception-motivated Interpolation of Image Sequences

We present a method for image interpolation which is able to create high-quality, perceptually convincing transitions between recorded images. By implementing concepts derived from human vision, the problem of physically correct image interpolation is relaxed to one of image interpolation that is perceived as physically correct by human observers. We find that it suffices to focus on exact edge correspondences, homogeneous regions, and coherent motion to compute such solutions. In our user study, we confirm the visual quality of the proposed image interpolation approach. We show how each aspect of our approach increases the perceived quality of the interpolation results, compare with the results obtained by other methods, and investigate the achieved quality for different types of scenes.


Simulating Visual Perception

The aim of this work is to simulate glaring headlights on a conventional monitor by first measuring the time-dependent effect of glare on human contrast perception, and then integrating the quantitative findings into a driving simulator by adjusting the displayed contrast according to human perception.


Video Quality Assessment

The goal of this project is to assess the quality of rendered videos and, especially, to detect those frames that contain visible artifacts, e.g., ghosting, blurring, or popping.


Visual Fidelity Optimization of Displays

The visual experience afforded by digital displays is not identical to our perception of the real world. Display resolution, refresh rate, contrast, brightness, and color gamut match neither the physics of the real world nor the perceptual characteristics of the Human Visual System. With the aid of new algorithms, however, a number of perceptually noticeable degradations on screen can be diminished or even avoided completely.



Digital Representations of the Real World

The book presents the state of the art in creating photo-realistic digital models of the real world. It is the result of work by experts from around the world, offering a comprehensive overview of the entire pipeline from acquisition, data processing, and modelling to content editing, photo-realistic rendering, and user interaction.


Image-space Editing of 3D Content

The goal of this project is to develop image-space algorithms that allow photo-realistic editing of dynamic 3D scenes. Traditional 2D editing tools cannot be applied to 3D video since, in addition to correspondences in time, spatial correspondences are needed for consistent editing. In this project, we analyze how to exploit the redundancy in multi-stereoscopic videos to compute robust and dense correspondence fields. These space-time correspondences can then be used to propagate changes applied to one frame consistently to all other frames in the video. Besides transferring classical video editing tools, we want to develop new tools specifically for 3D video content.
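
A minimal sketch of the propagation step, assuming the dense correspondence fields are already computed: an RGBA edit layer painted on one frame is gathered into another frame through its correspondence map and composited there.

```python
import numpy as np

def propagate_edit(edit_rgba, corr_map):
    """edit_rgba: HxWx4 edit layer painted on the edited frame.
    corr_map: HxWx2 integer map giving, for each pixel of the target
    frame, its corresponding (x, y) position in the edited frame."""
    return edit_rgba[corr_map[..., 1], corr_map[..., 0]]

def apply_edit(frame_rgb, edit_rgba):
    """Alpha-composite the (propagated) edit layer over a video frame."""
    a = edit_rgba[..., 3:4].astype(np.float64) / 255.0
    out = a * edit_rgba[..., :3] + (1.0 - a) * frame_rgb
    return out.astype(frame_rgb.dtype)
```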

This project has been funded by ERC Grant #256941 "Reality CG" and the German Science Foundation, DFG MA2555/4-2.


Multi-Image Correspondences

Multi-view video camera setups record many images that capture nearly the same scene at nearly the same instant in time. Neighboring images in a multi-video setup restrict the solution space between two images: correspondences between one pair of images must be in accordance with the correspondences to the neighboring images.

The concept of accordance or consistency for correspondences between three neighboring images can be employed in the estimation of dense optical flow and in the matching of sparse features between three images.
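
In flow terms, consistency means that the flow from image A to image C should match the concatenation of the flows A→B and B→C, i.e. u_AC(x) ≈ u_AB(x) + u_BC(x + u_AB(x)). The sketch below measures the per-pixel deviation from this relation; the nearest-neighbor lookup is a simplification.

```python
import numpy as np

def consistency_error(flow_ab, flow_bc, flow_ac):
    """Flows are HxWx2 arrays. Returns per-pixel deviation of flow_ac
    from the concatenation of flow_ab and flow_bc."""
    h, w, _ = flow_ab.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Position in B that each pixel of A maps to (rounded for lookup).
    xb = np.clip(np.rint(xs + flow_ab[..., 0]).astype(int), 0, w - 1)
    yb = np.clip(np.rint(ys + flow_ab[..., 1]).astype(int), 0, h - 1)
    concat = flow_ab + flow_bc[yb, xb]                 # A->B followed by B->C
    return np.linalg.norm(concat - flow_ac, axis=-1)   # per-pixel deviation
```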

This work has been funded in parts by ERC Grant #256941 "Reality CG" and the German Science Foundation, DFG MA2555/4-2.


Reality CG

The scope of "Reality CG" is to pioneer a novel approach to modelling, editing, and rendering in computer graphics. Instead of manually creating digital models of virtual worlds, Reality CG will explore new ways to achieve visual realism from the kind of approximate models that can be derived from conventional, real-world imagery as input.


Virtual Video Camera

The Virtual Video Camera research project aims to provide algorithms for rendering free-viewpoint video from asynchronous camcorder captures. We want to record our multi-video data without the need for specialized hardware or intrusive setup procedures (e.g., waving calibration patterns).



Accelerating Photo-realistic RT

The goal of this research project is to develop and evaluate new approaches to accelerate photo-realistic ray tracing. Our focus is on novel acceleration and denoising strategies for fast and memory-efficient photo-realistic rendering. Our research covers topics ranging from basic research on fast intersection tests to advanced filtering techniques for Monte Carlo simulation-based rendering.
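
As an example of the kind of building block involved, the sketch below shows the classic slab test for intersecting a ray with an axis-aligned bounding box, the inner loop of BVH traversal; it is an illustration, not code from this project.

```python
import numpy as np

def ray_aabb_hit(origin, inv_dir, box_min, box_max):
    """Branchless slab test. origin: ray origin (3-vector); inv_dir:
    precomputed reciprocal of the ray direction; box_min/box_max: AABB corners."""
    t1 = (box_min - origin) * inv_dir
    t2 = (box_max - origin) * inv_dir
    t_near = np.max(np.minimum(t1, t2))   # latest entry over the three slabs
    t_far = np.min(np.maximum(t1, t2))    # earliest exit over the three slabs
    return t_far >= max(t_near, 0.0)      # slabs overlap in front of the origin
```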


Computer Vision Algorithms for the DARPA Urban Challenge 2007

TU Braunschweig participated in the DARPA Urban Challenge 2007; its autonomous vehicle 'Caroline' was among the finalists. The Computer Graphics Lab provided the real-time vision algorithms for this task.

Caroline's computer vision system consists of two separate subsystems. The first is a monocular, color-segmentation-based system that classifies the ground in front of the car as drivable, undrivable, or unknown. It assists in situations where the drivable terrain and the surrounding area (e.g., grass, concrete, or shrubs) differ in color, and it copes with man-made artifacts such as lane markings as well as with bad lighting and weather conditions. The second subsystem is a multi-view lane detection that identifies the different kinds of lane markings described by DARPA, such as broken and continuous as well as white and yellow ones. Using four high-resolution color cameras and state-of-the-art graphics hardware, it detects the car's own lane and the two adjacent lanes to the left and right with a field of view of 175 degrees at up to 35 meters. The output of the lane detection algorithm is directly processed by the vehicle's artificial intelligence.
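
The core of the first subsystem can be sketched as a per-pixel color classification against a reference road patch; the patch location and thresholds below are purely illustrative assumptions, not Caroline's actual parameters.

```python
import numpy as np

def classify_ground(frame_rgb, road_box=(400, 440, 280, 360), thresh=30.0):
    """road_box: (y0, y1, x0, x1) image region assumed to show road directly
    ahead of the car; thresh: color distance tolerance (both hypothetical)."""
    y0, y1, x0, x1 = road_box
    ref = frame_rgb[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)   # mean road color
    dist = np.linalg.norm(frame_rgb.astype(np.float64) - ref, axis=-1)
    # 1 = drivable, 0 = unknown, -1 = undrivable.
    return np.where(dist < thresh, 1, np.where(dist < 2.0 * thresh, 0, -1))
```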


Lunar Surface Relief Reconstruction

Our "Astrographics" research group works on various methods to overcome the difficulties associated with gaining knowledge about faraway astronomical objects using computer vision and computer graphics algorithms. In this project, we have computed plausible 3D surface data for the moon from photographic imagery from the 1960's "Lunar Orbiter" mission.


Multiple Kinect Studies

This project investigates multi-camera setups using Microsoft Kinects. The Kinect's active structured light is used in several scenarios, including gas flow description, motion capture, and free-viewpoint video.

While the ability to capture depth alongside color data (RGB-D) is the starting point of these investigations, the structured light is also used more directly. To combine Kinects with passive recording approaches, joint calibration with HD cameras is a further topic.


Photo Zoom

We present a system to automatically construct high-resolution images from an unordered set of low-resolution photos. It consists of an automatic preprocessing step that establishes correspondences between any given photos. The user may then choose one image, and the algorithm automatically creates a higher-resolution result, several octaves larger, up to the desired resolution. Our recursive creation scheme allows transferring specific details at subpixel positions of the original image. It adds plausible details to regions not covered by any of the input images and eases the acquisition of large-scale panoramas spanning different resolution levels.


Physics-based Rendering

In this project, novel techniques to measure different light-matter interaction phenomena are developed in order to provide new models, or to verify existing ones, for rendering physically correct images.


Scalable Visual Analytics

The goal of this research project is to develop and evaluate a fundamentally new approach to exhaustively search for, and interactively characterize, any non-random mutual relationship between attribute dimensions in general data sets. To be able to systematically consider all possible attribute combinations, we propose applying image analysis to visualization results in order to automatically pre-select only those attribute combinations featuring non-random relationships. To characterize the information found and to build mathematical descriptions, we rely on interactive visual inspection and visualization-assisted interactive information modeling. In this way, we intend to discover and explicitly characterize all information implicitly represented in unbiased sets of multi-dimensional data points.
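
The pre-selection idea can be sketched with plain correlation standing in for the project's image-analysis measure: every attribute pair is scored against a shuffled (random) baseline, and only pairs that clearly exceed it are kept for visual inspection.

```python
import numpy as np
from itertools import combinations

def nonrandom_pairs(data, n_perm=100, z_cut=4.0):
    """data: (n_samples, n_attrs) array. Returns index pairs whose
    dependence clearly exceeds a permutation (random) baseline."""
    keep = []
    for i, j in combinations(range(data.shape[1]), 2):
        score = abs(np.corrcoef(data[:, i], data[:, j])[0, 1])
        null = [abs(np.corrcoef(np.random.permutation(data[:, i]),
                                data[:, j])[0, 1]) for _ in range(n_perm)]
        if score > np.mean(null) + z_cut * np.std(null):
            keep.append((i, j))   # pair is worth interactive inspection
    return keep
```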


Who Cares?

The official music video "Who Cares" by Symbiz Sound; the first major production using our Virtual Video Camera.

Dubstep, spray cans, brush, and paint join forces and unite with the latest digital production techniques. All imagery depicts live-action graffiti and performance. Camera motion was added in post-production using the Virtual Video Camera.