Computer Graphics
TU Braunschweig


Talk Disputation

07.08.2015 10:00
Informatikzentrum, IZ 161

Speaker(s): Pablo Bauszat

Advanced Denoising and Memory-efficient Acceleration for Realistic Image Synthesis

Talk Disputation

27.07.2015 14:00
Informatikzentrum, Seminarraum G04

Speaker(s): Benjamin Meyer

Measuring, modeling and simulating the re-adaptation process of the Human Visual System after short-time glares in traffic scenarios

Talk Disputation

17.07.2015 15:00
Informatikzentrum, IZ 161

Speaker(s): Kai Ruhl

Interactive Spacetime Reconstruction in Computer Graphics

Talk Disputation

13.07.2015 13:15
Informatikzentrum, Seminarraum G04

Speaker(s): Maryam Mustafa

ElectroEncephaloGraphics - a Novel Modality for Graphics Research

Talk Disputation

03.07.2015 10:00
Informatikzentrum, Seminarraum G04

Speaker(s): Lorenz Rogge

Augmenting People in Monocular Video Data

Talk Methods for Analyzing the Influence of Molecular Dynamics on Neuronal Activity

26.06.2015 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Stefan Sokoll

Investigating the functioning of neurons at the molecular level is an important foundation for understanding how higher brain functions like perception, behavior, or learning and memory are accomplished. Since molecular processes occur in the nanometer range and have to be studied in living samples, recently developed optical super-resolution techniques have boosted their characterization. However, super-resolution techniques require complex instrumentation, are hardly applicable to organotypic samples, and still suffer from relatively low temporal resolution. This talk presents new analysis tools that aim to overcome these limitations and make it possible to study how the dynamics and the interplay of molecules modulate synaptic transmission efficiency. First, an approach for the detection of individual presynaptic activity will be briefly introduced; the major part, however, focuses on an algorithm that facilitates fast 3D molecular dynamics analyses within brain slices. It adjusts astigmatism-based 3D single-particle tracking (SPT) techniques to the depth-dependent optical aberrations induced by the refractive index mismatch, so that they become applicable to complex samples. In contrast to existing techniques, the presented online calibration method determines the aberration directly from the acquired 2D image stream by exploiting the inherent particle movement and the redundancy introduced by the astigmatism. The method improves the positioning by reducing the systematic errors introduced by the aberrations and allows the cellular morphology and molecular diffusion parameters to be derived correctly in 3D, independently of the imaging depth.
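The astigmatism-based depth estimation mentioned above can be illustrated with a small sketch: a cylindrical lens makes the PSF widths in x and y focus at different depths, so a measured width pair can be inverted to a z-position via calibration curves. The calibration model and all parameter values below are hypothetical stand-ins, and the sketch deliberately ignores the depth-dependent aberrations the presented method is designed to correct.

```python
import math

# Hypothetical astigmatic calibration: PSF widths (px) as a function of
# depth z (nm). A cylindrical lens shifts the x and y focal planes apart.
def width_x(z, w0=2.0, zf=-400.0, d=500.0):
    return w0 * math.sqrt(1.0 + ((z - zf) / d) ** 2)

def width_y(z, w0=2.0, zf=400.0, d=500.0):
    return w0 * math.sqrt(1.0 + ((z - zf) / d) ** 2)

def estimate_z(wx, wy, z_min=-600.0, z_max=600.0, steps=2400):
    """Recover z by scanning for the depth whose predicted width pair
    best matches the measured (wx, wy): a 1D least-squares lookup."""
    best_z, best_err = z_min, float("inf")
    for i in range(steps + 1):
        z = z_min + (z_max - z_min) * i / steps
        err = (width_x(z) - wx) ** 2 + (width_y(z) - wy) ** 2
        if err < best_err:
            best_err, best_z = err, z
    return best_z

# Round-trip check for a particle at z = 150 nm.
z_true = 150.0
z_est = estimate_z(width_x(z_true), width_y(z_true))
```

Because the two focal planes are separated, the width pair (wx, wy) is unique within the working range, which is what makes the inversion well defined.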

Talk Promotions-V-Vg: Real-World Video Processing Using Unstructured Scene Representations

26.06.2015 10:00
Informatikzentrum, Seminarraum G30

Speaker(s): Felix Klose

When processing single- or multi-view video data recorded in uncontrolled environments with scene reconstruction algorithms, a multitude of factors can negatively influence the result quality. These factors include camera, lens, or color miscalibrations, errors in temporal or spatial camera alignment, and unsynchronized or rolling shutters on the camera side, as well as specular, untextured, or repetitive objects and objects with visually complex appearances inside the scene. These circumstances make working with computer vision algorithms on real-world data a very challenging task; errors in measurements in real-world recording setups cannot be avoided and have to be accounted for.

In this talk I will give an overview of my work in single- and multi-view video processing of real-world data using unstructured scene representations. I show how dense 2D-correspondence-based stereoscopic free-viewpoint video can be created using tools for user-guided error correction; how the complexity of real-world multi-view data can be handled by tracking small surface patches and using a strict motion model to resolve ambiguities and create quasi-dense scene representations; and finally how to create high-quality video effects that can handle extreme amounts of noise in estimated depth maps by leveraging the redundancy inherent in video data.

Talk MA-Talk: Compressed Sensing-based Progressive Reconstruction for Image Synthesis

11.05.2015 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Cong Wang

Compressed Sensing (CS) is a new mathematical framework for the reconstruction of signals with missing information. Recently, its application to sparse image reconstruction, i.e. the reconstruction of an image from a small set of known pixels, has shown promising results. The key idea derives from the fact that most natural images are highly compressible because they are sparse in a transform domain. This leads to the obvious question: Why waste resources on evaluating information (here, individual pixels) that is discarded later on or has only a small impact on the overall visual impression? So far, the measurements (evaluated pixels of the image) are chosen in a random fashion (usually based on a blue-noise distribution) to uniformly cover the image domain. Theoretically, if salient features of the image were known in advance, fewer measurements would be needed for a high-quality reconstruction. For real-world images taken by a photo or video camera it is very hard to evaluate important features of the image without actually capturing them. During image synthesis, however, more knowledge about the scene, camera, and lighting situation is available. If carefully observed, the rendering process can potentially provide useful cues which are more efficient to evaluate than the actual measurements, can guide the image sampling process, and thus accelerate convergence.
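As a toy illustration of the sparsity principle (not the method discussed in the talk), the following sketch recovers a signal with only two nonzero coefficients from fewer random measurements than unknowns via iterative soft-thresholding (ISTA); for simplicity, the signal is sparse in the canonical basis rather than a transform domain, and all sizes and parameters are arbitrary.

```python
import math
import random

random.seed(1)
m, n = 18, 24          # 18 random measurements of a length-24 signal
A = [[random.gauss(0.0, 1.0 / m ** 0.5) for _ in range(n)] for _ in range(m)]

# Ground-truth signal: sparse (2 nonzero coefficients out of 24).
x_true = [0.0] * n
x_true[3], x_true[17] = 1.5, -2.0
y = [sum(A[i][j] * x_true[j] for j in range(n)) for i in range(m)]

def ista(A, y, lam=0.02, step=0.05, iters=3000):
    """Iterative soft-thresholding: a gradient step on ||Ax - y||^2
    followed by shrinkage, which drives small coefficients to zero."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [math.copysign(max(abs(xi - step * gi) - step * lam, 0.0),
                           xi - step * gi) for xi, gi in zip(x, g)]
    return x

x_hat = ista(A, y)
```

Even though the system is underdetermined, the sparsity-promoting shrinkage singles out the two true coefficients, which is the effect the talk exploits for image pixels.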

Talk BA Talk: Depth-based Viewpoint Generation for Interactive Videos in Immersive VR Systems

27.04.2015 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Inga Menke

Virtual reality head-mounted displays allow the viewing of images and videos to move beyond the traditional window metaphor towards a full surround view. The combination of panoramic video and a head-tracked virtual reality display creates the preconditions for a freely chosen viewing direction onto the content. The Bachelor thesis presented in this talk takes the next logical step in the development of immersive media content: a method for freely choosing the observer's position within the depicted scene. More precisely, it enables the observer to perceive motion parallax within the video when moving their head. This freedom of movement increases the degree of immersion in the video.

Talk Promotions-V-Vg: Electroencephalographics: A Novel Modality for Graphics Research

24.04.2015 11:00
Informatikzentrum, Seminarraum G30

Speaker(s): Maryam Mustafa

In this thesis I present the application of ElectroEncephaloGraphy (EEG) as a novel modality for investigating perceptual graphics problems.

Until recently, EEG has predominantly been used for clinical diagnosis, in psychology and by the BCI community. Here I extend its scope to assist in understanding the perception of visual output from graphics applications and to create new methods based on direct neural feedback.

My work uses EEG data to determine the perceptual quality of videos and images, which is of paramount importance for most graphics algorithms. This is especially relevant given the gap between the perceived quality of an image and its physical accuracy.

One of the main impediments to the use of an EEG is the very low Signal-to-Noise Ratio (SNR) which requires averaging the data from many trials and participants to get a meaningful result. I propose a novel method for evaluating EEG signals which allows prediction of perceived image quality from only a single trial.

This thesis also explores the possibilities for automatic optimization of rendering parameters for images and videos based on implicit neural feedback.

Talk User-guided Image Pre-Segmentation

17.04.2015 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Ying Wang

This is the presentation of a specialisation project done at the Institut für Computergraphik. The student shows a method for computing a locally linear image structure, which is then used to propagate user input in the form of brush strokes in the image at hand. The talk includes some aspects of the numerical computation done in MATLAB and presents a variety of results.

Talk Promotions-V-Vg: Interactive Scene Reconstruction and Image Correspondence Estimation

10.04.2015 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Kai Ruhl

High-quality dense correspondence maps between images, be it optical flow, stereo, or scene flow, and high-quality scene reconstruction are essential prerequisites for a multitude of computer vision and graphics tasks, e.g. scene editing or view interpolation in visual media production. Due to the ill-posed nature of the estimation problem in typical setups (i.e. a limited number of cameras and a limited frame rate), automated estimation approaches are prone to erroneous correspondences and subsequent quality degradation in many non-trivial cases such as occlusions, ambiguous movements, long displacements, or low texture. While improving estimation algorithms is one possible direction, this thesis complementarily concerns itself with minimal user interactions that lead to improved correspondence maps and scene reconstruction. Where visually convincing results are essential, rendering artifacts resulting from estimation errors are usually repaired by hand with image editing tools, which is time-consuming and therefore costly. New forms of user interaction that integrate human scene recognition capabilities to guide a semi-automatic correspondence or scene reconstruction algorithm have the potential to save considerable effort, enabling faster and more efficient production of visually convincing rendered footage.

Talk Promotions-V-Vg: Advanced Denoising and Memory-efficient Acceleration for Realistic Image Synthesis

06.03.2015 11:00
Informatikzentrum, Seminarraum G30

Speaker(s): Pablo Bauszat

Stochastic ray tracing methods have become the industry's standard for today's realistic image synthesis thanks to their ability to achieve a supreme degree of realism by physically simulating various natural phenomena of light and cameras (e.g. global illumination, depth-of-field or motion blur). Unfortunately, high computational costs for more complex scenes and image noise from insufficient simulations are major issues of these methods and, hence, acceleration and denoising are key components in stochastic ray tracing systems. In this thesis, we introduce three new filtering methods for advanced lighting and camera effects, as well as two new concepts for memory-efficient acceleration structures. In particular, we present a filter for global illumination aiming at real-time performance, an interactive filter for global illumination in the presence of depth-of-field, and a general and robust adaptive reconstruction framework for high-quality images with arbitrary rendering effects. To address complex scene geometry, we propose an extension to the classic Bounding Volume Hierarchy that reduces its footprint down to 1 bit per node, and a new concept which models the acceleration structure completely implicitly, i.e. without any additional memory cost at all, while maintaining interactive performance. Our contributions advance the state of the art of denoising techniques for realistic image synthesis as well as the field of memory-efficient acceleration for ray tracing systems.
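The 1-bit and fully implicit hierarchies of the thesis are beyond a short example, but the core idea, deriving a node's extent and children by index arithmetic instead of stored pointers, can be sketched on a 1D analogue: a hierarchy over a sorted array in which no node, pointer, or bounding volume is ever stored. (Real ray tracers use 3D bounding boxes; the 1D intervals here are a stand-in.)

```python
# Pointer-free hierarchy over sorted 1D primitives: each node covers a
# contiguous range [a, b) of the sorted array, its bounds are derived on
# the fly from the first and last primitive, and its children are found
# by splitting the index range - nothing is stored besides the primitives.
def query(prims, lo, hi):
    """Collect all primitives inside [lo, hi] by implicit traversal."""
    prims = sorted(prims)
    out = []
    stack = [(0, len(prims))]       # index ranges act as "nodes"
    while stack:
        a, b = stack.pop()
        if a >= b:
            continue
        # node bounds: smallest and largest primitive in the range
        if prims[a] > hi or prims[b - 1] < lo:
            continue                # pruned: node cannot intersect query
        if b - a == 1:
            out.append(prims[a])    # leaf: report the primitive
            continue
        mid = (a + b) // 2          # children computed, not stored
        stack.append((a, mid))
        stack.append((mid, b))
    return sorted(out)
```

The traversal still prunes whole subtrees like a stored hierarchy would, but the memory cost of the structure itself is zero, which is the spirit of the implicit concept described above.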

Talk MA-Talk: Compressed Sensing and Sparse Coding for Depth and RGB-D Images

12.02.2015 14:00
Informatikzentrum, Seminarraum G30

Speaker(s): Emmy-Charlotte Förster

In this talk, new methods for the compression of natural images and depth maps using compressed sensing and sparse coding are presented. Sarkis and Diepold recently presented an approach for the compression of depth maps using compressed sensing. We expand upon this approach by using sparse coding, and enhance the depth map compression by adding the available RGB information. By modifying the underlying optimization problem of compressed sensing, we are able to further enhance the depth maps of compressed RGB-D images. We create our dictionaries and evaluate our results using both synthetic and natural data sets, captured with a light-field camera.

Talk Participating Media - Fast Rendering and Artistic Stylization

29.01.2015 14:00
Informatikzentrum, Seminarraum G30

Speaker(s): Oliver Klehm

Talk MA-Talk: Space-time reconstruction of very fast fluid dynamic processes

10.12.2014 11:00
Informatikzentrum, Seminarraum G30

Speaker(s): Matthias Überheide

In this talk, a method will be presented for reconstructing very fast fluid dynamic processes in space and time from a single camera view. The underlying physical relations are used to resolve the ambiguity of the problem. The captured images are used to guide a fluid simulation, resulting in an animated 3D volume of the captured effect.

An optimization problem is formulated, and the adjoint method is applied to allow the computation of the gradient in reasonable time.

The difficulties inherent in this optimization problem are shown with multiple artificial and real-data test cases. Possible approaches to the individual difficulties are analyzed.
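The role of the adjoint method can be shown on a deliberately tiny stand-in problem (not the fluid setting of the talk): for a linear recurrence driven by controls u_k, one forward pass plus one backward adjoint pass yields the exact gradient of the objective with respect to all controls at once.

```python
def simulate(u, a=0.9, x0=0.0):
    """Forward pass of the linear recurrence x_{k+1} = a*x_k + u_k."""
    xs = [x0]
    for uk in u:
        xs.append(a * xs[-1] + uk)
    return xs

def loss_and_grad(u, target=1.0, a=0.9, x0=0.0):
    """Gradient of J = (x_N - target)^2 w.r.t. all controls u_k via one
    backward (adjoint) sweep; the cost is independent of the number of
    unknowns, which is what makes large problems tractable."""
    xs = simulate(u, a, x0)
    J = (xs[-1] - target) ** 2
    lam = 2.0 * (xs[-1] - target)  # adjoint variable at the final state
    grad = [0.0] * len(u)
    for k in reversed(range(len(u))):
        grad[k] = lam              # u_k enters x_{k+1} with coefficient 1
        lam *= a                   # propagate the adjoint one step back
    return J, grad
```

Differentiating numerically would require one extra forward simulation per control; the adjoint sweep replaces them all with a single backward pass.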

Talk BA-Talk: Natural Eye Adaptation for Real-Time HDR Applications

28.11.2014 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Andreas Bauerfeld

In the Bachelor thesis of Andreas Bauerfeld, a perception model capable of performing a natural rendering of high-dynamic-range scenes is developed. It is real-time capable and simulates the most important properties of the human visual system. Furthermore, and in contrast to the often-used assumption of fully adapted photoreceptors, a continuous state of maladaptation is considered, resulting in a more precise prediction of contrast perception. In addition, unlike in photographic reproduction methods, effects that tarnish the visual impression are not suppressed but rather enhanced and rendered in a physiologically accurate manner. These components provide a realistic impression of the rendered high-dynamic-range environment and take into account individual observer properties as well as setup- and display-device-related constraints.
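A minimal sketch of how a continuous state of maladaptation can be modelled, under assumptions not taken from the thesis: a Naka-Rushton photoreceptor response whose half-saturation level sigma drifts only gradually towards the current scene luminance, so the response right after a luminance step differs from the fully adapted one. All constants are illustrative.

```python
import math

def response(I, sigma, n=0.73):
    """Naka-Rushton photoreceptor response in [0, 1]; sigma is the
    current adaptation (half-saturation) level."""
    return I ** n / (I ** n + sigma ** n)

def adapt(sigma, I, dt, tau=2.0):
    """Exponential drift of the adaptation level towards luminance I,
    modelling maladaptation instead of assuming full adaptation."""
    return I + (sigma - I) * math.exp(-dt / tau)

# Step from a dim (1 cd/m^2) to a bright (1000 cd/m^2) scene:
sigma = 1.0
r_onset = response(1000.0, sigma)       # nearly saturated: glare
for _ in range(100):                    # 10 s of adaptation, dt = 0.1 s
    sigma = adapt(sigma, 1000.0, dt=0.1)
r_adapted = response(1000.0, sigma)     # back near mid-range
```

Right after the step the response is pinned near saturation (a washed-out impression); once sigma has caught up, the same luminance maps back to the middle of the response range.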

Talk Industrial Visual Computing - metaio GmbH

11.07.2014 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Christian Lipski

ICG alumnus and PhD graduate Dr. Christian Lipski talks about what it is like to work at metaio GmbH.

Talk Disputation: Regularized Optimization Methods for Reconstruction and Modeling in Computer Graphics

17.06.2014 10:00
Informatikzentrum, Seminarraum 105

Speaker(s): Stephan Wenger

The field of computer graphics deals with virtual representations of the real world. These can be obtained either through reconstruction of a model from measurements, or by directly modeling a virtual object, often on a real-world example. The former is often formalized as a regularized optimization problem, in which a data term ensures consistency between model and data and a regularization term promotes solutions that have high a priori probability.

In this dissertation, different reconstruction problems in computer graphics are shown to be instances of a common class of optimization problems which can be solved using a uniform algorithmic framework. Moreover, it is shown that similar optimization methods can also be used to solve data-based modeling problems, where the amount of information that can be obtained from measurements is insufficient for accurate reconstruction.

As real-world examples of reconstruction problems, sparsity and group sparsity methods are presented for radio interferometric image reconstruction in static and time-dependent settings. As a modeling example, analogous approaches are investigated to automatically create volumetric models of astronomical nebulae from single images based on symmetry assumptions.

Talk Disputation: Visual Analysis of High-Dimensional Spaces

09.05.2014 10:00
Informatikzentrum, Seminarraum 105

Speaker(s): Georgia Albuquerque

The visual exploration and analysis of high-dimensional data sets commonly requires projecting the data into lower-dimensional representations. The number of possible representations grows rapidly with the number of dimensions, and manual exploration quickly becomes ineffective or even infeasible. In this thesis I present automatic algorithms to compute visual quality metrics and show different situations where they can be used to support the analysis of high-dimensional data sets. The proposed methods can be applied to different specific user tasks and can be combined with established visualization techniques to sort or select projections of the data based on their information-bearing content. These approaches can effectively ease the task of finding truly useful visualizations and potentially speed up the data exploration task. Additionally, I present a framework designed to generate synthetic data for evaluation by users, who can interactively navigate through high-dimensional data sets.

Talk Promotions-Vor-Vortrag: Augmenting People in Monocular Video

28.04.2014 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Lorenz Rogge

Aiming at realistic video augmentation, i.e. the embedding of virtual, three-dimensional objects into a scene's original content, a series of challenging problems has to be solved. This is especially the case when working with solely monocular input material, as important additional 3D information is missing and has to be recovered during the process where necessary.

In this talk, I will present a semi-automatic strategy to tackle this task by providing solutions to the individual problems in the context of virtual clothing as an example of realistic video augmentation. Starting with two different approaches for monocular pose and motion estimation, I continue to build up a 3D human body model by estimating detailed shape information as well as basic surface material properties. This information allows a dynamic illumination model to be extracted from the provided input material. The illumination model is particularly important for rendering a realistic virtual object and adds a lot of realism to the final video augmentation. The animated human model is able to interact with virtual 3D objects and is used in the context of virtual clothing to animate a simulated garment. To achieve the desired realism, I present an additional image-based compositing approach to realistically embed the simulated garment into the original scene content. Combined, the presented approaches provide an integrated strategy for the realistic augmentation of actors in monocular video sequences.

Talk Regularized Optimization Methods for Reconstruction and Modeling in Computer Graphics

07.03.2014 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Stephan Wenger

The field of computer graphics deals with virtual representations of the real world. They can be obtained either through reconstruction of a model from measurements, or by directly modeling a virtual object, often on a real-world example.

Reconstruction from measurements is often formalized as a regularized optimization problem, in which a data term ensures consistency between model and data and a regularization term promotes solutions that have high a priori probability, a popular recent example being compressed sensing. In this thesis, different regularized optimization techniques are applied to reconstruction problems in computer graphics, namely the calculation of images from interferometric measurements.

Moreover, it is shown that similar optimization methods can also be used to solve data-based modeling problems, where the amount of information that can be obtained from measurements is insufficient for an accurate reconstruction. By formalizing a priori knowledge about the object through the appropriate choice of a regularizer, plausible models can be generated in a largely automatic process with minimal user interaction. Two such techniques are demonstrated on the example of symmetry-based modeling of astronomical nebulae.

Talk BA-Talk: Implicit Image Segmentation Using Minimal Surfaces

25.11.2013 13:30
Informatikzentrum, Seminarraum G30

Speaker(s): Marc Kassubeck

Image segmentation is one of the main research topics in computer graphics and computer vision. The main goal is splitting a given image up into multiple parts according to some, usually visual, ground rules. A proper mathematical treatment therefore needs an efficient way to describe these image segments and a model to implement those ground rules. Probably the most common way of describing the image segments is using binary functions; less common is an explicit description of the boundary of those regions. But regardless of whether the representation focuses on the boundary or not, the desired properties of the segmentation are usually modeled using a minimization problem. Hence the results depend heavily on the objective function used to describe the problem. Amongst the most influential approaches are the active contour model of Kass et al., the Mumford-Shah model, and the Chan-Vese model.

The key idea of this thesis is to use the zero level set of a suitable function to describe the boundary of the segmentation regions. Building on that foundation, a model that extends the ideas of already established models and at the same time carefully implements the implicit boundary description will be developed. The obtained minimization problem will be transformed into a convex saddle-point problem. This structure finally allows the use of a primal-dual optimization algorithm for convex problems.
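The implicit boundary description can be illustrated with a small sketch, assuming a hypothetical signed-distance function on a pixel grid: the segmentation is simply the sign of phi, and the discrete zero level set consists of the pixels where that sign changes.

```python
# Hypothetical signed-distance function phi on a 7x7 grid; its zero
# level set is the boundary of a disc of radius 2 centred at (3, 3).
def phi(x, y, cx=3.0, cy=3.0, r=2.0):
    return ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 - r

# The segmentation is implicit: a pixel is inside iff phi < 0.
inside = [[phi(x, y) < 0 for x in range(7)] for y in range(7)]

def boundary(inside):
    """Discrete zero level set: inside pixels with an outside 4-neighbour,
    i.e. the pixels across which phi changes sign."""
    h, w = len(inside), len(inside[0])
    out = set()
    for y in range(h):
        for x in range(w):
            if not inside[y][x]:
                continue
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h and not inside[ny][nx]:
                    out.add((x, y))
    return out
```

Because the boundary is carried implicitly by phi, evolving the segmentation only means updating phi; topology changes (regions merging or splitting) come for free, which is the main appeal of the level-set view.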

Talk BA-Talk: Real-time retargeting of human skeletal structures

25.11.2013 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Torsten Thoben

Different animation and motion capture tools used in computer graphics and computer animation use different animation models to describe human motion. These models vary in detail and thus use skeletal models varying in complexity, such as joint count and degrees of freedom per joint. To be able to work with different human motion models, a proper translation of the motion between different skeletons has to be found. This is called motion retargeting.

The goal of this thesis is to develop a real-time retargeting module that is able to translate motions between different skeletal models. Focusing on human skeletal models, this module should act as a black box, requiring only the two skeletal model descriptions as input. As an example, the module should be able to translate between BVH and C3D, allowing motions to be transferred between two common skeletal hierarchy formats. While concentrating on these two hierarchy descriptions, the developed system must be extensible to other formats and provide a proper interface for adding more human skeletal models.
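A minimal sketch of what such a retargeting step might look like, with entirely hypothetical joint names and skeleton parameters: rotations are copied between joints matched by a name map, and the root translation is rescaled by the ratio of leg lengths so the motion's footprint is preserved. A real module would additionally need per-joint coordinate-frame conversion and handling of unmatched joints.

```python
# Hypothetical mapping from source-skeleton joint names to target names.
NAME_MAP = {"Hips": "pelvis", "LeftKnee": "l_knee", "RightKnee": "r_knee"}

def retarget(frame, src_leg_len, dst_leg_len):
    """Retarget one animation frame.

    frame: {'root_pos': (x, y, z), 'rotations': {joint: (rx, ry, rz)}}
    Rotations are copied per matched joint; the root translation is
    scaled by the leg-length ratio of the two skeletons.
    """
    s = dst_leg_len / src_leg_len
    return {
        "root_pos": tuple(c * s for c in frame["root_pos"]),
        "rotations": {NAME_MAP[j]: r
                      for j, r in frame["rotations"].items()
                      if j in NAME_MAP},
    }

frame = {"root_pos": (0.0, 90.0, 10.0),
         "rotations": {"Hips": (0.0, 45.0, 0.0),
                       "LeftKnee": (30.0, 0.0, 0.0)}}
out = retarget(frame, src_leg_len=80.0, dst_leg_len=40.0)
```

Copying joint angles only works when the two skeletons share a comparable joint layout; for mismatched hierarchies the missing joints would have to be synthesized or solved for with inverse kinematics.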

Talk The History of CGI before the Invention of the Computer

15.11.2013 15:00
Informatikzentrum, Hörsaal M 161

Speaker(s): Harald Klinke

Ray tracing is not an invention of the computer age. The idea of generating a two-dimensional image by tracing the path of light rays in fact goes back to the early Renaissance (Alberti), and devices based on this principle were built as early as 500 years ago (Dürer). The principles of the wireframe model, of shading and surface reflection, and of image generation (imaging) in general have always been part of artistic practice. Art and media history can therefore reveal long-term developments that put the present into better perspective, identify megatrends from a historical view of visual media, and offer an outlook on the future.

In the literature, the history of computer graphics is usually introduced with the first applications on computers; teleprinters, the MIT Whirlwind system, and Ivan Sutherland's Sketchpad stand at the centre of these historical outlines. Yet the conceptual foundations of computer-generated imagery reach back to antiquity. Only with theories of light and vision could the visual be understood, which laid the groundwork for the development of linear perspective in the early Renaissance. This was preceded by a notion of the image as the cross-section of the "visual pyramid". This conception of the image, which runs through European cultural history from the Renaissance until the invention of photography, is based on nothing other than the principle we today call ray tracing. It is therefore no surprise that, building on this conception, devices were developed early on that produced a mechanical image of visible or imagined reality.