Computer Graphics
TU Braunschweig


Talk Disputation: Regularized Optimization Methods for Reconstruction and Modeling in Computer Graphics

17.06.2014 10:00
Informatikzentrum, Seminarraum 105

Speaker(s): Stephan Wenger

The field of computer graphics deals with virtual representations of the real world. These can be obtained either through reconstruction of a model from measurements, or by directly modeling a virtual object, often based on a real-world example. The former is often formalized as a regularized optimization problem, in which a data term ensures consistency between model and data and a regularization term promotes solutions that have high a priori probability.

In this dissertation, different reconstruction problems in computer graphics are shown to be instances of a common class of optimization problems which can be solved using a uniform algorithmic framework. Moreover, it is shown that similar optimization methods can also be used to solve data-based modeling problems, where the amount of information that can be obtained from measurements is insufficient for accurate reconstruction.

As real-world examples of reconstruction problems, sparsity and group sparsity methods are presented for radio interferometric image reconstruction in static and time-dependent settings. As a modeling example, analogous approaches are investigated to automatically create volumetric models of astronomical nebulae from single images based on symmetry assumptions.
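
As a concrete illustration of the data-term-plus-regularizer formulation, the following sketch solves a small sparse reconstruction problem min_x ½‖Ax − y‖² + λ‖x‖₁ with the iterative shrinkage-thresholding algorithm (ISTA). It is a generic toy example under assumed data, not the interferometric pipeline of the thesis.

```python
import numpy as np

def ista(A, y, lam=0.1, step=None, iters=200):
    """Proximal gradient (ISTA) for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    if step is None:
        # 1/L with L the largest eigenvalue of A^T A ensures convergence
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)       # gradient of the data term
        z = x - step * grad            # gradient step
        # soft thresholding = proximal operator of the L1 regularizer
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x

# Toy compressed-sensing setup: recover a sparse signal from few measurements
rng = np.random.default_rng(0)
n, m, k = 100, 40, 4
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true
x_hat = ista(A, y, lam=0.01, iters=2000)
```

The regularizer enters only through the proximal step, which is what makes the same framework reusable across the different reconstruction problems mentioned above.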

Talk Disputation: Visual Analysis of High-Dimensional Spaces

09.05.2014 10:00
Informatikzentrum, Seminarraum 105

Speaker(s): Georgia Albuquerque

The visual exploration and analysis of high-dimensional data sets commonly requires projecting the data into lower-dimensional representations. The number of possible representations grows rapidly with the number of dimensions, and manual exploration quickly becomes ineffective or even infeasible. In this thesis I present automatic algorithms to compute visual quality metrics and show different situations where they can be used to support the analysis of high-dimensional data sets. The proposed methods can be applied to different specific user tasks and can be combined with established visualization techniques to sort or select projections of the data based on their information-bearing content. These approaches can effectively ease the task of finding truly useful visualizations and potentially speed up the data exploration task. Additionally, I present a framework designed to generate synthetic data for evaluation, with which users can interactively navigate through high-dimensional data sets.
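
The ranking idea can be sketched with a simple class-separation score over axis-pair scatterplots; this generic between-class versus within-class scatter ratio is a stand-in for the thesis's actual quality metrics, not a reproduction of them.

```python
import numpy as np
from itertools import combinations

def separation_score(points2d, labels):
    """Between-class vs. within-class scatter of a 2-D projection;
    higher means the classes are visually better separated."""
    mu = points2d.mean(0)
    within = between = 0.0
    for c in np.unique(labels):
        grp = points2d[labels == c]
        within += ((grp - grp.mean(0)) ** 2).sum()
        between += len(grp) * ((grp.mean(0) - mu) ** 2).sum()
    return between / max(within, 1e-12)

def rank_axis_pairs(data, labels):
    """Score every axis-pair scatterplot and return the pairs best-first."""
    pairs = list(combinations(range(data.shape[1]), 2))
    scores = [separation_score(data[:, list(p)], labels) for p in pairs]
    return [p for _, p in sorted(zip(scores, pairs), reverse=True)]

# 4-D toy data: only dimension 0 separates the two classes
rng = np.random.default_rng(0)
a = rng.normal(0, 0.1, (50, 4))
a[:, 0] += 5.0
b = rng.normal(0, 0.1, (50, 4))
data = np.vstack([a, b])
labels = np.array([0] * 50 + [1] * 50)
best = rank_axis_pairs(data, labels)[0]
```

Projections containing the discriminating dimension are ranked first, so an analyst only needs to inspect the top of the list.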

Talk Promotions-Vor-Vortrag: Augmenting People in Monocular Video

28.04.2014 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Lorenz Rogge

Promotions-Vor-Vortrag: Aiming at realistic video augmentation, i.e. the embedding of virtual, 3-dimensional objects into a scene's original content, a series of challenging problems has to be solved. This is especially the case when working with solely monocular input material, as important additional 3D information is missing and has to be recovered in the process.

In this talk, I will present a semi-automatic strategy to tackle this task by providing solutions to individual problems in the context of virtual clothing as an example for realistic video augmentation. Starting with two different approaches for monocular pose and motion estimation, I continue to build up a 3D human body model by estimating detailed shape information as well as basic surface material properties. Using this information makes it possible to further extract a dynamic illumination model from the provided input material. This illumination model is particularly important for rendering a realistic virtual object and adds a lot of realism to the final video augmentation. The animated human model is able to interact with virtual 3D objects and is used in the context of virtual clothing to animate a simulated garment. To achieve the desired realism, I present an additional image-based compositing approach to realistically embed the simulated garment into the original scene content. Combined, the presented approaches provide an integrated strategy for realistic augmentation of actors in monocular video sequences.

Talk Nebel in 3D - die Gestalt astronomischer Nebel entschlüsseln

25.04.2014 19:00
Astronomieverein Pegasus Wolfenbüttel

Speaker(s): Marcus Magnor

Public lecture - Astronomie-Verein Pegasus Wolfenbüttel

Talk Regularized Optimization Methods for Reconstruction and Modeling in Computer Graphics

07.03.2014 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Stephan Wenger

The field of computer graphics deals with virtual representations of the real world. They can be obtained either through reconstruction of a model from measurements, or by directly modeling a virtual object, often based on a real-world example.

Reconstruction from measurements is often formalized as a regularized optimization problem, in which a data term ensures consistency between model and data and a regularization term promotes solutions that have high a priori probability, a popular recent example being compressed sensing. In this thesis, different regularized optimization techniques are applied to reconstruction problems in computer graphics, namely the calculation of images from interferometric measurements.

Moreover, it is shown that similar optimization methods can also be used to solve data-based modeling problems, where the amount of information that can be obtained from measurements is insufficient for an accurate reconstruction. By formalizing a priori knowledge about the object through the appropriate choice of a regularizer, plausible models can be generated in a largely automatic process with minimal user interaction. Two such techniques are demonstrated on the example of symmetry-based modeling of astronomical nebulae.

Talk BA-Talk: Implicit Image Segmentation Using Minimal Surfaces

25.11.2013 13:30
Informatikzentrum, Seminarraum G30

Speaker(s): Marc Kassubeck

Image segmentation is one of the main research topics in computer graphics and computer vision. The main goal is to split a given image into multiple parts according to some – usually visual – ground rules. A proper mathematical treatment therefore needs an efficient way to describe these image segments and a model to implement those ground rules. Probably the most common way of describing the image segments is using binary functions. Less common is an explicit description of the boundary of those regions. But regardless of whether the representation focuses on the boundary or not, the desired properties of the segmentation are usually modeled using a minimization problem. Hence the results depend heavily on the objective function used to describe the problem. Amongst the most influential approaches are the active contour model of Kass et al., the Mumford-Shah model and the Chan-Vese model.

The key idea of this thesis is to use the zero level set of a certain function to describe the boundary of the segmentation regions. Building on that foundation, a proper model that extends the ideas of already established models and at the same time carefully implements the implicit boundary description will be developed. The obtained minimization problem will be transformed into a convex saddle point problem. This structure finally allows the use of a primal-dual optimization algorithm for convex problems.
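
To make the implicit-boundary idea concrete, the following toy sketch evolves a level-set function using only the Chan-Vese data terms; the boundary-length regularizer and the primal-dual saddle-point solver described above are deliberately omitted, so this is an illustrative simplification rather than the thesis's algorithm.

```python
import numpy as np

def chan_vese_toy(img, iters=50, dt=0.5):
    """Two-phase piecewise-constant segmentation in the spirit of Chan-Vese.
    The region boundary is the zero level set of phi; only the data terms
    are used here (no curvature/length regularization)."""
    phi = img - img.mean()              # simple data-driven level-set init
    for _ in range(iters):
        inside = phi > 0
        c1 = img[inside].mean() if inside.any() else 0.0
        c2 = img[~inside].mean() if (~inside).any() else 0.0
        # raise phi where the pixel fits the inside mean c1 better
        phi += dt * (-(img - c1) ** 2 + (img - c2) ** 2)
    return phi > 0                      # zero level set separates the regions

# toy image: bright square on a dark background
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
mask = chan_vese_toy(img)
```

The boundary is never stored explicitly; it is recovered at any time as the zero crossing of phi, which is exactly the representational advantage the thesis builds on.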

Talk BA-Talk: Real-time retargeting of human skeletal structures

25.11.2013 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Torsten Thoben

Different animation and motion capture tools used in computer graphics and computer animation use different animation models to describe human motion. These models vary in detail and thus use skeletal models varying in complexity, such as joint count and degrees of freedom per joint. To be able to work with different human motion models, a proper translation of the motion between different skeletons has to be found. This is called motion retargeting.

The topic of this thesis is to develop a real-time retargeting module that translates motions between different skeletal models. Focusing on human skeletal models, the module should act as a black box, taking both skeletal model descriptions as input. As an example, the module should be able to translate between BVH and C3D, allowing motions to be transferred between two common skeletal hierarchy formats. While concentrating on these two hierarchy descriptions, the developed system must be extensible to other formats and provide a proper interface to add more human skeletal models.
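
The translation step can be sketched as a joint-name mapping between skeletons. All joint names below are hypothetical placeholders; a full retargeting module as described above would additionally handle differing joint counts, degrees of freedom per joint, and bone offsets.

```python
# Hypothetical joint-name mapping between two skeletal models; real BVH/C3D
# hierarchies would supply their own joint lists via the module's interface.
SOURCE_TO_TARGET = {
    "Hips": "pelvis",
    "LeftUpLeg": "l_thigh",
    "LeftLeg": "l_shin",
    "RightUpLeg": "r_thigh",
    "RightLeg": "r_shin",
}

def retarget_frame(source_rotations, joint_map, target_joints):
    """Copy per-joint rotations onto the target skeleton; target joints
    without a counterpart in the source keep a neutral pose."""
    inverse = {tgt: src for src, tgt in joint_map.items()}
    neutral = (0.0, 0.0, 0.0)
    return {
        joint: source_rotations.get(inverse.get(joint, ""), neutral)
        for joint in target_joints
    }

# one motion frame from the source skeleton (Euler angles in degrees)
frame = {"Hips": (0.0, 10.0, 0.0), "LeftUpLeg": (30.0, 0.0, 0.0)}
target = retarget_frame(frame, SOURCE_TO_TARGET, ["pelvis", "l_thigh", "head"])
```

Because the mapping is plain data, supporting a new skeleton format only requires supplying a new joint map, which matches the extensibility requirement stated above.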

Talk Geschichte des CGI vor der Erfindung des Computers

15.11.2013 15:00
Informatikzentrum, Hörsaal M 161

Speaker(s): Harald Klinke

Ray tracing is not an invention of the computer age. The idea of creating a two-dimensional image by tracing the path of light rays actually goes back to the early Renaissance (Alberti), and devices based on this very principle were already built 500 years ago (Dürer). The principles of the wireframe model, of shading and surface reflections, and of image creation (imaging) in general have always been part of artistic practice. Art and media history can therefore reveal long-term developments that allow the present to be assessed more accurately, identify megatrends from the historical study of visual media, and offer an outlook on the future.

In the literature, the history of computer graphics is usually introduced with the first applications on computers. Teleprinters, the MIT system Whirlwind, or Ivan Sutherland's Sketchpad thus stand at the center of these historical outlines. Yet the conceptual foundations of computer-generated imagery reach back to antiquity. Only with theories of light and vision could the visual be understood. This laid the groundwork for the development of central perspective in the early Renaissance. It was preceded by a notion of the image that defined a picture as the cross-section through the "visual pyramid" ("Sehpyramide"). This conception of the image, which runs through European cultural history from the Renaissance until the invention of photography, is based on nothing other than the principle we today call ray tracing. It is therefore no surprise that, building on this conception, devices were developed early on that produced a mechanical image of visible or imagined reality.

Talk Wie wir räumlich wahrnehmen, oder: Warum auch Piraten Auto fahren können

13.11.2013 19:00
Roter Saal im Schloss, Braunschweig

Speaker(s): Marcus Magnor

Public lecture - Akademievorlesung im Schloss

In nature, eyes usually come in pairs. However, only few animals afford themselves the luxury of looking in the same direction with both eyes in order to see the world in stereo. In many cases, a single view suffices to determine the spatial structure of our surroundings. The talk is about how we see spatially, with one eye and with two, why 3D cinema works (and sometimes doesn't), and why we will probably continue to enjoy watching conventional films.

Talk BA-Talk: Optimizing the Object-Median Split for Ray Tracing Applications

11.11.2013 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Jakob Garbe

Acceleration data structures are the most efficient way to speed up ray tracing. One of the most common acceleration data structures is the Bounding Volume Hierarchy (BVH). Its performance is highly dependent on the partitioning criterion used, the Surface Area Heuristic (SAH) being the most famous. Memory-saving approximations of the BVH, like the implicit object space partitioning scheme by Eisemann et al., however, require an object-median split during construction, which is known to be inferior to other schemes.

In this work, several partitioning approaches are developed and tested which try to improve the classic object-median split, e.g. by different projections or by sorting along a Morton curve.
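
A minimal sketch of the Morton-curve variant: quantize primitive centroids to a 1024³ grid, interleave their coordinate bits into 30-bit Morton codes (the standard bit-expansion trick), and take the object-median split as the midpoint of the sorted list. This is a generic illustration under these assumptions, not the thesis's implementation.

```python
import numpy as np

def expand_bits(v):
    """Insert two zero bits between each of the lower 10 bits of v."""
    v = (v * 0x00010001) & 0xFF0000FF
    v = (v * 0x00000101) & 0x0F00F00F
    v = (v * 0x00000011) & 0xC30C30C3
    v = (v * 0x00000005) & 0x49249249
    return v

def morton3(x, y, z):
    """30-bit Morton code for integer coordinates in [0, 1023]."""
    return (expand_bits(x) << 2) | (expand_bits(y) << 1) | expand_bits(z)

def median_split_order(centroids):
    """Sort primitives along the Morton curve; the object-median split
    then simply halves the sorted list."""
    lo, hi = centroids.min(0), centroids.max(0)
    q = ((centroids - lo) / np.maximum(hi - lo, 1e-12) * 1023).astype(np.int64)
    codes = [morton3(x, y, z) for x, y, z in q]
    order = np.argsort(codes, kind="stable")
    return order, len(order) // 2

centroids = np.array([[0.0, 0, 0], [1, 1, 1], [0.5, 0.5, 0.5], [0.1, 0.1, 0.1]])
order, mid = median_split_order(centroids)
```

Sorting along the space-filling curve keeps spatially close primitives adjacent in the list, so the naive halving produces more coherent subtrees than a split along a single axis.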

Talk BA-Talk: Shortened Shadow Rays

04.11.2013 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Leslie Wöhler

Shadow computation is a major task in a rendering pipeline today. Ray tracing can compute exact shadows by computing the visibility between two scene points. This is usually done by casting a shadow ray. The BA talk presents the concept of “shortened shadow rays”, a technique to speed up shadow computations in ray tracing systems.
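
The basic idea can be sketched as bounding the shadow ray's parameter range by the distance to the light, so a hit beyond the light source never counts as an occlusion; the thesis's specific shortening strategy may go further than this generic illustration.

```python
import math

def sphere_hit(origin, direction, center, radius, t_max):
    """True if the unit-direction ray hits the sphere at some t in (eps, t_max)."""
    oc = [o - c for o, c in zip(origin, center)]
    b = sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - c
    if disc < 0.0:
        return False
    t = -b - math.sqrt(disc)
    return 1e-4 < t < t_max            # the shortened interval: reject t >= t_max

def in_shadow(point, light, spheres):
    """Cast a shadow ray shortened to the light distance; any-hit suffices,
    so traversal can stop at the first occluder."""
    d = [l - p for l, p in zip(light, point)]
    dist = math.sqrt(sum(x * x for x in d))
    dirn = [x / dist for x in d]
    t_max = dist - 1e-4                # geometry behind the light is irrelevant
    return any(sphere_hit(point, dirn, c, r, t_max) for c, r in spheres)

shadowed = in_shadow((0.0, 0.0, 0.0), (0.0, 0.0, 10.0), [((0.0, 0.0, 5.0), 1.0)])
```

Because visibility, not the nearest hit, is the question, the shortened interval plus any-hit termination avoids most of the work of a full closest-hit traversal.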

Talk Image-Based Approaches for Photo-Realistic Rendering of Complex Objects

27.09.2013 14:00
Informatikzentrum, Seminarraum G30

Speaker(s): Anna Hilsmann

One principal aim in computer graphics is the achievement of photorealism. In this talk, I will propose methods for image-based rendering and modification of objects with complex appearance properties, concentrating on the example of clothes. With physical simulation methods, rendering of clothes is computationally demanding because of complex cloth drapery and shading. In contrast, the proposed methods use real images, which capture these properties and serve as appearance examples to guide complex animation or texture modification processes. Texture deformation and shading are extracted as image warps both in the spatial and in the intensity domain. Based on these warps, a pose-dependent image-based rendering method synthesizes new images of clothing from a database of pre-recorded images. For rendering, the images and warps are parameterized and interpolated in pose space, i.e. the space of body poses, using scattered data interpolation. To allow for appearance changes, an image-based retexturing method is proposed, which exchanges the cloth texture in an image while maintaining texture deformation and shading properties, without knowledge of the scene geometry and lighting conditions.

Altogether, the presented approaches shift computational complexity from the rendering to an a-priori training phase. The use of real images and warp-based extraction of deformation and shading allow a photo-realistic visualization and modification of clothes, including fine details, without computationally demanding simulation of the underlying scene and object properties.
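
The pose-space blending step can be sketched with simple inverse-distance weighting of the pre-recorded warps; the actual scattered-data interpolant used in the work may differ, and the one-dimensional pose and toy warp vectors below are illustrative assumptions.

```python
import numpy as np

def blend_warps(query_pose, example_poses, example_warps, p=2.0):
    """Scattered-data interpolation of pre-recorded warps in pose space
    using inverse-distance weighting."""
    d = np.linalg.norm(example_poses - query_pose, axis=1)
    if d.min() < 1e-12:                 # query coincides with an example
        return example_warps[d.argmin()]
    w = 1.0 / d ** p
    w /= w.sum()                        # normalized blending weights
    return np.tensordot(w, example_warps, axes=1)

poses = np.array([[0.0], [1.0]])            # 1-D pose parameter, two examples
warps = np.array([[0.0, 0.0], [2.0, 4.0]])  # toy per-example warp vectors
mid_warp = blend_warps(np.array([0.5]), poses, warps)
```

Nearby example poses dominate the blend, which is what lets the method reproduce pose-dependent wrinkles without any simulation at render time.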

Talk Promotions-Vor-Vortrag Benjamin Meyer

02.08.2013 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Benjamin Meyer

Talk MA-Vortrag: Detail Hallucinated Image Interpolation

13.06.2013 14:00
Informatikzentrum, Seminarraum G30

Speaker(s): Alexander Lerpe

Image interpolation and warping are the basis of most image-based rendering techniques, where real-world footage is used as input instead of complex 3D geometry. The difficulty is that visible artifacts appear in the interpolated images if correspondences between images are not correctly estimated beforehand. One simple solution would be to downsample the images until the errors disappear, but this is not practical, as most of the details disappear as well.

In this talk a new algorithm is presented that is orthogonal to classic image-interpolation algorithms in the sense that the correspondences are not corrected; instead, the input images are changed to match the correspondences, creating a more plausible interpolation result. The algorithm is very versatile: it can also be used to add details that prevent interpolation artifacts, or to add details to input images that were captured at a lower resolution or with accidentally wrong camera settings.

Talk Promotionsvortrag: Virtual Video Camera: a System for Free Viewpoint Video of Arbitrary Dynamic Scenes

05.06.2013 14:00
Informatikzentrum, IZ 161

Speaker(s): Christian Lipski

Talk BA-Vortrag: The Splitted Grid - An acceleration structure for ray tracing

08.04.2013 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Marc A. Kastner

Uniform grids are commonly employed acceleration structures for speeding up ray tracing. Unfortunately, the classic grid suffers from several problems, such as a high memory footprint, the inability to handle clustered geometry (the teapot-in-a-stadium problem) and costs that scale linearly with its resolution. However, its construction and traversal times are slightly faster compared to Bounding Volume Hierarchies (BVH) or other hierarchical approaches. Extensions of the uniform grid are hierarchical or nested grids, which try to overcome some of the known drawbacks.

This thesis provides a new acceleration data structure for ray tracing that combines the best of previous methods. It merges common approaches of hierarchical space-partitioning data structures with nested grids. The algorithm is introduced in a naive way which nests 1-dimensional grids in a hierarchy. Due to its structure, it can use techniques similar to the 3D-DDA traversal of uniform grids. Heuristic approaches to calculate parameters like resolution and hierarchy depth adaptively are evaluated. Furthermore, an additional technique to reduce the number of duplicate references by using a loose variant is presented. Both approaches are implemented on the CPU, evaluated in detail on common scenes and compared to other state-of-the-art acceleration data structures.
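
The 3D-DDA traversal that the nested structure reuses can be sketched as follows, for a uniform grid over the unit cube (an Amanatides-Woo-style illustration, not the thesis code):

```python
import math

def dda_cells(origin, direction, grid_res, t_exit):
    """Enumerate the cells a ray visits in a uniform grid over the unit cube,
    using incremental 3D-DDA stepping."""
    cell = [min(grid_res - 1, max(0, int(o * grid_res))) for o in origin]
    step, t_max, t_delta = [], [], []
    for i in range(3):
        if direction[i] > 0:
            step.append(1)
            t_max.append(((cell[i] + 1) / grid_res - origin[i]) / direction[i])
            t_delta.append(1.0 / (grid_res * direction[i]))
        elif direction[i] < 0:
            step.append(-1)
            t_max.append((cell[i] / grid_res - origin[i]) / direction[i])
            t_delta.append(-1.0 / (grid_res * direction[i]))
        else:                       # ray parallel to this axis: never cross it
            step.append(0); t_max.append(math.inf); t_delta.append(math.inf)
    visited = []
    while all(0 <= cell[i] < grid_res for i in range(3)):
        visited.append(tuple(cell))
        axis = min(range(3), key=lambda i: t_max[i])  # nearest cell boundary
        if t_max[axis] > t_exit:
            break
        cell[axis] += step[axis]
        t_max[axis] += t_delta[axis]
    return visited

cells = dda_cells((0.05, 0.05, 0.05), (1.0, 0.0, 0.0), 4, 10.0)
```

Each step is one comparison and one addition per axis, which is why grid traversal is cheap and why it transfers naturally to the nested 1-dimensional grids described above.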

Talk Bild - Wahrnehmung

05.04.2013 18:00
Kunstgeschichtliches Seminar und Sammlung, Universität Göttingen

Invited talk - Tagung Digitalbild "Euphorie und Angst"

Talk MA-Vortrag: Reconstructing dynamic models from video sequences

05.11.2012 13:45
Informatikzentrum, Seminarraum G30

Speaker(s): Andreas König

In computer graphics, a variety of methods is known that compute a 3D model of a scene from 2D images. These methods are able to reconstruct static scenes captured with synchronized cameras. There also exists a method that reconstructs the position as well as the motion of points in a dynamic scene. This approach simultaneously estimates depth, orientation and 3D motion of the scene from the input views. It computes a sparse set of patches from the input views which are iteratively expanded along the object surfaces. The result is a quasi-dense surface representation of the dynamic scene. However, the method only considers a small time window of frames from the video sequences. The main contribution of this thesis is to perform a consistent reconstruction of the whole video sequences of the scene. To this end, the approach is modified to consider the whole video sequences in the reconstruction process. Hereby, the geometry and motion of the scene are transferred to a model which represents the captured scene. Furthermore, methods to cope with occlusion and disocclusion of objects throughout the scene are presented. The dynamic model is later used for free-viewpoint rendering of the scene. In addition, a new patch expansion approach is presented to generate a denser scene representation. The proposed expansion is based on a domain transform filter for image and video processing. In conclusion, the approach is evaluated on synthetic and real-world scenes captured with consumer video cameras.

Talk BA-Vortrag: On-the-fly Displacement Mapping Algorithms and Architecture Consideration for Raytracing

05.11.2012 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Rashid Kratou

Rendering images with photorealistic quality is a major challenge in the field of computer graphics. To achieve visual richness of geometry, usually a large amount of geometric data needs to be stored and processed. An alternative is displacement mapping, where a low-polygon model is used to define the basic structure while a special texture, the displacement map, is used to add the geometric details. In contrast to techniques such as bump, normal or parallax mapping, actual geometry is created and thus tessellation of geometry is fundamental. Advantages of displacement mapping are the reduced memory footprint and its ability to be applied relative to a given viewpoint, allowing adaptive creation of geometry only in areas where additional geometry is actually needed to improve the visual result.
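
The core displacement operation can be sketched as offsetting vertices along their normals by heights sampled from the map. Real systems tessellate the base mesh first and filter the map; this minimal sketch omits both and uses nearest-neighbour sampling.

```python
import numpy as np

def displace(vertices, normals, uvs, height_map, scale=0.1):
    """Offset each vertex along its normal by the height sampled from the
    displacement map (nearest-neighbour sampling for brevity)."""
    h, w = height_map.shape
    us = np.clip(np.rint(uvs[:, 0] * (w - 1)).astype(int), 0, w - 1)
    vs = np.clip(np.rint(uvs[:, 1] * (h - 1)).astype(int), 0, h - 1)
    return vertices + scale * height_map[vs, us][:, None] * normals

# flat unit quad in the xy-plane, displaced along +z by a constant map
verts = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]])
norms = np.tile([0.0, 0.0, 1.0], (4, 1))
uvs = verts[:, :2]
hmap = np.ones((8, 8))
out = displace(verts, norms, uvs, hmap, scale=0.25)
```

Because the offsets produce actual geometry, a ray tracer must intersect the displaced surface, which is exactly what makes the acceleration-structure integration discussed next non-trivial.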

While commodity GPU architectures incorporate a tessellation unit in the rasterization pipeline, which allows a straightforward integration of displacement mapping techniques in real-time applications, the integration of on-the-fly tessellation into ray tracing systems is not straightforward. This is mainly due to the fact that these systems usually rely on acceleration structures for high performance. Previous algorithms that incorporate tessellation and displacement mapping into ray tracing systems exist, but none of them achieves interactive or real-time performance.

The main target of the thesis is to analyze previous algorithms regarding their bottlenecks and suitability for many-core architectures (e.g. GPUs).

Talk BA-Vortrag: Rekonstruktion von Kameraparametern aus asynchronen Aufnahmen dynamischer Szenen

29.10.2012 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Matthias Überheide

In computer graphics, various methods are known that use the images of multiple cameras to reconstruct the recorded scene. However, the usual methods must either discard moving scene elements or require synchronized cameras.

There exists a method for reconstructing dynamic scenes with unsynchronized cameras that does not need to discard scene elements. For the reconstruction, however, only a very short time window of a few frames from the input videos is used. For each reconstructed point, a motion is estimated in addition to its position and assumed to be linear within a small time window, so that intermediate positions can be computed.

In this thesis, the aforementioned method is extended so that it can be applied over the full length of the video. To this end, a scene model is developed that additionally models the motion of the scene and keeps motions and subsequent positions consistent. The motion is modeled implicitly through the subsequent positions. Furthermore, it is assumed that the motion of the points to be reconstructed can be represented within sufficiently small temporal neighborhoods by a suitably parameterizable function, for example a straight line. It is also shown that all parameters of a scene can be optimized with this model.

Finally, the method is evaluated on seven synthetic scenes.

Talk BA talk: Volume modeling and editing in Blender

01.10.2012 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Julius Saretzki

This thesis describes an approach to editing volumetric data, such as astronomical nebulae or other semi-transparent phenomena, using Blender. Traditional modeling works on surfaces and solids, but it does not perform well on semi-transparent datasets like nebulae. The problem is how to edit such phenomena in Blender. The presented approach transforms nebulae to solids and back: for editing, the Marching Cubes algorithm transforms the semi-transparent dataset into a surface representation which Blender is able to edit. After editing, the resulting data is transformed back into a voxel representation. The problem is that the surface does not contain all of the data. Therefore, the free spaces are approximated using the Diffusion Curves algorithm, which is extended to operate on 3D data as well.

Talk BA-Vortrag: "Perceptually realistic rendering of glare sources"

24.09.2012 09:15
Informatikzentrum, Seminarraum G30

Speaker(s): Sören Petersen

Nowadays virtually created scenes are required to look as realistic as possible. Computer-based illustration of light, especially in nighttime situations, is a challenging topic in computer graphics. In nighttime situations, glare effects occur in addition to the usual lighting and need to be applied manually. Based on the current state of research, the task of this bachelor thesis is to implement such a glare effect running at real-time frame rates.
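
A minimal bloom-style glare sketch: isolate pixels above a brightness threshold, blur them with a separable Gaussian, and add the result back onto the image. Perceptually realistic glare models involve diffraction patterns and eye-specific effects; this is only an illustrative approximation, not the thesis's method.

```python
import numpy as np

def gaussian_kernel(size=9, sigma=2.0):
    """Normalized 1-D Gaussian for separable blurring."""
    ax = np.arange(size) - size // 2
    k = np.exp(-ax ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def glare(img, threshold=0.8, strength=0.5):
    """Simple bloom: blur the bright pixels separably and add them back."""
    bright = np.where(img > threshold, img, 0.0)      # glare sources only
    k = gaussian_kernel()
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, k, mode="same"), 1, bright)   # horizontal
    blurred = np.apply_along_axis(
        lambda c: np.convolve(c, k, mode="same"), 0, blurred)  # vertical
    return np.clip(img + strength * blurred, 0.0, 1.0)

img = np.zeros((32, 32))
img[16, 16] = 1.0          # a single bright light source
out = glare(img)
```

The separable blur keeps the per-frame cost at two 1-D convolutions, which is what makes this family of effects feasible at real-time frame rates on the GPU.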

Talk BA-Vortrag Pattern- and Markerless RGB and Depth Sensor Calibration

21.09.2012 10:00
Informatikzentrum, Seminarraum G30

Speaker(s): Dennis Franke

With the help of Structure from Motion (SfM) algorithms, the camera motion and rough estimates of the scene geometry can be obtained from a video recording. Such approaches are an integral part of state-of-the-art commercial software. Recently, affordable depth sensors have become available on the consumer market, which estimate the spatial depth of each pixel. There have already been approaches to jointly calibrate RGB and depth sensors, but they require special patterns or markers. The goal of this bachelor thesis is to develop an approach that calibrates RGB and depth sensors without the help of a marker or pattern.

Talk Promotionsvortrag: Measuring, Modeling and Verification of Light-Matter-Interaction Phenomena

12.09.2012 09:30
Informatikzentrum, IZ 161

Speaker(s): Kai Berger

Talk 3D reconstruction - from science to art and back again

10.08.2012 11:00
Informatikzentrum, Seminarraum G30

Speaker(s): Rahul Nair

"So you want a CG T-Rex dancing around in this footage of your living room? Fine - give me some depth information and I can do that for you..." Visual effects are ubiquitous in the film and broadcast industry, even in productions where this is not directly apparent. The core element in combining vfx with shot footage is knowledge of scene geometry that is plausible in some sense. In computer vision, on the other hand, knowledge of metrically correct scene geometry solves the inverse problem created by the imaging system. This can then be used to quantitatively benchmark almost any class of computer vision algorithms.

In my talk I will present the various ongoing research activities in our group related to 3D reconstruction. The topics will range from "cheap" 3D reconstruction using sensor fusion systems to 3D conversion of monoscopic sequences. Additionally, some work on the evaluation of the obtained 3D scans will be shown in order to assess the scan quality. Furthermore, I will describe how we are applying these techniques to the post-production pipeline as well as to performance analysis.