Computer Graphics
TU Braunschweig

Events


Talk Methods for Analyzing the Influence of Molecular Dynamics on Neuronal Activity

26.06.2015 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Stefan Sokoll

Investigating the functioning of neurons at the molecular level is an important foundation for understanding how higher brain functions like perception, behavior, or learning and memory are accomplished. Since molecular processes occur in the nanometer range and have to be studied in living samples, recently developed optical super-resolution techniques have boosted their characterization. However, super-resolution techniques require complex instrumentation, are hardly applicable to organotypic samples and still suffer from relatively low temporal resolution. This talk presents new analysis tools that aim to overcome these limitations and make it possible to study how the dynamics and the interplay of molecules modulate synaptic transmission efficiency. At first, an approach for the detection of individual presynaptic activity will be briefly introduced, but the major part focuses on an algorithm that facilitates fast 3D molecular dynamics analyses within brain slices. It adjusts astigmatism-based 3D single-particle tracking (SPT) techniques to the depth-dependent optical aberrations induced by the refractive index mismatch so that they become applicable to complex samples. In contrast to existing techniques, the presented online calibration method determines the aberration directly from the acquired 2D image stream by exploiting the inherent particle movement and the redundancy introduced by the astigmatism. The method improves the positioning by reducing the systematic errors introduced by the aberrations and makes it possible to correctly derive the cellular morphology and molecular diffusion parameters in 3D independently of the imaging depth.
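
As background on the astigmatism-based localization such methods build on, the following minimal Python sketch shows how an axial position can be looked up from the measured widths of an elliptical PSF using a pre-recorded calibration curve. The calibration polynomials and parameter values are placeholders for illustration only and are not those of the presented method.

    import numpy as np

    # Hypothetical calibration curves: PSF widths (px) along x and y as a
    # function of depth z (nm), as obtained from a bead calibration stack.
    z_cal = np.linspace(-600.0, 600.0, 241)
    wx_cal = 1.3 * np.sqrt(1.0 + ((z_cal - 200.0) / 400.0) ** 2)  # width in x
    wy_cal = 1.3 * np.sqrt(1.0 + ((z_cal + 200.0) / 400.0) ** 2)  # width in y

    def z_from_widths(wx, wy):
        """Return the depth whose calibrated widths best match the measured ones."""
        cost = (np.sqrt(wx_cal) - np.sqrt(wx)) ** 2 + (np.sqrt(wy_cal) - np.sqrt(wy)) ** 2
        return z_cal[np.argmin(cost)]

    # Example: a particle above the focal plane appears elongated in y.
    print(z_from_widths(wx=1.35, wy=1.75))

The depth-dependent aberrations discussed in the talk would shift these calibration curves with imaging depth, which is exactly what the presented online calibration compensates for.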

Talk Promotions-V-Vg: Real-World Video Processing Using Unstructured Scene Representations

26.06.2015 10:00
Informatikzentrum, Seminarraum G30

Speaker(s): Felix Klose

When processing single or multi-view video data recorded in uncontrolled environments with scene reconstruction algorithms, a multitude of factors can negatively influence the result quality. These factors include camera, lens or color miscalibrations, errors in temporal or spatial camera alignment, unsynchronized and rolling shutters on the camera side, as well as specular, untextured, repetitive objects or objects with visually complex appearances inside the scene. These circumstances make working with computer vision algorithms on real-world data a very challenging task; measurement errors in real-world recording setups cannot be avoided and have to be accounted for.

In this talk I will give an overview of my work in single and multi-view video processing of real-world data using unstructured scene representations. I show how dense 2D correspondence-based stereoscopic free-viewpoint video can be created, using tools for user-guided error correction; how the complexity of real-world multi-view data can be handled by tracking small surface patches and using a strict motion model to resolve ambiguities and create quasi-dense scene representations; and finally how to create high-quality video effects that can handle extreme amounts of noise in estimated depth maps by leveraging the redundancy inherent in video data.

Talk MA-Talk: Compressed Sensing-based Progressive Reconstruction for Image Synthesis

11.05.2015 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Cong Wang

Compressed Sensing (CS) is a new mathematical framework for the reconstruction of signals with missing information. Recently, its application to sparse image reconstruction, the reconstruction of an image from a small set of known pixels, has shown promising results. The key idea is derived from the fact that most natural images are highly compressible because they are sparse in a transform domain. This leads to the obvious question: Why waste resources on evaluating information (here, individual pixels) that is discarded later on or has only a small impact on the overall visual impression? So far, the measurements (evaluated pixels of the image) are chosen in a random fashion (usually based on a blue-noise distribution) to uniformly cover the image domain. Theoretically, if salient features of the image were known in advance, fewer measurements would be needed for a high-quality reconstruction. For real-world images taken by a photo or video camera it is very hard to evaluate important features of the image without actually capturing them. However, during image synthesis much more knowledge about the scene, camera and lighting situation is available. If carefully observed, the rendering process can potentially provide useful cues which are more efficient to evaluate than the actual measurements, can guide the image sampling process, and thus accelerate convergence.
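
To make the sparse-reconstruction setting concrete, the sketch below reconstructs an image from a small random subset of its pixels by alternating between enforcing sparsity in the DCT domain and consistency with the known pixels. This is a generic illustration of CS-style image reconstruction, not the progressive scheme proposed in the thesis; the sampling rate and threshold schedule are arbitrary.

    import numpy as np
    from scipy.fft import dctn, idctn

    def cs_inpaint(img, mask, iters=200, keep=0.10):
        """Fill in unknown pixels (mask == False) by iterative DCT thresholding."""
        x = np.where(mask, img, img[mask].mean())   # initialize unknowns with the mean
        for _ in range(iters):
            c = dctn(x, norm='ortho')
            thresh = np.quantile(np.abs(c), 1.0 - keep)
            c[np.abs(c) < thresh] = 0.0             # keep only the largest coefficients
            x = idctn(c, norm='ortho')
            x[mask] = img[mask]                     # re-impose the known measurements
        return x

    # toy example: sample 20% of a smooth test image
    h, w = 64, 64
    yy, xx = np.mgrid[0:h, 0:w]
    img = np.sin(xx / 7.0) + np.cos(yy / 11.0)
    mask = np.random.rand(h, w) < 0.2
    print(np.abs(cs_inpaint(img, mask) - img).mean())

The cues mentioned in the abstract would replace the purely random mask with a sampling pattern concentrated on salient image regions.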

Talk BA Talk: Depth-Based Viewpoint Generation for Interactive Videos in Immersive VR Systems

27.04.2015 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Inga Menke

Virtual reality head-mounted displays allow the viewing of images and videos to move from the traditional window metaphor to a full surround view for the observer. The combination of panoramic video and a virtual reality display with head tracking creates the prerequisites for a freely chosen viewing direction onto the content. The Bachelor thesis presented in this talk considers the next logical step for the development of immersive media content: a method for freely choosing the observer's position within the displayed scene. More precisely, the observer is enabled to perceive motion parallax within the video when the head position changes. This possibility of movement increases the degree of immersion in the video.

Talk Promotions-V-Vg: Electroencephalographics: A Novel Modality for Graphics Research

24.04.2015 11:00
Informatikzentrum, Seminarraum G30

Speaker(s): Maryam Mustafa

In this thesis I present the application of ElectroEncephaloGraphy (EEG) as a novel modality for investigating perceptual graphics problems.

Until recently, EEG has predominantly been used for clinical diagnosis, in psychology and by the BCI community. Here I extend its scope to assist in understanding the perception of visual output from graphics applications and to create new methods based on direct neural feedback.

My work uses EEG data to determine the perceptual quality of videos and images, which is of paramount importance for most graphics algorithms. This is especially important given the gap between the perceived quality of an image and its physical accuracy.

One of the main impediments to the use of EEG is its very low Signal-to-Noise Ratio (SNR), which requires averaging the data from many trials and participants to get a meaningful result. I propose a novel method for evaluating EEG signals which allows the prediction of perceived image quality from only a single trial.

This thesis also explores the possibilities for automatic optimization of rendering parameters for images and videos based on implicit neural feedback.

Talk User-guided Image Pre-Segmentation

17.04.2015 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Ying Wang

This is the presentation of a specialisation project done at the Institut für Computergraphik. The student shows a method for computing a locally linear image structure which is then used to propagate user input, given in the form of brush strokes, across the image at hand. The talk includes some aspects of the numerical computation done in MATLAB and presents a variety of results.

Talk Promotions-V-Vg: Interactive Scene Reconstruction and Image Correspondence Estimation

10.04.2015 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Kai Ruhl

High-quality dense correspondence maps between images and scene reconstruction, be it optical flow, stereo or scene flow, are an essential prerequisite for a multitude of computer vision and graphics tasks, e.g. scene editing or view interpolation in visual media production. Due to the ill-posed nature of the estimation problem in typical setups (i.e. a limited number of cameras and a limited frame rate), automated estimation approaches are prone to erroneous correspondences and subsequent quality degradation in many non-trivial cases such as occlusions, ambiguous movements, large displacements, or low texture. While improving estimation algorithms is one possible direction, this thesis complementarily concerns itself with minimal user interactions that lead to improved correspondence maps and scene reconstruction. Where visually convincing results are essential, rendering artifacts resulting from estimation errors are usually repaired by hand with image editing tools, which is time-consuming and therefore costly. New forms of user interaction that integrate human scene recognition capabilities to guide a semi-automatic correspondence or scene reconstruction algorithm have the potential to save considerable effort, enabling faster and more efficient production of visually convincing rendered footage.

Talk Promotions-V-Vg: Advanced Denoising and Memory-efficient Acceleration for Realistic Image Synthesis

06.03.2015 11:00
Informatikzentrum, Seminarraum G30

Speaker(s): Pablo Bauszat

Stochastic ray tracing methods have become the industry standard for today's realistic image synthesis thanks to their ability to achieve a high degree of realism by physically simulating various natural phenomena of light and cameras (e.g. global illumination, depth-of-field or motion blur). Unfortunately, high computational costs for more complex scenes and image noise from insufficient sampling are major issues of these methods; hence, acceleration and denoising are key components in stochastic ray tracing systems. In this thesis, we introduce three new filtering methods for advanced lighting and camera effects, as well as two new concepts for memory-efficient acceleration structures. In particular, we present a filter for global illumination aiming at real-time performance, an interactive filter for global illumination in the presence of depth-of-field, and a general and robust adaptive reconstruction framework for high-quality images with arbitrary rendering effects. To address complex scene geometry, we propose an extension to the classic Bounding Volume Hierarchy which reduces its footprint down to 1 bit per node, and a new concept which models the acceleration structure completely implicitly, i.e. without any additional memory cost at all, while maintaining interactive performance. Our contributions advance the state of the art of denoising techniques for realistic image synthesis as well as the field of memory-efficient acceleration for ray tracing systems.
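
As an illustration of the kind of guided filtering such denoising typically builds on (not the specific filters proposed in the thesis), the following sketch shows a simple cross-bilateral filter that smooths a noisy global-illumination image while using a noise-free feature image (e.g. normals or albedo from the G-buffer) to preserve edges. Window size and bandwidths are arbitrary placeholders.

    import numpy as np

    def cross_bilateral(noisy, guide, radius=3, sigma_s=2.0, sigma_g=0.1):
        """Smooth `noisy` with weights from spatial distance and the `guide` image."""
        h, w = noisy.shape
        out = np.zeros_like(noisy)
        n = np.pad(noisy, radius, mode='edge')
        g = np.pad(guide, radius, mode='edge')
        ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        w_s = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))   # spatial weights
        for y in range(h):
            for x in range(w):
                win_n = n[y:y + 2*radius + 1, x:x + 2*radius + 1]
                win_g = g[y:y + 2*radius + 1, x:x + 2*radius + 1]
                w_g = np.exp(-((win_g - guide[y, x])**2) / (2.0 * sigma_g**2))
                wgt = w_s * w_g                                # edge-stopping weights
                out[y, x] = (wgt * win_n).sum() / wgt.sum()
        return out

The filters presented in the thesis generalize this idea to handle noisy guidance, depth-of-field and arbitrary rendering effects at interactive rates.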

Talk MA-Talk: Compressed Sensing and Sparse Coding for Depth and RGB-D Images

12.02.2015 14:00
Informatikzentrum, Seminarraum G30

Speaker(s): Emmy-Charlotte Förster

In this talk, new methods for the compression of natural images and depth maps using compressed sensing and sparse coding are presented. Sarkis and Diepold recently presented an approach for the compression of depth maps using compressed sensing. We expand upon this approach by using sparse coding, and enhance the depth map compression by adding the available RGB information. By modifying the underlying optimization problem of compressed sensing, we are able to further enhance the depth maps of compressed RGB-D images. We create our dictionaries and evaluate our results using both synthetic and natural data sets, captured with a light-field camera.
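
For readers unfamiliar with sparse coding, the sketch below encodes an image patch as a sparse combination of dictionary atoms using generic orthogonal matching pursuit. The dictionary here is a random placeholder rather than one learned as in the thesis, and the RGB guidance is not modeled.

    import numpy as np

    def omp(D, y, n_nonzero=8):
        """Greedy sparse coding: approximate y as D @ x with few nonzero entries."""
        residual, support = y.copy(), []
        x = np.zeros(D.shape[1])
        for _ in range(n_nonzero):
            support.append(int(np.argmax(np.abs(D.T @ residual))))  # best-matching atom
            Ds = D[:, support]
            coef, *_ = np.linalg.lstsq(Ds, y, rcond=None)            # refit on support
            residual = y - Ds @ coef
        x[support] = coef
        return x

    # toy example: encode an 8x8 patch with a random (placeholder) dictionary
    rng = np.random.default_rng(0)
    D = rng.standard_normal((64, 256))
    D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
    patch = rng.standard_normal(64)
    x = omp(D, patch)
    print(np.count_nonzero(x), np.linalg.norm(patch - D @ x))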

Talk Participating Media - Fast Rendering and Artistic Stylization

29.01.2015 14:00
Informatikzentrum, Seminarraum G30

Speaker(s): Oliver Klehm

Talk MA-Talk: Space-time reconstruction of very fast fluid dynamic processes

10.12.2014 11:00
Informatikzentrum, Seminarraum G30

Speaker(s): Matthias Überheide

In this talk, a method is presented for reconstructing very fast fluid dynamic processes in space and time from a single camera view. The underlying physical relations are used to resolve the ambiguity of the problem. The captured images are used to guide a fluid simulation, resulting in an animated 3D volume of the captured effect.

An optimization problem is formulated and the adjoint method is applied to allow the computation of the gradient in reasonable time.
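
For reference, the adjoint trick for a simulation-constrained objective can be summarized as follows (generic form, not the specific fluid formulation of the thesis):

    \min_{u}\; J\bigl(q(u), u\bigr) \quad \text{s.t.} \quad F\bigl(q(u), u\bigr) = 0,
    \qquad
    \Bigl(\tfrac{\partial F}{\partial q}\Bigr)^{\!\top} \lambda \;=\; -\Bigl(\tfrac{\partial J}{\partial q}\Bigr)^{\!\top},
    \qquad
    \frac{\mathrm{d}J}{\mathrm{d}u} \;=\; \frac{\partial J}{\partial u} \;+\; \lambda^{\top}\,\frac{\partial F}{\partial u}

Here u are the control variables (the simulation inputs), q the simulated state, F the simulation constraint and λ the adjoint variable; solving one additional adjoint system yields the full gradient at roughly the cost of one extra simulation, independent of the number of controls.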

The difficulties inherent in this optimization problem are shown with multiple artificial and real data test cases. Possible approaches to the individual difficulties are analyzed.

Talk BA-Talk: Natural Eye Adaptation for Real-Time HDR Applications

28.11.2014 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Andreas Bauerfeld

In the Bachelor thesis of Andreas Bauerfeld, a perception model capable of performing a natural rendering of high dynamic range scenes is developed. It is real-time capable and simulates the most important properties of the human visual system. Furthermore, and in contrast to the often used assumption of fully adapted photoreceptors, a continuous state of maladaptation is considered, resulting in a more precise prediction of contrast perception. In addition, impression-tarnishing effects are not suppressed - unlike in photo reproduction methods - but rather enhanced and physiologically accurately rendered. These components provide a realistic impression of the rendered high dynamic range environment and consider individual observer properties as well as setup- and display-device-related constraints.
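
As a minimal illustration of temporal (mal)adaptation in real-time HDR rendering, the sketch below uses a generic exponential-adaptation scheme; it is not the specific perception model of the thesis, and the time constant is a placeholder (rods and cones adapt at different rates).

    import math

    def update_adaptation(l_adapt, l_scene, dt, tau=0.5):
        """Move the adaptation luminance towards the current scene luminance.

        l_adapt: currently adapted luminance (cd/m^2)
        l_scene: luminance the eye is exposed to in this frame
        dt:      frame time in seconds
        tau:     adaptation time constant (placeholder)
        """
        return l_adapt + (l_scene - l_adapt) * (1.0 - math.exp(-dt / tau))

    # While the eye is not fully adapted, exposure is driven by l_adapt, not l_scene.
    l_adapt = 100.0                    # adapted to an indoor scene
    for frame in range(10):            # observer steps outside (10,000 cd/m^2)
        l_adapt = update_adaptation(l_adapt, 10000.0, dt=1.0 / 60.0)
    print(l_adapt)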

Talk Industrial Visual Computing - metaio GmbH

11.07.2014 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Christian Lipski

ICG alumnus and PhD graduate Dr. Christian Lipski talks about what it is like to work at metaio GmbH.

Talk Disputation: Regularized Optimization Methods for Reconstruction and Modeling in Computer Graphics

17.06.2014 10:00
Informatikzentrum, Seminarraum 105

Speaker(s): Stephan Wenger

The field of computer graphics deals with virtual representations of the real world. These can be obtained either through reconstruction of a model from measurements, or by directly modeling a virtual object, often based on a real-world example. The former is often formalized as a regularized optimization problem, in which a data term ensures consistency between model and data and a regularization term promotes solutions that have a high a priori probability.
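
In its generic form, such a regularized reconstruction problem reads (a sketch, with A, b, R and λ standing in for the problem-specific measurement operator, measurements, regularizer and trade-off weight):

    \hat{x} \;=\; \operatorname*{arg\,min}_{x}\;\; \underbrace{\tfrac{1}{2}\,\lVert A x - b \rVert_2^2}_{\text{data term}} \;+\; \lambda\, \underbrace{R(x)}_{\text{regularizer}}

Typical choices are R(x) = \lVert x \rVert_1 to promote sparse solutions or a total-variation term to promote piecewise-smooth ones.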

In this dissertation, different reconstruction problems in computer graphics are shown to be instances of a common class of optimization problems which can be solved using a uniform algorithmic framework. Moreover, it is shown that similar optimization methods can also be used to solve data-based modeling problems, where the amount of information that can be obtained from measurements is insufficient for accurate reconstruction.

As real-world examples of reconstruction problems, sparsity and group sparsity methods are presented for radio interferometric image reconstruction in static and time-dependent settings. As a modeling example, analogous approaches are investigated to automatically create volumetric models of astronomical nebulae from single images based on symmetry assumptions.

Talk Disputation: Visual Analysis of High-Dimensional Spaces

09.05.2014 10:00
Informatikzentrum, Seminarraum 105

Speaker(s): Georgia Albuquerque

The visual exploration and analysis of high-dimensional data sets commonly requires projecting the data into lower-dimensional representations. The number of possible representations grows rapidly with the number of dimensions, and manual exploration quickly becomes ineffective or even infeasible. In this thesis I present automatic algorithms to compute visual quality metrics and show different situations where they can be used to support the analysis of high-dimensional data sets. The proposed methods can be applied to different specific user tasks and can be combined with established visualization techniques to sort or select projections of the data based on their information-bearing content. These approaches can effectively ease the task of finding truly useful visualizations and potentially speed up the data exploration task. Additionally, I present a framework designed to generate synthetic data for evaluation by users, who can interactively navigate through high-dimensional data sets.

Talk Promotions-Vor-Vortrag: Augmenting People in Monocular Video

28.04.2014 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Lorenz Rogge

Promotions-Vor-Vortrag: Aiming at realistic video augmentation, i.e. the embedding of virtual, three-dimensional objects into a scene's original content, a series of challenging problems has to be solved. This is especially the case when working with solely monocular input material, as important additional 3D information is missing and has to be recovered during the process where necessary.

In this talk, I will present a semi-automatic strategy to tackle this task by providing solutions to individual problems in the context of virtual clothing as an example of realistic video augmentation. Starting with two different approaches for monocular pose and motion estimation, I continue to build up a 3D human body model by estimating detailed shape information as well as basic surface material properties. Using this information allows a dynamic illumination model to be extracted from the provided input material. This illumination model is particularly important for rendering a realistic virtual object and adds a lot of realism to the final video augmentation. The animated human model is able to interact with virtual 3D objects and is used in the context of virtual clothing to animate a simulated garment. To achieve the desired realism, I present an additional image-based compositing approach to realistically embed the simulated garment into the original scene content. Combining the presented approaches provides an integrated strategy for the realistic augmentation of actors in monocular video sequences.

Talk Regularized Optimization Methods for Reconstruction and Modeling in Computer Graphics

07.03.2014 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Stephan Wenger

Promotions-Vor-Vortrag:

The field of computer graphics deals with virtual representations of the real world. They can be obtained either through reconstruction of a model from measurements, or by directly modeling a virtual object, often on a real-world example.

Reconstruction from measurements is often formalized as a regularized optimization problem, in which a data term ensures consistency between model and data and a regularization term promotes solutions that have high a priori probability, a popular recent example being compressed sensing. In this thesis, different regularized optimization techniques are applied to reconstruction problems in computer graphics, namely the calculation of images from interferometric measurements.

Moreover, it is shown that similar optimization methods can also be used to solve data-based modeling problems, where the amount of information that can be obtained from measurements is insufficient for an accurate reconstruction. By formalizing a priori knowledge about the object through the appropriate choice of a regularizer, plausible models can be generated in a largely automatic process with minimal user interaction. Two such techniques are demonstrated on the example of symmetry-based modeling of astronomical nebulae.

Talk BA-Talk: Implicit Image Segmentation Using Minimal Surfaces

25.11.2013 13:30
Informatikzentrum, Seminarraum G30

Speaker(s): Marc Kassubeck

Image segmentation is one of the main research topics in computer graphics and computer vision. The main goal is splitting a given image up into multiple parts according to some – usually visual – ground rules. A proper mathematical treatment therefore needs an efficient way to describe these image segments and a model to implement those ground rules. Probably the most common way of describing the image segments is using binary functions. Less common is an explicit description of the boundary of those regions. But regardless of whether the representation focuses on the boundary or not, the desired properties of the segmentation are usually modeled using a minimization problem. Hence the results depend heavily on the objective function used to describe the problem. Amongst the most influential approaches are the active contour model of Kass et al., the Mumford-Shah model and the Chan-Vese model.

The key idea of this thesis is to use the zero level set of a certain function to describe the boundary of the segmentation regions. Building on that foundation, a proper model that extends the ideas of already established models and at the same time carefully implements the implicit boundary description will be developed. The obtained minimization problem will be transformed into a convex saddle-point problem. This structure finally allows the use of a primal-dual optimization algorithm for convex problems.
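
The convex saddle-point structure mentioned above has the generic form (a sketch of the standard primal-dual setting, with K a linear operator and F*, G convex; the concrete choices in the thesis differ):

    \min_{x}\,\max_{y}\;\; \langle K x,\, y\rangle \;+\; G(x) \;-\; F^{*}(y)

which can be solved by alternating proximal steps on the primal and dual variables, e.g. the iterations

    y^{k+1} = \operatorname{prox}_{\sigma F^{*}}\!\bigl(y^{k} + \sigma K \bar{x}^{k}\bigr),\qquad
    x^{k+1} = \operatorname{prox}_{\tau G}\!\bigl(x^{k} - \tau K^{\top} y^{k+1}\bigr),\qquad
    \bar{x}^{k+1} = x^{k+1} + \theta\,\bigl(x^{k+1} - x^{k}\bigr)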

Talk BA-Talk: Real-time retargeting of human skeletal structures

25.11.2013 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Torsten Thoben

Different animation and motion capture tools used in computer graphics and computer animation use different animation models to describe human motion. These models vary in detail and thus use skeletal models of varying complexity, such as joint count and degrees of freedom per joint. To be able to work with different human motion models, a proper translation of the motion between different skeletons has to be found. This is called motion retargeting.

The topic of this thesis is to develop a real-time retargeting module that is able to translate motions between different skeletal models. Focusing on human skeletal models, this module should act as a black box, given both skeletal model descriptions as input. As an example, the module should be able to translate between BVH and C3D, allowing motions to be transferred between two common skeletal hierarchy formats. While concentrating on these two hierarchy descriptions, the developed system must be extensible to other formats and provide a proper interface for adding more human skeletal models.
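
One possible shape of such a black-box interface, sketched in Python with hypothetical names (the BVH and C3D loaders are assumed to exist elsewhere and are not part of this illustration, and the rotation copy is only a stand-in for a full offset/DoF adjustment):

    from dataclasses import dataclass

    @dataclass
    class Skeleton:
        joint_names: list       # joints in hierarchy order
        parents: list           # parent index per joint (-1 for the root)

    class Retargeter:
        """Maps poses from a source skeleton onto a target skeleton."""

        def __init__(self, source: Skeleton, target: Skeleton, joint_map: dict):
            # joint_map: source joint name -> target joint name (user supplied)
            self.source, self.target, self.joint_map = source, target, joint_map

        def retarget(self, pose: dict) -> dict:
            """pose: source joint name -> local rotation (e.g. quaternion tuple)."""
            out = {}
            for src_name, rotation in pose.items():
                tgt_name = self.joint_map.get(src_name)
                if tgt_name is not None:          # unmapped joints are simply dropped
                    out[tgt_name] = rotation      # real systems also adjust offsets/DoF
            return out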

Talk The History of CGI before the Invention of the Computer

15.11.2013 15:00
Informatikzentrum, Hörsaal M 161

Speaker(s): Harald Klinke

Ray tracing is not an invention of the computer age. The idea of generating a two-dimensional image by tracing the path of light rays in fact goes back to the early Renaissance (Alberti), and apparatuses based on this principle were developed as early as 500 years ago (Dürer). The principles of the wireframe model, of shading and surface reflections, and of image generation (imaging) in general have always been part of artistic inquiry. Art and media history can therefore reveal long-term developments that allow the present to be assessed more accurately, identify megatrends from a historical view of visual media, and offer an outlook on the future.

In the literature, the history of computer graphics is usually introduced with the first applications on computers; teleprinters, the MIT Whirlwind system or Ivan Sutherland's Sketchpad stand at the center of these historical overviews. Yet the conceptual foundations of computer-generated imagery reach back to antiquity. Only with theories of light and vision could the visual be understood, which laid the groundwork for the development of central perspective in the early Renaissance. This was preceded by a concept of the image that defined a picture as the intersection of the "visual pyramid". This conception of the image, which runs through European cultural history from the Renaissance to the invention of photography, is based on nothing other than the principle we today call ray tracing. It is therefore no surprise that, on the basis of this conception, apparatuses were developed early on that produced a mechanical image of visible or imagined reality.

Talk BA-Talk: Optimizing the Object-Median Split for Ray Tracing Applications

11.11.2013 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Jakob Garbe

Acceleration data structures are the most efficient way to speed up ray tracing. One of the most common acceleration data structures is the Bounding Volume Hierarchy (BVH). Its performance is highly dependent on the partitioning criterion used during construction, with the Surface Area Heuristic being the most prominent. Memory-saving approximations of the BVH, like the implicit object space partitioning scheme by Eisemann et al., however, require an object-median split during construction, which is known to be inferior to other schemes.

In this work, several partitioning approaches are developed and tested which try to improve the classic object-median split, e.g. by using different projections or by sorting along a Morton curve.
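
For illustration, sorting primitives along a Morton curve amounts to interleaving the bits of their quantized centroid coordinates; the sketch below shows the standard 30-bit construction, not the specific variants evaluated in the thesis.

    import numpy as np

    def expand_bits(v):
        """Spread the lowest 10 bits of v so that two zero bits follow each bit."""
        v = (v | (v << 16)) & 0x030000FF
        v = (v | (v << 8)) & 0x0300F00F
        v = (v | (v << 4)) & 0x030C30C3
        v = (v | (v << 2)) & 0x09249249
        return v

    def morton3d(x, y, z):
        """30-bit Morton code for coordinates in [0, 1)."""
        xi, yi, zi = (min(max(int(c * 1024.0), 0), 1023) for c in (x, y, z))
        return (expand_bits(xi) << 2) | (expand_bits(yi) << 1) | expand_bits(zi)

    # sort primitive centroids (normalized to the scene bounding box) along the curve
    centroids = np.random.rand(8, 3)
    order = sorted(range(len(centroids)), key=lambda i: morton3d(*centroids[i]))
    print(order)

Splitting the sorted list at its median then yields an object-median partition whose spatial coherence depends on the chosen curve.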

Talk BA-Talk: Shortened Shadow Rays

04.11.2013 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Leslie Wöhler

Shadow computation is a major task in a rendering pipeline today. Ray tracing can compute exact shadows by evaluating the visibility between two scene points. This is usually done by casting a shadow ray. The BA talk gives information about the concept of “shortened shadow rays”, a technique to speed up shadow computations for ray tracing systems.
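
A shadow ray only has to answer a yes/no occlusion query between a shading point and a light sample. The sketch below shows the usual formulation with an epsilon-shortened interval; it is a generic illustration, not the specific shortening technique presented in the talk, and the scene interface (scene.any_hit) is an assumed placeholder.

    import numpy as np

    def in_shadow(scene, p, light_pos, eps=1e-4):
        """True if any occluder lies between shading point p and the light sample."""
        d = light_pos - p
        dist = np.linalg.norm(d)
        direction = d / dist
        # restrict the valid interval to avoid self-intersection at both endpoints;
        # `scene.any_hit` is assumed to test for any occlusion with t in [t_min, t_max]
        return scene.any_hit(origin=p, direction=direction,
                             t_min=eps, t_max=dist - eps)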

Talk Image-Based Approaches for Photo-Realistic Rendering of Complex Objects

27.09.2013 14:00
Informatikzentrum, Seminarraum G30

Speaker(s): Anna Hilsmann

One principal aim in computer graphics is the achievement of photorealism. In this talk, I will propose methods for image-based rendering and modification of objects with complex appearance properties, concentrating on the example of clothes. With physical simulation methods, rendering of clothes is computationally demanding because of complex cloth drapery and shading. In contrast, the proposed methods use real images, which capture these properties and serve as appearance examples to guide complex animation or texture modification processes. Texture deformation and shading are extracted as image warps in both the spatial and the intensity domain. Based on these warps, a pose-dependent image-based rendering method synthesizes new images of clothing from a database of pre-recorded images. For rendering, the images and warps are parameterized and interpolated in pose space, i.e. the space of body poses, using scattered data interpolation. To allow for appearance changes, an image-based retexturing method is proposed, which exchanges the cloth texture in an image while maintaining texture deformation and shading properties, without knowledge of the scene geometry and lighting conditions.
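
The pose-space interpolation step can be illustrated with simple normalized Gaussian kernel weights over the recorded poses; this is a simplified stand-in for the scattered data interpolation used in the talk, with placeholder bandwidth and data.

    import numpy as np

    def blend_weights(query_pose, sample_poses, bandwidth=1.0):
        """Blend weights for the recorded samples, given a query body pose.

        query_pose:   pose descriptor, e.g. concatenated joint angles, shape (d,)
        sample_poses: descriptors of the pre-recorded images, shape (n, d)
        """
        d2 = np.sum((sample_poses - query_pose) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * bandwidth ** 2))
        return w / w.sum()

    # the synthesized image/warp is then a weighted combination of the samples
    rng = np.random.default_rng(1)
    samples = rng.standard_normal((5, 12))          # 5 recorded poses, 12-DoF descriptor
    weights = blend_weights(samples[2] + 0.1, samples)
    print(weights.round(3))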

Altogether, the presented approaches shift computational complexity from the rendering to an a-priori training phase. The use of real images and warp-based extraction of deformation and shading allow a photo-realistic visualization and modification of clothes, including fine details, without computationally demanding simulation of the underlying scene and object properties.

Talk Promotions-Vor-Vortrag Benjamin Meyer

02.08.2013 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Benjamin Meyer

Talk MA-Vortrag: Detail Hallucinated Image Interpolation

13.06.2013 14:00
Informatikzentrum, Seminarraum G30

Speaker(s): Alexander Lerpe

Image interpolation and warping are the basis of most image-based rendering techniques, where real-world footage is used as input instead of complex 3D geometry. The difficulty is that visible artifacts appear in the interpolated images if the correspondences between the input images have not been correctly estimated beforehand. One simple solution would be to downsample the images until the errors disappear, but this solution is not practical as most of the details disappear as well.

In this talk, a new algorithm is presented that is orthogonal to classic image-interpolation algorithms in the sense that it does not correct the correspondences but instead changes the input images to match the correspondences, creating a more plausible interpolation result. The algorithm is very versatile: it can be used to add detail in order to prevent interpolation artifacts, or to enhance input images that were captured at a lower resolution or with accidentally wrong camera settings during recording.