Computer Graphics
TU Braunschweig


Talk MA-Talk: Interactive Realtime Image Segmentation

29.04.2016 11:00
Informatikzentrum, Seminarraum G30

Speaker(s): Moritz Mühlhausen

Talk New Technologies driving Visual Computing Research

27.04.2016 14:30
Hochschule Bonn-Rhein-Sieg

Speaker(s): Marcus Magnor

Recent developments in consumer electronics have a profound impact even on fundamental research agendas and conference programs in visual computing. Programmable GPUs, 3D movies, Kinect, HDR displays, 4k video projectors, Oculus Rift, or all-in-one smartphones are just a few examples of how sudden, widespread availability and adoption of “new” technologies drive contemporary research (even though most of it had, in fact, already been available in the lab for quite some time). In my talk, I will concentrate on a few ongoing consumer technology trends and demonstrate how they are triggering intriguing new research in visual computing.

Talk MA-Talk: Globally Weighted 3D Reconstruction with Accurate Visibility Computation

25.04.2016 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Marc Kassubeck

Talk Image-based Methods for Measuring and Modeling Flow Phenomena of Gases and Liquids

14.04.2016 10:00
Honolulu, USA

Speaker(s): Marcus Magnor

Invited plenary talk at ISIMet 2016.

Talk Teamprojekt-Abschluss: Real-time Augmented Reality Hologram

12.02.2016 14:15
Informatikzentrum, Seminarraum G30

Presentation of the results of the student team project.

Talk Context-aware dynamic Sensor Fusion

28.09.2015 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Thiemo Alldieck

Sensor fusion aims to compensate for the individual strengths and weaknesses of different sensors. When recording a complex scene over a longer period, these characteristics may change, possibly destabilizing the system. The presented work introduces a new dynamic method for fusing data from different imaging devices, namely an RGB and a thermal camera. To this end, the background-conformity values of the two image sources are fused in order to enable stable background subtraction for persistent surveillance. Image-quality heuristics based on image characteristics and contextual information are specified to evaluate the usefulness of each modality and to perform the fusion in a context-aware manner.
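The fusion step described above can be sketched as a per-pixel weighted average of the two background-conformity maps. The sketch below is a minimal illustration, not the method of the talk; the contrast-based quality heuristic, the 0.5 foreground threshold, and all numbers are invented for the example.

```python
import numpy as np

def fuse_background_conformity(conf_rgb, conf_thermal, w_rgb, w_thermal):
    """Per-pixel weighted fusion of two background-conformity maps.
    conf_*: arrays in [0, 1], higher means more background-like.
    w_*:    scalar quality weights from per-frame image heuristics."""
    return (w_rgb * conf_rgb + w_thermal * conf_thermal) / (w_rgb + w_thermal)

def contrast_weight(img):
    """Toy quality heuristic: normalized contrast of the frame."""
    return float(np.std(img) / (np.mean(img) + 1e-6))

# Foreground is declared where the fused background conformity is low.
rgb_conf = np.array([[0.9, 0.2], [0.8, 0.1]])
thermal_conf = np.array([[0.8, 0.4], [0.9, 0.3]])
fused = fuse_background_conformity(rgb_conf, thermal_conf, 0.7, 0.3)
foreground = fused < 0.5
```

In a context-aware setting the weights would be recomputed per frame (e.g. from `contrast_weight`), so a degrading modality automatically loses influence.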

Talk Three Geometric Structures and their Applications

18.09.2015 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Nabil Mustafa

Since the beginning of systematic research on geometric computing almost forty years ago, there has been a very fruitful interplay between the mathematical study of geometric structures and the search for efficient and practical solutions for a variety of problems involving geometric data. In this talk I will illustrate this with applications to three different areas: computer graphics, algorithms and combinatorics.

Talk Disputation

07.08.2015 10:00
Informatikzentrum, IZ 161

Speaker(s): Pablo Bauszat

Advanced Denoising and Memoryless Acceleration for Realistic Image Synthesis

Talk Disputation

27.07.2015 14:00
Informatikzentrum, Seminarraum G04

Speaker(s): Benjamin Meyer

Measuring, modeling and simulating the re-adaptation process of the Human Visual System after short-time glares in traffic scenarios

Talk Disputation

17.07.2015 15:00
Informatikzentrum, IZ 161

Speaker(s): Kai Ruhl

Interactive Spacetime Reconstruction in Computer Graphics

Talk Disputation

13.07.2015 13:15
Informatikzentrum, Seminarraum G04

Speaker(s): Maryam Mustafa

ElectroEncephaloGraphics - a Novel Modality for Graphics Research

Talk Disputation

03.07.2015 10:00
Informatikzentrum, Seminarraum G04

Speaker(s): Lorenz Rogge

Augmenting People in Monocular Video Data

Talk Methods for Analyzing the Influence of Molecular Dynamics on Neuronal Activity

26.06.2015 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Stefan Sokoll

Investigating the functioning of neurons at the molecular level is an important foundation for understanding how higher brain functions such as perception, behavior, or learning and memory are accomplished. Since molecular processes occur in the nanometer range and have to be studied in living samples, recently developed optical super-resolution techniques have boosted their characterization. However, super-resolution techniques require complex instrumentation, are hardly applicable to organotypic samples, and still suffer from relatively low temporal resolution. This talk presents new analysis tools that aim to overcome these limitations and allow studying how the dynamics and interplay of molecules modulate synaptic transmission efficiency. An approach for detecting individual presynaptic activity will be briefly introduced, but the major part of the talk focuses on an algorithm that facilitates fast 3D analyses of molecular dynamics within brain slices. It adapts astigmatism-based 3D single-particle tracking (SPT) techniques to the depth-dependent optical aberrations induced by the refractive-index mismatch, making them applicable to complex samples. In contrast to existing techniques, the presented online calibration method determines the aberration directly from the acquired 2D image stream by exploiting the inherent particle movement and the redundancy introduced by the astigmatism. The method improves positioning accuracy by reducing the systematic errors introduced by the aberrations and allows cellular morphology and molecular diffusion parameters to be derived correctly in 3D, independently of the imaging depth.
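The astigmatism-based depth estimation that the tracking technique builds on can be illustrated with a toy calibration model: a cylindrical lens makes the PSF width along x and y depend differently on the axial position, so a measured width pair can be inverted to a depth. The defocus model and all constants below are hypothetical and deliberately ignore the depth-dependent aberrations that the presented method corrects for.

```python
import numpy as np

def widths_model(z, w0=1.3, zr=400.0, d=500.0):
    """Hypothetical astigmatic PSF widths (px) vs. axial position z (nm):
    the x and y focal planes are separated by d, each width follows a
    defocus curve with depth range zr and minimal width w0."""
    wx = w0 * np.sqrt(1.0 + ((z - d / 2.0) / zr) ** 2)
    wy = w0 * np.sqrt(1.0 + ((z + d / 2.0) / zr) ** 2)
    return wx, wy

def z_from_widths(wx_meas, wy_meas):
    """Invert the calibration: pick the z whose modeled width pair best
    matches the measured one (least squares over a dense z grid)."""
    z_grid = np.linspace(-1000.0, 1000.0, 4001)  # 0.5 nm spacing
    wx, wy = widths_model(z_grid)
    err = (wx - wx_meas) ** 2 + (wy - wy_meas) ** 2
    return z_grid[int(np.argmin(err))]

# An elliptical spot (wx != wy) encodes the particle's depth:
wx_m, wy_m = widths_model(300.0)
z_est = z_from_widths(wx_m, wy_m)  # recovers ~300 nm
```

The talk's contribution corresponds to making `widths_model` itself depth- and sample-dependent, calibrated online from the image stream instead of fixed in advance.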

Talk Promotions-V-Vg: Real-World Video Processing Using Unstructured Scene Representations

26.06.2015 10:00
Informatikzentrum, Seminarraum G30

Speaker(s): Felix Klose

When processing single- or multi-view video data recorded in uncontrolled environments with scene reconstruction algorithms, a multitude of factors can negatively influence the result quality. These factors include camera, lens, or color miscalibration; errors in temporal or spatial camera alignment; unsynchronized and rolling shutters on the camera side; as well as specular, untextured, or repetitive objects and objects with visually complex appearance inside the scene. These circumstances make working with computer vision algorithms on real-world data very challenging: measurement errors in real-world recording setups cannot be avoided and have to be accounted for.

In this talk I will give an overview of my work on single- and multi-view video processing of real-world data using unstructured scene representations. I show how dense 2D-correspondence-based stereoscopic free-viewpoint video can be created using tools for user-guided error correction; how the complexity of real-world multi-view data can be handled by tracking small surface patches and using a strict motion model to resolve ambiguities and create quasi-dense scene representations; and finally how to create high-quality video effects that can handle extreme amounts of noise in estimated depth maps by leveraging the redundancy inherent in video data.

Talk Bild-Aspekte

15.06.2015 17:00
Braunschweigische Wissenschaftliche Gesellschaft

Speaker(s): Marcus Magnor

Plenary talk.

Images are a fascinating natural phenomenon. Every self-luminous and every illuminated object continuously emits images in all directions, carrying information about their place of origin and their formation. Images propagate unhindered through free space; they preserve information across space and time, sometimes transporting it billions of light years. Images are universal. With our sense of sight we receive a (very small) portion of the images that nature continuously emits. Without any time delay, they inform us about our nearer and farther surroundings. On the basis of images we learn to know our world, to understand it, and to react to it.

Talk MA-Talk: Compressed Sensing-based Progressive Reconstruction for Image Synthesis

11.05.2015 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Cong Wang

Compressed Sensing (CS) is a new mathematical framework for the reconstruction of signals with missing information. Recently, its application to sparse image reconstruction, i.e. reconstructing an image from a small set of known pixels, has shown promising results. The key idea derives from the fact that most natural images are highly compressible because they are sparse in a transform domain. This leads to the obvious question: why waste resources on evaluating information (here, individual pixels) that is discarded later on or has only a small impact on the overall visual impression? So far, the measurements (evaluated pixels of the image) are chosen in a random fashion (usually based on a Blue Noise distribution) to uniformly cover the image domain. Theoretically, if salient features of the image were known in advance, fewer measurements would be needed for a high-quality reconstruction. For real-world images taken by a photo or video camera, it is very hard to evaluate important features of the image without actually capturing them. However, during image synthesis, more knowledge about the scene, camera, and lighting situation is available. If carefully observed, the rendering process can potentially provide useful cues which are more efficient to evaluate than the actual measurements, can guide the image sampling process, and thus accelerate convergence.
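The reconstruction-from-few-pixels idea can be sketched with a generic solver: assume the signal is sparse in the DCT domain, observe a random subset of its pixels, and recover the coefficients by iterative soft thresholding (ISTA). This is a minimal 1D illustration of compressed sensing under a sparsity assumption, not the rendering-guided sampling the talk proposes; signal size, sparsity, and solver parameters are invented.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix; row k is the k-th cosine basis vector."""
    kk, ii = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    d = np.cos(np.pi * (2 * ii + 1) * kk / (2 * n)) * np.sqrt(2.0 / n)
    d[0, :] /= np.sqrt(2.0)
    return d

def ista_reconstruct(mask, samples, n, lam=0.01, steps=500):
    """Recover a DCT-sparse signal from a subset of its pixels via
    iterative soft thresholding: gradient step on the data term,
    then shrink the coefficients toward zero (the sparsity prior)."""
    D = dct_matrix(n)
    A = D.T[mask]              # measurement operator: picks the known pixels
    s = np.zeros(n)
    for _ in range(steps):     # step size 1 is safe: ||A|| <= 1 here
        s = s - A.T @ (A @ s - samples)
        s = np.sign(s) * np.maximum(np.abs(s) - lam, 0.0)
    return D.T @ s             # back to the pixel domain

# 3 active DCT coefficients; observe roughly 60% of the 64 pixels.
rng = np.random.default_rng(0)
n = 64
coeffs = np.zeros(n)
coeffs[[2, 7, 15]] = [3.0, -2.0, 1.5]
x = dct_matrix(n).T @ coeffs
mask = rng.random(n) < 0.6
x_rec = ista_reconstruct(mask, x[mask], n)
```

The talk's point is that a renderer could replace the random `mask` with one informed by cheap scene cues, needing fewer samples for the same quality.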

Talk BA Talk: Tiefenbasierte Blickpunktgenerierung für interaktive Videos in immersiven VR-Systemen

27.04.2015 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Inga Menke

Virtual-reality head-mounted displays enable the leap from the traditional window metaphor to an all-around virtual view when watching images and videos. The combination of panoramic video and a virtual-reality display with head tracking creates the conditions for freely choosing the viewing direction onto the content. The Bachelor's thesis presented in this talk considers the next logical step in the development of immersive media content: a method for freely choosing the viewer's position within the displayed scene. More precisely, it enables the viewer to perceive motion parallax within the video when moving the head. This possibility of movement increases the degree of immersion in the video.

Talk Promotions-V-Vg: Electroencephalographics: A Novel Modality for Graphics Research

24.04.2015 11:00
Informatikzentrum, Seminarraum G30

Speaker(s): Maryam Mustafa

In this thesis I present the application of ElectroEncephaloGraphy (EEG) as a novel modality for investigating perceptual graphics problems.

Until recently, EEG has predominantly been used for clinical diagnosis, in psychology and by the BCI community. Here I extend its scope to assist in understanding the perception of visual output from graphics applications and to create new methods based on direct neural feedback.

My work uses EEG data to determine the perceptual quality of videos and images, which is of paramount importance for most graphics algorithms. This is especially relevant given the gap between the perceived quality of an image and its physical accuracy.

One of the main impediments to the use of EEG is its very low Signal-to-Noise Ratio (SNR), which requires averaging the data from many trials and participants to obtain a meaningful result. I propose a novel method for evaluating EEG signals that allows predicting perceived image quality from only a single trial.

This thesis also explores the possibilities for automatic optimization of rendering parameters for images and videos based on implicit neural feedback.
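The single-trial motivation rests on a basic statistical fact: averaging N independent trials reduces the noise floor only by about the square root of N, so a low per-trial SNR normally forces many repetitions. A small simulation with a hypothetical evoked response illustrates this; all numbers are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(4)
n_trials, n_samples = 100, 10000
erp = np.sin(np.linspace(0.0, np.pi, n_samples))   # hypothetical evoked response
# Each trial: a tiny response buried in unit-variance noise (SNR << 1).
trials = erp + rng.normal(0.0, 1.0, size=(n_trials, n_samples))

avg = trials.mean(axis=0)                          # the classic grand average
noise_single = np.std(trials[0] - erp)             # noise floor of one trial
noise_avg = np.std(avg - erp)                      # noise floor after averaging
# noise_single / noise_avg is close to sqrt(n_trials) = 10: single-trial
# prediction has to work without this factor-of-10 noise reduction.
```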

Talk User-guided Image Pre-Segmentation

17.04.2015 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Ying Wang

This is the presentation of a specialisation project carried out at the Institut für Computergraphik. The student presents a method for computing a locally linear image structure, which is then used to propagate user input in the form of brush strokes in the image at hand. The talk includes some aspects of the numerical computation done in MATLAB and presents a variety of results.

Talk Promotions-V-Vg: Interactive Scene Reconstruction and Image Correspondence Estimation

10.04.2015 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Kai Ruhl

High-quality dense correspondence maps between images, be it optical flow, stereo, or scene flow, and the scene reconstructions built on them are an essential prerequisite for a multitude of computer vision and graphics tasks, e.g. scene editing or view interpolation in visual media production. Due to the ill-posed nature of the estimation problem in typical setups (i.e. a limited number of cameras and a limited frame rate), automated estimation approaches are prone to erroneous correspondences and subsequent quality degradation in many non-trivial cases such as occlusions, ambiguous movements, long displacements, or low texture. While improving estimation algorithms is one possible direction, this thesis complementarily concerns itself with minimal user interactions that lead to improved correspondence maps and scene reconstructions. Where visually convincing results are essential, rendering artifacts resulting from estimation errors are usually repaired by hand with image editing tools, which is time-consuming and therefore costly. New forms of user interaction that integrate human scene-recognition capabilities to guide a semi-automatic correspondence or scene reconstruction algorithm have the potential to save considerable effort, enabling faster and more efficient production of visually convincing rendered footage.

Talk Promotions-V-Vg: Advanced Denoising and Memory-efficient Acceleration for Realistic Image Synthesis

06.03.2015 11:00
Informatikzentrum, Seminarraum G30

Speaker(s): Pablo Bauszat

Stochastic ray tracing methods have become the industry standard for today's realistic image synthesis thanks to their ability to achieve a supreme degree of realism by physically simulating various natural phenomena of light and cameras (e.g. global illumination, depth-of-field, or motion blur). Unfortunately, high computational costs for complex scenes and image noise from insufficient sampling are major issues of these methods; hence, acceleration and denoising are key components in stochastic ray tracing systems. In this thesis, we introduce three new filtering methods for advanced lighting and camera effects, as well as two new concepts for memory-efficient acceleration structures. In particular, we present a filter for global illumination aiming at real-time performance, an interactive filter for global illumination in the presence of depth-of-field, and a general and robust adaptive reconstruction framework for high-quality images with arbitrary rendering effects. To address complex scene geometry, we propose an extension to the classic Bounding Volume Hierarchy that reduces its footprint to 1 bit per node, and a new concept which models the acceleration structure completely implicitly, i.e. without any additional memory cost at all, while maintaining interactive performance. Our contributions advance the state of the art in denoising techniques for realistic image synthesis as well as in memory-efficient acceleration for ray tracing systems.
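Filters for noisy global illumination commonly exploit auxiliary feature buffers (depth, normals) that a renderer produces noise-free. As a generic illustration of this idea, and not one of the thesis's actual filters, here is a minimal cross-bilateral filter whose weights combine spatial proximity with similarity in a guide buffer; all parameters are illustrative.

```python
import numpy as np

def cross_bilateral(noisy, guide, radius=3, sigma_s=2.0, sigma_g=0.1):
    """Cross-bilateral filter: denoise `noisy` using a noise-free guide
    buffer. Weights combine spatial proximity with guide similarity, so
    edges present in the guide survive the smoothing."""
    h, w = noisy.shape
    out = np.empty_like(noisy)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            w_sp = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
            w_gd = np.exp(-((guide[y0:y1, x0:x1] - guide[y, x]) ** 2)
                          / (2 * sigma_g ** 2))
            wgt = w_sp * w_gd
            out[y, x] = np.sum(wgt * noisy[y0:y1, x0:x1]) / np.sum(wgt)
    return out

# A step edge known to the guide; noisy shading on top of it.
rng = np.random.default_rng(1)
guide = np.zeros((16, 16))
guide[:, 8:] = 1.0                       # e.g. a depth discontinuity
noisy = 2.0 * guide + rng.normal(0.0, 0.3, size=(16, 16))
denoised = cross_bilateral(noisy, guide)
```

The guide term keeps the filter from averaging across the discontinuity, which is exactly why feature buffers are so valuable for denoising Monte Carlo renderings.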

Talk MA-Talk: Compressed Sensing and Sparse Coding for Depth and RGB-D Images

12.02.2015 14:00
Informatikzentrum, Seminarraum G30

Speaker(s): Emmy-Charlotte Förster

In this talk, new methods for the compression of natural images and depth maps using compressed sensing and sparse coding are presented. Sarkis and Diepold recently presented an approach for compressing depth maps using compressed sensing. We expand upon this approach by using sparse coding, and enhance the depth-map compression by adding the available RGB information. By modifying the underlying optimization problem of compressed sensing, we are able to further enhance the depth maps of compressed RGB-D images. We create our dictionaries and evaluate our results using both synthetic and natural data sets, captured with a light-field camera.
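The sparse-coding side can be illustrated with Orthogonal Matching Pursuit over a toy overcomplete dictionary (identity "spike" atoms plus DCT atoms), standing in for a learned dictionary; the example is generic and not the method of the talk.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily select k unit-norm atoms
    (columns of D), least-squares refitting on the support each round."""
    residual, support = y.copy(), []
    coeffs = np.zeros(0)
    for _ in range(k):
        corr = np.abs(D.T @ residual)
        corr[support] = -1.0                    # never re-pick an atom
        support.append(int(np.argmax(corr)))
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    code = np.zeros(D.shape[1])
    code[support] = coeffs
    return code

# Toy dictionary: identity ("spike") atoms plus orthonormal DCT atoms.
n = 32
kk, ii = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
C = np.cos(np.pi * (2 * ii + 1) * kk / (2 * n)) * np.sqrt(2.0 / n)
C[0, :] /= np.sqrt(2.0)
D = np.hstack([np.eye(n), C.T])                 # 32 x 64, unit-norm columns

# A 1D "depth profile": a sharp spike plus a smooth cosine component.
y = 2.0 * np.eye(n)[:, 4] + 1.0 * C.T[:, 3]
code = omp(D, y, k=2)
```

With this low-coherence dictionary the 2-sparse code is recovered exactly; in the talk's setting the dictionary would instead be learned from depth/RGB-D training patches.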

Talk Participating Media - Fast Rendering and Artistic Stylization

29.01.2015 14:00
Informatikzentrum, Seminarraum G30

Speaker(s): Oliver Klehm

Talk Technology Trends Driving Visual Computing Research

16.01.2015 12:00
University of Utah, USA

Speaker(s): Marcus Magnor

Invited talk at the Scientific Computing and Imaging Institute (SCI).

Talk MA-Talk: Space-time reconstruction of very fast fluid dynamic processes

10.12.2014 11:00
Informatikzentrum, Seminarraum G30

Speaker(s): Matthias Überheide

In this talk, a method will be presented to reconstruct very fast fluid dynamic processes in space and time from a single camera view. The underlying physical relations are used to resolve the ambiguity of the problem. The captured images are used to guide a fluid simulation, resulting in an animated 3D volume of the captured effect.

An optimization problem is formulated, and the adjoint method is applied to allow computing the gradient in reasonable time.

The difficulties inherent in this optimization problem are shown with multiple artificial and real-data test cases, and possible approaches to the individual difficulties are analyzed.
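The role of the adjoint method can be sketched on a linear toy dynamic instead of a fluid simulation: one forward pass plus one backward (adjoint) pass yields the gradient of the objective with respect to the initial state at roughly the cost of two simulations, regardless of the state dimension. Everything below is a generic illustration, not the talk's formulation.

```python
import numpy as np

def forward(A, x0, T):
    """Run the linear 'simulation' x_{t+1} = A @ x_t for T steps."""
    xs = [x0]
    for _ in range(T):
        xs.append(A @ xs[-1])
    return xs

def adjoint_gradient(A, x0, y, T):
    """Gradient of L = 0.5 * ||x_T - y||^2 w.r.t. the initial state x0:
    forward pass, then a backward adjoint pass lam_t = A^T lam_{t+1}
    starting from lam_T = x_T - y. The result is lam_0."""
    x_T = forward(A, x0, T)[-1]
    lam = x_T - y
    for _ in range(T):
        lam = A.T @ lam
    return lam

rng = np.random.default_rng(3)
A = 0.4 * rng.normal(size=(4, 4))
x0 = rng.normal(size=4)
y = rng.normal(size=4)
g = adjoint_gradient(A, x0, y, T=5)

# Sanity check against a central finite difference in coordinate 0:
def loss(x):
    return 0.5 * np.sum((forward(A, x, 5)[-1] - y) ** 2)

eps = 1e-6
e0 = np.eye(4)[0]
fd = (loss(x0 + eps * e0) - loss(x0 - eps * e0)) / (2.0 * eps)
```

A finite-difference gradient would need one extra simulation per unknown, which is what makes the adjoint approach tractable for million-voxel fluid states.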