Computer Graphics
TU Braunschweig


Talk The future of driver assistance

07.09.2009 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Tobi Vaudrey

Imagine a futuristic world where cars drive themselves. In New Zealand alone, there have been about 5000 driving fatalities over the past 10 years. Think of the reduction in accidents if cars were always looking at the road and making decisions based on their perception of the environment. The car would never get drunk, never fall asleep at the wheel, always be attentive and never suffer from road rage! This seems like a far-distant possibility that exists only in the minds of science fiction writers, such as those of Star Trek. But this reality is perhaps not so far away. We are still a while away from the future depicted above, but there are already smart cars that take control of the vehicle, even if the driver is not aware of it. My research is on vision-based driver assistance systems; in other words, I try to create eyes for cars. In this talk I will cover the basics of vision-based environment perception: removing harmful illumination effects, stereo disparity estimation (distance), optical flow (2D motion), and scene flow (3D motion). Hopefully this brief introduction will spark some passion for driver assistance, and help save lives in the future.
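To make the stereo-disparity part of the talk concrete, here is a minimal sketch (my own illustration, not from the talk) of how a disparity value maps to metric distance for a rectified camera pair; the focal length and baseline values are hypothetical:

```python
# Toy depth-from-disparity computation: for a rectified stereo pair with
# focal length f (in pixels) and baseline b (in metres), a disparity of
# d pixels corresponds to depth Z = f * b / d.

def depth_from_disparity(d_pixels, focal_px=700.0, baseline_m=0.3):
    """Return metric depth for a given disparity (camera values are made up)."""
    if d_pixels <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / d_pixels

# A nearby car produces a large disparity, a distant one a small disparity:
near = depth_from_disparity(70.0)   # 3.0 m
far = depth_from_disparity(3.5)     # 60.0 m
```

The inverse relationship is why stereo depth estimates degrade quickly with distance: at small disparities, a one-pixel matching error changes the depth estimate drastically.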

Talk MultiView Camera Calibration

31.08.2009 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Denis Bose

Camera calibration is an important focus in computer vision and also in computer graphics, e.g., in the area of image- and video-based rendering. Although a wide variety of approaches exist that can deliver good results, they differ greatly in their usability and hence in the time they require. The dream remains a fully automatic system that lets one simply set a video camera to record and directly delivers the corresponding calibration data. A few such approaches already exist; in particular, the research group around Marc Pollefeys and the work of Noah Snavely stand out here. The drawback of these methods is that the required computation time is usually very high and, above all, either quadratic in the number of images to be examined or prone to drift. Especially with multi-camera systems, the effort quickly becomes very large and thousands of images have to be processed. The goal of this work is to design a system that is as easy to use as possible and delivers the intrinsic and extrinsic (or only the extrinsic) camera parameters for a video-based rendering system consisting of several cameras. It will build on Bundler, the sparse bundle adjustment package by Noah Snavely, and integrate it into the system developed at the ICG for determining the temporal offset between videos, so that one ultimately obtains synchronized videos with the corresponding camera parameters from unsynchronized input videos. In a next step, different approaches to reduce the computation time will be evaluated: for example, using only every n-th image for the computation of the i-th image; using only images within a certain time window; or selecting suitable representatives from the complete collection of images. These approaches will undergo a quality analysis with respect to the results of the original algorithm.

[Further information]

Talk Genetic Morphs

24.08.2009 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Daniel Gohlke

Optical flow is widely used in the graphics and vision communities for estimating the flow or warp field that matches one image with another, but optical flow is a highly ill-posed problem. Evolutionary or genetic algorithms are often used in computer science to solve such ill-posed problems. They are based on the principles of selection, mutation and recombination. This talk will present a framework based on genetic algorithms that estimates flow fields and tries to improve on existing optical flow algorithms by adding their results to the initial population.
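As general background on the evolutionary principles mentioned above, the following is a minimal genetic-algorithm sketch (a hypothetical toy of my own, not the presented framework) that evolves a 1D displacement field by selection, recombination and mutation:

```python
import random

def fitness(shifts, src, dst):
    """Sum of squared differences after looking up each pixel at its shift."""
    n = len(src)
    err = 0
    for i, s in enumerate(shifts):
        j = min(max(i + s, 0), n - 1)
        err += (src[j] - dst[i]) ** 2
    return err

def evolve(src, dst, pop_size=30, generations=60, seed=0):
    rng = random.Random(seed)
    n = len(src)
    # initial population of random displacement fields
    pop = [[rng.randint(-2, 2) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(ind, src, dst))
        survivors = pop[: pop_size // 2]                # selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(n)
            child = a[:cut] + b[cut:]                   # recombination
            if rng.random() < 0.3:                      # mutation
                child[rng.randrange(n)] += rng.choice([-1, 1])
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda ind: fitness(ind, src, dst))

# Toy example: dst is src shifted right by one pixel, so a shift of -1
# at every position is a perfect solution (fitness 0).
src = [0, 0, 3, 5, 3, 0, 0, 0]
dst = [0, 0, 0, 3, 5, 3, 0, 0]
best = evolve(src, dst)
```

Seeding the initial population with the output of existing optical flow algorithms, as the talk proposes, amounts to replacing some of the random individuals above with precomputed displacement fields.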

Talk Video Matting

24.08.2009 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Julia Wolf

Matting is the process of estimating a foreground object in an image and placing this object into another surrounding, i.e., another image or video. For complex objects like hair or transparent objects, a simple binary map is insufficient for describing the object, and a more complex alpha matte with continuous values is needed. While many robust methods exist for still images, matting of complete videos is still difficult and labour-intensive. In this talk, an extension of the Spectral Matting method by Levin et al. towards video matting is presented. Given certain assumptions on the scene, the presented method is robust and very easy to use.
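The alpha matte mentioned above enters through the standard compositing equation I = alpha * F + (1 - alpha) * B; this is general matting background, not the presented method. A toy sketch of how continuous alpha values place a foreground into a new surrounding:

```python
# Per-pixel compositing: each pixel is a convex combination of a
# foreground value F and a background value B, weighted by alpha.

def composite(alpha, fg, new_bg):
    """Place a foreground with a continuous alpha matte over a new background."""
    return [a * f + (1.0 - a) * b for a, f, b in zip(alpha, fg, new_bg)]

# A soft edge (e.g. a strand of hair) blends smoothly into the new scene:
alpha = [1.0, 0.75, 0.25, 0.0]     # continuous matte across the edge
fg    = [200, 200, 200, 200]       # bright foreground
bg    = [40, 40, 40, 40]           # dark new background
composite(alpha, fg, bg)           # [200.0, 160.0, 80.0, 40.0]
```

A binary matte would force each of the middle pixels to be either 200 or 40, which is exactly why hair and transparent objects need continuous alpha values.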

Talk Texture Synthesis with Transformed Neighborhoods with Application to SuperResolution

03.08.2009 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Lea Lindemann

Texture synthesis is usually used to create a larger texture image out of a small texture patch, so that the output image looks similar to the small patch when looking only at a small part of the output texture image. But texture synthesis can also be used for other tasks, such as image completion, super-resolution and more. In this talk an adaptation of Kwatra et al.'s texture synthesis algorithm "Texture Optimization for Example-based Synthesis" is presented, which allows for perspectively transformed synthesis and is applicable to super-resolution.
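The texture-optimization idea behind Kwatra et al.'s algorithm can be sketched in 1D (my own toy illustration of the energy being minimized, not the adapted algorithm from the talk): the output is iteratively updated so that each of its neighbourhoods matches its nearest neighbourhood from the exemplar.

```python
# One step of a 1D texture-optimization iteration: match every output
# neighbourhood to its nearest exemplar neighbourhood, then let the
# matched neighbourhoods vote; the least-squares update is the mean.

def neighborhoods(seq, w):
    return [tuple(seq[i:i + w]) for i in range(len(seq) - w + 1)]

def nearest(nbh, candidates):
    return min(candidates, key=lambda c: sum((a - b) ** 2 for a, b in zip(nbh, c)))

def synthesis_step(output, exemplar, w=3):
    cands = neighborhoods(exemplar, w)
    votes = [[] for _ in output]
    for i, nbh in enumerate(neighborhoods(output, w)):
        best = nearest(nbh, cands)
        for k, v in enumerate(best):
            votes[i + k].append(v)           # each matched neighbourhood votes
    return [sum(v) / len(v) for v in votes]  # mean = least-squares solution

# A patch of the exemplar is a fixed point of the update:
synthesis_step([0, 0, 1, 1, 0], [0, 0, 1, 1, 0, 0, 1, 1])  # [0.0, 0.0, 1.0, 1.0, 0.0]
```

The perspectively transformed variant from the talk would, roughly speaking, compare transformed rather than axis-aligned neighbourhoods in the `nearest` step.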

Talk Infrared Tracking: Getting Started

27.07.2009 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Andrea Keil

Human-computer interaction with gaze tracking has been an interesting research topic in recent years, but most systems use special head-mounted gear that limits the subjects significantly, is complicated to use and may be quite expensive. In my diploma thesis, I aim to develop infrared tracking applications, such as eye tracking, with an easy interface: no headgear, no high-definition camera and an easy setup, working in real time. Using the specific characteristics of infrared light with respect to the human eye, one can estimate not only the eye position in the image but also the gaze, i.e., the viewing direction. Additionally, infrared tracking can be applied to the human body, using a special invisible infrared pen to mark the body for motion capturing or the hands for gesture recognition, or to mark paper, letters, etc. for safety and encoding. Next Monday will be about our starting point: gaze tracking and the related work.

Talk Modeling Human Color Perception under Extended Luminance Levels

20.07.2009 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Benjamin Meyer

Display technology is advancing quickly with peak luminance increasing significantly, enabling high-dynamic-range displays. However, perceptual color appearance under extended luminance levels has not been studied, mainly due to the unavailability of psychophysical data. Therefore, we conduct a psychophysical study in order to acquire appearance data for many different luminance levels covering most of the dynamic range of the human visual system. These experimental data allow us to quantify human color perception under extended luminance levels, yielding a generalized color appearance model. Our proposed appearance model is efficient, accurate and invertible. It can be used to adapt the tone and color of images to different dynamic ranges for crossmedia reproduction while maintaining appearance that is close to human perception.

Talk SEP 2009

13.07.2009 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Georgia Albuquerque

In this colloquium we review the SEP projects of this semester. The two student groups who participated in the SEP will present their results.

Talk A Superresolution Framework for High-Accuracy Multiview Reconstruction

10.07.2009 14:00
Informatikzentrum, Seminarraum G30

Speaker(s): Bastian Goldlücke

The focus of this talk is on the current state-of-the-art methods for 3D reconstruction from multiple calibrated views which were developed in our group. Based on a convex formulation of the 3D reconstruction problem, an initial 3D model is obtained. The geometry is further refined in a combined variational approach to jointly estimate a displacement map and a super-resolved texture for the model. The super-resolution approach to texture reconstruction makes it possible to obtain fine details in the texture map which surpass the resolution of the individual input images. Some of the interesting mathematical background can be applied quite generally and is briefly discussed. This includes convex relaxation techniques for global optimization of certain total-variation-based energy functionals, as well as an algorithm for solving partial differential equations on surfaces.

Talk Recent Trends in Interactive Multi-Image Segmentation

06.07.2009 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Christian Lipski

Extraction of meaningful regions in images has been an active research topic for decades. Recently, coherent detection of such regions across image sequences received special attention. Especially when high-quality output is demanded by the application, e.g., in video matting, such systems are designed to incorporate user input that steers and improves the segmentation algorithm. The talk will cover a selection of recent work in this area. We will identify recent trends and see how we can relate them to other areas of research, such as interactive correspondence estimation.

Talk EuroVis 2009

29.06.2009 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Georgia Albuquerque

I will talk about two of the most interesting papers presented at the Eurographics/IEEE Symposium on Visualization, June 10-12, 2009, in Berlin, Germany. In the first article, "Splatting the Lines in Parallel Coordinates", the authors present two methods to enhance the visibility of clusters in parallel-coordinates plots. In the second one, "Selecting Good Views of High-dimensional Data Using Class Consistency", the authors present a measure to evaluate scatterplots of data sets considering class information.

Talk Efficient and accurate visualization of complex light sources

26.06.2009 14:00
Informatikzentrum, Seminarraum G30

Speaker(s): Stefan Kniep

We present a new method for estimating the radiance function of complex area light sources. The method is based on Jensen's photon mapping algorithm. In order to capture high angular frequencies in the radiance function, we incorporate the angular domain into the density estimation. However, density estimation in position-direction space makes it necessary to find a tradeoff between the spatial and angular accuracy of the estimation. We identify the parameters which are important for this tradeoff and investigate the typical estimation errors. We show how the large data size, which is inherent to the underlying problem, can be handled. The method is applied to different automotive tail lights, but it can also be used for a wide range of other real-world light sources.
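The position-direction tradeoff can be made tangible with a toy density estimate (my own sketch of the principle, not the presented implementation): a photon only contributes if it is close in both position and direction, so widening one kernel to gather more photons inevitably blurs that domain.

```python
import math

def radiance_estimate(query_pos, query_dir, photons, r=0.5, cos_min=0.9):
    """Count photon flux that is close in BOTH position and direction.

    photons: list of (position, unit direction, flux) tuples.
    r controls spatial accuracy, cos_min controls angular accuracy.
    """
    total_flux = 0.0
    for pos, direction, flux in photons:
        d2 = sum((p - q) ** 2 for p, q in zip(pos, query_pos))
        cos = sum(a * b for a, b in zip(direction, query_dir))
        if d2 <= r * r and cos >= cos_min:
            total_flux += flux
    return total_flux / (math.pi * r * r)   # normalise by the spatial footprint

photons = [
    ((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), 1.0),   # close in position and direction
    ((0.1, 0.0, 0.0), (1.0, 0.0, 0.0), 1.0),   # close in position, wrong direction
    ((5.0, 5.0, 5.0), (0.0, 0.0, 1.0), 1.0),   # far away
]
est = radiance_estimate((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), photons)
```

With a fixed photon budget, shrinking `r` or raising `cos_min` reduces bias but increases variance, which is exactly the tradeoff the talk analyses.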

Talk Compressed Sensing

15.06.2009 14:00
Informatikzentrum, Seminarraum G30

Speaker(s): Stephan Wenger

The dogma of signal processing maintains that a signal must be sampled at a rate at least twice its highest frequency in order to be represented without error. However, in practice, we often compress the data soon after sensing, trading off signal representation complexity (bits) for some error (consider JPEG image compression in digital cameras, for example). Clearly, this is wasteful of valuable sensing resources. Over the past few years, a new theory of "compressive sensing" has begun to emerge, in which the signal is sampled (and simultaneously compressed) at a greatly reduced rate. Compressive sensing is also referred to in the literature by the terms: compressed sensing, compressive sampling, and sketching/heavy-hitters.
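A toy numeric illustration of the idea (my own sketch, not from the talk): a length-10 signal that is known to be 1-sparse can be recovered from only 3 random linear measurements y = A x, rather than 10 point samples, by searching for the atom that best explains the measurements.

```python
import random

def measure(A, x):
    """Random linear measurements y = A x (sampling and compressing at once)."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def recover_1_sparse(A, y):
    """Try every atom (column of A); keep the one explaining y best."""
    n = len(A[0])
    best = None
    for j in range(n):
        col = [row[j] for row in A]
        # least-squares coefficient for this single atom
        c = sum(a * b for a, b in zip(col, y)) / sum(a * a for a in col)
        resid = sum((yi - c * a) ** 2 for yi, a in zip(y, col))
        if best is None or resid < best[0]:
            best = (resid, j, c)
    _, j, c = best
    x = [0.0] * n
    x[j] = c
    return x

rng = random.Random(0)
A = [[rng.uniform(-1, 1) for _ in range(10)] for _ in range(3)]  # 3 x 10
x_true = [0.0] * 10
x_true[4] = 2.0                       # the 1-sparse "signal"
x_hat = recover_1_sparse(A, measure(A, x_true))
```

Practical compressed sensing handles k-sparse signals with greedy or L1-minimization solvers rather than this brute-force search, but the principle is the same: randomness plus sparsity replaces Nyquist-rate sampling.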

Talk Fast Reconstruction of the World from Photos and Videos

08.06.2009 14:00
Informatikzentrum, Seminarraum G30

Speaker(s): Jan-Michael Frahm

In recent years, photo/video sharing web sites like Flickr and YouTube have become increasingly popular. Nowadays, terabytes of photos and videos are uploaded every day. These data survey large parts of the world throughout the different seasons, various weather conditions and all times of the day. In the talk I will present my work on the highly efficient reconstruction of 3D models from these data. It tackles a variety of current challenges that must be addressed to obtain a consistent 3D model from these data: estimation of the geometric and radiometric camera calibration from videos and photos, efficient robust camera motion estimation for (quasi-)degenerate estimation problems, high-performance stereo estimation from multiple views, automatic selection of correct views from noisy image/video collections, and image-based location recognition for topology detection. In the talk I will discuss the details of our real-time camera motion estimation from video using our Adaptive Real-Time Random Sample Consensus (ARRSAC) and our high-performance salient feature tracker, which simultaneously estimates the radiometric camera calibration and tracks the motion of the salient feature points. Furthermore, our technique to achieve robustness against (quasi-)degenerate data will be introduced; it detects and overcomes data that under-constrain the camera motion estimation problem. Additionally, our optimal stereo technique for determining scene depths with constant precision throughout the scene volume will be explained during the talk. It performs scene depth estimation from a large set of views with optimal computational effort while obtaining the depth with constant precision throughout the reconstruction volume. I will also discuss our fast technique for image-based location recognition, which uses commodity graphics processors to achieve real-time performance while providing high recognition rates.
Furthermore, in the talk I present our work on 3D reconstruction from internet photo collections. It combines image-based recognition with geometric constraints to efficiently perform the simultaneous selection of correct views and the 3D reconstruction from large collections of photos. The talk will also address the future challenges in all the mentioned areas. Jan-Michael Frahm is a Research Assistant Professor at the University of North Carolina at Chapel Hill. He received his Ph.D. in computer vision in 2005 from the Christian-Albrechts University of Kiel, Germany. His Diploma in Computer Science is from the University of Lübeck. Dr.-Ing. Frahm's research interests include a variety of computer vision problems. He has worked on structure from motion for single/multi-camera systems for static and dynamic scenes to create 3D models of the scene; real-time multi-view stereo to create dense scene geometry from camera images; use of camera-sensor systems for 3D scene reconstruction with fusion of multiple orthogonal sensors; improved robust and fast estimation methods from noisy data to compensate for highly noisy measurements in various stages of the reconstruction process; high-performance feature tracking for salient image-point motion extraction; and the development of data-parallel algorithms for commodity graphics hardware for efficient 3D reconstruction.
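The robust estimation at the heart of ARRSAC builds on the classic random sample consensus (RANSAC) scheme. As general background (my own sketch of textbook RANSAC, not Dr. Frahm's adaptive variant), here is the basic hypothesize-and-verify loop on a toy line-fitting problem:

```python
import random

# Classic RANSAC: repeatedly fit a model to a minimal random sample and
# keep the model supported by the largest consensus set of inliers.
# Here the model is a line y = a*x + b and the data contain gross outliers.

def ransac_line(points, iterations=200, threshold=0.5, seed=1):
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iterations):
        (x1, y1), (x2, y2) = rng.sample(points, 2)    # minimal sample: 2 points
        if x1 == x2:
            continue                                   # degenerate sample
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < threshold]
        if len(inliers) > len(best_inliers):           # consensus score
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# 18 points exactly on y = 2x + 1 plus two gross outliers:
pts = [(x, 2 * x + 1) for x in range(18)] + [(3, 40), (9, -30)]
(a, b), inliers = ransac_line(pts)
```

Adaptive variants such as ARRSAC differ mainly in how the number of hypotheses and the verification effort are scheduled to meet a real-time budget, not in this basic loop.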

Talk Real-time playback and navigation of multi-view video sequences based on space-time tetrahedra

25.05.2009 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Björn Scholz

In this diploma thesis, a real-time capable player for free-viewpoint video will be presented. Free-viewpoint video usually consists of huge amounts of video data captured from multiple cameras and additional inter-view and per-frame information, such as dense correspondence fields between single images or camera positions, to provide smooth navigation. Neither the size nor the representation of the source data is appropriate for use in such an environment. Thus, besides the player itself, a container format combining the data as well as a compression scheme to reduce its size will be introduced.

Talk Moving Gradients: A Path-Based Method for Plausible Image Interpolation

18.05.2009 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Christian Linz

We describe a method for plausible interpolation of images, with a wide range of applications like temporal up-sampling for smooth playback of lower frame rate video, smooth view interpolation, and animation of still images. The method is based on the intuitive idea that a given pixel in the interpolated frames traces out a path in the source images. Therefore, we simply move and copy pixel gradients from the input images along this path. A key innovation is to allow arbitrary (asymmetric) transition points, where the path moves from one image to the other. This flexible transition preserves the frequency content of the originals without ghosting or blurring, and maintains temporal coherence. Perhaps most importantly, our framework makes occlusion handling particularly simple. The transition points allow for matches away from the occluded regions, at any suitable point along the path. Indeed, occlusions do not need to be handled explicitly at all in our initial graph-cut optimization. Moreover, a simple comparison of computed path lengths after the optimization allows us to robustly identify occluded regions and compute the most plausible interpolation in those areas. Finally, we show that significant improvements are obtained by moving gradients and using Poisson reconstruction.
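The Poisson reconstruction in the final step integrates the copied gradients back into intensities. In 1D this reduces to a running sum, which the following sketch illustrates (a toy of the principle only; the paper solves the corresponding 2D problem):

```python
# 1D gradient-domain reconstruction: given forward differences
# g[i] = f[i+1] - f[i] and one boundary value f[0], the signal is
# recovered by integration (the 1D analogue of Poisson reconstruction).

def reconstruct_from_gradients(gradients, left_value):
    signal = [left_value]
    for g in gradients:
        signal.append(signal[-1] + g)
    return signal

f = [10, 12, 15, 15, 9]
grads = [f[i + 1] - f[i] for i in range(len(f) - 1)]   # [2, 3, 0, -6]
reconstruct_from_gradients(grads, f[0])                # recovers [10, 12, 15, 15, 9]
```

Working in the gradient domain is what lets the method move and copy edges along the paths and still obtain seam-free intensities after integration.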

Talk Best of ICCP 2009

04.05.2009 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Anita Sellent

The first International Conference on Computational Photography (ICCP'09) took place in San Francisco on March 16th and 17th.

I will give a short overview of the breadth of the conference and summarize two of the most interesting articles:

In "Light Field Superresolution", Bishop et al. show how the trade-off between angular and spatial resolution in light field cameras can be mitigated under the assumption of Lambertian scene surfaces. Thus, images of surprisingly high resolution can be reconstructed.

High dynamic range images that are composited from several low dynamic range images usually show ghosting artifacts if objects in the scene move. The article by Gallo et al., "Artifact-free High Dynamic Range Imaging", describes how these ghosting artifacts can be avoided entirely.
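The compositing step itself is standard exposure-fusion arithmetic (general background, not Gallo et al.'s contribution): each low-dynamic-range sample z taken with exposure time t estimates the scene radiance as z / t, and samples are averaged with a weight that discards under- and over-exposed values.

```python
# Minimal HDR merge for one pixel, assuming a linear camera response.

def merge_hdr(samples):
    """samples: list of (pixel value in 0..255, exposure time in seconds)."""
    def weight(z):
        return min(z, 255 - z)          # triangle weight, zero at the clip points
    num = sum(weight(z) * z / t for z, t in samples)
    den = sum(weight(z) for z, t in samples)
    return num / den if den else 0.0

# The same scene radiance observed at 1/30 s and 1/120 s exposure:
merge_hdr([(120, 1 / 30), (30, 1 / 120)])   # both samples agree on ~3600
```

Ghosting arises when a moving object makes the per-exposure radiance estimates disagree; methods like Gallo et al.'s detect such inconsistent samples and exclude them from the average.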

Talk 3D Visualization of the Earth's Surface through the Ages (student research project)

27.04.2009 13:30
Informatikzentrum, Seminarraum G30

Speaker(s): Lorenz Rogge

The goal of this student research project is to develop a program that visualizes the paleontological change of the globe over time. Based on geological data and map material, it should be possible to display the Earth's surface at desired points in time approximately realistically and in real time as a 3D object. To this end, the required data must be obtained and integrated using a suitable technology.

Talk Some thoughts on perceptual realism in computer graphics

20.04.2009 10:00
Informatikzentrum, Hörsaal 160

Speaker(s): Philip Dutré

One of our research goals is to construct a realism scale that allows the user to choose the desired level of visual realism in the final image. The higher the realism, the closer the image will appear to a photograph. If the user selects a lower realism level, the gradual decline in realism should appear in the image as smoothly as possible. Preferably, image components that do not contribute to the realistic appearance of the image should be eliminated from the computations first. Such a realism scale has not yet been constructed in computer graphics. This talk will focus on some of the recent efforts we have pursued in order to construct such a realism scale. Although we are still far away from this goal, we feel that some of the underlying mechanisms and thoughts are useful enough to warrant discussion, and will hopefully spark further debate.

Talk Best of Eurographics 2009

06.04.2009 14:00
Informatikzentrum, Seminarraum G30

Speaker(s): Kai Berger, Benjamin Meyer

The Eurographics 2009 took place in Munich, from the 30th of March to the 3rd of April. We will present some of its highlights and summarize the most important contributions from the different topics.

Talk Enhancing and Experiencing Spacetime Resolution with Videos and Stills

06.04.2009 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Martin Eisemann

An algorithm by A. Gupta will be presented for enhancing the spatial and/or temporal resolution of videos. It targets the emerging consumer-level hybrid cameras that can simultaneously capture video and high-resolution stills. The technique produces a high spacetime resolution video using the high-resolution stills for rendering and the low-resolution video to guide the reconstruction and the rendering process. The presented framework integrates and extends two existing algorithms, namely a high-quality optical flow algorithm and a high-quality image-based rendering algorithm. The framework enables a variety of applications that were previously unavailable to the amateur user, such as the ability to (1) automatically create videos with high spatiotemporal resolution, and (2) shift a high-resolution still to nearby points in time to better capture a missed event.

Talk Multi-view Reconstruction of Detailed Garment Motion

23.03.2009 13:00
Informatikzentrum, Hörsaal 160

Speaker(s): Derek Bradley

A lot of research has recently focused on the problem of capturing the geometry and motion of garments. Such work usually relies on special markers printed on the fabric to establish temporally coherent correspondences between points on the garment's surface at different times. Unfortunately, this approach is tedious and prevents the capture of off-the-shelf clothing made from interesting fabrics. In this talk I will summarize recent advances in detailed, marker-free garment capture resulting from our research at the University of British Columbia. I will first discuss our physical acquisition setup, including camera synchronization and rolling shutter compensation using stroboscopic illumination. I will then describe our garment capture approach, where we establish temporally coherent parameterizations between incomplete geometries that we extract at each timestep with a multiview stereo algorithm. Finally, I will discuss a method for reintroducing fine folds into the captured models using data-driven dynamic wrinkling. As a result of this work, we are able to capture the geometry and motion of unpatterned, off-the-shelf garments made from a range of different fabrics, with realistic dynamic folds.

Talk Graph-Theoretic Scagnostics

16.03.2009 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Georgia Albuquerque

Graph-Theoretic Scagnostics was presented at InfoVis 2005 and proposes several quality measures for scatterplots. The proposed quality measures are based on an exploratory visualization method developed by John and Paul Tukey around 20 years ago, called Scagnostics.

Talk Lunar Surface Reconstruction from Single Images

09.03.2009 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Stephan Wenger

For the planned Lunar Observation Center and Islamic Museum in Mecca, Saudi Arabia, a 3-meter-sized moon globe with a realistic surface relief is to be created. Since the resolution of directly measured height maps is about a factor of ten too low for that purpose, the missing detail has to be plausibly (but not necessarily exactly) estimated from photographic images of higher resolution. Usual shape-from-shading approaches fail because most moon regions have only been photographed in detail under a single lighting condition. Our heuristic algorithm nevertheless finds plausible surface normals that are integrated to yield a detailed height map of the whole moon.
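Shape from shading rests on the Lambertian model: the observed intensity is I = albedo * max(0, n . l) for unit surface normal n and light direction l. The sketch below (background illustration of the model, not the speaker's heuristic) shows why a single image under-constrains the problem: one intensity equation per pixel must determine a two-parameter normal.

```python
import math

def lambert_intensity(normal, light, albedo=1.0):
    """Observed intensity: albedo times the clamped cosine between n and l."""
    n_len = math.sqrt(sum(c * c for c in normal))
    l_len = math.sqrt(sum(c * c for c in light))
    cos = sum(a * b for a, b in zip(normal, light)) / (n_len * l_len)
    return albedo * max(0.0, cos)

# A crater wall tilted towards the light appears brighter than flat ground:
flat = lambert_intensity((0, 0, 1), (1, 0, 1))
tilted = lambert_intensity((0.5, 0, 1), (1, 0, 1))
```

Many distinct normals produce the same clamped cosine, which is why classical shape from shading needs several lighting conditions and why a heuristic prior is required when, as for most lunar regions, only one is available.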

Talk Image-based Viewpoint Navigation through space and time

06.03.2009 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Christian Lipski

We present an image-based rendering system to viewpoint-navigate through space and time of complex real-world, dynamic scenes. Our approach accepts unsynchronized, uncalibrated multi-video footage as input. Inexpensive, consumer-grade camcorders suffice to acquire arbitrary scenes, e.g., in the outdoors, without elaborate recording setup procedures. Instead of scene depth estimation, layer segmentation, or 3D reconstruction, our approach is based on dense image correspondences, treating view interpolation uniformly in space and time: spatial viewpoint navigation, slow motion, and freeze-and-rotate effects can all be created in the same way. Its main benefits are a simplified acquisition, generalization to difficult scenes, and space-time symmetric interpolation.
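The core arithmetic of correspondence-based interpolation is simple (a schematic sketch of the principle, not the actual system, which warps whole images): a pixel at position p in view A with a dense correspondence vector c pointing to its match in view B moves to p + t * c at interpolation parameter t, regardless of whether the two views are separated in space or in time.

```python
# Interpolating a matched feature along its correspondence vector.
# t = 0 gives view A, t = 1 gives view B, intermediate t gives the
# in-between view (or in-between moment, for temporal interpolation).

def interpolate_position(p, correspondence, t):
    return tuple(pi + t * ci for pi, ci in zip(p, correspondence))

# Halfway (t = 0.5) between the two views, a feature at (100, 40) in A
# that matches (110, 44) in B lands at (105.0, 42.0):
interpolate_position((100, 40), (10, 4), 0.5)
```

Because the same parameter t drives spatial and temporal blending alike, slow motion, viewpoint moves, and freeze-and-rotate effects all reduce to choosing a path through this interpolation space.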