Computer Graphics
TU Braunschweig


Talk Modeling the planetary nebula IC418: a combined approach using stellar and nebular models

25.02.2010 14:00
Informatikzentrum, Hörsaal 160

Speaker(s): Christophe Morisset

We present a coherent stellar and nebular model reproducing the observations of the planetary nebula IC418. We want to test whether a stellar model obtained by fitting the stellar observations is able to satisfactorily ionize the nebula and reproduce the nebular observations, which is by no means evident. This allows us to determine all the physical parameters of both the star and the nebula, including the abundances and the distance. We used all the observational material available (FUSE, IUE, STIS and optical spectra) to constrain the stellar atmosphere model, computed with the CMFGEN code. The photoionization model is done with Cloudy_3D and is based on CTIO, Lick, SPM, IUE and ISO spectra as well as HST images. More than 140 nebular emission lines are compared to the observed intensities. We reproduce all the observations for the star and the nebula. The 3D morphology of the gas distribution is determined. The effective temperature of the star is 36.7 kK, and its luminosity is 7700 solar luminosities. We describe an original method to determine the distance of the nebula using evolutionary tracks. No clumping factor is needed to reproduce the age-luminosity relation. The distance of 1.25 kpc is in very good agreement with a recent determination using the parallax method. The chemical compositions of both the star and the nebula are determined; both are carbon-rich. The nebula presents evidence of depletion of the elements Mg, Si, S, Cl (0.5 dex lower than solar) and Fe (2.9 dex lower than solar). This is the first self-consistent stellar and nebular model for a planetary nebula that reproduces all the available observations ranging from IR to UV, showing that the combined modeling approach leads to more restrictive constraints and, in principle, more trustworthy results.

Talk Shape 2010: 3D Radiative Transfer for the Visualization of Astrophysical Phenomena

25.02.2010 14:00
Informatikzentrum, Hörsaal 160

Speaker(s): Wolfgang Steffen

The program Shape was originally written for 3D modeling of the structure and kinematics of astrophysical nebulae. Shape is the first astrophysical simulation program to use interactive 3D graphics techniques for model construction. Recently, however, 3D radiative transfer has been added, which allows for more realistic renderings. The new animation module, texturing support and other additions considerably broaden the possible applications in visualization and astrophysical research.

Talk C for CUDA for and from Dummies

08.02.2010 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Martin Eisemann

C for CUDA is an extension of the C language that exploits the processing power of NVidia GPUs for general-purpose computation. While for a long time it seemed that computer graphics does not benefit much from it, thanks to GLSL, it has turned out that especially in vision tasks or specialized applications C for CUDA can give your application a real boost in speed. Plus, it can be simple, if you know the right tools and have the right framework (which you will have after this talk! Hooray!). In this talk I will report on my first hands-on experience with CUDA. I will give you an introduction to basic memory management, kernel execution on the GPU and some other useful stuff. After this talk you will be able to write your own CUDA programs right from the start.
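
The basic ingredients mentioned above, memory management plus a kernel body run once per index, can be sketched in plain Python. This is an illustrative CPU emulation of the data-parallel pattern, not actual C for CUDA code; all names are made up for the example:

```python
# A plain-Python sketch of the data-parallel pattern behind a CUDA kernel.
# In C for CUDA, saxpy_kernel would run once per GPU thread; here the
# "launch" loop plays the role of the thread grid.

def saxpy_kernel(i, a, x, y, out):
    """Body executed for one 'thread' index i (like a CUDA kernel body)."""
    if i < len(out):              # bounds check, as in a real kernel
        out[i] = a * x[i] + y[i]

def launch(kernel, n_threads, *args):
    """Emulates kernel<<<grid, block>>>(...): run the body for each index."""
    for i in range(n_threads):
        kernel(i, *args)

a = 2.0
x = [1.0, 2.0, 3.0, 4.0]
y = [10.0, 20.0, 30.0, 40.0]
out = [0.0] * len(x)              # stands in for allocated device memory
launch(saxpy_kernel, len(x), a, x, y, out)
print(out)  # [12.0, 24.0, 36.0, 48.0]
```

In real C for CUDA the buffers would live in device memory and the loop would be replaced by a grid of threads executing the kernel in parallel.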

Talk Visualization of Multispectral Images Based on Spectral Components

11.01.2010 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): David Ben Yacov

Spectral data acquired with spatially resolving image sensors initially come in a form where two dimensions of the data set correspond to the image coordinates of an area camera and the third to the intensity distribution over wavelength. Even after suitable preprocessing of the spectra (normalization and differentiation), their components, which characterize, e.g., the proportion of chemical substances in certain parts of a scene, cannot be recognized from the spectral curve without further analysis. In this thesis, the pixels of multispectral images were classified by suitable methods so that regions with specific spectral properties can be segmented and visualized in the images. This should make it possible, for example, to detect the adhesive content in particle mats (flat chips mixed with adhesive for particle-board production). The system developed for this task is presented in this talk.

Talk Data Clustering using Affinity Propagation

07.12.2009 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Christian Lipski

Clustering data is an everyday task in computer graphics, with an already wide selection of available algorithms. We will take a look at affinity propagation, a new technique based on message passing among data points. The vital benefit is that the number of clusters is not strictly predefined but emerges from the data. In addition, the very generic approach lends itself to a wide variety of applications. Together with speed-up strategies developed in the graphics community, it can be used for large data sets (e.g., images).
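
As a rough sketch of the message-passing idea, here is a minimal NumPy implementation following the published responsibility/availability updates of Frey and Dueck; the data, damping and preference choices are illustrative only:

```python
import numpy as np

def affinity_propagation(S, damping=0.5, iters=200):
    """Minimal affinity propagation on a similarity matrix S.
    The diagonal of S holds the 'preference' values; points exchange
    responsibility (R) and availability (A) messages until exemplars emerge."""
    n = S.shape[0]
    A = np.zeros((n, n))
    R = np.zeros((n, n))
    for _ in range(iters):
        # responsibilities: r(i,k) = s(i,k) - max_{k'!=k} (a(i,k') + s(i,k'))
        AS = A + S
        idx = np.argmax(AS, axis=1)
        first = AS[np.arange(n), idx]
        AS[np.arange(n), idx] = -np.inf
        second = AS.max(axis=1)
        Rnew = S - first[:, None]
        Rnew[np.arange(n), idx] = S[np.arange(n), idx] - second
        R = damping * R + (1 - damping) * Rnew
        # availabilities: evidence accumulated in favor of k being an exemplar
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())
        col = Rp.sum(axis=0)
        Anew = np.minimum(0, col[None, :] - Rp)
        np.fill_diagonal(Anew, col - Rp.diagonal())
        A = damping * A + (1 - damping) * Anew
    # assignment by argmax; adequate for well-separated data
    return np.argmax(A + R, axis=1)

# two well-separated 2D clusters; similarity = negative squared distance
pts = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
                [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
S = -((pts[:, None, :] - pts[None, :, :]) ** 2).sum(axis=-1)
np.fill_diagonal(S, np.median(S))   # common choice of preference
labels = affinity_propagation(S)
```

The number of exemplars is steered only by the preference values on the diagonal, which is exactly the property the talk highlights.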

Talk Local Model of Eye Adaptation for High Dynamic Range Images

05.11.2009 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Benjamin Meyer

In the real world, the human eye is confronted with a wide range of luminances, from bright sunshine to low night light. Our eyes cope with this vast range of intensities by adaptation: changing their sensitivity to be responsive at different illumination levels. This adaptation is highly localized, allowing us to see both dark and bright regions of a high dynamic range environment. In this paper we present a new model of eye adaptation based on physiological data. The model, which can be easily integrated into existing renderers, can function either as a static local tone mapping operator for a single high dynamic range image, or as a temporal adaptation model taking into account the time elapsed and the intensity of preadaptation for a dynamic sequence. We finally validate our technique with a high dynamic range display and a psychophysical study.
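
For context, this is what a minimal global tone mapping operator looks like; the sketch below is the classic photographic operator of Reinhard et al., not the localized, physiologically based model of the talk:

```python
import numpy as np

def reinhard_global(lum, a=0.18):
    """Simple global photographic tone mapping (Reinhard et al. 2002):
    scale by the scene 'key' and compress luminance into [0, 1)."""
    eps = 1e-6
    log_avg = np.exp(np.mean(np.log(lum + eps)))  # log-average luminance
    scaled = a * lum / log_avg
    return scaled / (1.0 + scaled)

hdr = np.array([0.01, 1.0, 100.0, 10000.0])  # luminances over six orders
ldr = reinhard_global(hdr)
```

A local model like the one presented in the talk replaces the single global scale with a per-region adaptation state, which is what preserves detail in both dark and bright areas.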

Talk Integrating Non-Photorealistic Visual Effects into the Virtual Video Camera System

26.10.2009 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Alexander Schäfer

Visual effects turn a film into an experience. For this reason, enormous amounts of time and money are invested in current feature films. In this bachelor's thesis, such non-photorealistic effects are therefore realized at particularly low cost. The basis is the Virtual Video Camera system previously developed at the Computer Graphics Lab of TU Braunschweig. The central problems in creating a video effect are masking the desired target object in the scene and drawing the effect in a temporally coherent way over the course of the video sequence. A special feature is the possibility of temporally manipulating the video through the Virtual Video Camera system. Against this background, the four motion effects "Time Lapse", "Temporal Flare", "Particle Effect" and "Speedlines" were implemented.

Talk Making Shape from Shading Work for Real-World Images

13.10.2009 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Anita Sellent

Although shape from shading has been studied for almost four decades, the performance of most methods applied to real-world images is still unsatisfactory. This is often caused by oversimplified reflectance and projection models as well as by ignoring light attenuation and non-constant albedo. Vogel et al. address this problem by proposing a novel approach that combines three powerful concepts.

Talk Combining automated analysis and visualization techniques for effective exploration of high-dimensio

28.09.2009 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Georgia Albuquerque

Visual exploration of multivariate data typically requires projection onto lower-dimensional representations. The number of possible representations grows rapidly with the number of dimensions, and manual exploration quickly becomes ineffective or even infeasible. This paper proposes automatic analysis methods to extract potentially relevant visual structures from a set of candidate visualizations. Based on features, the visualizations are ranked in accordance with a specified user task. The user is provided with a manageable number of potentially useful candidate visualizations, which can serve as a starting point for interactive data analysis. This can effectively ease the task of finding truly useful visualizations and potentially speed up data exploration. In this paper, we present ranking measures for class-based as well as non-class-based scatterplot and parallel coordinates visualizations. The proposed analysis methods are evaluated on different datasets.

Talk Finding the fraud: Matching and comparing photographs of art

21.09.2009 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Dirk Fortmeier

Ever since people have paid money for art, there have been forgeries, and with them the desire to catch the forgers. Some forgeries are easy to spot, others only on very close inspection. In this project, methods were devised and evaluated for comparing photographs of paintings and artworks with photographs of their originals. The central problems that arose are, on the one hand, the matching (eliminating differences in viewpoint and capture conditions) and, on the other hand, the subsequent comparison.

Talk The future of driver assistance

07.09.2009 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Tobi Vaudrey

Imagine a futuristic world where cars drive themselves. In New Zealand alone, there have been about 5,000 driving fatalities over the past 10 years. Think of the reduction in accidents if cars are always looking at the road and making decisions based on their perception of the environment. The car will never get drunk, never fall asleep at the wheel, will always be attentive and will never suffer from road rage! This seems like a far-distant possibility that exists only in the minds of science fiction writers, such as those of Star Trek. But this reality is perhaps not so far away. We are still a while away from the future depicted above, but there are already smart cars that take control of vehicles, even if the driver is not aware of it. My research is on vision-based driver assistance systems; in other words, I try to create eyes for cars. In this talk I will cover the basics of vision-based environment perception: removing harmful illumination effects, stereo disparity estimation (distance), optical flow (2D motion), and scene flow (3D motion). Hopefully this brief introduction will spark some passion for driver assistance, and help save lives in the future.

Talk Multi-View Camera Calibration

31.08.2009 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Denis Bose

Camera calibration is an important topic in computer vision and also in computer graphics, e.g. in image- and video-based rendering. Although a wide range of approaches exist that can deliver good results, they differ greatly in usability and thus in the time they require. The dream remains a fully automatic system that allows one to simply set a video camera to record and directly obtain the corresponding calibration data. A few such approaches already exist; in particular, the research group around Marc Pollefeys and the work of Noah Snavely stand out here. The drawback of these methods is that the required computation time is usually very high and, above all, either grows quadratically in the number of images to be examined or suffers from drift. Especially with multi-camera systems, the effort quickly becomes very large and thousands of images have to be processed. The goal of this thesis is to design a system that is as easy to use as possible and delivers the intrinsic and extrinsic, or only the extrinsic, camera parameters for a video-based rendering system consisting of several cameras. It builds on the sparse bundle adjustment package Bundler by Noah Snavely and integrates it into the system developed at the ICG for determining the temporal offset between videos, so that unsynchronized videos ultimately yield synchronized videos with the corresponding camera parameters. In a next step, various approaches to optimize the computation time are evaluated: for example, using only every n-th image when computing the i-th image; using only images within a certain time window; or selecting suitable representatives from the complete collection of images. These approaches are subjected to a quality analysis with respect to the results of the original algorithm.

Talk Genetic Morphs

24.08.2009 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Daniel Gohlke

Optical Flow is widely used in the graphics and vision community for estimating the flow or warp field that matches one image with another, but optical flow is a highly ill-posed problem. Evolutionary or genetic algorithms are often used in computer science to solve such ill-posed problems. These are based on the principles of selection, mutation and recombination. This talk will present a framework based on genetic algorithms to estimate flow fields and tries to improve on existing optical flow algorithms by adding their results to the initial population.
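
The selection/mutation/recombination loop mentioned above can be sketched generically; the toy objective below stands in for a real flow-field fitness and is purely illustrative:

```python
import random

def genetic_minimize(fitness, dim, pop_size=40, generations=100,
                     mutation_rate=0.2, seed=1):
    """Generic genetic algorithm: selection keeps the fittest half,
    one-point crossover recombines parents, and Gaussian mutation
    perturbs children. Here used to minimize a real-valued function."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[:pop_size // 2]                 # selection
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = rng.sample(parents, 2)
            cut = rng.randrange(1, dim)
            child = p1[:cut] + p2[cut:]               # recombination
            if rng.random() < mutation_rate:          # mutation
                i = rng.randrange(dim)
                child[i] += rng.gauss(0, 0.5)
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

# toy objective: distance to the known optimum (2, -3); in the talk's
# setting the fitness would instead measure how well a flow field warps
# one image onto the other
best = genetic_minimize(lambda v: (v[0] - 2) ** 2 + (v[1] + 3) ** 2, dim=2)
```

Seeding the initial population with the output of existing optical flow algorithms, as the talk proposes, simply means replacing some of the random individuals above with precomputed candidate solutions.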

Talk Video Matting

24.08.2009 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Julia Wolf

Matting is the process of estimating a foreground object in an image and placing this object into a new surrounding, i.e., another image or video. For complex objects like hair or transparent objects, a simple binary map is insufficient for describing the object, and a more complex alpha matte with continuous values is needed. While many robust methods exist for still images, matting complete videos is still difficult and labour-intensive. In this talk an extension of the Spectral Matting method by Levin et al. towards video matting is presented. Given certain assumptions on the scene, the presented method is robust and very easy to use.
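
The continuous alpha matte mentioned above enters through the standard compositing equation C = alpha*F + (1 - alpha)*B; a minimal sketch with made-up data:

```python
import numpy as np

def composite(alpha, fg, bg):
    """Place a foreground with a continuous alpha matte over a new
    background: each pixel is the mix alpha*F + (1 - alpha)*B."""
    a = alpha[..., None]          # broadcast the matte over color channels
    return a * fg + (1.0 - a) * bg

alpha = np.array([[0.0, 0.5, 1.0]])   # one row: background, mixed, foreground
fg = np.ones((1, 3, 3))               # white foreground (H x W x RGB)
bg = np.zeros((1, 3, 3))              # black background
out = composite(alpha, fg, bg)
```

The hard part, which the talk addresses, is estimating alpha (and F) from a video in the first place; compositing itself is this one line of arithmetic.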

Talk Texture Synthesis with Transformed Neighborhoods with Application to Super-Resolution

03.08.2009 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Lea Lindemann

Texture synthesis is usually used to create a large texture image from a small texture patch, so that the output image looks similar to the small patch when looking at only a small part of the output texture image. But texture synthesis can also be used for other tasks, such as image completion, super-resolution and more. In this talk an adaptation of Kwatra et al.'s texture synthesis algorithm from "Texture Optimization for Example-based Synthesis" is presented, which allows for perspectively transformed synthesis and is applicable to super-resolution.
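
The basic matching step that such example-based methods repeat, finding the source patch closest to a target neighborhood in terms of sum of squared differences, can be sketched as follows (a brute-force illustration, not Kwatra et al.'s full optimization, and without the perspective transforms of the talk):

```python
import numpy as np

def best_patch(source, target, size):
    """Exhaustive search for the source patch with the lowest sum of
    squared differences (SSD) to a target neighborhood. Example-based
    synthesis repeats this matching step many times per output pixel."""
    h, w = source.shape
    best, best_pos = np.inf, (0, 0)
    for y in range(h - size + 1):
        for x in range(w - size + 1):
            ssd = np.sum((source[y:y + size, x:x + size] - target) ** 2)
            if ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos

rng = np.random.default_rng(0)
source = rng.random((16, 16))        # stand-in for the example texture
target = source[5:9, 7:11].copy()    # a neighborhood with a known match
```

The talk's adaptation additionally warps the neighborhoods before matching, which is what makes perspectively transformed synthesis and super-resolution possible.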

Talk Infrared Tracking: Getting Started

27.07.2009 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Andrea Keil

Human-computer interaction using gaze tracking has been an interesting research topic in recent years, but most systems use head-mounted special gear that limits the subjects significantly, is complicated to use and may be quite expensive. In my diploma thesis, I aim to develop infrared tracking applications, such as eye tracking, with an easy interface: no headgear, no high-definition camera and an easy setup, working in real time. Using the specific characteristics of infrared light with respect to the human eye, one can estimate not only the eye position in the image but also the gaze, i.e., the viewing direction. Additionally, infrared tracking can be applied to the human body, using a special invisible infrared pen to mark the body for motion capturing or the hands for gesture recognition, or to mark paper, letters, etc. for safety and encoding. Next Monday will be about our starting point: "Gaze Tracking and the related work."

Talk Modeling Human Color Perception under Extended Luminance Levels

20.07.2009 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Benjamin Meyer

Display technology is advancing quickly, with peak luminance increasing significantly, enabling high-dynamic-range displays. However, perceptual color appearance under extended luminance levels has not been studied, mainly due to the unavailability of psychophysical data. Therefore, we conduct a psychophysical study in order to acquire appearance data for many different luminance levels covering most of the dynamic range of the human visual system. These experimental data allow us to quantify human color perception under extended luminance levels, yielding a generalized color appearance model. Our proposed appearance model is efficient, accurate and invertible. It can be used to adapt the tone and color of images to different dynamic ranges for cross-media reproduction while maintaining an appearance close to human perception.

Talk SEP 2009

13.07.2009 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Georgia Albuquerque

In this colloquium we review the SEP projects of this semester. The two student groups who participated in the SEP will present their results.

Talk A Superresolution Framework for High-Accuracy Multiview Reconstruction

10.07.2009 14:00
Informatikzentrum, Seminarraum G30

Speaker(s): Bastian Goldlücke

The focus of this talk is the current state-of-the-art methods for 3D reconstruction from multiple calibrated views developed in our group. Based on a convex formulation of the 3D reconstruction problem, an initial 3D model is obtained. The geometry is further refined in a combined variational approach that jointly estimates a displacement map and a super-resolved texture for the model. The super-resolution approach to texture reconstruction recovers fine details in the texture map that surpass the resolution of the individual input images. Some of the interesting mathematical background applies quite generally and is briefly discussed. This includes convex relaxation techniques for the global optimization of certain total-variation-based energy functionals, as well as an algorithm for solving partial differential equations on surfaces.

Talk Recent Trends in Interactive Multi-Image Segmentation

06.07.2009 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Christian Lipski

Extraction of meaningful regions in images has been an active research topic for decades. Recently, coherent detection of such regions across image sequences received special attention. Especially when high-quality output is demanded by the application, e.g., in video matting, such systems are designed to incorporate user input that steers and improves the segmentation algorithm. The talk will cover a selection of recent work in this area. We will identify recent trends and see how we can relate them to other areas of research, such as interactive correspondence estimation.

Talk EuroVis 2009

29.06.2009 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Georgia Albuquerque

I will talk about two of the most interesting papers presented at the Eurographics/IEEE Symposium on Visualization, June 10-12, 2009, in Berlin, Germany. In the first paper, "Splatting the Lines in Parallel Coordinates", the authors present two methods to enhance the visibility of clusters in parallel coordinates. In the second, "Selecting good views of high-dimensional data using class consistency", the authors present a measure to evaluate scatterplots of data sets that takes class information into account.

Talk Efficient and accurate visualization of complex light sources

26.06.2009 14:00
Informatikzentrum, Seminarraum G30

Speaker(s): Stefan Kniep

We present a new method for estimating the radiance function of complex area light sources. The method is based on Jensen's photon mapping algorithm. In order to capture high angular frequencies in the radiance function, we incorporate the angular domain into the density estimation. However, density estimation in position-direction space makes it necessary to find a tradeoff between the spatial and angular accuracy of the estimation. We identify the parameters that are important for this tradeoff and investigate the typical estimation errors. We show how the large data size inherent to the underlying problem can be handled. The method is demonstrated on different automotive tail lights but can be applied to a wide range of other real-world light sources.
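
The density estimation at the heart of photon mapping can be illustrated in one dimension with a Gaussian kernel; the bandwidth h embodies the accuracy tradeoff discussed above (an illustrative sketch on synthetic samples, not the position-direction estimator of the talk):

```python
import numpy as np

def kde(samples, x, h):
    """Gaussian kernel density estimate with bandwidth h: the basic
    operation photon mapping performs over stored photons. A small h
    gives high resolution but noisy estimates; a large h smooths away
    detail -- the same tradeoff arises per domain (spatial/angular)."""
    u = (x[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(samples) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(1)
samples = rng.normal(0.0, 1.0, 5000)   # stand-in 'photons'
x = np.linspace(-3, 3, 61)
est = kde(samples, x, h=0.3)           # estimated density along x
```

Extending this to position-direction space, as the talk describes, means choosing a bandwidth per domain, which is exactly where the spatial-versus-angular accuracy tradeoff appears.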

Talk Compressed Sensing

15.06.2009 14:00
Informatikzentrum, Seminarraum G30

Speaker(s): Stephan Wenger

The dogma of signal processing maintains that a signal must be sampled at a rate at least twice its highest frequency in order to be represented without error. However, in practice, we often compress the data soon after sensing, trading off signal representation complexity (bits) for some error (consider JPEG image compression in digital cameras, for example). Clearly, this is wasteful of valuable sensing resources. Over the past few years, a new theory of "compressive sensing" has begun to emerge, in which the signal is sampled (and simultaneously compressed) at a greatly reduced rate. Compressive sensing is also referred to in the literature by the terms: compressed sensing, compressive sampling, and sketching/heavy-hitters.
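
One way to make the reduced-rate sampling concrete is greedy sparse recovery; the sketch below uses orthogonal matching pursuit on synthetic data (one common recovery method, while the compressed sensing literature more often analyzes l1 minimization; all sizes and values are illustrative):

```python
import numpy as np

def omp(A, b, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x with
    A @ x = b from m < n linear measurements by repeatedly picking the
    column most correlated with the residual."""
    n = A.shape[1]
    support, residual = [], b.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef          # orthogonalize
    x = np.zeros(n)
    x[support] = coef
    return x

rng = np.random.default_rng(3)
m, n, k = 40, 100, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
x_true = np.zeros(n)
x_true[[7, 41, 83]] = [1.5, -2.0, 0.8]         # 3-sparse signal of length 100
b = A @ x_true                                 # only 40 measurements
x_rec = omp(A, b, k)
```

The point of the example: a length-100 signal is recovered exactly from 40 incoherent measurements because it is sparse, well below the rate classical sampling theory would demand.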

Talk Fast Reconstruction of the World from Photos and Videos

08.06.2009 14:00
Informatikzentrum, Seminarraum G30

Speaker(s): Jan-Michael Frahm

In recent years, photo/video sharing web sites like Flickr and YouTube have become increasingly popular. Nowadays, terabytes of photos and videos are uploaded every day. These data survey large parts of the world throughout the different seasons, various weather conditions and all times of the day. In the talk I will present my work on the highly efficient reconstruction of 3D models from these data. It addresses a variety of the current challenges that have to be overcome to obtain a consistent 3D model from these data: estimation of the geometric and radiometric camera calibration from videos and photos, efficient robust camera motion estimation for (quasi-)degenerate estimation problems, high-performance stereo estimation from multiple views, automatic selection of correct views from noisy image/video collections, and image-based location recognition for topology detection. In the talk I will discuss the details of our real-time camera motion estimation from video using our Adaptive Real-Time Random Sample Consensus (ARRSAC) and our high-performance salient feature tracker, which simultaneously estimates the radiometric camera calibration and tracks the motion of the salient feature points. Furthermore, our technique for achieving robustness against (quasi-)degenerate data will be introduced; it makes it possible to detect and overcome data that under-constrain the camera motion estimation problem. Additionally, our optimal stereo technique for determining scene depths with constant precision throughout the scene volume will be explained; it performs scene depth estimation from a large set of views with optimal computational effort while obtaining the depth with constant precision throughout the reconstruction volume. I will also discuss our fast technique for image-based location recognition, which uses commodity graphics processors to achieve real-time performance while providing high recognition rates.
Furthermore, in the talk I will present our work on 3D reconstruction from internet photo collections, which combines image-based recognition with geometric constraints to efficiently perform the simultaneous selection of correct views and the 3D reconstruction from large collections of photos. The talk will also discuss future challenges in all of the mentioned areas. Jan-Michael Frahm is a Research Assistant Professor at the University of North Carolina at Chapel Hill. He received his Ph.D. in computer vision in 2005 from the Christian-Albrechts University of Kiel, Germany. His Diploma in Computer Science is from the University of Lübeck. Dr.-Ing. Frahm's research interests include a variety of computer vision problems. He has worked on structure from motion for single/multi-camera systems for static and dynamic scenes to create 3D models of the scene; real-time multi-view stereo to create dense scene geometry from camera images; the use of camera-sensor systems for 3D scene reconstruction with fusion of multiple orthogonal sensors; improved robust and fast estimation methods from noisy data to compensate for highly noisy measurements in various stages of the reconstruction process; high-performance feature tracking for salient image-point motion extraction; and the development of data-parallel algorithms for commodity graphics hardware for efficient 3D reconstruction.
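
The robust estimation theme behind ARRSAC can be illustrated with textbook RANSAC on a toy line-fitting problem; this is plain RANSAC only, with made-up data, and does not reproduce ARRSAC's adaptive real-time candidate scoring:

```python
import random

def ransac_line(points, iters=200, thresh=0.1, seed=0):
    """Textbook RANSAC for a 2D line y = m*x + c: repeatedly fit a model
    to a minimal sample of two points and keep the model with the most
    inliers, which makes the fit robust to gross outliers."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue                      # degenerate minimal sample
        m = (y2 - y1) / (x2 - x1)
        c = y1 - m * x1
        inliers = [(x, y) for x, y in points if abs(y - (m * x + c)) < thresh]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (m, c), inliers
    return best_model, best_inliers

# 20 points on y = 2x + 1 plus two gross outliers
pts = [(i * 0.1, 2 * (i * 0.1) + 1) for i in range(20)] + [(0.5, 9.0), (1.2, -4.0)]
(m, c), inliers = ransac_line(pts)
```

Camera motion estimation uses the same hypothesize-and-verify loop, just with point correspondences and a geometric model instead of a line; ARRSAC's contribution is deciding adaptively, in real time, how many hypotheses to evaluate and in which order.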

Talk Real-time playback and navigation of multi-view video sequences based on space-time tetrahedra

25.05.2009 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Björn Scholz

In this diploma thesis, a real-time capable player for free-viewpoint video is presented. Free-viewpoint video usually consists of huge amounts of video data captured from multiple cameras plus additional inter-view and frame information, such as dense correspondence fields between single images or camera positions, to provide smooth navigation. Neither the size nor the representation of the source data is appropriate for use in such an environment. Thus, besides the player itself, a container format combining the data, as well as compression to reduce its size, is introduced.