# Events

## Talk: Promotionsvorvortrag (doctoral pre-defense talk) by Kai Berger

27.02.2012 13:00

Informatikzentrum, Seminarraum G30

Speaker(s): *Kai Berger*

In this talk I lay out the two parts of my thesis: modeling and verification of light-matter interaction phenomena.

The first part focuses on a new setup for capturing and modeling immersed surface reflectances, while the second part introduces ellipsometry as a new way to verify the real-world applicability of existing reflectance models, which are widely considered physically plausible.

## Talk: Master's thesis talk "Inverse Augmented Reality on a Mobile Phone"

06.02.2012 13:00

Informatikzentrum, Seminarraum G30

Speaker(s): *Martin Fiebig*

In conventional augmented reality, a camera image is augmented with additional information. This information is aligned with the camera image based on sensor data; on a smartphone this can be, for example, a GPS signal, compass data, gyroscope data, or any combination of these. Printed markers or QR codes can also be used to determine the pose of the camera or to identify regions in the camera image.

In inverse augmented reality, a virtual environment is created and supplemented with real content, which in this case is to come from the camera: a person stepping in front of the video camera is to be separated from the background and inserted into a virtual scene.
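
The abstract does not say how the person is separated from the background. As a minimal illustration of the idea (not the thesis's actual method; function names and the thresholding scheme are assumptions), per-pixel differencing against a reference frame of the static background could look like this:

```cpp
#include <cstddef>
#include <cstdlib>
#include <vector>

// Minimal illustration of foreground/background separation by per-pixel
// differencing against a reference frame of the empty scene. A pixel is
// foreground if it differs from the background by more than a threshold.
std::vector<bool> foregroundMask(const std::vector<unsigned char>& frame,
                                 const std::vector<unsigned char>& background,
                                 int threshold) {
    std::vector<bool> mask(frame.size(), false);
    for (std::size_t i = 0; i < frame.size(); ++i)
        mask[i] = std::abs(int(frame[i]) - int(background[i])) > threshold;
    return mask;
}
```

Real segmentation would additionally have to cope with noise, shadows, and camera motion; this sketch only conveys the basic principle.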

## Talk: Bachelor thesis talk "Plug-in-based Rapid Prototype Development Tool"

23.01.2012 13:00

Informatikzentrum, Seminarraum G30

Speaker(s): *Timo Veit*

A number of current graphics algorithms use photos, videos, or image databases as input. A unified interface and an intuitive user interface for processing this data could be of great benefit in accelerating the development of new algorithms as well as in integrating existing ones. Such a framework requires a range of functionality, for example loading, saving, and displaying images or video frames, scribble functionality, streaming, graph-based data flow, and much more. In this thesis, a framework meeting these requirements is to be developed. For this purpose, a graph-based user interface is to be generated in which input, algorithms, and output can be connected and combined via nodes. A unified interface must be created for the nodes so that they can be combined arbitrarily.

A plug-in mechanism is to ensure that new functionality can be added as easily as possible.
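
A unified node interface of the kind described could be sketched as follows; all names here are hypothetical and not taken from the actual framework:

```cpp
#include <vector>

// Hypothetical sketch of the unified node interface the thesis calls for:
// every processing step (input, algorithm, output) exposes the same interface,
// so nodes can be connected and combined arbitrarily in a processing graph.
struct Image {
    std::vector<unsigned char> pixels;
};

class Node {
public:
    virtual ~Node() = default;
    // Consume the outputs of upstream nodes, produce this node's output.
    virtual Image process(const std::vector<Image>& inputs) = 0;
};

// Example node: inverts every pixel of its first input.
class InvertNode : public Node {
public:
    Image process(const std::vector<Image>& inputs) override {
        Image out = inputs.at(0);
        for (auto& p : out.pixels) p = 255 - p;
        return out;
    }
};
```

A plug-in mechanism would then only need to register additional `Node` subclasses, which is what makes new functionality easy to add.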

## Talk: CVMP practice talk

11.11.2011 11:11

Informatikzentrum, Seminarraum G30

Speaker(s): *Christian Lipski*

Practice talk by Christian Lipski:

"Who Carez - Making Of"

## Talk: Bachelor thesis talk by Christian Brümmer

14.10.2011 13:00

Informatikzentrum, Seminarraum G30

Speaker(s): *Christian Brümmer*

Title: "Avatar: Depth-based Multi-Camera Motion Capturing"

## Talk: Optimization in Function Spaces

12.10.2011 13:00

Informatikzentrum, Seminarraum G30

Speaker(s): *Benjamin Hell*

This talk gives a brief overview of some aspects of the wide field of optimization in function spaces, i.e., optimization problems in which the variables are functions. Although most of my research is of a rather theoretical nature with connections to applications, this talk focuses on techniques for obtaining solutions. The main emphasis therefore lies on numerical methods and programming aspects, while more complicated analytical mathematics is only presented when necessary or when it serves as an interesting side note. Overall, the talk is intended as a brief overview rather than a lecture on the mathematics of optimization.
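
As a generic illustration of the kind of problem meant here (not a formula from the talk): minimizing an integral functional over a function space leads to the Euler-Lagrange equation as the first-order optimality condition,

```latex
\min_{u \in V} \; J(u) = \int_\Omega L\big(x,\, u(x),\, \nabla u(x)\big)\, dx ,
\qquad
\frac{\partial L}{\partial u} \;-\; \nabla \cdot \frac{\partial L}{\partial (\nabla u)} \;=\; 0 .
```

Numerical methods typically discretize $u$ (e.g., by finite elements) and solve the resulting finite-dimensional problem.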

## Talk: Optimal control based image sequence interpolation

11.10.2011 13:00

Informatikzentrum, Seminarraum G30

Speaker(s): *Kanglin Chen*

The interpolation methods introduced here are mainly based on finding an appropriate optical flow field with which the objects in an initial image can be "transported" and "warped" to a certain time. To identify the optical flow field, the interpolation problem is considered in the framework of optimal control governed by the transport equation. To improve the interpolation quality, models are introduced that preserve the edges of the optical flow and locally select between forward and backward interpolation. These build on a smoothed version of total variation and on active contours for segmentation.
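
The transport-equation constraint and a smoothed total variation term can be written out as follows (generic notation, not necessarily the talk's):

```latex
% Transport of image intensity u along the optical flow field v:
\partial_t u + v \cdot \nabla u = 0 , \qquad u(\cdot, 0) = u_0 .
% Smoothed total variation regularizer on v (small \varepsilon > 0
% makes the functional differentiable while still preserving edges):
R(v) = \int_\Omega \sqrt{\lvert \nabla v \rvert^2 + \varepsilon^2}\; dx .
```

The optimal control problem then seeks the flow $v$ that transports $u_0$ close to the second input image while keeping $R(v)$ small.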

## Talk: The Portable Light Field Camera: Extended Depth of Field, Aliasing and Superresolution

26.09.2011 14:00

Informatikzentrum, Seminarraum G30

Speaker(s): *Paolo Favaro*

Portable light field cameras have demonstrated capabilities beyond conventional cameras. In a single snapshot, they enable digital image refocusing, i.e., the ability to change the camera focus after taking the snapshot, and 3D reconstruction. We show that they also achieve a larger depth of field while maintaining the ability to reconstruct detail at high resolution. More interestingly, we show that their depth of field is essentially inverted compared to regular cameras.

Crucial to the success of the light field camera is the way it samples the light field, trading off spatial vs. angular resolution, and how aliasing affects the light field. We present a novel algorithm that estimates a full-resolution sharp image and a full-resolution depth map from a single input light field image. The algorithm is formulated in a variational framework and is based on novel image priors designed for light field images. We demonstrate the algorithm on synthetic and real images captured with our own light field camera, and show that it can outperform other computational camera systems.

## Talk: Perception in Real-Time Rendering

24.08.2011 14:00

Informatikzentrum, Seminarraum G30

Speaker(s): *Michael Guthe*

Current graphics hardware is able to render scenes of striking realism in real time: ever-growing processing power, memory size, and bandwidth allow for the rendering of global illumination, realistic materials, and smooth animations that were reserved to offline rendering a few years ago. Nevertheless, consumers' expectations are increasing at almost the same rate.

While traditional approaches become more efficient due to increasing processing power, the ultimate goal of realistic-looking renderings is not a purely mathematical one. Due to the limitations of the human visual system, images that are far from realistic in a physical sense can still look real. On the other hand, seemingly minor inaccuracies can cause highly visible differences. It is therefore necessary to consider human vision when generating images for both offline and real-time rendering. Unfortunately, estimating the visual difference itself can often be more time-consuming than image generation, so special visual models and pre-computed visual differences need to be used for interactive real-time rendering. The talk introduces two such models that were successfully applied in this context. The first is tailored to the perception of complex materials, where compression to a manageable size is especially important. The second proposes an efficient pre-computed difference measure for the reduction of complex polygon models: based on the material with which the model is rendered, a visually optimized reduction is performed. Finally, an outlook on other fields that benefit from perception models is given.

## Talk: Wim Sweldens' "Building your own wavelets at home"

17.08.2011 13:00

Informatikzentrum, Seminarraum G30

Speaker(s): *Stefan Guthe*

Wavelets have been making appearances in many pure and applied areas of science and engineering, and computer graphics, with its many and varied computational problems, has been no exception to this rule. These notes attempt to motivate and explain the basic ideas behind wavelets and what makes them so successful in application areas.
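
The central idea behind Sweldens' "at home" construction is the lifting scheme: a predict step turns the odd samples into detail coefficients, and an update step turns the even samples into a coarser approximation. This sketch illustrates it for the Haar wavelet (illustrative code, not taken from the notes):

```cpp
#include <cstddef>
#include <vector>

// One lifting step of the (unnormalized) Haar wavelet. The signal s is
// replaced by its coarse approximation (pairwise means); the detail
// coefficients (pairwise differences) are returned separately.
// The input size must be even.
void haar_lift(std::vector<double>& s, std::vector<double>& detail) {
    std::size_t half = s.size() / 2;
    std::vector<double> approx(half);
    detail.resize(half);
    for (std::size_t i = 0; i < half; ++i) {
        // Predict: each odd sample is predicted by its even neighbour;
        // the prediction error becomes the detail coefficient.
        detail[i] = s[2 * i + 1] - s[2 * i];
        // Update: the even sample is corrected so the mean is preserved.
        approx[i] = s[2 * i] + detail[i] / 2.0;
    }
    s = approx;
}
```

Applying the step recursively to the shrinking approximation yields the full wavelet transform, and each step is trivially invertible by running predict and update backwards, which is exactly what makes lifting attractive.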

## Talk: How to get a Verdière-Matrix from a Permutahedron?

15.07.2011 13:00

Informatikzentrum, Seminarraum G30

Speaker(s): *Kai Lawonn*

In 1990, Colin de Verdière introduced a new graph parameter based on spectral properties of matrices associated with a graph. He showed that this number is invariant under taking minors. In addition, special values of the Verdière number allow topological properties of the graph to be inferred; for example, a graph is planar if and only if the parameter is less than 4.

In general, it is not well understood how to obtain the Verdière matrix and the associated Verdière number of a given graph. In 2008, Ivan Izmestiev generalized methods of Lovász to obtain a Verdière matrix from a polytope.

This talk gives an introductory overview of the Verdière number and shows how to obtain a Verdière matrix from a special polytope, the permutahedron.
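
For orientation, the known characterizations of the small values of the Colin de Verdière number $\mu(G)$ can be summarized as follows:

```latex
\mu(G) \le 1 &\iff G \text{ is a disjoint union of paths}, \\
\mu(G) \le 2 &\iff G \text{ is outerplanar}, \\
\mu(G) \le 3 &\iff G \text{ is planar}, \\
\mu(G) \le 4 &\iff G \text{ is linklessly embeddable in } \mathbb{R}^3 .
```

The planarity criterion mentioned in the abstract is the third line.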

## Talk: Temporal Coherence and Adapted Computations for High-quality Real-Time Rendering

08.07.2011 13:15

Informatikzentrum, IZ 161

Speaker(s): *Elmar Eisemann*

Nowadays, there is a strong trend towards rendering to higher-resolution displays and at high frame rates. This development aims at delivering more detail and better accuracy, but it also comes at a significant cost. Although graphics cards continue to evolve with an ever-increasing amount of computational power, the processing gain is counteracted to a high degree by increasingly complex and sophisticated pixel computations. For real-time applications, the direct consequence is that image resolution and temporal resolution are often the first candidates to bow to the performance constraints (e.g., although full HD is possible, PS3 and XBox often render at lower resolutions). In order to achieve high-quality rendering at a lower cost, one can exploit temporal coherence (TC). The underlying observation is that a higher resolution and frame rate do not necessarily imply a much higher workload, but a larger amount of redundancy and a higher potential for amortizing rendering over several frames. In this session, we will investigate methods that make use of this principle and provide practical and theoretical advice on how to exploit temporal coherence for performance optimization. These methods not only allow us to incorporate more computationally intensive shading effects into many existing applications, but also offer exciting opportunities for extending high-end graphics applications to lower-spec consumer-level hardware.

## Talk: PADI presentations

08.07.2011 13:15

Informatikzentrum, CIP-Pool G40

Speaker(s): *Christian Lipski*

until approx. 14:15

## Talk: PADI presentations

07.07.2011 14:00

Informatikzentrum, CIP-Pool G40

Speaker(s): *Christian Lipski*

until approx. 15:20

## Talk: Bachelor thesis final talk

17.06.2011 13:00

Informatikzentrum, Seminarraum G30

Speaker(s): *Paul Wiemann*

tbd

## Talk: From video to models of natural phenomena for graphics applications

09.06.2011 14:00

Informatikzentrum, Seminarraum G30

Speaker(s): *Peter Hall*

Natural phenomena such as fire, smoke, trees, and water are ubiquitous; it is important for computer graphics to model them. Unfortunately, modelling such phenomena is notoriously difficult computationally, so a wide variety of techniques have been developed based on particles, textures, L-systems, and plenty of user interaction.

We will consider two case studies showing that video is a potential source for acquiring models of natural phenomena, and that the models produced can be easily controlled. Specifically, we will show how three-dimensional models of moving trees can be obtained from video with very little user interaction, how these models can serve as exemplars for the automatic production of similar trees, and how the trees can be rendered and controlled in a wide variety of ways. We will also show how bodies of open water, from quiescent pools to breaking waves and waterfalls, can be modelled with a single video camera.

## Talk: Subdivision Surface Reduction

27.05.2011 13:00

Informatikzentrum, Seminarraum G30

Diplom thesis by Matthias Richter

## Talk: Reference Data Generation and Performance Analysis for Image Processing

20.04.2011 14:00

Informatikzentrum, Seminarraum G30

Speaker(s): *Daniel Kondermann*

A commonly accepted hypothesis of the image processing community is that synthetic (rendered) reference data for image processing algorithms such as optical flow and stereo disparity estimation does not help in finding out whether they work in real-world environments. On the other hand, the creation of ground truth for real scenes is a difficult task, because no other measurement technique directly yields reference data of sufficiently high accuracy.

In this talk I will discuss various ideas for ground truth generation. It turns out that this problem is closely linked to camera tracking, 3D scene reconstruction, material property estimation, and related tasks. I will show our current approaches in this field and first results we obtained with respect to the comparison of real and synthetic image data.

## Talk: Raising user experience by real-time physics simulation

18.02.2011 13:00

Informatikzentrum, Seminarraum G30

Speaker(s): *Michael Stengel*

User experience describes the impression a user has of a product or piece of software. In interactive software, applying physics simulation can improve usability and raise the joy of use. This presentation discusses two examples, one from desktop applications and one from virtual reality, both of which benefit from using physics, and shows how the simulation is integrated in each case.

## Talk: XFlow - Declarative Data Processing in XML3D

15.02.2011 14:00

Informatikzentrum, Seminarraum G30

Our digital environment has become increasingly three-dimensional, but the Web continues to be comprised of two-dimensional elements. Several approaches exist to add 3D content to the Web, but none of them has found widespread acceptance among web developers. XML3D is a new declarative approach that directly extends HTML5 to add interactive 3D graphics to the Web. The concept of XML3D follows a lightweight design and fully leverages existing web technologies, such as JavaScript, DOM, or CSS. As a joint research project of the Intel Visual Computing Institute (IVCI), the German Research Center for Artificial Intelligence (DFKI) and the Computer Graphics Lab of Saarland University, XML3D has recently spawned a W3C Incubator Group targeting the future of the 3D Web.

With XFlow, a new dataflow-driven data processing mechanism was added to the XML3D specification. It further extends the scope of graphical effects achievable with XML3D. Simulating natural effects such as water surfaces, human skin, or vegetation requires mesh and image processing capabilities. In XML3D, meshes and shaders act as sinks for dataflow processes, allowing mesh and texture data to be the result of one or more data-processing scripts. XFlow introduces an implicit encoding of such processing graphs inside of HTML, considering various usability aspects. Due to the declarative approach, transparent acceleration of the processing scripts can be achieved by offloading the actual calculations from the CPU onto available computing resources. As one of many applications, virtual avatars with realistic, animated faces embedded inside of a webpage have become possible.

## Talk: Air-Liquid Interaction with SPH

02.02.2011 11:00

Informatikzentrum, Seminarraum G30

Speaker(s): *Julian Bader*

Air bubbles are a natural phenomenon that appears in everyday life. Thus, modeling air-liquid interaction can significantly improve the realism of fluid simulations such as Smoothed Particle Hydrodynamics (SPH). However, handling the large density ratio of water and air with SPH is problematic, due to high pressure forces which result in numerical instabilities. In order to circumvent these instabilities, the proposed air-liquid approach computes the two phases separately. The interaction is modeled by employing a two-way drag force. The method is capable of simulating typical air bubble phenomena, such as merging, deformation, and volume-dependent buoyancy. To avoid explicitly representing the air surrounding the water, air particles are generated on the fly during simulation, using a heuristic that creates air bubbles only where they are likely to appear.
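
The two-way coupling can be illustrated with a minimal sketch (the coefficient `k` and the exact force model used in the talk are assumptions, not the published method):

```cpp
#include <array>

using Vec3 = std::array<double, 3>;

// Illustrative drag force between an air particle and a nearby water particle:
// the force pulls the air particle toward the water velocity. Applying +f to
// the air particle and -f to the water particle conserves total momentum,
// which is what makes the coupling "two-way".
Vec3 dragForce(const Vec3& velAir, const Vec3& velWater, double k) {
    Vec3 f{};
    for (int i = 0; i < 3; ++i)
        f[i] = k * (velWater[i] - velAir[i]);
    return f;
}
```

In a full SPH simulation this pairwise force would be accumulated over neighbouring particles of the other phase, weighted by the smoothing kernel.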

## Talk: Very Low Power Graphics (or How to design a Mobile Graphics Chip)

25.01.2011 14:00

Informatikzentrum, Seminarraum G30

Speaker(s): *Stefan Guthe*

A variety of embedded systems require at least some kind of 3D graphics processing power. However, in a mobile setting power is the main constraint rather than features or performance alone. In this talk, we explore a very simple approach to design 3D graphics hardware tailored to the two mainstream 3D graphics APIs in the mobile market, OpenGL ES and DirectX. Besides achieving a functional low power 3D graphics implementation, we also need to take both hardware and software development requirements into account.

## Talk: CUDA Expression Templates

24.01.2011 13:00

Informatikzentrum, Seminarraum G30

Speaker(s): *Paul Wiemann*

Many algorithms require vector algebra operations such as the dot product, vector norms, or component-wise manipulations. Especially for large-scale vectors, the efficiency of algorithms depends on an efficient implementation of those calculations. The calculation of vector operations benefits from the continually increasing chip-level parallelism on graphics hardware. Very efficient basic linear algebra libraries like CUBLAS make use of the parallelism provided by CUDA-enabled GPUs. However, existing libraries are often not intuitive to use, and programmers may shy away from working with cumbersome and error-prone interfaces. In this paper we introduce an approach to simplify the usage of parallel graphics hardware for vector calculus. Our approach is based on expression templates that make it possible to obtain the performance of a hand-coded implementation while providing an intuitive and math-like syntax. We use this technique to automatically generate CUDA kernels for various vector calculations. In several performance tests our implementation shows superior performance compared to CPU-based libraries and comparable results to a GPU-based library.
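
The core idea of expression templates can be shown in a minimal CPU-only sketch (the actual work generates CUDA kernels; all class names here are illustrative): `a + b` builds a lightweight expression object, and the element loop runs only on assignment, so no temporary vectors are allocated.

```cpp
#include <cstddef>
#include <type_traits>
#include <vector>

struct Expr {};  // tag so that operator+ applies only to expression types

// Node of the expression tree: evaluates lazily, element by element.
template <class L, class R>
struct Add : Expr {
    const L& l;
    const R& r;
    Add(const L& l_, const R& r_) : l(l_), r(r_) {}
    double operator[](std::size_t i) const { return l[i] + r[i]; }
    std::size_t size() const { return l.size(); }
};

struct Vec : Expr {
    std::vector<double> data;
    Vec(std::size_t n, double v) : data(n, v) {}
    double operator[](std::size_t i) const { return data[i]; }
    std::size_t size() const { return data.size(); }
    // Assignment triggers a single fused loop over the whole expression.
    template <class E>
    Vec& operator=(const E& e) {
        for (std::size_t i = 0; i < data.size(); ++i) data[i] = e[i];
        return *this;
    }
};

// 'a + b' builds an Add node instead of computing a result immediately.
template <class L, class R,
          class = std::enable_if_t<std::is_base_of<Expr, L>::value &&
                                   std::is_base_of<Expr, R>::value>>
Add<L, R> operator+(const L& l, const R& r) { return Add<L, R>(l, r); }
```

With this, `c = a + b + a;` compiles into one loop over the elements. Mapping that one fused loop to a single generated CUDA kernel is what gives the math-like syntax hand-coded performance.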

## Talk: Multi-aperture imaging systems inspired by insect compound eyes

10.01.2011 14:00

Informatikzentrum, Seminarraum G30

Speaker(s): *Alexander Oberdörster*

Miniaturization of camera modules and reduction of their manufacturing cost are important goals in many markets, such as mobile phone cameras and automotive applications. Multi-aperture imaging systems inspired by insect compound eyes promise advances in both regards. A traditional optical system is a stack of lenses that images a certain field of view onto the image sensor. A multi-aperture system consists of an array of microlenses (channels), each imaging a fraction of the total field of view onto its own small area on the image sensor, forming an array of microimages. With careful adjustment of the viewing directions of the channels, the focal lengths of the microlenses can be reduced to a fraction of the focal length of a single-aperture system. This decreases track length and increases depth of field. As each microimage spans only a small field of view, the optical systems can be simple. Because the microlenses are small -- they have a diameter of hundreds of microns and a sag of tens of microns -- they can be manufactured on wafer scale with microfabrication techniques. This makes production cost-effective and precise.

To obtain a complete image from the partial images, they are combined electronically. For an accurate alignment of all pixels, the distortion of each microimage is derived from the optical design. Each pixel is treated as a measurement of radiance and placed on the image plane according to the viewing direction of the microlens, the pixel position under the microlens, the distortion at that position, and the parallax due to object distance. The final image is generated by interpolating between the known measurements.