Events
Talk Efficient and accurate visualization of complex light sources
26.06.2009 14:00
Informatikzentrum, Seminarraum G30
Speaker(s): Stefan Kniep
We present a new method for estimating the radiance function of complex area light sources. The method is based on Jensen's photon mapping algorithm. In order to capture high angular frequencies in the radiance function, we incorporate the angular domain into the density estimation. However, density estimation in position-direction space makes it necessary to find a tradeoff between the spatial and angular accuracy of the estimation. We identify the parameters that are important for this tradeoff and investigate the typical estimation errors. We show how the large data size, which is inherent to the underlying problem, can be handled. The method is demonstrated on different automotive tail lights, but it can be applied to a wide range of other real-world light sources.
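To illustrate the kind of combined position-direction density estimation the abstract refers to, here is a minimal Python sketch. It is not the method of the talk: the k-nearest-neighbour query, the angular_weight parameter that trades spatial against angular accuracy, and the density normalisation are illustrative assumptions.

import numpy as np
from scipy.spatial import cKDTree

def radiance_density_estimate(photon_pos, photon_dir, query_pos, query_dir,
                              k=50, angular_weight=1.0):
    # Embed photons and queries in a joint space: 3D position plus the unit
    # direction vector scaled by angular_weight, which trades spatial against
    # angular accuracy (the tradeoff discussed in the abstract).
    photons = np.hstack([photon_pos, angular_weight * photon_dir])
    queries = np.hstack([query_pos, angular_weight * query_dir])
    dist, _ = cKDTree(photons).query(queries, k=k)
    radius = np.maximum(dist[:, -1], 1e-12)   # distance to the k-th neighbour
    # Density is proportional to k over the volume of the neighbourhood; the
    # exponent 5 reflects the intrinsic 3+2 dimensions of position-direction
    # space (normalisation constants are omitted in this sketch).
    return k / radius ** 5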
Talk Compressed Sensing
15.06.2009 14:00
Informatikzentrum, Seminarraum G30
Speaker(s): Stephan Wenger
The dogma of signal processing maintains that a signal must be sampled at a rate at least twice its highest frequency in order to be represented without error. However, in practice, we often compress the data soon after sensing, trading off signal representation complexity (bits) for some error (consider JPEG image compression in digital cameras, for example). Clearly, this is wasteful of valuable sensing resources. Over the past few years, a new theory of "compressive sensing" has begun to emerge, in which the signal is sampled (and simultaneously compressed) at a greatly reduced rate. Compressive sensing is also referred to in the literature by the terms: compressed sensing, compressive sampling, and sketching/heavy-hitters.
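A small, self-contained sketch of the idea in Python: a sparse signal is observed through far fewer random measurements than the Nyquist rate would demand and then recovered by a standard L1 solver (here plain iterative soft-thresholding). The problem sizes, sparsity level, and the choice of ISTA are illustrative assumptions, not part of the talk.

import numpy as np

def ista(A, y, lam=0.01, steps=2000):
    # Iterative soft-thresholding for min_x 0.5*||A x - y||^2 + lam*||x||_1,
    # a standard solver for sparse recovery from compressive measurements.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        z = x - A.T @ (A @ x - y) / L      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
    return x

# Demo: a length-200 signal with 5 non-zero entries is recovered from only
# 60 random Gaussian measurements, far below the classical sampling requirement.
rng = np.random.default_rng(0)
n, m, s = 200, 60, 5
x_true = np.zeros(n)
x_true[rng.choice(n, size=s, replace=False)] = rng.normal(size=s)
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true
x_hat = ista(A, y)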
Talk Fast Reconstruction of the World from Photos and Videos
08.06.2009 14:00
Informatikzentrum, Seminarraum G30
Speaker(s): Jan-Michael Frahm
In recent years, photo/video sharing web sites like Flickr and YouTube have become increasingly popular. Nowadays, terabytes of photos and videos are uploaded every day. These data survey large parts of the world throughout the different seasons, various weather conditions, and all times of the day. In the talk I will present my work on the highly efficient reconstruction of 3D models from these data. It addresses a variety of the challenges that have to be overcome to obtain a consistent 3D model from these data: estimation of the geometric and radiometric camera calibration from videos and photos, efficient robust camera motion estimation for (quasi-)degenerate estimation problems, high-performance stereo estimation from multiple views, automatic selection of correct views from noisy image/video collections, and image-based location recognition for topology detection.

In the talk I will discuss the details of our real-time camera motion estimation from video, which uses our Adaptive Real-Time Random Sample Consensus (ARRSAC) and our high-performance salient feature tracker; the latter simultaneously estimates the radiometric camera calibration and tracks the motion of the salient feature points. Furthermore, our technique for achieving robustness against (quasi-)degenerate data will be introduced; it allows us to detect and overcome data that under-constrain the camera motion estimation problem. Additionally, our optimal stereo technique for determining scene depths will be explained: it performs depth estimation from a large set of views with optimal computational effort while obtaining the depth with constant precision throughout the reconstruction volume. I will also discuss our fast technique for image-based location recognition, which uses commodity graphics processors to achieve real-time performance while providing high recognition rates. Furthermore, I will present our work on 3D reconstruction from internet photo collections, which combines image-based recognition with geometric constraints to efficiently perform the simultaneous selection of correct views and the 3D reconstruction from large collections of photos. The talk will also explain the future challenges in all the mentioned areas.

Jan-Michael Frahm is a Research Assistant Professor at the University of North Carolina at Chapel Hill. He received his Ph.D. in computer vision in 2005 from the Christian-Albrechts University of Kiel, Germany. His Diploma in Computer Science is from the University of Lübeck. Dr.-Ing. Frahm's research interests include a variety of computer vision problems. He has worked on structure from motion for single/multi-camera systems for static and dynamic scenes to create 3D models of the scene; real-time multi-view stereo to create dense scene geometry from camera images; the use of camera-sensor systems for 3D scene reconstruction with fusion of multiple orthogonal sensors; improved robust and fast estimation methods to compensate for highly noisy measurements in various stages of the reconstruction process; high-performance feature tracking for salient image-point motion extraction; and the development of data-parallel algorithms for commodity graphics hardware for efficient 3D reconstruction.
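Since robust estimation by random sample consensus recurs throughout the talk, here is a generic RANSAC skeleton in Python for background. It is the plain baseline that adaptive variants such as ARRSAC accelerate, not ARRSAC itself; the function names and parameters are assumptions.

import numpy as np

def ransac(data, fit_model, point_error, sample_size,
           threshold, iterations=1000, seed=0):
    # Generic RANSAC loop: repeatedly fit a model to a random minimal sample
    # and keep the model with the largest consensus (inlier) set.
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    best_model, best_inliers = None, np.zeros(len(data), dtype=bool)
    for _ in range(iterations):
        sample = data[rng.choice(len(data), size=sample_size, replace=False)]
        model = fit_model(sample)
        errors = np.array([point_error(model, d) for d in data])
        inliers = errors < threshold
        if inliers.sum() > best_inliers.sum():
            best_model, best_inliers = model, inliers
    return best_model, best_inliers

For camera motion estimation, fit_model would fit an essential or fundamental matrix to a minimal set of point correspondences and point_error would return a Sampson or reprojection error; ARRSAC additionally adapts the number of hypotheses and their evaluation to meet a real-time budget.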
Talk Real-time playback and navigation of multi-view video sequences based on space-time tetrahedra
25.05.2009 13:00
Informatikzentrum, Seminarraum G30
Speaker(s): Björn Scholz
In this diploma thesis, a real-time capable player for free-viewpoint video will be presented. Free-viewpoint video usually consists of huge amounts of video data captured from multiple cameras plus additional inter-view and per-frame information, such as dense correspondence fields between single images or camera positions, to provide smooth navigation. Neither the size nor the representation of the source data is suited to real-time playback. Thus, besides the player itself, a container format combining the data, as well as compression schemes to reduce its size, will be introduced.
Talk Moving Gradients: A Path-Based Method for Plausible Image Interpolation
18.05.2009 13:00
Informatikzentrum, Seminarraum G30
Speaker(s): Christian Linz
We describe a method for plausible interpolation of images, with a wide range of applications like temporal up-sampling for smooth playback of lower frame rate video, smooth view interpolation, and animation of still images. The method is based on the intuitive idea that a given pixel in the interpolated frames traces out a path in the source images. Therefore, we simply move and copy pixel gradients from the input images along this path. A key innovation is to allow arbitrary (asymmetric) transition points, where the path moves from one image to the other. This flexible transition preserves the frequency content of the originals without ghosting or blurring, and maintains temporal coherence. Perhaps most importantly, our framework makes occlusion handling particularly simple. The transition points allow for matches away from the occluded regions, at any suitable point along the path. Indeed, occlusions do not need to be handled explicitly at all in our initial graph-cut optimization. Moreover, a simple comparison of computed path lengths after the optimization allows us to robustly identify occluded regions and compute the most plausible interpolation in those areas. Finally, we show that significant improvements are obtained by moving gradients and using Poisson reconstruction.
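The final Poisson reconstruction step, building an image whose gradients match the moved and copied gradients, can be illustrated with a minimal least-squares sketch in Python. This is a generic stand-in under stated assumptions, not the authors' solver, and is only practical for small images without a dedicated sparse or multigrid solver.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def integrate_gradients(gx, gy, anchor=0.0):
    # Solve, in the least-squares sense, I[y, x+1] - I[y, x] = gx[y, x] and
    # I[y+1, x] - I[y, x] = gy[y, x], with one pixel pinned to fix the
    # unknown constant of integration.
    h, w = gx.shape
    idx = np.arange(h * w).reshape(h, w)
    rows, cols, vals, rhs = [], [], [], []
    eq = 0
    for y in range(h):
        for x in range(w - 1):                       # horizontal gradients
            rows += [eq, eq]; cols += [idx[y, x + 1], idx[y, x]]
            vals += [1.0, -1.0]; rhs.append(gx[y, x]); eq += 1
    for y in range(h - 1):
        for x in range(w):                           # vertical gradients
            rows += [eq, eq]; cols += [idx[y + 1, x], idx[y, x]]
            vals += [1.0, -1.0]; rhs.append(gy[y, x]); eq += 1
    rows.append(eq); cols.append(0); vals.append(1.0); rhs.append(anchor)
    A = sp.coo_matrix((vals, (rows, cols)), shape=(eq + 1, h * w)).tocsr()
    return lsqr(A, np.asarray(rhs))[0].reshape(h, w)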
Talk Best of ICCP 2009
04.05.2009 13:00
Informatikzentrum, Seminarraum G30
Speaker(s): Anita Sellent
The first International Conference on Computational Photography (ICCP'09) took place in San Francisco on March 16th and 17th.
I will give a short overview of the breadth of the conference and summarize two of the most interesting articles:
In "Light Field Superresolution", Bishop et al. show how the trade-off between angular and spatial resolution in light field cameras can be mitigated under the assumption of Lambertian scene surfaces. Thus, images of surprisingly high resolution can be reconstructed.
High dynamic range images that are composited from several low dynamic range images usually show ghosting artifacts if objects in the scene move. The article by Gallo et al. on "Artifact-free High Dynamic Range Imaging" describes how these ghosting artifacts can be avoided entirely.
Talk 3D Visualization of the Earth's Surface Through the Ages (student research project)
27.04.2009 13:30
Informatikzentrum, Seminarraum G30
Speaker(s): Lorenz Rogge
The goal of this student research project is to develop a program that visualizes the paleontological change of the globe over time. Based on geological data and map material, it should be possible to render the Earth's surface at any desired point in time as a 3D object, approximately realistically and in real time. To this end, the required data must be obtained and integrated using a suitable technology.
Talk Some thoughts on perceptual realism in computer graphics
20.04.2009 10:00
Informatikzentrum, Hörsaal 160
Speaker(s): Philip Dutré
One of our research goals is to construct a realism scale that allows the user to choose the desired level of visual realism in the final image. The higher the realism, the closer the image will appear to a photograph. If the user selects a lower realism level, the gradual decline in realism should appear in the image as smoothly as possible. Preferably, image components that do not contribute to the realistic appearance of the image should be eliminated from the computations first. Such a realism scale has not yet been constructed in computer graphics. This talk will focus on some of the recent efforts we have pursued in order to construct such a realism scale. Although we are still far away from this goal, we feel that some of the underlying mechanisms and thoughts are useful enough to warrant presentation, and will hopefully provoke further discussion.
Talk Best of Eurographics 2009
06.04.2009 14:00
Informatikzentrum, Seminarraum G30
Speaker(s): Kai Berger, Benjamin Meyer
Eurographics 2009 took place in Munich from March 30th to April 3rd. We will present some of its highlights and summarize the most important papers from the different topics.
Talk Enhancing and Experiencing Spacetime Resolution with Videos and Stills
06.04.2009 13:00
Informatikzentrum, Seminarraum G30
Speaker(s): Martin Eisemann
An algorithm by A. Gupta will be presented for enhancing the spatial and/or temporal resolution of videos. It targets the emerging consumer-level hybrid cameras that can simultaneously capture video and high-resolution stills. The technique produces a high spacetime resolution video using the high-resolution stills for rendering and the low-resolution video to guide the reconstruction and the rendering process. The presented framework integrates and extends two existing algorithms, namely a high-quality optical flow algorithm and a high-quality image-based-rendering algorithm. The framework enables a variety of applications that were previously unavailable to the amateur user, such as the ability to (1) automatically create videos with high spatiotemporal resolution, and (2) shift a high-resolution still to nearby points in time to better capture a missed event.
Talk Multi-view Reconstruction of Detailed Garment Motion
23.03.2009 13:00
Informatikzentrum, Hörsaal 160
Speaker(s): Derek Bradley
A lot of research has recently focused on the problem of capturing the geometry and motion of garments. Such work usually relies on special markers printed on the fabric to establish temporally coherent correspondences between points on the garment's surface at different times. Unfortunately, this approach is tedious and prevents the capture of off-the-shelf clothing made from interesting fabrics. In this talk I will summarize recent advances in detailed, marker-free garment capture resulting from our research at the University of British Columbia. I will first discuss our physical acquisition setup, including camera synchronization and rolling shutter compensation using stroboscopic illumination. I will then describe our garment capture approach, where we establish temporally coherent parameterizations between incomplete geometries that we extract at each timestep with a multiview stereo algorithm. Finally, I will discuss a method for reintroducing fine folds into the captured models using data-driven dynamic wrinkling. As a result of this work, we are able to capture the geometry and motion of unpatterned, off-the-shelf garments made from a range of different fabrics, with realistic dynamic folds.
Talk Graph-Theoretic Scagnostics
16.03.2009 13:00
Informatikzentrum, Seminarraum G30
Speaker(s): Georgia Albuquerque
Graph-Theoretic Scagnostics was presented at InfoVis 2005 and proposes several quality measures for scatterplots. The proposed quality measures are based on an exploratory visualization method called Scagnostics, developed by John and Paul Tukey around 20 years ago.
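For a flavour of what a graph-theoretic scagnostic looks like, here is a rough Python sketch in the spirit of the "outlying" measure, which relates unusually long minimum-spanning-tree edges to the total tree length. The thresholding rule follows the common textbook description and is an assumption, not necessarily the exact definition used in the paper.

import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def outlying(points_2d):
    # Build the Euclidean minimum spanning tree of the scatterplot points and
    # measure which share of its total length comes from unusually long edges
    # (longer than q75 + 1.5 * IQR of the edge lengths).
    mst = minimum_spanning_tree(squareform(pdist(points_2d))).toarray()
    edges = mst[mst > 0]
    q25, q75 = np.percentile(edges, [25, 75])
    long_edges = edges[edges > q75 + 1.5 * (q75 - q25)]
    return long_edges.sum() / edges.sum()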
Talk Lunar Surface Reconstruction from Single Images
09.03.2009 13:00
Informatikzentrum, Seminarraum G30
Speaker(s): Stephan Wenger
For the planned Lunar Observation Center and Islamic Museum in Mecca, Saudi Arabia, a 3-meter-sized moon globe with realistic surface relief is to be created. Since the resolution of directly measured height maps is about a factor of ten too low for that purpose, the missing detail has to be plausibly (but not necessarily exactly) estimated from photographic images of higher resolution. Usual shape-from-shading approaches fail because most moon regions have only been photographed in detail under a single lighting condition. Our heuristic algorithm nevertheless finds plausible surface normals that are integrated to yield a detailed height map of the whole moon.
Talk Image-based Viewpoint Navigation through space and time
06.03.2009 13:00
Informatikzentrum, Seminarraum G30
Speaker(s): Christian Lipski
We present an image-based rendering system to viewpoint-navigate through space and time of complex real-world, dynamic scenes. Our approach accepts unsynchronized, uncalibrated multi-video footage as input. Inexpensive, consumer-grade camcorders suffice to acquire arbitrary scenes, e.g., in the outdoors, without elaborate recording setup procedures. Instead of scene depth estimation, layer segmentation, or 3D reconstruction, our approach is based on dense image correspondences, treating view interpolation uniformly in space and time: spatial viewpoint navigation, slow motion, and freeze-and-rotate effects can all be created in the same way. Our main contributions are the simplified acquisition, the generalization to difficult scenes, and the space-time symmetric interpolation.
Talk Image/Video error metrics
02.03.2009 13:00
Informatikzentrum, Seminarraum G30
Speaker(s): Kai Berger
When the subjective quality of, e.g., compressed image or video data is to be approximated by an objective metric, popular formulae such as MSE or PSNR appear to do the job well. However, in several scenarios these standard formulae do not correspond to subjective quality evaluations. In this talk, three different types of image/video quality assessment methods are presented. Exemplary algorithms of each type will be compared to MSE/PSNR.
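For reference, the two standard formulae mentioned above are easy to state in code. The following Python sketch computes MSE and PSNR for 8-bit images; it is provided as background only and is not part of the talk.

import numpy as np

def mse(reference, test):
    # Mean squared error between two images of equal size.
    diff = reference.astype(np.float64) - test.astype(np.float64)
    return np.mean(diff ** 2)

def psnr(reference, test, peak=255.0):
    # Peak signal-to-noise ratio in dB; `peak` is the maximum pixel value.
    err = mse(reference, test)
    return np.inf if err == 0 else 10.0 * np.log10(peak ** 2 / err)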
Talk Multi-View Stereo algorithms - overview and comparison
23.02.2009 13:00
Informatikzentrum, Seminarraum G30
Speaker(s): Christian Lipski
Multi-view stereo reconstruction algorithms are a very active field in computer graphics and computer vision. Seitz et al. have established multi-view data sets with known ground truth as a basis for evaluation and comparison (http://vision.middlebury.edu/mview/). We take a closer look at the current top performers, evaluate the approaches on our own data sets, and identify cases where they fail. Possible solutions are presented and discussed.
Talk Space-time Surface Reconstruction Using Incompressible Flow
16.02.2009 13:00
Informatikzentrum, Seminarraum G30
Speaker(s): Christian Linz
We introduce a volumetric space-time technique for the reconstruction of moving and deforming objects from point data. The output of our method is a four-dimensional generalized cylinder in space-time, made up of spatial slices, each of which is a three-dimensional solid bounded by a watertight manifold. The motion of the object is described as an incompressible flow of material through time. We optimize the flow so that the distance material moves from one time frame to the next is bounded, the density of material remains constant, and the object remains compact. This formulation overcomes deficiencies in the acquired data, such as persistent occlusions, errors, and missing frames. We demonstrate the performance of our flow-based technique by reconstructing coherent sequences of watertight models from incomplete scanner data.
Talk Pixel-exact shadow-mapping based on GPU ray tracing using CUDA
06.02.2009 13:00
Informatikzentrum, Seminarraum G30
Speaker(s): Rolf Krämer
The steady advance of graphics processing units (GPUs) opens up new possibilities to break out of the standard graphics pipeline and to implement more general concepts on the GPU (GPGPU). Such an extension is particularly worthwhile for the common rasterization-based rendering approach, which appears to have reached its limits in many areas. One example is the rendering of hard shadows: common real-time approaches such as shadow maps can produce noticeable aliasing artifacts along the shadow edges. In this work, a hybrid system was designed that avoids these drawbacks and computes pixel-exact shadows. Primary rays and shading are computed with the ordinary standard pipeline, while the shadows are computed by a GPU-accelerated ray tracer implemented with NVIDIA's CUDA.
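To make the idea of a pixel-exact shadow test concrete, here is a CPU-side Python sketch of the per-pixel shadow-ray test such a hybrid renderer performs (Möller-Trumbore intersection, brute force over all triangles). The actual work uses a CUDA ray tracer with an acceleration structure; this sketch, including the bias value, is only illustrative.

import numpy as np

def ray_triangle_t(origin, direction, v0, v1, v2, eps=1e-8):
    # Möller-Trumbore test: return the hit distance t, or None on a miss.
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:                       # ray parallel to triangle plane
        return None
    inv_det = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv_det
    return t if t > eps else None

def in_shadow(point, light_pos, triangles, bias=1e-4):
    # Cast a shadow ray from the shaded surface point towards the light and
    # test it against every triangle (no acceleration structure).
    to_light = light_pos - point
    dist = np.linalg.norm(to_light)
    direction = to_light / dist
    origin = point + bias * direction        # offset to avoid self-shadowing
    for v0, v1, v2 in triangles:
        t = ray_triangle_t(origin, direction, v0, v1, v2)
        if t is not None and t < dist:
            return True
    return False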
Talk Supporting the Visual Analysis of High-Dimensional Data with Class-Based Projection Quality Measures
30.01.2009 13:00
Informatikzentrum, Hörsaal 160
Speaker(s): Andrada Tatu
Increasing dimensionality and growing data volumes require effective exploration techniques that give the user insight into patterns in the data. For visual analysis, high-dimensional data must be projected onto low-dimensional views. There are various approaches to generate low-dimensional representations from high-dimensional data and to evaluate them. This work presents methods for obtaining good projections of high-dimensional data. Concepts for finding linear and non-linear projections are presented and tested on real data. The core of the work is to use axis-parallel and non-axis-parallel projections to find the dimensions and subspaces of the data that best represent the data structure. To evaluate the projections, projection quality measures are introduced which, in contrast to existing methods, take the density distribution of the data into account. The advantage of this method is that relationships are found not only between two dimensions, but also between several dimensions.
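As a toy illustration of a class-based projection quality measure, the following Python sketch scores a 2D projection by how often a point's nearest neighbours share its class label. It is not one of the measures proposed in the thesis (which take the density distribution into account); the neighbourhood size and the scoring rule are assumptions.

import numpy as np
from scipy.spatial import cKDTree

def knn_class_purity(projection_2d, labels, k=5):
    # Score a 2D projection by the average fraction of each point's k nearest
    # neighbours that carry the same class label (1.0 = perfectly separated).
    _, nn = cKDTree(projection_2d).query(projection_2d, k=k + 1)
    neighbour_labels = np.asarray(labels)[nn[:, 1:]]   # drop the point itself
    return float((neighbour_labels == np.asarray(labels)[:, None]).mean())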
Talk Computational Vision for Graphics
26.01.2009 14:00
Informatikzentrum, Seminarraum G30
Speaker(s): Oliver Grau
Talk Imperfect Shadow Maps for Efficient Computation of Indirect Illumination
26.01.2009 13:00
Informatikzentrum, Seminarraum G30
Speaker(s): Benjamin Meyer
We present a method for interactive computation of indirect illumination in large and fully dynamic scenes based on approximate visibility queries. While the high-frequency nature of direct lighting requires accurate visibility, indirect illumination mostly consists of smooth gradations, which tend to mask errors due to incorrect visibility. We exploit this by approximating visibility for indirect illumination with imperfect shadow maps - low-resolution shadow maps rendered from a crude point-based representation of the scene. These are used in conjunction with a global illumination algorithm based on virtual point lights enabling indirect illumination of dynamic scenes at real-time frame rates. We demonstrate that imperfect shadow maps are a valid approximation to visibility, which makes the simulation of global illumination an order of magnitude faster than using accurate visibility.
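As a rough illustration of the core idea, the following Python sketch splats a sparse point sampling of the scene into a low-resolution depth buffer as seen from one light. It is a toy CPU version for intuition only; the point-data layout and resolution are assumptions, and the paper's GPU rendering and virtual-point-light gathering are not shown.

import numpy as np

def imperfect_shadow_map(points_light_space, resolution=64):
    # points_light_space: N x 3 array of scene sample points already projected
    # into the light's clip space, x and y in [-1, 1], z = depth from the light.
    depth = np.full((resolution, resolution), np.inf)
    xy = ((points_light_space[:, :2] * 0.5 + 0.5) * (resolution - 1)).round().astype(int)
    xy = np.clip(xy, 0, resolution - 1)
    # Keep the nearest depth per texel; the sparse point set leaves holes,
    # which is exactly the "imperfection" the method tolerates.
    np.minimum.at(depth, (xy[:, 1], xy[:, 0]), points_light_space[:, 2])
    return depth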
Talk Unwrap Mosaics: a new representation for video editing
12.01.2009 13:00
Informatikzentrum, Seminarraum G30
Speaker(s): Anita Sellent
Though photo editing is very common today, extending such editing tasks to video is still very tedious work. In their SIGGRAPH 2008 paper, Rav-Acha et al. present a new way to edit uncalibrated monocular video footage. The basic idea of the approach is to represent a video sequence as a 2D mosaic accompanied by a mapping from the mosaic to the frames of the video and an occlusion map. Editing operations can then be performed directly on the mosaic and recomposited with the original video. In this colloquium, the ideas and details necessary for the determination of the mosaic, the mapping, and the occlusion map are discussed.
Talk Tomographic Reconstruction of Gas Flows in the Presence of Occluders
15.12.2008 13:30
Informatikzentrum, Seminarraum G30
Speaker(s): Kai Berger
Nowadays, non-stationary and time-varying gas flows, for example the heated air above a camping stove, can be captured and tomographically reconstructed by making use of the Background-Oriented Schlieren (BOS) method. In this method a high-frequency noise pattern, for example generated by wavelet noise, is placed behind the volume under observation, and the per-pixel deflections in the input image caused by the varying refractive index of the gas volume are measured. These deflections, which can be computed by optical flow methods, can be integrated into a tomographically consistent refractive-index volume by solving an equation system for the 3D deflection vectors and integrating them, e.g., by Poisson integration.
However, when the interaction of such a gas with occluding objects is examined, for example when a marshmallow is moved above the camping stove, different alterations to the existing tomographic reconstruction algorithm have to be applied. In this talk the existing method is introduced and the problems occurring with occluding objects are stated. Afterwards, alterations to the stages of this method are proposed and evaluated.
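The first step, measuring per-pixel deflections between the undistorted and distorted views of the background pattern, is typically done with a dense optical flow method. A minimal OpenCV sketch in Python is shown below; the file names and the Farnebäck parameters are placeholders, and the subsequent tomographic integration is not shown.

import cv2

# Reference view of the noise pattern (no gas flow) and the distorted view
# seen through the hot air; file names are placeholders.
reference = cv2.imread("pattern_reference.png", cv2.IMREAD_GRAYSCALE)
distorted = cv2.imread("pattern_distorted.png", cv2.IMREAD_GRAYSCALE)

# Dense optical flow approximates the per-pixel ray deflections that the
# tomographic reconstruction later integrates into a refractive-index volume.
# Arguments: pyramid scale, levels, window size, iterations, poly_n,
# poly_sigma, flags.
flow = cv2.calcOpticalFlowFarneback(reference, distorted, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)
deflection_x, deflection_y = flow[..., 0], flow[..., 1]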
Talk The OMNI Display: Using a Large Peripheral Display for Workgroup Awareness in Distributed Groups
05.12.2008 14:00
Informatikzentrum, Seminarraum G30
Speaker(s): Maryam Mustafa
The initiation of interaction in face-to-face work environments is a gradual process and takes place in a rich information landscape of awareness and social signals. This gradual approach to interaction is missing from most online messaging systems. In this talk I will discuss a prototype system called the Open Messenger Notification and Interaction (OMNI) display, which uses a projected peripheral display in the architectural space of the user to provide dynamic, real-time awareness information in ways that are both rich and subtle. OMNI has been designed to use people's natural ability to absorb information in the periphery without being distracted from their primary task. OMNI accomplishes this by using motion and color to subtly provide several types of information about online contacts. I will discuss our approach to capturing and presenting relevant information for awareness of the surroundings, facilitating interaction, and creating an online collaborative environment.
Talk Exhaustive visual search for information in multi-dimensional data-sets
05.12.2008 13:00
Informatikzentrum, Seminarraum G30
Speaker(s): Georgia Albuquerque, Martin Eisemann
The goal of this research project is to develop and evaluate a fundamentally new approach to exhaustively search for, and interactively characterize, any non-random mutual relationship between attribute dimensions in general data sets. To be able to systematically consider all possible attribute combinations, we propose to apply image analysis to visualization results in order to automatically pre-select only those attribute combinations featuring non-random relationships. To characterize the found information and to build mathematical descriptions, we rely on interactive visual inspection and visualization-assisted interactive information modeling. This way, we intend to discover and explicitly characterize all information implicitly represented in unbiased sets of multi-dimensional data points.
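A drastically simplified stand-in for the proposed pre-selection step might look like the following Python sketch, which scores every pairwise attribute combination and keeps the most promising ones for interactive inspection. The project proposes image-analysis-based measures computed on rendered visualizations; the correlation score used here is only a placeholder.

import itertools
import numpy as np

def preselect_pairs(data, score, keep=10):
    # data: N x D matrix (rows = samples, columns = attribute dimensions).
    # Score every 2D attribute combination and keep the highest-scoring pairs
    # for subsequent interactive visual inspection.
    scored = [(score(data[:, i], data[:, j]), i, j)
              for i, j in itertools.combinations(range(data.shape[1]), 2)]
    return sorted(scored, reverse=True)[:keep]

# Placeholder score: absolute Pearson correlation between two attributes.
correlation_score = lambda a, b: abs(np.corrcoef(a, b)[0, 1])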