Computer Graphics
TU Braunschweig


Talk BA-Talk: Optimizing the Object-Median Split for Ray Tracing Applications

11.11.2013 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Jakob Garbe

Acceleration data structures are the most efficient way to speed up ray tracing. One of the most common acceleration data structures is the Bounding Volume Hierarchy (BVH). Its performance is highly dependent on the partitioning criterion used, with the Surface Area Heuristic (SAH) being the most prominent. Memory-saving approximations of the BVH, like the implicit object space partitioning scheme by Eisemann et al., however, require an object-median split during construction, which is known to be inferior to other schemes.

In this work, several partitioning approaches are developed and tested that try to improve the classic object-median split, e.g. by using different projections or by sorting along a Morton curve.
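Sorting along a Morton curve, as mentioned above, amounts to interleaving the bits of quantized primitive centroids so that sorting by the resulting code yields a spatially coherent order. A minimal sketch using a common bit-spreading trick (function names are illustrative, not code from the thesis):

```python
def part1by2(n):
    """Spread the 10 low bits of n so there are two zero bits between each."""
    n &= 0x3FF
    n = (n | (n << 16)) & 0x030000FF
    n = (n | (n << 8)) & 0x0300F00F
    n = (n | (n << 4)) & 0x030C30C3
    n = (n | (n << 2)) & 0x09249249
    return n

def morton3d(x, y, z):
    """Interleave three 10-bit integer coordinates into a 30-bit Morton code."""
    return (part1by2(z) << 2) | (part1by2(y) << 1) | part1by2(x)
```

Primitives would then be sorted by the Morton code of their quantized bounding-box centroids before the median split.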

Talk BA-Talk: Shortened Shadow Rays

04.11.2013 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Leslie Wöhler

Shadow computation is a major task in today's rendering pipelines. Ray tracing can compute exact shadows by computing the visibility between two scene points, which is usually done by casting a shadow ray. This BA talk gives information about the concept of “shortened shadow rays”, a technique to speed up shadow computations in ray tracing systems.
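As a rough sketch of the underlying idea (not the shortened-shadow-ray technique itself): a shadow ray only needs to find any occluder on the segment between the shading point and the light, so it can stop at the first hit. Spheres stand in for scene geometry here as a simplifying assumption:

```python
import math

def in_shadow(occluders, point, light_pos, eps=1e-4):
    """Cast a shadow ray from a surface point toward a point light.

    occluders: list of spheres given as (center, radius).
    Returns True as soon as any occluder blocks the segment to the light.
    """
    d = [l - p for l, p in zip(light_pos, point)]
    dist = math.sqrt(sum(c * c for c in d))
    d = [c / dist for c in d]                            # unit direction
    origin = [p + eps * c for p, c in zip(point, d)]     # avoid self-hits
    t_max = dist - 2 * eps       # occluders beyond the light cast no shadow
    for center, radius in occluders:
        oc = [o - c for o, c in zip(origin, center)]
        b = sum(x * y for x, y in zip(oc, d))
        c2 = sum(x * x for x in oc) - radius * radius
        disc = b * b - c2
        if disc < 0.0:
            continue                                     # ray misses sphere
        t = -b - math.sqrt(disc)                         # nearest hit
        if eps < t < t_max:
            return True          # any single hit suffices for a shadow ray
    return False
```

Unlike primary rays, no nearest-hit search or shading data is needed, which is what makes shadow rays a natural target for further shortening optimizations.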

Talk Image-Based Approaches for Photo-Realistic Rendering of Complex Objects

27.09.2013 14:00
Informatikzentrum, Seminarraum G30

Speaker(s): Anna Hilsmann

One principal aim in computer graphics is the achievement of photorealism. In this talk, I will propose methods for image-based rendering and modification of objects with complex appearance properties, concentrating on the example of clothes. With physical simulation methods, rendering of clothes is computationally demanding because of complex cloth drapery and shading. In contrast, the proposed methods use real images, which capture these properties and serve as appearance examples to guide complex animation or texture modification processes. Texture deformation and shading are extracted as image warps both in the spatial and in the intensity domain. Based on these warps, a pose-dependent image-based rendering method synthesizes new images of clothing from a database of pre-recorded images. For rendering, the images and warps are parameterized and interpolated in pose-space, i.e. the space of body poses, using scattered data interpolation. To allow for appearance changes, an image-based retexturing method is proposed, which exchanges the cloth texture in an image while maintaining texture deformation and shading properties, without knowledge of the scene geometry and lighting conditions.

Altogether, the presented approaches shift computational complexity from the rendering to an a-priori training phase. The use of real images and warp-based extraction of deformation and shading allow a photo-realistic visualization and modification of clothes, including fine details, without computationally demanding simulation of the underlying scene and object properties.

Talk PhD pre-defense talk: Benjamin Meyer

02.08.2013 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Benjamin Meyer

Talk MA talk: Detail Hallucinated Image Interpolation

13.06.2013 14:00
Informatikzentrum, Seminarraum G30

Speaker(s): Alexander Lerpe

Image interpolation and warping are the basis of most image-based rendering techniques, where real-world footage is used as input instead of complex 3D geometry. The difficulty is that visible artifacts appear in the interpolated image if correspondences between images are not correctly estimated beforehand. One simple solution would be to downsample the images until the errors disappear, but this is not practical, as most of the details disappear as well.

In this talk, a new algorithm is presented that is orthogonal to classic image-interpolation algorithms in the sense that not the correspondences are corrected, but the input images are changed to match the correspondences and create a more plausible interpolation result. The algorithm is versatile: it can be used to add details that prevent interpolation artifacts, or to add details to input images that were captured at a lower resolution or with accidentally wrong camera settings.

Talk PhD defense talk: Virtual Video Camera: a System for Free Viewpoint Video of Arbitrary Dynamic Scenes

05.06.2013 14:00
Informatikzentrum, IZ 161

Speaker(s): Christian Lipski

Talk BA talk: The Splitted Grid - An acceleration structure for ray tracing

08.04.2013 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Marc A. Kastner

Uniform grids are commonly employed acceleration structures for speeding up ray tracing. Unfortunately, the classic grid suffers from several problems, such as a high memory footprint, the inability to handle clustered geometry (the teapot-in-a-stadium problem), and costs that scale linearly with its resolution. However, its construction and traversal times are slightly faster than those of Bounding Volume Hierarchies (BVHs) or other hierarchical approaches. Extensions of the uniform grid are hierarchical or nested grids, which try to overcome some of these drawbacks.

This thesis presents a new acceleration data structure for ray tracing that combines the best of previous methods: it merges common approaches of hierarchical space-partitioning data structures with nested grids. The algorithm is first introduced in a naive way that nests one-dimensional grids in a hierarchy. Due to its structure, it can be traversed with techniques similar to the 3D-DDA used for uniform grids. Heuristic approaches to adaptively choose parameters such as resolution and hierarchy depth are evaluated. Furthermore, an additional technique to reduce the number of duplicate references by using a loose variant is presented. Both approaches are implemented on the CPU, evaluated in detail on common scenes, and compared to other state-of-the-art acceleration data structures.
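The 3D-DDA traversal mentioned above steps a ray from cell to cell by always advancing along the axis whose next boundary is closest. A minimal single-level sketch in the style of Amanatides and Woo (all names illustrative; the origin is assumed to lie inside the grid and direction components to be non-zero):

```python
def dda_cells(origin, direction, grid_res, cell_size=1.0):
    """Enumerate grid cells pierced by a ray (3D-DDA style traversal)."""
    cell = [int(o // cell_size) for o in origin]
    step = [1 if d > 0 else -1 for d in direction]
    # Ray parameter t at the next cell boundary on each axis.
    t_max = [(((c + (s > 0)) * cell_size) - o) / d
             for c, s, o, d in zip(cell, step, origin, direction)]
    # Increment of t between successive boundaries per axis.
    t_delta = [abs(cell_size / d) for d in direction]
    while all(0 <= c < r for c, r in zip(cell, grid_res)):
        yield tuple(cell)
        axis = t_max.index(min(t_max))   # advance toward the closest boundary
        cell[axis] += step[axis]
        t_max[axis] += t_delta[axis]
```

In a hierarchy of nested grids, reaching a refined cell would recurse into the child grid with the same stepping scheme.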

Talk Bild - Wahrnehmung (Image - Perception)

05.04.2013 18:00
Kunstgeschichtliches Seminar und Sammlung, Universität Göttingen

Invited talk at the conference Digitalbild "Euphorie und Angst"

Talk MA talk: Reconstructing dynamic models from video sequences

05.11.2012 13:45
Informatikzentrum, Seminarraum G30

Speaker(s): Andreas König

In computer graphics, a variety of methods are known that compute a 3D model of a scene from 2D images. These methods are able to reconstruct static scenes captured with synchronized cameras. There also exists a method to reconstruct the position as well as the motion of points in a dynamic scene. This approach simultaneously estimates depth, orientation, and 3D motion of the scene from the input views: it computes a sparse set of patches from the input views, which are iteratively expanded along the object surfaces. The result is a quasi-dense surface representation of the dynamic scene. However, the method only considers a small temporal section of frames from the video sequences.

The main contribution of this thesis is a consistent reconstruction over the whole video sequences of the scene. To this end, the approach is modified to consider the complete video sequences in the reconstruction process. The geometry and motion of the scene are transferred to a model that represents the captured scene. Furthermore, methods to cope with occlusion and disocclusion of objects throughout the scene are presented. The dynamic model is later used for free-viewpoint rendering of the scene. In addition, a new patch expansion approach, based on a domain transform filter for image and video processing, is presented to generate a denser scene representation. Finally, the approach is evaluated on synthetic and real-world scenes captured with consumer video cameras.

Talk BA talk: On-the-fly Displacement Mapping Algorithms and Architecture Consideration for Raytracing

05.11.2012 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Rashid Kratou

Rendering images with photorealistic quality is a major challenge in the field of computer graphics. To achieve visual richness of geometry, usually a large amount of geometric data needs to be stored and processed. An alternative is displacement mapping, where a low-polygon model defines the basic structure while a special texture, the displacement map, adds the geometric details. In contrast to techniques such as bump, normal, or parallax mapping, actual geometry is created, and thus tessellation of geometry is fundamental. Advantages of displacement mapping are the reduced memory footprint and its ability to be applied relative to a given viewpoint, allowing adaptive creation of geometry only in areas where it is actually needed to improve the visual result.
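As a rough illustration of the basic operation (not the thesis's algorithm), displacement mapping offsets each vertex of a tessellated mesh along its normal by a height sampled from the displacement map. The nearest-neighbour lookup and scalar height map below are simplifying assumptions:

```python
def displace(vertices, normals, uvs, height_map, scale=0.1):
    """Offset each vertex along its normal by a height sampled from a map.

    height_map is assumed to be a 2D list of values in [0, 1], sampled
    with nearest-neighbour lookup at the vertex's (u, v) coordinates.
    """
    h = len(height_map)
    w = len(height_map[0])
    out = []
    for (x, y, z), (nx, ny, nz), (u, v) in zip(vertices, normals, uvs):
        # Nearest-neighbour texture lookup of the displacement height.
        d = scale * height_map[min(int(v * h), h - 1)][min(int(u * w), w - 1)]
        out.append((x + d * nx, y + d * ny, z + d * nz))
    return out
```

The difficulty for ray tracing, as the abstract notes, is that such displaced geometry invalidates any precomputed acceleration structure built over the base mesh.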

While commodity GPU architectures incorporate a tessellation unit in the rasterization pipeline, which allows a straightforward integration of displacement mapping techniques in real-time applications, integrating on-the-fly tessellation into ray tracing systems is much harder, mainly because these systems usually rely on acceleration structures for high performance. Previous algorithms that incorporate tessellation and displacement mapping into ray tracing systems exist, but none of them achieve interactive or real-time performance.

The main goal of the thesis is to analyze previous algorithms regarding their bottlenecks and their suitability for many-core architectures (e.g. GPUs).

Talk BA talk: Reconstruction of Camera Parameters from Asynchronous Recordings of Dynamic Scenes

29.10.2012 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Matthias Überheide

In computer graphics, various methods are known that use the images of several cameras to reconstruct the recorded scene. However, the usual methods must either discard moving scene elements or require synchronized cameras.

There exists a method for reconstructing dynamic scenes from unsynchronized cameras that does not discard any scene elements. For the reconstruction, however, only a very short temporal section of a few frames from the input videos is used. For each reconstructed point, a motion is estimated in addition to its position and assumed to be linear within a small time window, so that intermediate positions can be computed.
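The linear motion assumption can be sketched as constant-velocity extrapolation within a small time window (an illustrative simplification, not code from the thesis):

```python
def point_at(position, velocity, t0, t):
    """Linearly extrapolate a reconstructed point's position to time t.

    Within a small temporal window around t0, the point is assumed to
    move with constant velocity, so intermediate positions follow
    directly from position + velocity * (t - t0).
    """
    return tuple(p + v * (t - t0) for p, v in zip(position, velocity))
```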

In this thesis, this method is extended so that it can be applied over the length of the entire video. To this end, a scene model is developed that additionally models the motion of the scene and keeps motions and subsequent positions consistent. The motion is modeled implicitly through the subsequent positions. Furthermore, it is assumed that the motion of the points to be reconstructed can be represented, within sufficiently small temporal neighborhoods, by a suitably parameterizable function, for example a straight line. It is also shown that all parameters of a scene can be optimized with this model.

Finally, the method is evaluated on seven synthetic scenes.

Talk BA talk: Volume modeling and editing in Blender

01.10.2012 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Julius Saretzki

This thesis describes an approach to editing volumetric data, like astronomical nebulae or other semi-transparent phenomena, using Blender. Traditional modeling works on surfaces and solids, but it does not perform well on semi-transparent datasets such as nebulae. The presented approach therefore transforms nebulae to solids and back: for the editing operations, a Marching Cubes algorithm transforms the semi-transparent dataset into a surface representation that Blender is able to edit. After editing, the resulting data is transformed back into a voxel representation. Since the surface does not contain all of the original data, the free spaces are approximated using a second algorithm, Diffusion Curves, which is extended to operate on 3D data as well.
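The first conversion step in such a pipeline can be illustrated by a naive thresholding sketch; a real implementation would run Marching Cubes on the binary mask to obtain the editable mesh for Blender. Function names and the constant-density re-voxelization are simplifying assumptions:

```python
def volume_to_solid(volume, iso=0.5):
    """Threshold a semi-transparent density volume into a binary solid.

    volume is a nested z/y/x list of densities in [0, 1]; every voxel at
    or above the iso value becomes part of the solid.
    """
    return [[[1 if v >= iso else 0 for v in row] for row in slab]
            for slab in volume]

def solid_to_volume(solid, density=1.0):
    """Re-voxelize: map the edited solid back to a constant-density volume."""
    return [[[density * v for v in row] for row in slab]
            for slab in solid]
```

The lost interior variation is what the extended Diffusion Curves step would then have to approximate.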

Talk BA talk: "Perceptually realistic rendering of glare sources"

24.09.2012 09:15
Informatikzentrum, Seminarraum G30

Speaker(s): Sören Petersen

Nowadays, virtually created scenes are required to look as realistic as possible. Computer-based illustration of light, especially in nighttime situations, is a challenging topic in computer graphics. In nighttime situations, glare effects occur in addition to the usual lighting and need to be applied manually. Based on the current state of research, the task of this bachelor thesis is to implement such a glare effect running at real-time frame rates.

Talk BA talk: Pattern- and Markerless RGB and Depth Sensor Calibration

21.09.2012 10:00
Informatikzentrum, Seminarraum G30

Speaker(s): Dennis Franke

With the help of Structure from Motion (SfM) algorithms, the camera motion and a rough estimate of the scene geometry can be recovered from a video recording. Such approaches are an integral part of state-of-the-art commercial software. Recently, affordable depth sensors, which estimate the spatial depth for each pixel, have become available on the consumer market. There have already been approaches to jointly calibrate RGB and depth sensors, but they require special patterns or markers. The goal of this bachelor thesis is to develop an approach that calibrates RGB and depth sensors without the help of a marker or pattern.

Talk PhD defense talk: Measuring, Modeling and Verification of Light-Matter-Interaction Phenomena

12.09.2012 09:30
Informatikzentrum, IZ 161

Speaker(s): Kai Berger

Talk 3D reconstruction - from science to art and back again

10.08.2012 11:00
Informatikzentrum, Seminarraum G30

Speaker(s): Rahul Nair

"So you want a CG T-Rex dancing around in this footage of your living room? Fine - give me some depth information and I can do that for you..." Visual effects are ubiquitous in the film and broadcast industry, even in productions where this is not directly apparent. The core element in combining VFX with shot footage is knowledge of scene geometry that is plausible in some sense. In computer vision, on the other hand, knowledge of metrically correct scene geometry solves the inverse problem created by the imaging system. This can then be used to quantitatively benchmark almost any class of computer vision algorithm.

In my talk I will present the various ongoing research activities in our group related to 3D reconstruction. The topics will range from "cheap" 3D reconstruction using sensor fusion systems to 3D conversion of monoscopic sequences. Additionally, some work on the evaluation of the obtained 3D scans will be shown in order to assess the scan quality. Furthermore, I will describe how we are applying these techniques to the post-production pipeline as well as to performance analysis.

Talk BA talk: "Audio Resynthesis on the Dancefloor: A Music Structural Approach"

23.07.2012 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Jan-Philipp Tauscher

This bachelor thesis aims to improve an existing audio resynthesis method and to open up new fields of application for it. The existing method finds possible jump points within a piece of music and, by concatenating segments of the piece, synthesizes a new one that satisfies given constraints. Within this thesis, beat tracking is to be used to achieve a more reliable matching of rhythmic structures during the search for jump points. Building on this, a user interface is to be developed that enables the method to be used in live performances, with particular consideration given to the conception, teaching, and rehearsal of dance choreographies. The quality of the results is to be demonstrated with examples and comparisons against the original method.

Talk BA talk: "3D Reconstruction of Solar Events"

28.06.2012 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Thomas Schrader

Solar storms triggered by coronal mass ejections reach the Earth again and again. To study these natural phenomena more closely, several observation satellites have been deployed over the past decades, for example SDO. Most of them only observe the side of the Sun that is visible from Earth. In addition, there is the STEREO duo, which moves around the Sun on Earth's orbit, ahead of and behind the Earth. Since no convincing visualization of the Sun exists yet, this thesis deals with the reconstruction of a three-dimensional voxel volume from SDO and STEREO images and its rendering from arbitrary viewpoints using a ray caster. This gives a better overview of the Sun and its activity. The result is plausible to the eye, but not necessarily physically correct.
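The rendering step of such a voxel ray caster can be illustrated by standard front-to-back emission-absorption compositing along each viewing ray (an illustrative sketch with scalar emission, not the thesis implementation):

```python
import math

def composite(densities, colors, step=1.0):
    """Front-to-back emission-absorption compositing along one ray.

    densities: per-sample extinction coefficients along the ray.
    colors: per-sample scalar emission values.
    Returns the accumulated color and the remaining transmittance.
    """
    result = 0.0
    transmittance = 1.0
    for d, c in zip(densities, colors):
        alpha = 1.0 - math.exp(-d * step)   # opacity of this ray segment
        result += transmittance * alpha * c
        transmittance *= 1.0 - alpha
        if transmittance < 1e-4:            # early ray termination
            break
    return result, transmittance
```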

Talk EEG Analysis of Implicit Human Visual Perception

27.04.2012 10:30
Informatikzentrum, Seminarraum G30

Speaker(s): Maryam Mustafa

Image Based Rendering (IBR) allows interactive scene exploration from images alone. However, despite considerable development in the area, one of the main obstacles to better quality and more realistic visualizations is the occurrence of visually disagreeable artifacts. In this talk we present a methodology to map out the perception of IBR-typical artifacts. This work presents an alternative to traditional image and video quality evaluation methods by using an EEG device to determine the implicit visual processes in the human brain. Our work demonstrates the distinct differences in the perception of different types of visual artifacts and the implications of these differences.

Talk PhD pre-defense talk: Georgia Albuquerque

13.04.2012 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Georgia Albuquerque

PhD pre-defense talk by Georgia Albuquerque.

Talk PhD pre-defense talk: Christian Lipski

26.03.2012 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Christian Lipski

PhD pre-defense talk by Christian Lipski.

Talk MA talk: "Folding Textures"

19.03.2012 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Thorben Schulze

Textures have been used in computer graphics for more than 30 years to fake model complexity that is not reflected in the geometry itself. Usually, static texture coordinates are assigned to the vertices of a model; during rendering, they are interpolated and then used to look up values in the texture map.

Today's graphics APIs do offer rudimentary means of modifying these coordinates, but this is essentially limited to simple transformations that can be described by a homography. More complex functions are not supported.

The goal of this thesis is to generalize the transformation of the texture-coordinate assignment on a triangle mesh in order to render more complex effects. In the simplest case, this means applying arbitrary transformation functions, but it also includes more complex operations, such as allowing jumps in the texture or creating textures folded over one another. This would make it possible, for example, to create effects such as overlapping scales that adapt to the motion of the model, strain visualizations, or water effects over an arbitrary surface, all without modifying the geometry itself.

Talk BA talk: The capturing of turbulent gas flows using Kinect

05.03.2012 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Mark Albers

Talk on the bachelor thesis "The capturing of turbulent gas flows using Kinect": the depth maps of several Kinects are evaluated, and the minimal changes induced by a gas captured by all Kinects are used to reconstruct that gas.

Talk PhD pre-defense talk: Kai Berger

27.02.2012 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Kai Berger

In this talk I outline the parts of my thesis, that is, the modeling and verification of light-matter interaction phenomena.

The first part focuses on a new setup for capturing and modeling immersed surface reflectances, while the second part introduces ellipsometry as a new way to verify the real-world applicability of existing reflectance models, which are widely considered physically plausible.

Talk MA talk: Inverse Augmented Reality on a Mobile Phone

06.02.2012 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Martin Fiebig

In conventional augmented reality, a camera image is augmented with additional information. This information is aligned with the camera image based on sensor data; on a smartphone this can be, for example, a GPS signal, compass data, gyroscope data, or any combination of these. Printed markers or QR codes can also be used to determine the pose of the camera or regions in the camera image.

In inverse augmented reality, a virtual environment is created and complemented with real content. In this case, the real content is to come from the camera: a person stepping in front of the video camera is to be separated from the background and inserted into a virtual scene.