Computer Graphics
TU Braunschweig

Events


Talk SEP-Abschluss: Massively distributed collaborative crowd input system for dome environments

31.08.2020 13:00
Dome (Aufnahmestudio & Visualisierungslabor)

Presentation of the results of the student software development project (SEP).

Talk BA-Talk: Eye Tracking Analysis Framework for Video Portraits

28.08.2020 13:00
Online

Speaker(s): Moritz von Estorff

This final talk will be streamed online:

https://webconf.tu-bs.de/mar-3vy-aef

Talk BA-Talk: Implementing Dynamic Stimuli in VR Environments for Visual Perception Research

04.08.2020 15:00
Dome (Aufnahmestudio & Visualisierungslabor)

Speaker(s): Mai Hellmann

Talk Praktikum-Abschluss: Creating an interactive VR-adventure for the ICG Dome

05.06.2020 13:30
Dome (Aufnahmestudio & Visualisierungslabor)

Presentation of the results of the student computer graphics practical course (MA).
(A follow-up project to the computer graphics practical course (BA), SS '19)

Talk Teamprojekt-Abschluss: Unser kleines Planetarium

05.06.2020 13:00
Dome (Aufnahmestudio & Visualisierungslabor)

Presentation of the results of the student team project.

Talk MA-Talk: Automatic Face Re-enactment in Real-World Portrait Videos to Manipulate Emotional Expression

24.04.2020 13:15
https://webconf.tu-bs.de/jan-n7t-j7a

Speaker(s): Colin Groth

Talk PhD defense: Reconstructing 3D Human Avatars from Monocular Images

13.03.2020 10:00
Informatikzentrum IZ161

Speaker(s): Thiemo Alldieck

Talk VASC Seminar: Reconstructing 3D Human Avatars from Monocular Images

17.01.2020 16:00
Carnegie Mellon University, Pittsburgh, PA, USA

Speaker(s): Thiemo Alldieck

https://www.ri.cmu.edu/event/reconstructing-3d-human-avatars-from-monocular-images/

Statistical 3D human body models have helped us to better understand human shape and motion and already enabled exciting new applications. However, if we want to learn detailed, personalized, and clothed models of human shape, motion, and dynamics, we require new approaches that learn from ubiquitous data such as plain RGB-images and video. I will discuss recent advances in personalized body shape and clothing estimation from monocular video, from a few frames, and even from a single image. We developed effective methods to learn detailed avatars without the need for expensive scanning equipment. These methods are easy to use and enable personalized avatar creation for example for VR and AR applications. I will conclude my talk by outlining the next challenges in human shape reconstruction.

Talk MA-Talk: Occlusion Aware Iterative Optical Flow Refinement for High Resolution Images

17.01.2020 11:00
Seminarraum G30

Speaker(s): Alexander Manegold

In the field of optical flow estimation, many different approaches exist. Most of the newest published methods use some kind of Convolutional Neural Network (CNN). These CNNs often have high graphics hardware requirements that scale with the size of the input images. High-resolution images or panoramas consequently often cannot be processed at full resolution. The PanoTiler offers an image tiling strategy that can be used to partially estimate the optical flow using arbitrary CNNs and then merge the individual flow tiles. Its advantage over simple tiling techniques is the utilization of multiple resolution levels, which makes it possible to find better matching tile pairs between source and target images. Although the original PanoTiler yields good optical flow results for most images, errors are sometimes introduced at higher resolution levels. To solve this issue, I extended the PanoTiler approach with a regularization that incorporates the optical flow of all levels into the final result. Additionally, I introduce a new optical flow clustering method to the PanoTiler which mends a vulnerability that produces errors in higher resolution levels.
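The tile-and-merge idea can be illustrated with a minimal single-level sketch (the function `tile_flow`, the overlap blending, and the `flow_fn` interface are hypothetical simplifications; the actual PanoTiler additionally works across multiple resolution levels):

```python
import numpy as np

def tile_flow(image_a, image_b, flow_fn, tile=128, overlap=32):
    """Estimate optical flow tile by tile and average overlapping results.

    flow_fn(a, b) -> (h, w, 2) flow for a tile pair; it stands in for an
    arbitrary CNN flow estimator whose memory use limits the input size.
    """
    h, w = image_a.shape[:2]
    flow = np.zeros((h, w, 2), dtype=np.float32)
    weight = np.zeros((h, w, 1), dtype=np.float32)
    step = tile - overlap
    for y in range(0, h, step):
        for x in range(0, w, step):
            y1, x1 = min(y + tile, h), min(x + tile, w)
            # Run the estimator on the tile pair and accumulate the result.
            f = flow_fn(image_a[y:y1, x:x1], image_b[y:y1, x:x1])
            flow[y:y1, x:x1] += f
            weight[y:y1, x:x1] += 1.0
    # Pixels covered by several tiles are averaged in the overlap regions.
    return flow / np.maximum(weight, 1.0)
```

In this simplified form, seams are only softened by averaging the overlaps; the regularization over resolution levels described in the abstract would replace this naive blend.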

To compare the results of optical flow estimation techniques, multiple benchmarks like Middlebury, KITTI 2015, or MPI Sintel were created. These benchmarks mostly contain ground-truth optical flow for lower-resolution images, not for high-resolution images or panoramas. Because it is challenging to obtain ground-truth optical flow for real-world images, I created a simple-to-follow protocol to create panoramas and their ground-truth optical flow from Unreal Engine 4. The optical flow is generated with a Python tool based on stereo vision and depth render passes.
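A common way to derive ground-truth flow from a depth render pass and a known camera motion is to backproject each pixel to 3D, transform it into the second camera, and reproject it. A minimal pinhole-camera sketch (the function name and interface are illustrative, not the thesis's actual Python tool):

```python
import numpy as np

def flow_from_depth(depth, K, R, t):
    """Ground-truth flow from a depth map and relative camera pose (R, t).

    depth: (h, w) depth render pass of the first view.
    K: 3x3 pinhole intrinsics shared by both views.
    Returns (h, w, 2) pixel displacements from view 1 to view 2.
    """
    h, w = depth.shape
    K_inv = np.linalg.inv(K)
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(np.float64)
    # Backproject pixels to 3D points in the first camera's frame.
    pts = (pix @ K_inv.T) * depth[..., None]
    # Move the points into the second camera and project back to pixels.
    pts2 = pts @ R.T + t
    proj = pts2 @ K.T
    uv2 = proj[..., :2] / proj[..., 2:3]
    return uv2 - pix[..., :2]
```

For a pure sideways translation t_x at constant depth z, this yields the expected horizontal flow of f_x · t_x / z pixels everywhere.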

Talk Fluid Simulation - From Research to Market

13.11.2019 16:45
IZ 161

Speaker(s): Matthias Teschner

Based on many years of research at the University of Freiburg, FIFTY2 Technology develops and markets PreonLab, a framework for Lagrangian fluid simulation with a particular focus on the automotive industry. This presentation discusses the respective evolution from a research project to a product. The first part introduces selected research results that contribute to the success of PreonLab. The second part discusses the technology transfer and aspects that affect the prosperity of a university spin-off.

Talk Physics in Graphics: Measuring from Images

07.11.2019 11:00
PhoenixD Retreat, Schneverdingen

Speaker(s): Marcus Magnor

Computer graphics is all about devising highly efficient, hardware-optimized algorithms to numerically evaluate the equations governing our physical world. Areas of physics that regularly fall prey to computer graphics range from classical and continuum mechanics to hydrodynamics, optics, and radiation transport. In my talk I will give a few examples and discuss how being able to efficiently solve the forward problem of simulating the physical behavior of real-world systems can be used to also tackle the inverse problem of estimating and measuring physical properties from images.

Talk What’s missing in Head-mounted VR Displays?

14.10.2019 14:00
DLR Braunschweig

Speaker(s): Marcus Magnor

Thanks to competitively priced HMDs geared towards the consumer market, research in immersive displays and Virtual Reality has seen tremendous progress. Still, a number of challenges remain to make immersive VR experiences truly realistic. In my talk I will showcase a number of research projects at TU Braunschweig that aim to enhance the immersive viewing experience by taking perceptual issues and real-world recordings into account.

Symposium Computer Vision Colloquium

08.10.2019 09:00 - 09.10.2019 15:00
Informatikzentrum IZ161

National and international experts present the latest research in Computer Vision

Talk Praktikum-Abschluss: HorrorAdventure: Creating an Immersive Wheelchair Experience

27.09.2019 13:00
Dome (Aufnahmestudio & Visualisierungslabor)

Presentation of the results of the student computer graphics practical course.
(A follow-up project to the team project, WS '18/19)

Talk Creating Real VR Experiences

18.09.2019 13:15
Informatikzentrum, Seminarraum G30

Speaker(s): Tobias Bertel

Creating Real VR experiences is a very challenging and multi-disciplinary task that is in high demand with the advent of head-mounted displays (HMDs). Real-world capturing procedures as well as rich scene representations are hard to obtain for general environments and are key for high-quality novel-view synthesis. My core research interests orbit around image-based rendering (IBR) and modeling (IBM) in order to casually create real-world VR content.

My talk will be split into two parts: Firstly, I will present MegaParallax [1], a full pipeline for casually creating 360° (multi-perspective) panoramas that enables motion parallax at runtime. I will motivate IBR in general and show how MegaParallax fits into that context. I conclude by discussing the main limitation of the method, namely vertical distortion as described by Shum et al. [2], propose a way to alleviate the issue (reconstruction of explicit 3D geometry), and show some recent experimental results. Secondly, I want to outline possible directions for collaborations and future work. I see great potential in looking at IBR with omnidirectional viewpoints, e.g. cylindrical [3] or spherical panoramas [4], instead of using perspective pinhole viewpoints. The main motivation for my visit in Braunschweig (so far) is to look at options for adding motion parallax to existing spherical 360° stereo videos. One way of achieving this is to reconstruct explicit geometry, e.g. per-view depth maps, from the given viewpoint pairs and perform IBR at runtime [5].

Generally, I would like to look into 3D reconstruction, starting with pinhole viewpoints (well understood but nevertheless hard in general) and gradually extending to omnidirectional viewpoints, with an emphasis on epipolar geometry and estimating correspondences, e.g. using optical flow. The presentation itself is supposed to last between 15 and 20 minutes, and I encourage the audience to interrupt me at any time if questions arise during the talk.

[1] https://richardt.name/publications/megaparallax/

[2] https://www.microsoft.com/en-us/research/publication/rendering-with-concentric-mosaics/

[3] http://www.cs.unc.edu/~gb/Pubs/p39-mcmillan.pdf

[4] https://www.cs.princeton.edu/courses/archive/spring01/cs598b/papers/mcmillan97.pdf

[5] https://web.stanford.edu/~jayantt/data/vcip17.pdf

Talk MA-Talk: Image Acquisition Strategies for Time-variant Projection of Phase Shift Patterns using a CMOS Rolling Shutter Image Sensor

13.09.2019 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Benedikt-M. Pfennig

Talk Reconstructing 3D Human Avatars from Monocular Images

28.08.2019 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Thiemo Alldieck

Modeling 3D virtual humans has been an active field of research over the last decades. It plays a fundamental role for many applications, such as movie production, sports and medical sciences, or human-computer interaction. Early works focus on artist-driven modeling or utilize expensive scanning equipment. In contrast, our goal is the fully automatic acquisition of personalized avatars using low-cost monocular video cameras only. In this dissertation, we show fundamental advances in 3D human reconstruction from monocular images. We solve this challenging task by developing methods that effectively fuse information from multiple points in time and realistically complete reconstructions from sparse observations. Given a video or only a single photograph of a person in motion, we reconstruct, for the first time, not only his or her 3D pose but the full 3D shape including the face, hair, and clothing. We investigate various aspects of monocular image and video-based 3D human reconstruction and demonstrate both straightforward and sophisticated reconstruction methods focused on accuracy, simplicity, usability, and visual fidelity. During extensive evaluations, we give insights into important parameters, reconstruction quality, and the robustness of the methods. For the first time, our methods enable camera-based, easy-to-use self-digitization for exciting new applications like, for example, telepresence or virtual try-on for online fashion shopping.

Talk How much “nature” is in the image? The role of lower-level processed image properties on the processing and evaluation of faces, artworks, and environments

19.08.2019 15:00
Informatikzentrum, Seminarraum G30

Speaker(s): Claudia Menzel

The human visual system is adapted to natural scenes and their characteristics, which leads to an efficient / fluent processing of such scenes. Interestingly, art and other aesthetically pleasing images share such natural image properties. Thus, these properties inherent in natural scenes are associated with beauty. In my talk, I will present computational, behavioural, and neurophysiological data on the relationship of natural image properties and the processing and evaluation of three stimulus categories: faces, artworks, and environmental scenes. In the first part of my talk, I will present a series of studies showing that natural image properties beneficially influence face processing and perceived facial attractiveness. In the second part, I will present studies on the role of image properties for the fast and automatic detection of artistic composition in artworks. In the third part, I will come back to natural scenes and present current data on the role of image properties for the evaluation of and health effects evoked by nature and urban environments. Overall, the presented work will demonstrate how image properties inherent to natural scenes influence the processing and evaluation of various image categories.

Talk MA-Talk: Augmented Reality in Optics Laboratories

29.07.2019 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Nils Wendorff

This thesis presents an application that supports the setup of optical systems on optical tables using augmented reality.

The application runs standalone on the Microsoft HoloLens, without requiring any additional devices.

Using a marker-tracking-based solution for detecting the optical elements on the table, the propagation of light through the system is simulated.

Since the HoloLens is restricted to applications based on the Universal Windows Platform, a simulation library is ported to this platform and used for geometric ray tracing on the HoloLens.

This makes it possible to set up optical systems on optical tables without having to activate light sources such as lasers.

Operating the latter in particular can be harmful to health and may require protective equipment that makes the working environment uncomfortable.
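Geometric ray propagation of the kind used to preview light paths through lenses can be sketched with paraxial ray-transfer (ABCD) matrices. This is only an illustration of the general technique, not the simulation library ported in the thesis:

```python
import numpy as np

# A paraxial ray is a vector (height y, angle theta); each optical
# element is a 2x2 ray-transfer (ABCD) matrix acting on that vector.

def free_space(d):
    """Propagation over distance d along the optical axis."""
    return np.array([[1.0, d], [0.0, 1.0]])

def thin_lens(f):
    """Thin lens with focal length f."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def trace(ray, elements):
    """Propagate a ray through a sequence of elements, in order."""
    for m in elements:
        ray = m @ ray
    return ray
```

For example, a collimated ray at height 1 passing through a thin lens of focal length 0.2 m and then travelling 0.2 m crosses the optical axis exactly at the focal point (height 0).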

Talk MA-Talk: Quality Metric for Scatter Plots Based on Human Perception using CNNs

05.07.2019 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Yuxin Zou

Symposium Real VR - Importing the Real World into Immersive VR and Optimizing the Perceptual Experience of Head-Mounted Displays

30.06.2019 18:00 - 03.07.2019 13:00
Dagstuhl

Chair(s): Marcus Magnor

Dagstuhl Seminar Webpage

Motivated by the advent of mass-market VR headsets, our Dagstuhl Seminar addresses the scientific and engineering challenges that need to be overcome in order to experience omni-directional video recordings of the real world with the sense of stereoscopic, full-parallax immersion as can be provided by today’s head-mounted displays.

Talk Reconstructing 3D Human Avatars from Monocular Images

27.06.2019 12:00
Google Zürich

Speaker(s): Thiemo Alldieck

Statistical 3D human body models have helped us to better understand human shape and motion and already enabled exciting new applications. However, if we want to learn detailed, personalized, and clothed models of human shape, motion, and dynamics, we require new approaches that learn from ubiquitous data such as plain RGB-images and video. I will discuss recent advances in personalized body shape and clothing estimation from monocular video, from a few frames, and even from a single image. We developed effective methods to learn detailed avatars without the need for expensive scanning equipment. These methods are easy to use and enable personalized avatar creation for example for VR and AR applications. I will conclude my talk by outlining the next challenges in human shape reconstruction.

Talk Scene-Space Video Processing

17.05.2019 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Felix Klose

quuxLogic Solutions GmbH

Nearly all image-based video processing methods require correspondences to be established between input images. Finding image correspondences has been an area of active research for many years and is known to be a notoriously difficult problem due to its ill-posed nature and large number of unknowns. If the input data is not only a few images, but one or multiple video streams, the amount of data to be processed poses an additional challenge. This thesis provides insight into commonly applied constraints on correspondence estimation in 2D, 3D, and 4D, and the applications that become possible from using the different data modalities. It is shown how free-viewpoint stereoscopic 3D renderings of multi-view recordings can be generated purely from 2D correspondences. To accomplish this in the required high output quality, a simple interaction paradigm is introduced that allows users to correct correspondences where the automatic results are not yet of sufficient quality. An approach to compute full quasi-dense scene flow for short time intervals is presented. This work then shows how new applications become possible when the inaccuracies in scene estimation are leveraged and the full redundancy of the available visual data is exploited, and highlights the performance implications of using the large amounts of available data.

Talk Tell Me How You Look and I'll Tell You How You Move

26.04.2019 09:15
Tampere University, Tampere, Finland

Speaker(s): Thiemo Alldieck

Talk at CIVIT Tech Day: Motion Deconstructed

[Video]

Human body shape and motion estimation are two sides of the same coin. To be able to fully understand human motion from monocular imagery, we need to understand the shapes of the tracked subjects, too. In my talk, I will motivate why we need better shapes for better tracking. I will demonstrate how 3D bodies helped us to understand human motion better and where these models find their limits. If we want to learn rich models of human shape, motion, and dynamics, we require new approaches that learn from ubiquitous data such as plain RGB-images and video. I continue by discussing recent advances in personalized body shape estimation from monocular video, from a few frames, and even from a single image. We developed effective methods to learn detailed avatars without the need for expensive scanning equipment. These methods are easy to use and enable various VR and AR applications. I will conclude my talk by outlining the next challenges in human shape reconstruction and how this potentially affects human motion estimation.

Conference Computational Visual Media Conference (CVM)

24.04.2019 00:00 - 26.04.2019 00:00
University of Bath, UK

Chair(s): Marcus Magnor

Conference on Computational Visual Media (CVM 2019)