Computer Graphics
TU Braunschweig

Events


Talk MA-Talk: Video Object Segmentation for Omnidirectional Stereo Panoramas

14.05.2021 13:00
Online: https://webconf.tu-bs.de/mar-3vy-aef

Speaker(s): Fan Song

Talk MA-Talk: Functional Volumetric Rendering for Industrial Applications

07.05.2021 13:00
Online: https://webconf.tu-bs.de/mar-3vy-aef

Speaker(s): Jan-Christopher Schmidt

Talk Team Project Final Presentation: Schoduvel in the Dome

31.03.2021 13:15
Dome (Recording Studio & Visualization Lab) / Online

Presentation of the results of the student team project.

Talk MA-Talk: Temporal Coherent Relighting in Portrait Videos from Neural Textures

29.03.2021 13:00
Online: https://webconf.tu-bs.de/mar-3vy-aef

Speaker(s): Jann-Ole Henningson

Talk BA-Talk: Combating Motion Sickness in VR through Dynamic Bipolar Galvanic Vestibular Stimulation

12.03.2021 13:00
Online: https://webconf.tu-bs.de/mar-3vy-aef

Speaker(s): Max Hattenbach

Talk BA-Talk: Neural Rendering - Perception-Based Evaluation of the Depth Impression in VR

25.01.2021 13:00
Online: https://webconf.tu-bs.de/mar-3vy-aef

Speaker(s): Yannic Rühl

Talk Disputation

22.01.2021 10:00
Online

Speaker(s): Steve Grogorick

Guiding Visual Attention in Immersive Environments

Talk BA-Talk: Design of an Interactive Simulation for Learning Star Constellations in Public Planetariums

08.12.2020 13:00
Planetarium Wolfsburg

Speaker(s): Lars Richard

Talk Promotions-V-Vg: Guiding Visual Attention in Immersive Environments

30.10.2020 13:00
Online

The growing popularity of virtual reality (VR) technology, which presents content virtually all around a user, creates new challenges for digital content creators and presentation systems. In this dissertation we investigate how to keep viewers from missing important information when exploring unknown virtual environments. We examine different visual stimuli that guide viewers' attention towards predetermined target regions of the surrounding environment. To preserve the original visual appearance of scenes as well as possible, we aim for subtle visual modifications that operate as close as possible to viewers' perception threshold while still providing effective guidance.


In a first approach, we identify issues that keep existing visual guidance stimuli from being effective in VR environments. For use in large field-of-view (FOV) head-mounted displays (HMDs), we derive techniques to handle perspective distortions, the degradation of visual acuity in the peripheral visual field, and target regions outside the initial FOV. An existing visual stimulus, originally conceived for desktop environments, is adapted accordingly and successfully evaluated in a perceptual study.
Subsequently, the generalizability of these extension techniques is investigated with regard to different guidance methods and VR devices. For this, additional methods from related work are re-implemented and updated accordingly. Two comparable perceptual studies evaluate their effectiveness in a consumer-grade HMD and in an immersive dome projection system covering almost the full human visual field. Regardless of the actual success rates, all of the tested methods show a measurable effect on participants' viewing behavior, indicating that our modification techniques are generally applicable to various guidance methods and VR systems.


Finally, a novel visual guidance method (SIBM) is created, specifically designed for immersive systems. It builds on contrary manipulations of the two stereoscopic frames in VR rendering systems, turning the inevitable overhead of double (per-eye) rendering into an advantage that is not available in monocular systems. Moreover, by exploiting our visual system's sensitivity to discrepancies in binocular visual input, it reduces the required per-image contrast of the actual stimulus noticeably below the previous state of the art.
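
To make the principle concrete, here is a minimal sketch of such a contrary per-eye modulation, assuming RGB frames as NumPy arrays with values in [0, 1]; the Gaussian mask and the amplitude value are illustrative choices, not the dissertation's exact stimulus parameters.

import numpy as np

def sibm_stimulus(left, right, center, radius, amplitude=0.05):
    """Apply opposite brightness offsets to a target region of the
    left- and right-eye frames (float RGB, values in [0, 1])."""
    h, w = left.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Smooth radial falloff around the guidance target (cx, cy).
    d2 = (xs - center[0]) ** 2 + (ys - center[1]) ** 2
    mask = np.exp(-d2 / (2.0 * radius ** 2))[..., None]
    # Contrary modulation: brighten the region in one eye, darken it
    # in the other, so the per-image contrast change stays small.
    mod_left = np.clip(left + amplitude * mask, 0.0, 1.0)
    mod_right = np.clip(right - amplitude * mask, 0.0, 1.0)
    return mod_left, mod_right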

Talk SEP Final Presentation: Massively distributed collaborative crowd input system for dome environments

31.08.2020 13:00
Dome (Recording Studio & Visualization Lab)

Presentation of the results of the student software development lab course (SEP).

Talk BA-Talk: Eye Tracking Analysis Framework for Video Portraits

28.08.2020 13:00
Online

Speaker(s): Moritz von Estorff

This final talk will be streamed online:

https://webconf.tu-bs.de/mar-3vy-aef

Talk BA-Talk: Implementing Dynamic Stimuli in VR Environments for Visual Perception Research

04.08.2020 15:00
Dome (Recording Studio & Visualization Lab)

Speaker(s): Mai Hellmann

Talk Lab Course Final Presentation: Creating an interactive VR adventure for the ICG Dome

05.06.2020 13:30
Dome (Recording Studio & Visualization Lab)

Presentation of the results of the student computer graphics lab course (MA).
(A follow-up project of the computer graphics lab course (BA), summer term 2019)

Talk Team Project Final Presentation: Our Little Planetarium

05.06.2020 13:00
Dome (Recording Studio & Visualization Lab)

Presentation of the results of the student team project.

Talk MA-Talk: Automatic Face Re-enactment in Real-World Portrait Videos to Manipulate Emotional Expression

24.04.2020 13:15
Online: https://webconf.tu-bs.de/jan-n7t-j7a

Speaker(s): Colin Groth

Talk PhD defense: Reconstructing 3D Human Avatars from Monocular Images

13.03.2020 10:00
Informatikzentrum IZ161

Speaker(s): Thiemo Alldieck

Talk VASC Seminar: Reconstructing 3D Human Avatars from Monocular Images

17.01.2020 16:00
Carnegie Mellon University, Pittsburgh, PA, USA

Speaker(s): Thiemo Alldieck

https://www.ri.cmu.edu/event/reconstructing-3d-human-avatars-from-monocular-images/

Statistical 3D human body models have helped us to better understand human shape and motion and have already enabled exciting new applications. However, if we want to learn detailed, personalized, and clothed models of human shape, motion, and dynamics, we require new approaches that learn from ubiquitous data such as plain RGB images and video. I will discuss recent advances in personalized body shape and clothing estimation from monocular video, from a few frames, and even from a single image. We developed effective methods to learn detailed avatars without the need for expensive scanning equipment. These methods are easy to use and enable personalized avatar creation, for example for VR and AR applications. I will conclude my talk by outlining the next challenges in human shape reconstruction.

Talk MA-Talk: Occlusion Aware Iterative Optical Flow Refinement for High Resolution Images

17.01.2020 11:00
Seminarraum G30

Speaker(s): Alexander Manegold

In the field of optical flow estimation, many different approaches exist. Most of the newest published methods use some kind of Convolutional Neural Network (CNN). These CNNs often have high graphics-hardware requirements that scale with the size of the input images. High-resolution images or panoramas can consequently often not be processed at full resolution. The PanoTiler offers an image tiling strategy that can be used to estimate the optical flow piecewise with arbitrary CNNs and then merge the individual flow tiles. Its advantage over simple tiling techniques is the use of multiple resolution levels, which allows better-matching tile pairs to be found between source and target images. Although the original PanoTiler yields good optical flow results for most images, errors are sometimes introduced at higher resolution levels. To solve this issue, I extend the PanoTiler approach with a regularization that incorporates the optical flow of all levels into the final result. Additionally, I introduce a new optical flow clustering method to the PanoTiler that mends a vulnerability producing errors at higher resolution levels.
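
As a rough illustration of such a cross-level regularization, the following sketch fuses per-level flow fields by down-weighting estimates that deviate from the cross-level median; the weighting scheme and the input format are assumptions for illustration, not the PanoTiler's actual algorithm.

import numpy as np

def regularize_levels(flows, sigma=2.0):
    """flows: list of (H, W, 2) flow fields, one per resolution level,
    all upsampled to the finest resolution."""
    stack = np.stack(flows)                        # (L, H, W, 2)
    median = np.median(stack, axis=0)              # robust per-pixel reference
    dev = np.linalg.norm(stack - median, axis=-1)  # per-level deviation (L, H, W)
    # Levels that disagree with the consensus contribute less, which
    # suppresses errors introduced at single resolution levels.
    weights = np.exp(-dev ** 2 / (2.0 * sigma ** 2))[..., None]
    return (weights * stack).sum(axis=0) / weights.sum(axis=0)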

To compare the results of optical flow estimation techniques, multiple benchmarks like Middlebury, KITTI 2015, or MPI Sintel were created. These benchmarks mostly contain ground-truth optical flow for lower-resolution images, not for high-resolution images or even panoramas. Because it is challenging to obtain ground-truth optical flow for real-world images, I created an easy-to-follow protocol for generating panoramas and their ground-truth optical flow from Unreal Engine 4. The optical flow is generated by a Python tool based on stereo vision and depth render passes.
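
A minimal sketch of the underlying geometry, assuming a generic pinhole camera model and a known relative pose between two rendered views; this is in the spirit of the described protocol, not the actual Unreal Engine 4 tool.

import numpy as np

def flow_from_depth(depth, K, T_src_to_tgt):
    """Ground-truth forward flow from a source-view depth render pass.
    depth: (H, W) metric depth; K: 3x3 intrinsics;
    T_src_to_tgt: 4x4 rigid transform between the two cameras."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
    # Back-project every pixel into source-camera space using its depth.
    cam = (np.linalg.inv(K) @ pix) * depth.reshape(1, -1)
    cam = np.vstack([cam, np.ones((1, cam.shape[1]))])
    # Transform into the target camera and re-project to pixel coordinates.
    proj = K @ (T_src_to_tgt @ cam)[:3]
    uv = (proj[:2] / proj[2]).T.reshape(h, w, 2)
    return uv - np.stack([xs, ys], axis=-1)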

Talk Fluid Simulation - From Research to Market

13.11.2019 16:45
IZ 161

Speaker(s): Matthias Teschner

Based on many years of research at the University of Freiburg, FIFTY2 Technology develops and markets PreonLab, a framework for Lagrangian fluid simulation with a particular focus on the automotive industry. This presentation discusses the evolution from a research project to a product. The first part introduces selected research results that contribute to the success of PreonLab. The second part discusses the technology transfer and aspects that affect the prosperity of a university spin-off.

Talk Physics in Graphics: Measuring from Images

07.11.2019 11:00
PhoenixD Retreat, Schneverdingen

Speaker(s): Marcus Magnor

Computer graphics is all about devising highly efficient, hardware-optimized algorithms to numerically evaluate the equations governing our physical world. Areas of physics that regularly fall prey to computer graphics range from classical and continuum mechanics to hydrodynamics, optics, and radiation transport. In my talk I will give a few examples and discuss how being able to efficiently solve the forward problem of simulating the physical behavior of real-world systems can also be used to tackle the inverse problem of estimating and measuring physical properties from images.
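
As a toy example of this forward/inverse coupling, the sketch below estimates an absorption coefficient by gradient descent on a simple forward model (Beer-Lambert attenuation, one of the radiation-transport cases mentioned above); the one-parameter render function is a stand-in assumption, not an actual renderer.

import numpy as np

def render(absorption, depths):
    """Forward problem: Beer-Lambert attenuation of unit radiance
    along rays with the given optical path lengths."""
    return np.exp(-absorption * depths)

def estimate_absorption(observed, depths, lr=0.5, steps=200):
    """Inverse problem: recover the absorption coefficient that best
    explains the observed pixel values (L2 fit via gradient descent)."""
    a = 0.1
    for _ in range(steps):
        pred = render(a, depths)
        residual = pred - observed
        # Analytic gradient of 0.5 * ||render(a) - observed||^2.
        grad = np.sum(residual * (-depths) * pred)
        a -= lr * grad / depths.size
    return a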

Talk What’s missing in Head-mounted VR Displays?

14.10.2019 14:00
DLR Braunschweig

Speaker(s): Marcus Magnor

Thanks to competitively priced HMDs geared towards the consumer market, research in immersive displays and Virtual Reality has seen tremendous progress. Still, a number of challenges remain to make immersive VR experiences truly realistic. In my talk I will showcase a number of research projects at TU Braunschweig that aim to enhance the immersive viewing experience by taking perceptual issues and real-world recordings into account.

Symposium Computer Vision Colloquium

08.10.2019 09:00 - 09.10.2019 15:00
Informatikzentrum IZ161

National and international experts present the latest research in Computer Vision.

Talk Lab Course Final Presentation: HorrorAdventure: Creating an Immersive Wheelchair Experience

27.09.2019 13:00
Dome (Recording Studio & Visualization Lab)

Presentation of the results of the student computer graphics lab course.
(A follow-up project of the team project, winter term 2018/19)

Talk Creating Real VR Experiences

18.09.2019 13:15
Informatikzentrum, Seminarraum G30

Speaker(s): Tobias Bertel

Creating Real VR experiences is a challenging and multi-disciplinary task that is in high demand with the advent of head-mounted displays (HMDs). Real-world capture procedures as well as rich scene representations are hard to obtain for general environments and are key for high-quality novel-view synthesis. My core research interests revolve around image-based rendering (IBR) and modeling (IBM) for casually creating real-world VR content.

My talk will be split into two parts. First, I will present MegaParallax [1], a full pipeline for casually creating 360° (multi-perspective) panoramas that enables motion parallax at runtime. I will motivate IBR in general and show how MegaParallax fits into that context. I conclude by discussing the main limitation of the method, namely vertical distortion as described by Shum et al. [2], propose a way to alleviate the issue (reconstruction of explicit 3D geometry), and show some recent experimental results. Second, I want to outline possible directions for collaborations and future work. I see great potential in approaching IBR with omnidirectional viewpoints, e.g. cylindrical [3] or spherical panoramas [4], instead of perspective pinhole viewpoints. The main motivation for my visit to Braunschweig (so far) is to look at options for adding motion parallax to existing spherical 360° stereo videos. One way of achieving this is to reconstruct explicit geometry, e.g. per-view depth maps, from the given viewpoint pairs and perform IBR at runtime [5].
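
As a rough illustration of the flow-based blending at the core of such IBR pipelines, the sketch below synthesizes an intermediate view between two neighboring captured images; bilinear warping and linear blending are generic choices here, not MegaParallax's exact reprojection.

import numpy as np
from scipy.ndimage import map_coordinates

def warp(img, flow, t):
    """Backward-warp img by a fraction t of the given flow field."""
    h, w = flow.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    coords = [ys + t * flow[..., 1], xs + t * flow[..., 0]]
    return np.stack([map_coordinates(img[..., c], coords, order=1)
                     for c in range(img.shape[-1])], axis=-1)

def blend_views(left_img, right_img, flow_lr, flow_rl, t):
    """Synthesize a view at position t in [0, 1] between two inputs."""
    a = warp(left_img, flow_lr, t)       # move left view towards the target
    b = warp(right_img, flow_rl, 1 - t)  # move right view towards the target
    return (1 - t) * a + t * b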

Generally, I would like to look into 3D reconstruction, starting with pinhole viewpoints (well understood but nevertheless hard in general) and gradually extending to omnidirectional viewpoints, with an emphasis on epipolar geometry and on estimating correspondences, e.g. using optical flow. The presentation itself is supposed to last between 15 and 20 minutes, and I encourage the audience to interrupt me at any time if questions arise during the talk.

[1] https://richardt.name/publications/megaparallax/

[2] https://www.microsoft.com/en-us/research/publication/rendering-with-concentric-mosaics/

[3] http://www.cs.unc.edu/~gb/Pubs/p39-mcmillan.pdf

[4] https://www.cs.princeton.edu/courses/archive/spring01/cs598b/papers/mcmillan97.pdf

[5] https://web.stanford.edu/~jayantt/data/vcip17.pdf

Talk MA-Talk: Image Acquisition Strategies for Time-variant Projection of Phase Shift Patterns using a CMOS Rolling Shutter Image Sensor

13.09.2019 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Benedikt-M. Pfennig