Events
Talk Team Project Final Presentation: Schoduvel in the Dome
31.03.2021 13:15
Dome (Recording Studio & Visualization Lab) / Online
Presentation of the results of the student team project.
Talk MA-Talk: Temporal Coherent Relighting in Portrait Videos from Neural Textures
29.03.2021 13:00
Online: https://webconf.tu-bs.de/mar-3vy-aef
Speaker(s): Jann-Ole Henningson
Talk BA-Talk: Combating Motion Sickness in VR through Dynamic Bipolar Galvanic Vestibular Stimulation
12.03.2021 13:00
Online: https://webconf.tu-bs.de/mar-3vy-aef
Speaker(s): Max Hattenbach
Talk BA-Talk: Neural Rendering - Perception-Based Evaluation of Depth Impression in VR
25.01.2021 13:00
Online: https://webconf.tu-bs.de/mar-3vy-aef
Speaker(s): Yannic Rühl
Talk PhD Defense
22.01.2021 10:00
Online
Speaker(s): Steve Grogorick
Guiding Visual Attention in Immersive Environments
Talk BA-Talk: Design of an Interactive Simulation for Learning Star Constellations in Public Planetariums
08.12.2020 13:00
Planetarium Wolfsburg
Speaker(s): Lars Richard
Talk PhD Pre-Defense: Guiding Visual Attention in Immersive Environments
30.10.2020 13:00
Online
The growing popularity of virtual reality (VR) technology, which presents content virtually all around a user, creates new challenges for digital content creators and presentation systems. In this dissertation we investigate how to help viewers avoid missing important information when exploring unknown virtual environments. We examine different visual stimuli that guide viewers' attention towards predetermined target regions of the surrounding environment. To preserve the original visual appearance of scenes as much as possible, we aim for subtle visual modifications that operate as close as possible to viewers' perception threshold while still providing effective guidance.
In a first approach, we identify issues that prevent existing visual guidance stimuli from being effective in VR environments. For use in large field of view (FOV) head-mounted displays (HMDs), we derive techniques to handle perspective distortions, the degradation of visual acuity in the peripheral visual field, and target regions outside the initial FOV. An existing visual stimulus, originally conceived for desktop environments, is adapted accordingly and successfully evaluated in a perceptual study.
Subsequently, the generalizability of these extension techniques across different guidance methods and VR devices is investigated. To this end, additional methods from related work are re-implemented and updated accordingly. Two comparable perceptual studies are conducted to evaluate their effectiveness within a consumer-grade HMD and in an immersive dome projection system covering almost the full human visual field. Regardless of the actual success rates, all tested methods show a measurable effect on participants' viewing behavior, indicating the general applicability of our modification techniques to various guidance methods and VR systems.
Finally, a novel visual guidance method (SIBM) is created, specifically designed for immersive systems. It builds on contrary manipulations of the two stereoscopic frames in VR rendering systems, turning the inevitable overhead of double (per-eye) rendering into an advantage that is not available in monocular systems. Moreover, by exploiting our visual system's sensitivity to discrepancies in binocular visual input, it allows the required per-image contrast of the actual stimulus to be reduced noticeably below the previous state of the art.
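A minimal sketch of this underlying idea, assuming a simple per-eye luminance modulation of a circular target region (the function name and parameters are illustrative; the dissertation's actual stimulus design is more elaborate):

import numpy as np

def apply_binocular_stimulus(left, right, center, radius, amplitude=0.02):
    """Modulate a circular target region with opposite sign per eye.

    left, right: per-eye frames as float arrays in [0, 1], shape (H, W, 3)
    center:      (x, y) pixel position of the guidance target
    radius:      stimulus radius in pixels
    amplitude:   per-image luminance offset, kept small so the stimulus
                 stays close to the perception threshold
    """
    h, w = left.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.sqrt((xx - center[0]) ** 2 + (yy - center[1]) ** 2)
    # Linear falloff so the modulation fades out toward the stimulus border.
    mask = np.clip(1.0 - dist / radius, 0.0, 1.0)[..., None]
    # Contrary manipulation: brighten the region in one eye, darken it in the other.
    left_out = np.clip(left + amplitude * mask, 0.0, 1.0)
    right_out = np.clip(right - amplitude * mask, 0.0, 1.0)
    return left_out, right_out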
Talk SEP Final Presentation: Massively distributed collaborative crowd input system for dome environments
31.08.2020 13:00
Dome (Recording Studio & Visualization Lab)
Presentation of the results of the student software development practical course (SEP).
Talk BA-Talk: Eye Tracking Analysis Framework for Video Portraits
28.08.2020 13:00
Online
Speaker(s): Moritz von Estorff
This thesis presentation will be streamed online.
Talk BA-Talk: Implementing Dynamic Stimuli in VR Environments for Visual Perception Research
04.08.2020 15:00
Dome (Recording Studio & Visualization Lab)
Speaker(s): Mai Hellmann
Talk Practical Course Final Presentation: Creating an interactive VR adventure for the ICG Dome
05.06.2020 13:30
Dome (Recording Studio & Visualization Lab)
Presentation of the results of the student computer graphics practical course (MA).
(A follow-up project to the computer graphics practical course (BA), summer term 2019)
Talk Team Project Final Presentation: Unser kleines Planetarium (Our Little Planetarium)
05.06.2020 13:00
Dome (Recording Studio & Visualization Lab)
Presentation of the results of the student team project.
Talk MA-Talk: Automatic Face Re-enactment in Real-World Portrait Videos to Manipulate Emotional Expression
24.04.2020 13:15
Online: https://webconf.tu-bs.de/jan-n7t-j7a
Speaker(s): Colin Groth
Talk PhD defense: Reconstructing 3D Human Avatars from Monocular Images
13.03.2020 10:00
Informatikzentrum IZ161
Speaker(s): Thiemo Alldieck
Talk VASC Seminar: Reconstructing 3D Human Avatars from Monocular Images
17.01.2020 16:00
Carnegie Mellon University, Pittsburgh, PA, USA
Speaker(s): Thiemo Alldieck
https://www.ri.cmu.edu/event/reconstructing-3d-human-avatars-from-monocular-images/
Statistical 3D human body models have helped us to better understand human shape and motion and have already enabled exciting new applications. However, if we want to learn detailed, personalized, and clothed models of human shape, motion, and dynamics, we require new approaches that learn from ubiquitous data such as plain RGB images and video. I will discuss recent advances in personalized body shape and clothing estimation from monocular video, from a few frames, and even from a single image. We developed effective methods to learn detailed avatars without the need for expensive scanning equipment. These methods are easy to use and enable personalized avatar creation, for example for VR and AR applications. I will conclude my talk by outlining the next challenges in human shape reconstruction.
Talk MA-Talk: Occlusion Aware Iterative Optical Flow Refinement for High Resolution Images
17.01.2020 11:00
Seminarraum G30
Speaker(s): Alexander Manegold
In the field of optical flow estimation, many different approaches exist. Most of the newest published methods use some kind of Convolutional Neural Network (CNN). These CNNs often have high graphics hardware requirements which scale with the size of the input images. High resolution images or panoramas can consequently often not be processed at full resolution. The PanoTiler offers an image tiling strategy that can be used to partially estimate the optical flow using arbitrary CNNs and then merge the individual flow tiles. Its advantage over simple tiling techniques is the utilization of multiple resolution levels, which makes it possible to find better matching tile pairs between source and target images. Although the original PanoTiler yields good optical flow results for most images, errors are sometimes introduced at higher resolution levels. To solve this issue, I extended the PanoTiler approach with a regularization that incorporates the optical flow of all levels into the final result. Additionally, I introduce a new optical flow clustering method to the PanoTiler which mends a vulnerability that produces errors at higher resolution levels.
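The naive tile-and-merge baseline that such a strategy improves upon can be sketched as follows; estimate_flow stands in for an arbitrary CNN-based estimator, and the multi-level tile matching, regularization, and clustering described above are not shown:

import numpy as np

def tiled_flow(src, dst, estimate_flow, tile=512, overlap=64):
    """Estimate optical flow on overlapping tiles and average the results.

    src, dst:      images of shape (H, W, 3) whose size may exceed the CNN's limit
    estimate_flow: callable (src_tile, dst_tile) -> flow of shape (h, w, 2),
                   a placeholder for any CNN-based estimator
    """
    h, w = src.shape[:2]
    flow = np.zeros((h, w, 2), dtype=np.float32)
    weight = np.zeros((h, w, 1), dtype=np.float32)
    step = tile - overlap
    for y in range(0, h, step):
        for x in range(0, w, step):
            y1, x1 = min(y + tile, h), min(x + tile, w)
            # Same tile position in both images; the PanoTiler instead searches
            # across resolution levels for better matching tile pairs.
            f = estimate_flow(src[y:y1, x:x1], dst[y:y1, x:x1])
            flow[y:y1, x:x1] += f
            weight[y:y1, x:x1] += 1.0
    return flow / np.maximum(weight, 1.0)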
To compare the results of optical flow estimation techniques, multiple benchmarks like Middlebury, KITTI 2015, or MPI Sintel were created. These benchmarks mostly contain ground truth optical flow for lower resolution images, not for high resolution images or panoramas. Because it is challenging to obtain ground truth optical flow for real-world images, I created a simple-to-follow protocol to create panoramas and their ground truth optical flow from Unreal Engine 4. The optical flow is generated using a Python tool based on stereo vision and depth render passes.
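The reprojection behind such depth-based ground truth is standard and can be sketched as below, assuming a pinhole camera, z-depth render passes, and a static scene; the interface of the actual Python tool is not shown here:

import numpy as np

def flow_from_depth(depth, K, T_rel):
    """Derive per-pixel optical flow from a depth pass and known camera motion.

    depth: (H, W) z-depth of frame t in camera space
    K:     (3, 3) pinhole intrinsics
    T_rel: (4, 4) rigid transform from frame t's camera to frame t+1's camera
    Assumes a static scene, i.e. only the camera moves between frames.
    """
    h, w = depth.shape
    grid = np.mgrid[0:h, 0:w].astype(np.float64)
    yy, xx = grid[0], grid[1]
    pix = np.stack([xx, yy, np.ones_like(xx)], axis=-1)        # (H, W, 3)
    # Unproject every pixel of frame t into 3D camera space.
    pts = (pix @ np.linalg.inv(K).T) * depth[..., None]
    pts_h = np.concatenate([pts, np.ones((h, w, 1))], axis=-1)
    # Move the points into the camera frame at t+1 and reproject.
    pts2 = (pts_h @ T_rel.T)[..., :3]
    proj = pts2 @ K.T
    proj = proj[..., :2] / proj[..., 2:3]
    return (proj - pix[..., :2]).astype(np.float32)            # (H, W, 2) flow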
Talk Fluid Simulation - From Research to Market
13.11.2019 16:45
IZ 161
Speaker(s): Matthias Teschner
Based on many years of research at the University of Freiburg, FIFTY2 Technology develops and markets PreonLab, a framework for Lagrangian fluid simulation with a particular focus on the automotive industry. This presentation discusses the respective evolution from a research project to a product. The first part introduces selected research results that contribute to the success of PreonLab. The second part discusses the technology transfer and aspects that affect the prosperity of a university spin-off.
Talk Physics in Graphics: Measuring from Images
07.11.2019 11:00
PhoenixD Retreat, Schneverdingen
Speaker(s): Marcus Magnor
Computer graphics is all about devising highly efficient, hardware-optimized algorithms to numerically evaluate the equations governing our physical world. Areas of physics that regularly fall prey to computer graphics range from classical and continuum mechanics to hydrodynamics, optics, and radiation transport. In my talk I will give a few examples and discuss how being able to efficiently solve the forward problem of simulating the physical behavior of real-world systems can be used to also tackle the inverse problem of estimating and measuring physical properties from images.
Talk What’s missing in Head-mounted VR Displays?
14.10.2019 14:00
DLR Braunschweig
Speaker(s): Marcus Magnor
Thanks to competitively priced HMDs geared towards the consumer market, research in immersive displays and Virtual Reality has seen tremendous progress. Still, a number of challenges remain to make immersive VR experiences truly realistic. In my talk I will showcase a number of research projects at TU Braunschweig that aim to enhance the immersive viewing experience by taking perceptual issues and real-world recordings into account.
Symposium Computer Vision Colloquium
08.10.2019 09:00
- 09.10.2019 15:00
Informatikzentrum IZ161
National and international experts present the latest research in Computer Vision.
Talk Practical Course Final Presentation: HorrorAdventure: Creating an Immersive Wheelchair Experience
27.09.2019 13:00
Dome (Recording Studio & Visualization Lab)
Presentation of the results of the student computer graphics practical course.
(A follow-up project to the team project, winter term 2018/19)
Talk Creating Real VR Experiences
18.09.2019 13:15
Informatikzentrum, Seminarraum G30
Speaker(s): Tobias Bertel
Talk MA-Talk: Image Acquisition Strategies for Time-variant Projection of Phase Shift Patterns using a CMOS Rolling Shutter Image Sensor
13.09.2019 13:00
Informatikzentrum, Seminarraum G30
Speaker(s): Benedikt-M. Pfennig
Talk Reconstructing 3D Human Avatars from Monocular Images
28.08.2019 13:00
Informatikzentrum, Seminarraum G30
Speaker(s): Thiemo Alldieck
Modeling 3D virtual humans has been an active field of research over the last decades. It plays a fundamental role for many applications, such as movie production, sports and medical sciences, or human-computer interaction. Early works focus on artist-driven modeling or utilize expensive scanning equipment. In contrast, our goal is the fully automatic acquisition of personalized avatars using low-cost monocular video cameras only. In this dissertation, we show fundamental advances in 3D human reconstruction from monocular images. We solve this challenging task by developing methods that effectively fuse information from multiple points in time and realistically complete reconstructions from sparse observations. Given a video or only a single photograph of a person in motion, we reconstruct, for the first time, not only his or her 3D pose but also the full 3D shape including the face, hair, and clothing. We investigate various aspects of monocular image- and video-based 3D human reconstruction and demonstrate both straightforward and sophisticated reconstruction methods focused on accuracy, simplicity, usability, and visual fidelity. During extensive evaluations, we give insights into important parameters, reconstruction quality, and the robustness of the methods. For the first time, our methods enable camera-based, easy-to-use self-digitization for exciting new applications like, for example, telepresence or virtual try-on for online fashion shopping.
Talk How much “nature” is in the image? The role of lower-level processed image properties on the processing and evaluation of faces, artworks, and environments
19.08.2019 15:00
Informatikzentrum, Seminarraum G30
Speaker(s): Claudia Menzel
The human visual system is adapted to natural scenes and their characteristics, which leads to efficient, fluent processing of such scenes. Interestingly, art and other aesthetically pleasing images share such natural image properties. Thus, these properties inherent in natural scenes are associated with beauty. In my talk, I will present computational, behavioural, and neurophysiological data on the relationship between natural image properties and the processing and evaluation of three stimulus categories: faces, artworks, and environmental scenes. In the first part of my talk, I will present a series of studies showing that natural image properties beneficially influence face processing and perceived facial attractiveness. In the second part, I will present studies on the role of image properties for the fast and automatic detection of artistic composition in artworks. In the third part, I will come back to natural scenes and present current data on the role of image properties for the evaluation of, and health effects evoked by, nature and urban environments. Overall, the presented work will demonstrate how image properties inherent to natural scenes influence the processing and evaluation of various image categories.