Computer Graphics
TU Braunschweig

Events


Talk Functional Programming in C++

18.06.2024 08:00
SN 19.1

Speaker(s): Jonathan Müller

On 18.06.2024 at 08:00 in SN 19.1, we welcome Jonathan Müller for a guest lecture on "Functional Programming in C++". Functional programming has proven to be a safe and advantageous style of programming in more and more areas, for example in parallel programming. John Carmack, pioneer of the first-person shooter and of games such as Doom, Quake, and Wolfenstein 3D, once said about functional programming: "No matter what language you work in, programming in a functional style provides benefits. You should do it whenever it is convenient, and you should think hard about the decision when it isn't convenient."
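
As a taste of what this means in practice, here is a minimal sketch (our illustration, not material from the talk) contrasting an imperative loop with the same computation expressed as a composition of pure operations over standard algorithms:

    #include <algorithm>
    #include <iostream>
    #include <numeric>
    #include <vector>

    int main() {
        const std::vector<int> values{1, 2, 3, 4, 5};

        // Imperative style: mutate an accumulator inside a loop.
        int sum_of_squares = 0;
        for (int v : values)
            sum_of_squares += v * v;

        // Functional style: the same computation as a composition of
        // pure operations, with no hand-written mutable state.
        std::vector<int> squares(values.size());
        std::transform(values.begin(), values.end(), squares.begin(),
                       [](int v) { return v * v; });
        const int functional_sum =
            std::accumulate(squares.begin(), squares.end(), 0);

        std::cout << sum_of_squares << " == " << functional_sum << '\n';
    }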

Jonathan is a C++ library developer at think-cell, speaks at conferences, and is a member of the C++ standardization committee.

He is the author of open-source projects such as type_safe, a library of safety utilities, foonathan/memory, a memory allocation library, and cppast, a C++ reflection tool. More recently, he has taken an interest in programming languages and compilers and has released lexy, a C++ parser library, and lauf, a bytecode interpreter.
He also blogs at foonathan.net.

Despite the early hour, we look forward to an interested audience.

Talk Global Visual Localization by Matching Point and Line Features in Images against Known, High-Accuracy Geodata

17.05.2024 13:00
IZ G30

Speaker(s): Junbo Li

Today, localization plays a very important role in many areas, such as autonomous flying and autonomous driving. The most common approach is satellite-based localization; when applied in cities, however, its accuracy degrades considerably because buildings obstruct the signals. Developing localization methods based on other information, as a complement to or replacement for satellite-based localization in urban application scenarios, has therefore become a research focus. This thesis develops an end-to-end global visual localization pipeline based on matching point and line features in query images against a preprocessed database, which is built once in advance for the localization area using known, high-precision geodata. In tests on data from a highly complex real urban environment, the median accuracy reaches about 1 meter.
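
To make the matching idea concrete, the sketch below shows one plausible core building block of such a pipeline. The GeoFeature type, the descriptor length, and the brute-force search are illustrative assumptions, not the thesis's implementation; a real pipeline would use an approximate nearest-neighbor index plus geometric verification on top.

    #include <array>
    #include <cstddef>
    #include <limits>
    #include <vector>

    // Hypothetical database entry: a feature descriptor with the
    // georeferenced 3D position attached when the database is built
    // from the known, high-precision geodata.
    struct GeoFeature {
        std::array<float, 64> descriptor;
        double x, y, z;
    };

    // Squared Euclidean distance between two descriptors.
    float descriptorDistance(const std::array<float, 64>& a,
                             const std::array<float, 64>& b) {
        float d = 0.f;
        for (std::size_t i = 0; i < a.size(); ++i) {
            const float diff = a[i] - b[i];
            d += diff * diff;
        }
        return d;
    }

    // For each query-image feature, find the database feature with
    // the smallest descriptor distance; the matched 3D positions can
    // then anchor a pose estimate for the camera.
    std::vector<std::size_t> matchFeatures(
        const std::vector<GeoFeature>& query,
        const std::vector<GeoFeature>& database) {
        std::vector<std::size_t> matches;
        matches.reserve(query.size());
        for (const auto& q : query) {
            std::size_t best = 0;
            float bestDist = std::numeric_limits<float>::max();
            for (std::size_t j = 0; j < database.size(); ++j) {
                const float d =
                    descriptorDistance(q.descriptor, database[j].descriptor);
                if (d < bestDist) { bestDist = d; best = j; }
            }
            matches.push_back(best);
        }
        return matches;
    }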

Talk BA-Talk: Evaluation of Methods for Learned Point Spread Functions through Camera-In-The-Loop Optimization

19.04.2024 11:00
IZ G30

Speaker(s): Karl Ritter

Talk PhD Pre-Defense: Perception-Based Techniques to Enhance User Experience in Virtual Reality

15.03.2024 13:00
IZ G30

Speaker(s): Colin Groth

Talk MA-Talk: An Investigation on the Practicality of Neural Radiance Field Reconstruction from in-the-wild Multi-View Panorama Recordings

22.12.2023 13:00
IZ G30

Speaker(s): Yannic Rühl

Talk Colloquium on AI in Interactive Systems

07.12.2023 10:00 - 08.12.2023 22:00
IZ161

Talk BA-Talk: Partial Face Swaps

09.10.2023 13:00
G30

Speaker(s): Carlotta Harms

Conference Vision, Modeling, and Visualization

27.09.2023 13:00 - 29.09.2023 12:00
Braunschweig, Germany

Chair(s): Marcus Magnor, Martin Eisemann, Susana Castillo


Talk BA-Talk: Low-Cost Integrated Control and Monitoring of FDM Printers Using Digital Twins

26.09.2023 13:00
G30

Speaker(s): Marc Majohr

In this thesis, an integrated control and monitoring system for consumer FDM printers was designed and developed.
One focus is on universal applicability across different FDM printers (L1) and on minimal interference with the printing process (L5).

Talk Computer Vision from the Perspective of Surveying

28.08.2023 13:00
IZ G30

Speaker(s): Anita Sellent

Talk Turning Natural Reality into Virtual Reality

18.08.2023 13:00
Stanford University, Packard 202

Speaker(s): Marcus Magnor

SCIEN Colloquium, Electrical Engineering, Stanford University

Talk Turning Natural Reality into Virtual Reality

14.08.2023 10:45
NVIDIA Inc., Santa Clara, CA

Speaker(s): Marcus Magnor

Current endeavors towards immersive visual entertainment are still almost entirely based on 3D graphics content, limiting application scenarios to digital, synthetic worlds only. The reason is that in order to provide stereo vision and ego-motion parallax, two essential ingredients for the perception of visual immersion, the scene must be rendered in real time from varying vantage points. While this is easily accomplished in 3D graphics via GPU rendering, it is not at all straightforward to do the same from conventional video footage of real-world events. In my talk I will outline different ideas and approaches for utilizing graphics hardware in conjunction with video in order to import the real world into VR.

Talk BA-Talk: Enhancing Perceived Acceleration using Galvanic Vestibular Stimulation in Virtual Reality

17.07.2023 13:00
G30

Speaker(s): Zandalee Roets

Talk BA-Talk: Perceptually Realistic Rendering of 360° Photos: a State-of-the-Art Overview

16.06.2023 13:00
G30

Speaker(s): Marcel Gädke

Talk BA-Talk: Move to the Music: Analyzing Dance Synchronicity using Smart Watch Motion Sensors

09.06.2023 13:00
IZ G41b Hardstyle Lab

Speaker(s): Maximilian Hein

In a dance performance, the dancers must be in sync with the music.
In this bachelor thesis, a system is presented which can determine whether a dancer has performed on the beat of the music.
For this purpose, a modern smartwatch is used to record the movement data of the dancer.
In parallel, the music is recorded to determine the synchronicity between the motion data and the music.

The evaluation of the system indicates that the proposed method is able to assess a dancer's beat accuracy, with certain limitations.
To the best of our knowledge, this work is the first to use motion data from a smartwatch to analyze dance performances.
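
One plausible way such a system can score beat accuracy (an illustrative assumption, not necessarily the method of the thesis) is to detect peaks in the accelerometer magnitude recorded by the smartwatch and measure their offsets to the beat times extracted from the music:

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    // One accelerometer sample from the smartwatch.
    struct Sample { double t; double ax, ay, az; };

    // Times of local maxima in acceleration magnitude above a
    // threshold; these mark strong movements of the dancer.
    std::vector<double> motionPeaks(const std::vector<Sample>& s,
                                    double threshold) {
        auto mag = [](const Sample& x) {
            return std::sqrt(x.ax * x.ax + x.ay * x.ay + x.az * x.az);
        };
        std::vector<double> peaks;
        for (std::size_t i = 1; i + 1 < s.size(); ++i) {
            const double m = mag(s[i]);
            if (m > threshold && m >= mag(s[i - 1]) && m >= mag(s[i + 1]))
                peaks.push_back(s[i].t);
        }
        return peaks;
    }

    // Mean absolute offset (in seconds) from each motion peak to the
    // nearest beat; smaller values indicate better synchronicity.
    double meanBeatOffset(const std::vector<double>& peaks,
                          const std::vector<double>& beats) {
        if (peaks.empty() || beats.empty()) return 0.0;
        double total = 0.0;
        for (double p : peaks) {
            double best = std::abs(p - beats.front());
            for (double b : beats)
                best = std::min(best, std::abs(p - b));
            total += best;
        }
        return total / static_cast<double>(peaks.size());
    }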


Talk MA-Talk: Locally-Adaptive Video Recoloring

02.06.2023 13:00
G30

Speaker(s): Jan Malte Hilgefort

Video recoloring is an essential part of videography, yet there are only a limited number of approaches to locally-adaptive video recoloring, many of them based on mask propagation. In this master's thesis, an approach to locally-adaptive video recoloring is developed that is both fast and produces realistic results. The approach is based on user-set constraints that influence pixels according to their distances in color and in space, which also allows it to perform global recoloring. These constraint influences are then used to apply the recoloring information the user sets for each constraint to the pixels. The new approach can, for example, be used to simplify prototyping in video production by providing an interactive and intuitive way to apply an artistic vision to a video.
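
The sketch below illustrates the kind of constraint weighting the abstract describes; the Gaussian falloff and its parameters are assumptions chosen for illustration, not necessarily the exact formulation of the thesis:

    #include <cmath>

    // A user-set recoloring constraint: a reference position and
    // color plus bandwidths for its spatial and color falloff.
    struct Constraint {
        float px, py;        // image position of the constraint
        float r, g, b;       // reference color
        float sigmaSpace;    // spatial bandwidth (pixels)
        float sigmaColor;    // color bandwidth
    };

    // Influence of a constraint on one pixel: a product of Gaussian
    // falloffs in space and in color, so nearby, similarly colored
    // pixels are affected most. Letting sigmaSpace grow very large
    // makes the constraint act globally, matching the abstract's
    // remark that the approach can also recolor globally.
    float influence(const Constraint& c,
                    float x, float y,
                    float r, float g, float b) {
        const float ds2 = (x - c.px) * (x - c.px) + (y - c.py) * (y - c.py);
        const float dc2 = (r - c.r) * (r - c.r) + (g - c.g) * (g - c.g) +
                          (b - c.b) * (b - c.b);
        return std::exp(-ds2 / (2.f * c.sigmaSpace * c.sigmaSpace)) *
               std::exp(-dc2 / (2.f * c.sigmaColor * c.sigmaColor));
    }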

Talk MA-Talk: Disclosure of falsification of graphical data between its generation and display

26.05.2023 13:00 - 26.05.2023 14:00
IZ G30

Speaker(s): Tobias Hellmann

Talk BA-Talk: On the Beat: Analyzing and Evaluating Synchronicity in Dance Performances

26.05.2023 13:00
IZ G41b Hardstyle Lab

Speaker(s): Malte Menzel

Synchronisation of a dancer's movement and the accompanying music is a vital characteristic of dance performances.
In this work, we present a method to analyse and evaluate synchronicity in dance performances automatically.
We use a computer vision-based approach to extract dancer information from dance performance videos and examine its alignment with the background music.
Our method delivers correct results, as its analysis closely matches assessments made by professional dancers.
To the best of our knowledge, this work represents the first video-based dance practice system that analyses the synchronicity of dancers and music, as other research focuses on the alignment of multiple dancers.
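
One plausible reading of such a video-based pipeline (the specifics here are illustrative assumptions, not the thesis's implementation): estimate 2D pose keypoints per frame with an off-the-shelf pose estimator, derive a motion-energy signal from frame-to-frame keypoint displacement, and compare its peaks against the beat times of the music:

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // 2D pose keypoints extracted from one video frame.
    struct Pose { std::vector<float> x, y; };

    // Motion energy per frame: total keypoint displacement between
    // consecutive frames. Peaks in this signal mark strong movements
    // whose timing can be checked against the music's beats.
    std::vector<float> motionEnergy(const std::vector<Pose>& frames) {
        std::vector<float> energy;
        if (frames.size() < 2) return energy;
        energy.reserve(frames.size() - 1);
        for (std::size_t f = 1; f < frames.size(); ++f) {
            const Pose& a = frames[f - 1];
            const Pose& b = frames[f];
            float e = 0.f;
            for (std::size_t k = 0; k < a.x.size() && k < b.x.size(); ++k)
                e += std::hypot(b.x[k] - a.x[k], b.y[k] - a.y[k]);
            energy.push_back(e);
        }
        return energy;
    }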


Talk Learned Optics — Improving Computational Imaging Systems through Deep Learning and Optimization

22.05.2023 13:00
IZ G30

Speaker(s): Wolfgang Heidrich

Computational imaging systems are based on the joint design of optics and associated image reconstruction algorithms. Historically, many such systems have employed simple transform-based reconstruction methods. Modern optimization methods and priors can drastically improve the reconstruction quality in computational imaging systems. Furthermore, learning-based methods can be used to design the optics along with the reconstruction method, yielding truly end-to-end optimized imaging systems that outperform classical solutions.
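
The toy sketch below illustrates the end-to-end idea on a 1D signal. The Gaussian blur standing in for the optics, the unsharp-mask "reconstruction", and the finite-difference optimization are all simplifying assumptions; real systems use physically based PSF models and automatic differentiation, but the structure of the joint design loop is the same:

    #include <cmath>
    #include <cstddef>
    #include <iostream>
    #include <vector>

    using Signal = std::vector<double>;

    // "Optics": a 3-tap Gaussian blur parameterized by sigma.
    Signal blur(const Signal& s, double sigma) {
        const double w = std::exp(-1.0 / (2.0 * sigma * sigma));
        const double norm = 1.0 + 2.0 * w;
        Signal out(s.size());
        for (std::size_t i = 0; i < s.size(); ++i) {
            const double l = s[i == 0 ? 0 : i - 1];
            const double r = s[i + 1 < s.size() ? i + 1 : i];
            out[i] = (w * l + s[i] + w * r) / norm;
        }
        return out;
    }

    // "Reconstruction": unsharp masking with a learnable gain.
    Signal sharpen(const Signal& s, double gain) {
        Signal out(s.size());
        for (std::size_t i = 0; i < s.size(); ++i) {
            const double l = s[i == 0 ? 0 : i - 1];
            const double r = s[i + 1 < s.size() ? i + 1 : i];
            out[i] = s[i] + gain * (s[i] - 0.5 * (l + r));
        }
        return out;
    }

    // Mean squared reconstruction error.
    double loss(const Signal& a, const Signal& b) {
        double e = 0.0;
        for (std::size_t i = 0; i < a.size(); ++i)
            e += (a[i] - b[i]) * (a[i] - b[i]);
        return e / static_cast<double>(a.size());
    }

    int main() {
        const Signal target{0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0};
        double sigma = 1.5, gain = 0.1;  // optics + reconstruction params
        const double lr = 0.05, eps = 1e-4;

        for (int step = 0; step < 500; ++step) {
            auto f = [&](double sg, double gn) {
                return loss(sharpen(blur(target, sg), gn), target);
            };
            // Finite-difference gradients over BOTH the optical and the
            // reconstruction parameter: the "end-to-end" part.
            const double dSigma = (f(sigma + eps, gain) - f(sigma, gain)) / eps;
            const double dGain  = (f(sigma, gain + eps) - f(sigma, gain)) / eps;
            sigma -= lr * dSigma;
            gain  -= lr * dGain;
            if (sigma < 0.3) sigma = 0.3;  // keep the blur physically plausible
        }
        std::cout << "sigma=" << sigma << " gain=" << gain << " loss="
                  << loss(sharpen(blur(target, sigma), gain), target) << '\n';
    }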

Wolfgang Heidrich is a Professor of Computer Science and Electrical and Computer Engineering in the KAUST Visual Computing Center, for which he also served as director from 2014 to 2021. Prof. Heidrich joined King Abdullah University of Science and Technology (KAUST) in 2014, after 13 years as a faculty member at the University of British Columbia. He received his PhD from the University of Erlangen in 1999 and then worked as a Research Associate in the Computer Graphics Group of the Max-Planck-Institute for Computer Science in Saarbrücken, Germany, before joining UBC in 2000. Prof. Heidrich's research interests lie at the intersection of imaging, optics, computer vision, computer graphics, and inverse problems. His more recent interest is in computational imaging, focusing on hardware-software co-design of the next generation of imaging systems, with applications such as high-dynamic-range imaging, compact computational cameras, and hyperspectral cameras, to name just a few. Prof. Heidrich's work on High Dynamic Range Displays served as the basis for the technology behind Brightside Technologies, which was acquired by Dolby in 2007. Prof. Heidrich is a Fellow of the IEEE and Eurographics, and the recipient of a Humboldt Research Award.

Talk MA-Talk: Inference of Users' Emotional State from Facial Landmarks, Gaze, and Rigid Head Motion

08.05.2023 13:00 - 08.05.2023 14:00
IZ G30

Speaker(s): Moritz von Estorff

Talk BA-Talk: Adapting "Virtual Texturing" to Stream Data With Varying Information Density

14.04.2023 13:00
G30

Speaker(s): Marcel Purwins

Talk PhD Defense: Fast and Efficient Artifact Correction for CT Reconstruction

17.03.2023 13:00 - 17.03.2023 14:00
IZ G30

Speaker(s): Markus Wedekind

CT reconstruction is a highly studied field in image processing that aims to reconstruct 3D images from radiographic projections. In industrial CT in particular, datasets are of substantially higher resolution than in conventional medical applications, and the metrological accuracy of computed results is of great importance. This places high demands on the computational performance of reconstruction algorithms. It also imposes a need to compensate for an abundance of artifacts that are introduced when physical effects occurring during the acquisition of projections are not accounted for. In this dissertation, we present several techniques that combat such artifacts in CT reconstruction. Firstly, we devise a method for reducing or eliminating stair artifacts that occur when polygonizing surface meshes from voxel grids that have been reconstructed using CT. We employ the ability of the commonly used filtered backprojection technique to reconstruct infinitesimal voxels at arbitrary positions in the volume, and use it to circumvent the interpolation of sub-voxel data that leads to the stair artifacts in the polygonization. Additionally, we seek to reduce ring artifacts in reconstructed volumes. These artifacts stem from incorrect normalization of detector screen pixels and particularly affect voxels near the axis of rotation in circular scans. We reduce those artifacts by modelling, in a physically correct way, the flat-field errors that lead to their emergence. Simultaneously, we demonstrate a computationally efficient way to implement our method in an existing CT reconstruction pipeline. Finally, the challenge of compensating for geometric calibration errors is addressed. For the case of truncated projections, we develop and evaluate methods for calibration correction with limited or no data redundancy. We consider and examine both methods that operate in the projection domain and ones that operate in the image domain.
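
For context on the ring-artifact part, the sketch below shows the textbook flat-field normalization whose per-pixel errors give rise to those artifacts; the dissertation's contribution lies in modelling these errors physically, which goes beyond this standard correction:

    #include <cstddef>
    #include <vector>

    // Standard flat-field normalization of a radiographic projection:
    // corrected = (raw - dark) / (flat - dark), per detector pixel.
    // Errors in the flat-field estimate repeat in every projection of
    // a circular scan and accumulate as rings around the rotation axis.
    std::vector<double> flatFieldCorrect(const std::vector<double>& raw,
                                         const std::vector<double>& flat,
                                         const std::vector<double>& dark) {
        std::vector<double> corrected(raw.size());
        for (std::size_t i = 0; i < raw.size(); ++i) {
            const double denom = flat[i] - dark[i];
            corrected[i] = denom > 0.0 ? (raw[i] - dark[i]) / denom : 0.0;
        }
        return corrected;
    }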

Talk BA-Talk: Selection in Scatter Plots

15.03.2023 13:00
G30

Speaker(s): Richard Neumann

Talk Disputation: Investigating the Perceived Authenticity and Communicative Abilities of Face-Swapped Portrait Videos

17.02.2023 13:00
G30

Speaker(s): Leslie Wöhler

Modern deep learning approaches allow for the automatic creation of highly realistic face-swapped videos. In these videos, recordings of two people are combined in a way that the face of a source person is applied to the video of a target person. This way, the resulting video obtains the facial identity of the source while keeping the body appearance, movements, and facial expressions of the target person. Thanks to their high degree of realism and automation of the generation process, face swaps are a valuable tool for creative and communicative scenarios. However, they could also be abused for criminal activities as they allow the impersonation of others and the generation of manipulated video content.

While many works focus on improving algorithms for the creation and detection of face swaps, there is only limited research on the perception of these modern video manipulations. As humans are very sensitive to changes and imbalances in facial representations, in my thesis I set out to investigate the perception of face swaps. In doing so, I focus on two areas: the perceived authenticity and the communicative abilities of face swaps.
To assess the quality and detectable cues in face swap videos, I examine whether humans can detect face swaps and which artifacts and facial areas are most important to detect manipulations using self-reports and eye tracking data. Furthermore, I discuss the perception of the conveyed emotions and personalities of face swaps to evaluate their usefulness as digital avatars in communicative scenarios. In order to perform reliable experiments and evaluations, I additionally introduce a novel dataset of face swaps designed for perceptual experiments as well as an eye tracking framework which enables the automatic generation of areas of interest in portrait videos.

The results of the experiments performed in this thesis indicate that modern face swaps are generally convincing and often mistaken for genuine videos.
While participants were able to report visible artifacts, they usually attributed them to video quality and did not suspect face swapping. The eye tracking data, on the other hand, revealed significant differences in viewing behavior between genuine and manipulated videos. This may indicate that some differences are perceived, but only subconsciously. Furthermore, my experiments show that face swaps are able to convey emotions and personality, which makes them useful in communicative scenarios such as digital avatars.

Talk BA-Talk: Wavelet based Foveated Rendering of Videos in Virtual Reality

28.11.2022 13:00 - 28.11.2022 13:30
G30

Speaker(s): Christopher Graen