Computer Graphics
TU Braunschweig


Talk Immersive Digital Reality

12.01.2018 10:15
Tampere University of Technology, Finland

Speaker(s): Marcus Magnor

Keynote presentation at VR Walkthrough Technology Day, TU Tampere, Finland (presentation video)

Since the times of the Lumière brothers, the way we watch movies hasn’t fundamentally changed: whether in movie theaters, on mobile devices, or on TV at home, we still experience movies as outside observers, watching the action through a “peephole” whose size is defined by the angular extent of the screen. As soon as we look away from the screen or turn around, we are immediately reminded that we are only “voyeurs”. With full field-of-view, head-mounted and tracked displays available now on the consumer market, this outside-observer paradigm of visual entertainment is giving way to a fully immersive experience that encompasses the viewer and is able to draw us in much more than was possible before.

Current endeavors towards immersive visual entertainment, however, are still almost entirely based on 3D graphics-generated content, limiting application scenarios to virtual worlds only. The reason is that in order to provide for stereo vision and ego-motion parallax, which are essential for a genuine perception of visual immersion, the scene must be rendered in real time from arbitrary vantage points. While this can be easily accomplished for 3D graphics via standard GPU rendering, it is not at all straightforward to do the same from conventional video footage of real-world events.

In my talk I will outline avenues of research toward enabling the immersive experience of real-world recordings, enhancing the immersive viewing experience by taking perceptual issues into account, and extending visual immersion beyond a single viewer to create a collectively experienceable immersive real-world environment.

Talk MA-Talk: Fast high-resolution GPU-based Computed Tomography

22.12.2017 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Markus Wedekind

Talk MA-Talk: Automatic Infant Face Verification

08.12.2017 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Hangjian Zhang

Talk Solution methods for vector field tomography in different geometries

15.11.2017 14:30
Informatikzentrum, Seminarraum G30

Speaker(s): Thomas Schuster

Vector field tomography has a broad range of applications such as medical diagnosis, oceanography, plasma physics or electron tomography. In the talk, we present an overview of vector field tomography in several different settings with adapted numerical solvers and inversion schemes.

Talk Bsc Talk: A framework for psychophysical experiment design in immersive full-dome environments

30.10.2017 13:00
ICG Dome, Northern Campus

Speaker(s): Jan-Frederick Musiol

This thesis documents the development of a software framework for conducting psychophysical experiments using the dome projection system of the Computer Graphics Lab at the TU Braunschweig (ICG Dome).
The framework adapts the approaches of existing software designed for conventional flat displays to this new immersive environment.

The thesis describes the functionality of the framework and explains the process of building an experiment using an example.
Finally, it discusses the suitability of the ICG Dome for psychophysical experimentation.


Talk Data-driven Compressed Sensing Tomography

09.10.2017 09:00
Sandia National Labs, USA

Speaker(s): Marcus Magnor

Talk BA-Talk: Evaluation of Skinning Techniques for Skeletal Animation in MonSteR

04.09.2017 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Paul Maximilian Bittner

Talk MA Talk: Facial Texture Generation from Uncontrolled Monocular Video

01.09.2017 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Rudolf Martin

Talk Real Virtual Humans

10.07.2017 13:30
IZ G30

Speaker(s): Gerard Pons-Moll

For man-machine interaction it is crucial to develop models of humans that look and move indistinguishably from real humans. Such virtual humans will be key for application areas such as computer vision, medicine and psychology, virtual and augmented reality and special effects in movies. 

Currently, digital models typically lack realistic soft tissue and clothing dynamics or require time-consuming manual editing of physical simulation parameters. Our hypothesis is that better and more realistic models of humans and clothing can be learned directly from real measurements coming from 4D scans, images and depth and inertial sensors. We combine statistical machine learning techniques and physics based simulation to create realistic models from data.

I will give an overview of several of our projects in which we build realistic models of human pose and shape, soft-tissue dynamics, and clothing. I will also present a recent technique we have developed to capture human movement from only 6 inertial sensors attached to the body limbs. This will enable capturing human motion during everyday activities, for example while we interact with other people, ride a bike, or drive a car. Such recorded motions will be key to learning models that replicate human behaviour. I will conclude the talk by outlining the next challenges in building virtual humans that are indistinguishable from real people.


Gerard Pons-Moll obtained his degree in superior Telecommunications Engineering from the Technical University of Catalonia (UPC) in 2008. From 2007 to 2008 he was at Northeastern University in Boston, USA, with a fellowship from the Vodafone foundation, conducting research on medical image analysis. He received his Ph.D. degree (with distinction) from the Leibniz University of Hannover in 2014. In 2012 he was a visiting researcher in the vision group at the University of Toronto, and also worked as an intern in the computer vision group at Microsoft Research Cambridge. From 11/2013 until 11/2015 he was a postdoc at the Max Planck Institute (MPI) for Intelligent Systems in Tübingen, Germany. Since 11/2015 he has been a research scientist at the MPI.

His work has been published in the major computer vision and computer graphics conferences and journals, including Siggraph, Siggraph Asia, CVPR, ICCV, BMVC (Best Paper), Eurographics (Best Paper), IJCV and TPAMI. He serves regularly as a reviewer for TPAMI, IJCV, Siggraph, Siggraph Asia, CVPR, ICCV, ECCV, ACCV and others. He co-organized three tutorials at major conferences: one at ICCV 2011 on Looking at People: Model Based Pose Estimation, and two at ICCV 2015 and Siggraph 2016 on Modeling Human Bodies in Motion.

His research interests are 3D modeling of humans and clothing in motion and using machine learning and graphics models to solve vision problems.

Talk MA-Talk: Virtually Increasing the Walkable Area in Room-Scale Immersive Environments

23.06.2017 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Adrian Wierzbowski

Talk Image-Based Measurement and Modeling of the Real World

20.06.2017 14:00
PTB-Braunschweig, Seminarzentrum A, Kohlrausch-Bau

Speaker(s): Marcus Magnor

Images are projections of physical reality: all luminous and illuminated objects continuously emit images of themselves, in all directions and over great distances. At the speed of light, images convey rich information about where and how they originated. Every digital image thus constitutes a measurement, and every digital camera is a measuring instrument that simultaneously captures millions of measured values of our natural environment, from a distance and without influencing the measured system.

In my talk I will use examples from radio astronomy, fluid mechanics, atmospheric optics, and perceptual psychology to show how modern image processing and computer simulation can extract quantitative information from images of complex natural processes.


Symposium SVCP17 - Symposium on Visual Computing and Perception

07.06.2017 13:00 - 08.06.2017 16:00
Campus Nord, TU Braunschweig

Speaker(s): Michael Bach, Heinrich Bülthoff, Jan Koenderink, Holger Theisel, Bernt Schiele, Erik Reinhard

Computer graphics, vision, and psychophysics form a scientific triad whose interdisciplinary research is about to fundamentally change the way we drive our cars, watch movies, play games, communicate with one another and with computers, socially interact, learn, and live. In six distinguished lectures, some of the most renowned scientists worldwide in these fields will share their views on what has already been accomplished, which challenges still lie ahead, and what we may expect in the future. The symposium is intended to bring together all scientific communities interested in visual computing and perception, to stir everyone's imagination, and to foster exciting new research between graphics, vision, and perception. (SVCP'17 web page)


Talk Using the Unity Engine in VR Environments

12.05.2017 13:00
ICG Lab Campus Nord

Speaker(s): Marcus Riemer

The Unity engine is a popular development environment for games and interactive applications. Beyond classic target platforms such as PC or smartphone, Unity also offers development interfaces for a wide variety of VR systems. Marcus Riemer reports on the use of Unity at FH Wedel and on the challenges and possibilities of using it with custom VR systems, taking a CAVE Virtual Environment as an example.

Additional speakers: Steffen Kurt, Florian Habib

Symposium The Entanglement between Gesture, Media and Politics

24.04.2017 13:00 - 28.04.2017 13:00
HBK Institute for Performing Arts and ICG Dome, Northern Campus

Host(s): Jan-Philipp Tauscher, Chair(s): Paul Maximilian Bittner, Speaker(s): Timo Herbst

The project The Entanglement between Gesture, Media and Politics investigates how bodily gestures are entangled with contemporary ubiquitous and globally networked media technologies. The development of these technologies, so the assumption goes, corresponds with the question of gestures being raised anew, and urgently, across different disciplines. Artistic and scholarly interest in gestures has been growing since the 1990s; there is, however, no generally accepted definition of what a gesture is. Instead, different approaches work with their own concepts, methods, and goals.

Against this background, the project explores the entanglement of gestures, media, and politics through inter- and transdisciplinary collaboration between art and science. To this end, we have chosen two focus topics: Under the heading "Im/perceptible Gestures" we ask to what extent the understanding and practice of "gesture" has always been a complex construction of sensory impressions and media technologies. With the focus "Political Gesture" we investigate how intersubjectively readable gestures are embedded in their political and social contexts and can even redefine them.

To ensure an equal dialogue between artists and scientists, phases of joint work in a workshop series alternate with phases of individual research and work. The workshops offer a very specific place of encounter and an appropriate setting for developing and testing methods of interdisciplinary collaboration. By alternating between methods and practices, the participants deepen their own disciplinary investigation of the phenomena while equally developing transdisciplinary insights into gestures and their entanglements with media and the political. Within the workshop series, the project moreover develops a framework for interdisciplinary research between art and science.

In line with the requirements and goals of our collaboration, the project "The Entanglement between Gesture, Media and Politics" promises the following results and products: (1) a platform to support collaborative work processes; (2) documentation of the workshops; (3) individual works by the participants (artworks, performances, articles, etc.); (4) a public symposium to present processes and results to a broader public; (5) a publication for the lasting documentation of the project and its results; and (6) a white paper for inter- and transdisciplinary research between art and science.


Talk Visual Computing and Perception

20.04.2017 13:00
GCafe, Stanford University, USA

Speaker(s): Marcus Magnor

Images are projections of the physical reality around us: any luminous or illuminated object continuously emits images of itself, in all directions and over huge distances. At the speed of light, images convey a wealth of information about their origins, which our visual system deciphers almost effortlessly, in real time, extremely efficiently, and enormously robustly. Consequently, our visual sense has evolved to become the prime modality for gathering information about our environment. In my talk I will present a few ongoing projects that investigate how we perceive visual information and how perception can be exploited to attain visual authenticity in computer graphics.

Talk Team Project Final Presentation: Virtual Reality Dome

07.02.2017 13:30
ICG Lab, Campus Nord

Presentation of the results of the student team project.

Talk Visual Computing - Bridging Real and Digital Domain

23.01.2017 16:00
Universität Konstanz

Invited talk at the University of Konstanz/SFB TRR 161 (abstract)

Talk Physics-Based Photorealistic Visualization of Astronomical Nebulae

09.12.2016 13:00
IZ G30

Speaker(s): Wolfgang Steffen

Talk Visual Computing - Bridging Real and Digital Domain

06.12.2016 14:00
IZ G30

Speaker(s): Marcus Magnor

Invited remote talk at the Johannes Kepler Universität Linz (abstract)

Talk Disputation

14.11.2016 13:00
Informatikzentrum, IZ 161

Speaker(s): Michael Stengel

Gaze-contingent Computer Graphics

Talk Disputation

07.10.2016 13:00
Informatikzentrum, Seminarraum G04

Speaker(s): Thomas Neumann

Reconstruction, Analysis, and Editing of Dynamically Deforming 3D Surfaces

Talk Fast image reconstruction for Magnetic-Particle-Imaging by Chebyshev Transformations

08.08.2016 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Leonard Schmiester

Image reconstruction in magnetic particle imaging (MPI) for Lissajous-type measurement sequences is done using an algebraic approach: by solving a large linear system of equations, the spatial distribution of magnetic nanoparticles can be determined. Despite the use of iterative solvers that converge rapidly, the size of the MPI system matrix leads to reconstruction times that are typically much longer than the actual data acquisition time. For this reason, matrix compression techniques have been introduced that transform the MPI system matrix into a sparse domain and then exploit this sparsity for accelerated reconstruction. Within this scope we investigate the Chebyshev transformation for matrix compression. By reducing the number of coefficients per matrix row to one, it is even possible to derive a direct reconstruction method that circumvents the use of iterative solvers.
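The row-wise compression idea can be sketched in a few lines of NumPy. The DCT matrix below stands in for the Chebyshev transform (the two coincide for data sampled at Chebyshev nodes); the toy system matrix, the hard-thresholding rule, and the `keep` parameter are illustrative assumptions, not the actual MPI system matrix or the method presented in the talk:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix; equals the Chebyshev transform
    for signals sampled at Chebyshev nodes."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (m + 0.5) * k / n)
    C[0, :] /= np.sqrt(2.0)
    return C

def compress_rows(S, keep=1):
    """Transform each row of S into the Chebyshev/DCT domain and
    keep only the `keep` largest-magnitude coefficients per row."""
    C = dct_matrix(S.shape[1])
    T = S @ C.T                       # row-wise transform
    out = np.zeros_like(T)
    for i, row in enumerate(T):
        idx = np.argsort(np.abs(row))[-keep:]   # indices of largest coefficients
        out[i, idx] = row[idx]
    return out, C

# Toy example: rows sampled from Chebyshev polynomials compress
# to a single coefficient each, so reconstruction is exact.
n = 64
x = np.cos(np.pi * (np.arange(n) + 0.5) / n)              # Chebyshev nodes
S = np.stack([np.cos(k * np.arccos(x)) for k in range(1, 9)])
S_sparse, C = compress_rows(S, keep=1)
err = np.abs(S_sparse @ C - S).max()                      # inverse transform vs. original
```

With only one coefficient per row, the transformed system is so sparse that each unknown couples to a single measurement, which is the intuition behind the direct reconstruction mentioned above; real MPI system matrices are only approximately sparse in this domain, so `keep` trades accuracy against speed.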

Talk MA-Talk: Guided Camera Placement for Image-Based Rendering

11.07.2016 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Leslie Wöhler

Talk Promotions-V-Vg: Gaze-Contingent Perceptual Rendering in Computer Graphics

01.07.2016 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Michael Stengel

Contemporary digital displays feature multi-million pixels at ever-increasing refresh rates. Reality, on the other hand, provides us with a view of the world that is continuous in space and in time. The discrepancy between viewing the physical world and its sampled depiction on digital displays gives rise to perceptual quality degradation. By measuring or estimating where we look, gaze-contingent algorithms aim at exploiting the way we visually perceive to remedy visible artifacts. In his dissertation pre-talk, Michael Stengel will present recent results from projects in the field of gaze-contingent and perceptual algorithms. Two projects aim at boosting the perceived visual quality of conventional video footage when viewed on commodity monitors or projection systems. In addition, he will describe a novel head-mounted display with real-time gaze tracking that enables many novel applications in the context of Virtual Reality and Augmented Reality. In a follow-up project, Michael and colleagues derived a novel gaze-contingent rendering method that uses active gaze tracking to reduce the computational effort of shading virtual worlds.
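To make the gaze-contingent idea concrete, the toy sketch below (an illustration, not the method from the dissertation) assigns each pixel a shading budget that falls off with angular eccentricity from the tracked gaze point; the foveal radius, falloff constant, and base sample budget are invented parameters:

```python
def shading_rate(ecc_deg, full_res_deg=5.0, falloff=0.3):
    """Relative shading resolution in (0, 1]: full resolution inside an
    assumed foveal region, hyperbolic falloff with eccentricity outside,
    loosely mimicking the decline of visual acuity away from the gaze point."""
    if ecc_deg <= full_res_deg:
        return 1.0
    return max(0.1, 1.0 / (1.0 + falloff * (ecc_deg - full_res_deg)))

def samples_per_pixel(ecc_deg, base_samples=8):
    """Map the rate to a shading sample budget for a pixel at the given
    angular distance from the tracked gaze point."""
    return max(1, round(base_samples * shading_rate(ecc_deg)))

budget_fovea = samples_per_pixel(0.0)       # full budget at the gaze point
budget_periphery = samples_per_pixel(40.0)  # strongly reduced in the periphery
```

Because most of the field of view lies far from the fovea, even such a crude schedule shifts the bulk of the shading work away from regions where the reduced resolution is hardly noticeable.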

Talk Promotions-V-Vg: Reconstruction, Analysis and Editing of Dynamically Deforming 3D Surfaces

10.06.2016 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Thomas Neumann

Dynamically deforming 3D surfaces play a major role in computer graphics. However, producing time-varying dynamic geometry at ever increasing detail is a labor-intensive process, and so a recent trend is to capture geometry data directly from the real world. The first part of this talk presents novel approaches in this direction, approaches that capture dense dynamic 3D surfaces from multi-camera systems in a particularly robust and accurate way. This provides highly realistic dynamic surface models for phenomena like moving garments and bulging muscles.

However, conveniently re-using, editing or otherwise analyzing dynamic 3D surface data is not yet possible. The second part of the talk thus deals with novel data-driven modeling and animation approaches. I first show a supervised data-driven approach for modeling human muscle deformations, an approach that scales to huge datasets and provides fine-scale, anatomically realistic deformations at a high quality not shown by previous data-driven methods. I then extend data-driven modeling to the unsupervised setting, thus providing editing tools for a wider set of input data ranging from facial performance captures and full body motion to muscle and cloth deformations. To this end, I introduce the concepts of sparsity and locality within a mathematical optimization framework. I also explore these concepts for constructing shape-aware functions that are useful for static geometry processing, registration and localized editing.