Computer Graphics
TU Braunschweig


Talk Teamprojekt-Abschluss: World Builder VR Toolkit Continued

26.03.2018 13:00
ICG Lab, Campus Nord

Presentation of the results of the student team project.

Talk BA-Talk: From Chairs to Humans: Specializing FlowNet 2 to non-rigid Human Motion

27.02.2018 14:00
Informatikzentrum, Seminarraum G30

Speaker(s): Maximilian Homann

Talk Das Weltall in Farbe und 3D

21.02.2018 19:00
Planetarium Wolfsburg

Speaker(s): Marcus Magnor

Public lecture at the Planetarium Wolfsburg (website)

Whether in science-fiction films, computer games, or the planetarium: outer space is rich in shapes and colors. Yet probably no one has ever seen a real astronomical nebula in color and 3D with their own eyes. What would it really look like on site if we could fly to the Ring, Orion, or Horsehead Nebula?

Talk Immersive Digital Reality

12.01.2018 10:15
Tampere University of Technology, Finland

Speaker(s): Marcus Magnor

Keynote presentation at VR Walkthrough Technology Day, TU Tampere, Finland (presentation video)

Since the times of the Lumière brothers, the way we watch movies hasn’t fundamentally changed: whether in movie theaters, on mobile devices, or on TV at home, we still experience movies as outside observers, watching the action through a “peephole” whose size is defined by the angular extent of the screen. As soon as we look away from the screen or turn around, we are immediately reminded that we are only “voyeurs”. With full field-of-view, head-mounted and tracked displays available now on the consumer market, this outside-observer paradigm of visual entertainment is giving way to a fully immersive experience that encompasses the viewer and is able to draw us in much more than was possible before.

Current endeavors towards immersive visual entertainment, however, are still almost entirely based on 3D graphics-generated content, limiting application scenarios to virtual worlds only. The reason is that in order to provide for stereo vision and ego-motion parallax, which are essential for a genuine sense of visual immersion, the scene must be rendered in real time from arbitrary vantage points. While this is easily accomplished for 3D graphics via standard GPU rendering, it is not at all straightforward to do the same from conventional video footage acquired of real-world events.
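
To make this contrast concrete, here is a purely illustrative sketch (not part of the talk) of the minimal machinery needed to re-render a single video frame from a shifted vantage point: a per-pixel depth map plus camera intrinsics and pose, none of which conventional footage provides by itself. The function warp_to_new_view and all parameter names are hypothetical; Python with NumPy is assumed.

    # Illustrative sketch: forward-warp one frame to a nearby virtual viewpoint.
    # Assumes a per-pixel depth map -- exactly what plain video does not provide.
    import numpy as np

    def warp_to_new_view(image, depth, K, R, t):
        """image: H x W x 3, depth: H x W (metres), K: 3 x 3 intrinsics,
        R, t: rotation and translation of the new camera relative to the old one."""
        H, W = depth.shape
        ys, xs = np.mgrid[0:H, 0:W]
        pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T  # homogeneous pixels
        rays = np.linalg.inv(K) @ pix                  # back-project to viewing rays
        points = rays * depth.reshape(1, -1)           # 3D points in the original camera frame
        cam2 = R @ points + t.reshape(3, 1)            # transform into the new camera frame
        proj = K @ cam2                                # project into the new image
        u = np.round(proj[0] / proj[2]).astype(int)
        v = np.round(proj[1] / proj[2]).astype(int)
        out = np.zeros_like(image)
        valid = (u >= 0) & (u < W) & (v >= 0) & (v < H) & (cam2[2] > 0)
        out[v[valid], u[valid]] = image.reshape(-1, 3)[valid]   # naive splat; holes remain
        return out

The holes and occlusion artifacts such a naive warp leaves behind are precisely the problems that image-based free-viewpoint rendering methods must resolve.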

In my talk I will outline avenues of research toward enabling the immersive experience of real-world recordings, enhancing the immersive viewing experience by taking perceptual issues into account, and extending visual immersion beyond a single viewer to create a collectively experienceable immersive real-world environment.

Talk MA-Talk: Fast high-resolution GPU-based Computed Tomography

22.12.2017 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Markus Wedekind

Talk MA-Talk: Automatic Infant Face Verification

08.12.2017 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Hangjian Zhang

Talk Solution methods for vector field tomography in different geometries

15.11.2017 14:30
Informatikzentrum, Seminarraum G30

Speaker(s): Thomas Schuster

Vector field tomography has a broad range of applications such as medical diagnosis, oceanography, plasma physics or electron tomography. In the talk, we present an overview of vector field tomography in several different settings with adapted numerical solvers and inversion schemes.

Talk Data-driven Compressed Sensing Tomography

09.10.2017 09:00
Sandia National Labs, USA

Speaker(s): Marcus Magnor

Talk BA-Talk: Evaluation of Skinning Techniques for Skeletal Animation in MonSteR

04.09.2017 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Paul Bittner

Talk MA-Talk: Facial Texture Generation from Uncontrolled Monocular Video

01.09.2017 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Rudolf Martin

Talk Real Virtual Humans

10.07.2017 13:30
IZ G30

Speaker(s): Gerard Pons-Moll

For man-machine interaction it is crucial to develop models of humans that look and move indistinguishably from real humans. Such virtual humans will be key for application areas such as computer vision, medicine and psychology, virtual and augmented reality, and special effects in movies.

Currently, digital models typically lack realistic soft tissue and clothing dynamics or require time-consuming manual editing of physical simulation parameters. Our hypothesis is that better and more realistic models of humans and clothing can be learned directly from real measurements coming from 4D scans, images and depth and inertial sensors. We combine statistical machine learning techniques and physics based simulation to create realistic models from data.

I will give an overview of several of our projects in which we build realistic models of human pose and shape, soft-tissue dynamics, and clothing. I will also present a recent technique we have developed to capture human movement from only six inertial sensors attached to the body limbs. This will enable capturing human motion during everyday activities, for example while we are interacting with other people, riding a bike, or driving a car. Such recorded motions will be key to learning models that replicate human behaviour. I will conclude the talk by outlining the next challenges in building virtual humans that are indistinguishable from real people.


Gerard Pons-Moll obtained his degree in Telecommunications Engineering from the Technical University of Catalonia (UPC) in 2008. From 2007 to 2008 he was at Northeastern University in Boston, USA, with a fellowship from the Vodafone foundation, conducting research on medical image analysis. He received his Ph.D. degree (with distinction) from the Leibniz University of Hannover in 2014. In 2012 he was a visiting researcher in the vision group at the University of Toronto, and also worked as an intern in the computer vision group at Microsoft Research Cambridge. From 11/2013 until 11/2015 he was a postdoc at the Max Planck Institute (MPI) for Intelligent Systems in Tübingen, Germany. Since 11/2015 he has been a research scientist at the MPI.

His work has been published in the major computer vision and computer graphics conferences and journals, including SIGGRAPH, SIGGRAPH Asia, CVPR, ICCV, BMVC (Best Paper), Eurographics (Best Paper), IJCV, and TPAMI. He serves regularly as a reviewer for TPAMI, IJCV, SIGGRAPH, SIGGRAPH Asia, CVPR, ICCV, ECCV, ACCV, and others. He has co-organized three tutorials at major conferences: one at ICCV 2011 on Looking at People: Model Based Pose Estimation, and two at ICCV 2015 and SIGGRAPH 2016 on Modeling Human Bodies in Motion.

His research interests are the 3D modeling of humans and clothing in motion, and the use of machine learning and graphics models to solve vision problems.

Talk MA-Talk: Virtually Increasing the Walkable Area in Room-Scale Immersive Environments

23.06.2017 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Adrian Wierzbowski

Talk Bildbasiertes Messen und Modellieren der realen Welt

20.06.2017 14:00
PTB-Braunschweig, Seminarzentrum A, Kohlrausch-Bau

Speaker(s): Marcus Magnor

Images are projections of physical reality: all luminous and illuminated objects continuously emit images of themselves, in all directions and over great distances. At the speed of light, images convey rich information about where and how they originated. Every digital image therefore constitutes a measurement, and every digital camera is a measuring instrument that simultaneously captures millions of measured values of our natural environment, from a distance and without influencing the system being measured.

In my talk, I will use examples from radio astronomy, fluid mechanics, atmospheric optics, and perceptual psychology to show how modern image processing and computer simulation can extract quantitative information from images of complex natural processes.

[Further information]

Symposium SVCP17 - Symposium on Visual Computing and Perception

07.06.2017 13:00 - 08.06.2017 16:00
Campus Nord, TU Braunschweig

Speaker(s): Michael Bach, Heinrich Bülthoff, Jan Koenderink, Holger Theisel, Bernt Schiele, Erik Reinhard

Computer graphics, vision, and psychophysics form a scientific triad whose interdisciplinary research is about to fundamentally change the way we drive our cars, watch movies, play games, communicate with one another and computers, socially interact, learn, live. In six distinguished lectures, some of the most renowned scientists worldwide in these fields will share their view on what has already been accomplished, which challenges still lie ahead, and what we may expect in the future. The symposium is intended to bring together all scientific communities interested in visual computing and perception, to stir everyone's imagination, and to foster exciting new research between graphics, vision, and perception. (SVCP'17 web page)


Talk Einsatz der Unity-Engine in VR-Umgebungen

12.05.2017 13:00
ICG Lab, Campus Nord

Speaker(s): Marcus Riemer

The Unity engine is a popular development environment for games and interactive applications. Besides classic target platforms such as PC and smartphone, Unity also offers development interfaces for a wide variety of VR systems. Marcus Riemer reports on the use of Unity at FH Wedel as well as the challenges and opportunities of using it for custom VR systems, illustrated by a CAVE virtual environment.

Additional speakers: Steffen Kurt, Florian Habib

Talk Visual Computing and Perception

20.04.2017 13:00
GCafe, Stanford University, USA

Speaker(s): Marcus Magnor

Images are projections of the physical reality around us: any luminous or illuminated object continuously emits images of itself, in all directions and over huge distances. At the speed of light, images convey a wealth of information about their origins, which our visual system deciphers almost without effort, in real time, with great efficiency and enormous robustness. Consequently, our visual sense has evolved to become the prime modality for gathering information about our environs. In my talk I will present a few ongoing projects that investigate how we perceive visual information and how we can make use of perception to attain visual authenticity in computer graphics.

Talk Teamprojekt-Abschluss: Virtual Reality Dome

07.02.2017 13:30
ICG Lab, Campus Nord

Presentation of the results of the student team project.

Talk Visual Computing - Bridging Real and Digital Domain

23.01.2017 16:00
Universität Konstanz

Invited talk at the University of Konstanz/SFB TRR 161 (abstract)

Talk Physik-basierte photorealistische Visualisierung astronomischer Nebel

09.12.2016 13:00
IZ G30

Speaker(s): Wolfgang Steffen

Talk Visual Computing - Bridging Real and Digital Domain

06.12.2016 14:00
IZ G30

Speaker(s): Marcus Magnor

Invited remote talk at the Johannes Kepler Universität Linz (abstract)

Talk Disputation

14.11.2016 13:00
Informatikzentrum, IZ 161

Speaker(s): Michael Stengel

Gaze-contingent Computer Graphics

Talk Disputation

07.10.2016 13:00
Informatikzentrum, Seminarraum G04

Speaker(s): Thomas Neumann

Reconstruction, Analysis, and Editing of dynamically deforming 3D-Surfaces

Talk Fast image reconstruction for Magnetic-Particle-Imaging by Chebyshev Transformations

08.08.2016 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Leonard Schmiester

Image reconstruction in magnetic particle imaging (MPI) is done using an algebraic approach for Lissajous-type measurement sequences. By solving a large linear system of equations, the spatial distribution of magnetic nanoparticles can be determined. Despite the use of iterative solvers that converge rapidly, the size of the MPI system matrix leads to reconstruction times that are typically much longer than the actual data acquisition time. For this reason, matrix compression techniques have been introduced that transform the MPI system matrix into a sparse domain and then exploit this sparsity for accelerated reconstruction. In this context, we investigate the Chebyshev transformation for matrix compression. By reducing the number of coefficients per matrix row to one, it is even possible to derive a direct reconstruction method that circumvents the use of iterative solvers.
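
As a rough illustration of the compression idea (an assumption-laden sketch, not the implementation evaluated in the talk): each row of a real-valued system matrix is transformed with a discrete cosine transform, which is closely related to a Chebyshev expansion, small coefficients are discarded, and the now sparse system is solved iteratively in the transformed domain. The function compress_and_solve, the keep ratio, and the iteration limit are hypothetical; Python with SciPy is assumed.

    # Illustrative sketch: row-wise DCT compression of an MPI-style system matrix.
    import numpy as np
    from scipy.fft import dct, idct
    from scipy.sparse import csr_matrix
    from scipy.sparse.linalg import lsqr

    def compress_and_solve(S, u, keep=0.05):
        """S: dense, real-valued system matrix (measurements x voxels), u: measured signal.
        Returns the reconstructed particle concentration."""
        St = dct(S, norm='ortho', axis=1)            # transform each row into the DCT/Chebyshev domain
        thresh = np.quantile(np.abs(St), 1 - keep)   # keep only the largest ~5% of coefficients
        St[np.abs(St) < thresh] = 0.0
        S_sparse = csr_matrix(St)                    # sparse matrix-vector products are now cheap
        y = lsqr(S_sparse, u, iter_lim=50)[0]        # iterative least-squares solve in the sparse domain
        return idct(y, norm='ortho')                 # transform the solution back to image space

The row-wise transform is admissible because S c = (S T^T)(T c) for any orthogonal transform T, so only the final solution has to be transformed back.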

Talk MA-Talk: Guided Camera Placement for Image-Based Rendering

11.07.2016 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Leslie Wöhler

Talk Promotions-V-Vg: Gaze-Contingent Perceptual Rendering in Computer Graphics

01.07.2016 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Michael Stengel

Contemporary digital displays feature millions of pixels at ever-increasing refresh rates. Reality, on the other hand, provides us with a view of the world that is continuous in space and time. The discrepancy between viewing the physical world and its sampled depiction on digital displays gives rise to perceptual quality degradation. By measuring or estimating where we look, gaze-contingent algorithms aim at exploiting the way we visually perceive in order to remedy visible artifacts. In his dissertation pre-talk, Michael Stengel will present recent results from projects in the field of gaze-contingent and perceptual algorithms. Two projects aim at boosting the perceived visual quality of conventional video footage when viewed on commodity monitors or projection systems. In addition, he will describe a novel head-mounted display with real-time gaze tracking that enables many novel applications in the context of virtual and augmented reality. In a follow-up project, Michael and colleagues derived a novel gaze-contingent rendering method that uses active gaze tracking to reduce the computational effort of shading virtual worlds.
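
A minimal sketch of the underlying gaze-contingent principle (illustrative only, not the method from the dissertation): shading effort is allocated according to the angular distance of each pixel from the tracked gaze point, following the approximate falloff of visual acuity with eccentricity. The function shading_rate and all constants below are assumptions; Python is used for illustration.

    # Illustrative sketch: reduce shading effort with angular distance from the gaze point.
    import numpy as np

    def shading_rate(pixel_xy, gaze_xy, pixels_per_degree=40.0):
        """Relative shading rate in (0, 1] for a pixel, given the current gaze position
        in screen coordinates. Simplified hyperbolic acuity model; constants are assumed."""
        ecc = np.linalg.norm(np.asarray(pixel_xy) - np.asarray(gaze_xy)) / pixels_per_degree
        e0 = 2.0                                  # assumed fully-shaded foveal radius in degrees
        return min(1.0, e0 / max(ecc, e0))

    print(shading_rate((960, 540), (960, 540)))   # 1.0  at the gaze point
    print(shading_rate((1500, 540), (960, 540)))  # ~0.15 at about 13.5 degrees eccentricity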