Computer Graphics
TU Braunschweig

Fast Non-Rigid Radiance Fields from Monocularized Data


The reconstruction and novel view synthesis of dynamic scenes recently gained increased attention. As reconstruction from large-scale multi-view data involves immense memory and computational requirements, recent benchmark datasets provide collections of single monocular views per timestamp sampled from multiple (virtual) cameras. We refer to this form of inputs as monocularized data.

Existing work shows impressive results for synthetic setups and forward-facing real-world data, but is often limited in training speed and in the angular range available for novel views. This paper addresses these limitations and proposes a new method for full 360° inward-facing novel view synthesis of non-rigidly deforming scenes. At the core of our method are: 1) An efficient deformation module that decouples the processing of spatial and temporal information for accelerated training and inference; and 2) A static module representing the canonical scene as a fast hash-encoded neural radiance field.

In addition to existing synthetic monocularized data, we systematically analyze the performance on real-world inward-facing scenes using a newly recorded challenging dataset sampled from a synchronized large-scale multi-view rig. In both cases, our method is significantly faster than previous methods, converging in less than 7 minutes and achieving real-time framerates at 1K resolution, while obtaining a higher visual accuracy for generated novel views.

Our code and dataset are available online:


Framework Overview

Our method takes a set of calibrated monocular RGBA images to reconstruct a deformable radiance field for novel-view synthesis. We feed sampled points x and their normalized timestamp t into individual shallow MLPs, and combine the resulting high-dimensional embeddings using matrix multiplication to obtain a deformation vector δx into canonical space. The canonical module is implemented as a fast hash-encoded neural radiance field, estimating opacity σ and view-dependent color c for volume rendering.
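The decoupled deformation step can be illustrated with a minimal numpy sketch. The layer widths, embedding size K, and random (untrained) weights below are assumptions for illustration; the key idea is that a spatial MLP produces a 3×K matrix embedding M(x), a temporal MLP produces a K-vector embedding e(t), and their product yields the deformation δx into canonical space:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mlp(dims):
    # Random weights as a hypothetical stand-in for trained shallow MLPs.
    return [(rng.standard_normal((i, o)) * 0.1, np.zeros(o))
            for i, o in zip(dims[:-1], dims[1:])]

def forward(layers, v):
    # Plain fully connected forward pass with ReLU on hidden layers.
    for i, (W, b) in enumerate(layers):
        v = v @ W + b
        if i < len(layers) - 1:
            v = np.maximum(v, 0.0)
    return v

K = 8                                  # embedding width (assumed)
spatial_net = make_mlp([3, 32, 3 * K])  # x -> matrix embedding M(x), shape (3, K)
temporal_net = make_mlp([1, 32, K])     # t -> vector embedding e(t), shape (K,)

def deform(x, t):
    """Decoupled deformation: delta_x = M(x) @ e(t)."""
    M = forward(spatial_net, x).reshape(3, K)
    e = forward(temporal_net, np.array([t]))
    return M @ e

x = np.array([0.1, -0.2, 0.3])
delta_x = deform(x, t=0.5)
x_canonical = x + delta_x  # point queried in the canonical radiance field
```

Because spatial and temporal inputs pass through separate networks, the temporal embedding e(t) can be evaluated once per frame and reused for all sample points, which is one way the decoupling accelerates training and inference.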


Code & Dataset

Author(s): Moritz Kappel, Vladislav Golyanik, Susana Castillo, Christian Theobalt, Marcus Magnor
Published: February 2024
Journal: IEEE Transactions on Visualization and Computer Graphics (TVCG)
Project(s): Neural Reconstruction and Rendering of Dynamic Real-World Scenes, Immersive Digital Reality

@article{kappel2024fast,
  title = {Fast Non-Rigid Radiance Fields from Monocularized Data},
  author = {Kappel, Moritz and Golyanik, Vladislav and Castillo, Susana and Theobalt, Christian and Magnor, Marcus},
  journal = {{IEEE} Transactions on Visualization and Computer Graphics ({TVCG})},
  doi = {10.1109/TVCG.2024.3367431},
  pages = {1--12},
  month = {Feb},
  year = {2024}
}