Computer Graphics
TU Braunschweig

Fast Non-Rigid Radiance Fields from Monocularized Data



3D reconstruction and novel view synthesis of dynamic scenes from collections of single views have recently gained increased attention. Existing work shows impressive results for synthetic setups and forward-facing real-world data, but is severely limited in training speed and in the angular range available for generating novel views. This paper addresses these limitations and proposes a new method for full 360° novel view synthesis of non-rigidly deforming scenes. At the core of our method are: 1) An efficient deformation module that decouples the processing of spatial and temporal information for acceleration at training and inference time; and 2) A static module representing the canonical scene as a fast hash-encoded neural radiance field. We evaluate the proposed approach on the established synthetic D-NeRF benchmark, which enables efficient reconstruction from a single monocular view per time-frame randomly sampled from a full hemisphere. We refer to this form of inputs as monocularized data. To prove its practicality for real-world scenarios, we recorded twelve challenging sequences with human actors by sampling single frames from a synchronized multi-view rig. In both cases, our method trains significantly faster than previous methods (minutes instead of days) while achieving higher visual accuracy for generated novel views.

Framework Overview

Our method takes a set of calibrated monocular RGBA images to reconstruct a deformable radiance field for novel-view synthesis. We feed sampled points x and their normalized timestamp t into individual shallow MLPs, and combine the resulting high-dimensional embeddings using matrix multiplication to obtain a deformation vector δx into canonical space. The canonical module is implemented as a fast hash-encoded neural radiance field, estimating opacity σ and view-dependent color c for volume rendering.
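The decoupled deformation module can be illustrated with a minimal sketch: a spatial network maps x to a matrix-shaped embedding, a temporal network maps t to a vector embedding, and their product yields δx. This is an illustrative numpy mock-up, not the paper's implementation; the layer sizes, the embedding width K, and all function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 8  # width of the shared embedding space (hypothetical choice)


def mlp(params, v):
    """Tiny two-layer MLP with ReLU, standing in for a shallow network."""
    (W1, b1), (W2, b2) = params
    h = np.maximum(W1 @ v + b1, 0.0)
    return W2 @ h + b2


def init(in_dim, hidden, out_dim):
    """Random weights for the toy MLP above."""
    return ((rng.normal(size=(hidden, in_dim)) * 0.1, np.zeros(hidden)),
            (rng.normal(size=(out_dim, hidden)) * 0.1, np.zeros(out_dim)))


spatial_params = init(3, 16, 3 * K)   # x in R^3  ->  embedding in R^{3K}
temporal_params = init(1, 16, K)      # t in R    ->  embedding in R^K


def deformation(x, t):
    # Spatial and temporal inputs are processed independently (the decoupling),
    # then fused by a single matrix-vector product.
    A = mlp(spatial_params, x).reshape(3, K)   # spatial embedding as 3xK matrix
    z = mlp(temporal_params, np.array([t]))    # temporal embedding, length K
    return A @ z                               # deformation delta_x in R^3


x = np.array([0.1, -0.2, 0.3])
dx = deformation(x, t=0.5)
x_canonical = x + dx  # warped point, fed to the canonical radiance field
```

Because the spatial and temporal branches only meet in the final product, per-point spatial embeddings can be reused across timestamps (and vice versa), which is one way such a factorization saves computation.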


Code & Dataset

https://github.com/MoritzKappel/MoNeRF


Author(s): Moritz Kappel, Vladislav Golyanik, Susana Castillo, Christian Theobalt, Marcus Magnor
Published: to appear
Type: Misc
Howpublished: arXiv preprint
Note: url: https://arxiv.org/abs/2212.01368
Project(s): Comprehensive Human Performance Capture from Monocular Video Footage; Immersive Digital Reality


@misc{kappel2022fast,
  title = {Fast Non-Rigid Radiance Fields from Monocularized Data},
  author = {Kappel, Moritz and Golyanik, Vladislav and Castillo, Susana and Theobalt, Christian and Magnor, Marcus},
  howpublished = {arXiv preprint},
  note = {url: https://arxiv.org/abs/2212.01368},
  year = {2022}
}
