ODHSR

Online Dense 3D Reconstruction of Humans and Scenes from Monocular Videos

CVPR 2025
Zetong Zhang, Manuel Kaufmann, Lixin Xue, Jie Song, Martin R. Oswald
ETH Zürich, HKUST(GZ), HKUST, University of Amsterdam

ODHSR takes a monocular RGB video of a human and, within a SLAM setting, jointly reconstructs a photorealistic dense Gaussian representation of the scene and the moving human while estimating camera poses, human poses, and human silhouettes.

Abstract

Creating a photorealistic scene and human reconstruction from a single monocular in-the-wild video is central to the perception of a human-centric 3D world. Recent advances in neural rendering have enabled holistic human-scene reconstruction, but they require pre-calibrated camera and human poses and days of training time. In this work, we introduce a novel unified framework that simultaneously performs camera tracking, human pose estimation, and human-scene reconstruction in an online fashion. 3D Gaussian Splatting is utilized to learn Gaussian primitives for humans and scenes efficiently, and reconstruction-based camera tracking and human pose estimation modules are designed to enable holistic understanding and effective disentanglement of pose and appearance. Specifically, we design a human deformation module to faithfully reconstruct details and to enhance generalizability to out-of-distribution poses. To accurately learn the spatial correlation between human and scene, we introduce occlusion-aware human silhouette rendering and monocular geometric priors, which further improve reconstruction quality. Experiments on the EMDB and NeuMan datasets demonstrate performance superior or on par with existing methods in camera tracking, human pose estimation, novel view synthesis, and runtime.
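To make the human deformation idea concrete, the sketch below poses canonical Gaussian centers with linear blend skinning (LBS). This is only an illustrative assumption, not ODHSR's actual deformation module (which goes beyond plain LBS to capture garment and pose-dependent detail); all function names, array shapes, and the toy data are hypothetical.

import numpy as np

def lbs_deform(means_c, skin_weights, joint_transforms):
    """Deform canonical Gaussian centers into the posed space with
    linear blend skinning (LBS). Hypothetical sketch, not ODHSR's module.

    means_c:          (N, 3) canonical Gaussian centers
    skin_weights:     (N, J) per-Gaussian skinning weights (rows sum to 1)
    joint_transforms: (J, 4, 4) rigid bone transforms for the current pose
    """
    # Blend the per-joint rigid transforms for every Gaussian.
    T = np.einsum("nj,jab->nab", skin_weights, joint_transforms)  # (N, 4, 4)
    # Apply the blended transform in homogeneous coordinates.
    homo = np.concatenate([means_c, np.ones((means_c.shape[0], 1))], axis=1)
    posed = np.einsum("nab,nb->na", T, homo)
    return posed[:, :3]

# Toy usage: two Gaussians, two joints; joint 1 is translated slightly.
means = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
weights = np.array([[1.0, 0.0], [0.2, 0.8]])
transforms = np.stack([np.eye(4), np.eye(4)])
transforms[1, :3, 3] = [0.0, 0.1, 0.0]
print(lbs_deform(means, weights, transforms))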

Video (coming)

Method Overview

Given a monocular video featuring a human in the scene, we simultaneously track the camera and human poses for each frame while training 3D Gaussian primitives. Our holistic human-scene representation is designed to handle garment deformations, shadows, and scene occlusions. Camera and human poses are optimized through dense matching against synthesized views and by leveraging monocular geometric cues. Mapping is carried out within a small local keyframe window, and we propose multiple regularizations to enhance reconstruction quality from the sparse set of keyframes.
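The alternating track-then-map structure described above is common to Gaussian-splatting SLAM systems; the sketch below illustrates that structure only. The differentiable renderer render_fn, the plain L1 loss, the optimizer settings, and the keyframe layout are placeholder assumptions, not ODHSR's implementation, which additionally uses silhouette rendering, monocular geometric cues, and further regularizers.

import torch

def photometric_loss(rendered, observed):
    # Plain L1 photometric error; a stand-in for the paper's full loss.
    return (rendered - observed).abs().mean()

def track_frame(render_fn, gaussians, frame, cam_pose, human_pose, iters=50):
    """Optimize camera and human pose for one frame with the map frozen."""
    cam_pose = cam_pose.clone().requires_grad_(True)
    human_pose = human_pose.clone().requires_grad_(True)
    opt = torch.optim.Adam([cam_pose, human_pose], lr=1e-3)
    for _ in range(iters):
        opt.zero_grad()
        rendered = render_fn(gaussians, cam_pose, human_pose)
        loss = photometric_loss(rendered, frame)
        loss.backward()
        opt.step()
    return cam_pose.detach(), human_pose.detach()

def map_keyframes(render_fn, gaussians, keyframes, iters=50):
    """Refine Gaussian parameters over a small local keyframe window.

    Assumes gaussians is a torch.nn.Module holding the primitives and
    each keyframe is a dict with "cam", "human", and "rgb" entries.
    """
    opt = torch.optim.Adam(gaussians.parameters(), lr=1e-3)
    for _ in range(iters):
        opt.zero_grad()
        loss = sum(
            photometric_loss(render_fn(gaussians, kf["cam"], kf["human"]), kf["rgb"])
            for kf in keyframes
        )
        loss.backward()
        opt.step()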

BibTeX

@inproceedings{zhang2025odhsr,
  author = {Zhang, Zetong and Kaufmann, Manuel and Xue, Lixin and Song, Jie and Oswald, Martin R.},
  title = {ODHSR: Online Dense 3D Reconstruction of Humans and Scenes from Monocular Videos},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year = {2025}
}