3DV 2026 Paper
We introduce Gaussian Wardrobe, a novel framework for digitizing compositional 3D neural avatars from multi-view videos. Existing methods for 3D neural avatars typically treat the human body and clothing as a single inseparable entity. This paradigm fails to capture the dynamics of complex free-form garments and prevents reusing clothing across individuals. To overcome these limitations, we develop a compositional 3D Gaussian representation that builds avatars from multiple layers of free-form garments. The core of our method is decomposing neural avatars into bodies and layers of shape-agnostic neural garments: our framework learns to disentangle each garment layer from multi-view videos and canonicalizes it into a shape-independent space. In experiments, our method produces photorealistic avatars with high-fidelity dynamics, achieving new state-of-the-art performance on novel pose synthesis benchmarks. In addition, we demonstrate that the learned compositional garments form a versatile digital wardrobe, enabling a practical virtual try-on application in which clothing can be freely transferred to new subjects.
Our framework canonicalizes an input template mesh into a zero-shaped space and segments it into distinct body and garment components. During training, layer-specific U-Nets predict the parameters of 3D Gaussian primitives from pose-conditioned positional maps. Finally, we composite the 3D Gaussians from all layers to render the final RGB images and segmentation masks.
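The per-layer prediction and compositing steps above can be sketched in a few lines. This is a minimal illustrative stub, not the paper's implementation: the learned layer-specific U-Net is replaced by a hand-written function, and the 11-parameter layout per Gaussian (3D position, quaternion rotation, 3D scale, opacity) is an assumption for illustration only.

```python
import numpy as np

def predict_layer_gaussians(pos_map, n_params=11):
    """Stand-in for a layer-specific U-Net: maps a pose-conditioned
    positional map of shape (H, W, 3) to per-texel Gaussian parameters.
    Assumed layout (illustrative): 3 position, 4 quaternion, 3 scale, 1 opacity.
    A real model would be a learned convolutional network."""
    H, W, _ = pos_map.shape
    params = np.zeros((H, W, n_params))
    params[..., :3] = pos_map      # positions initialized at the canonical texels
    params[..., 3] = 1.0           # identity rotation (quaternion w = 1)
    params[..., 7:10] = 0.01       # small isotropic scales
    params[..., 10] = 1.0          # fully opaque
    return params.reshape(-1, n_params)

def composite_layers(layer_maps):
    """Concatenate the Gaussians predicted for the body and each garment
    layer into a single set that a 3D Gaussian rasterizer renders jointly."""
    return np.concatenate([predict_layer_gaussians(m) for m in layer_maps], axis=0)

# One body layer and one garment layer, each on a 4x4 positional map.
body_map = np.random.rand(4, 4, 3)
garment_map = np.random.rand(4, 4, 3)
gaussians = composite_layers([body_map, garment_map])
print(gaussians.shape)  # (32, 11)
```

Compositing by concatenation reflects the key property of the representation: because all layers live in the same 3D Gaussian format, body and garments can be mixed freely, which is what enables transferring a garment layer to a different subject.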
@inproceedings{chen2026gaussian,
title={Gaussian Wardrobe: Compositional 3D Gaussian Avatars for Free-Form Virtual Try-on},
author={Chen, Zhiyi and Ho, Hsuan-I and Jiang, Tianjian and Song, Jie and Kaufmann, Manuel and Guo, Chen},
booktitle={Thirteenth International Conference on 3D Vision},
year={2026},
url={https://openreview.net/forum?id=sncanvgvUn}
}