👗Gaussian Wardrobe🧥
Compositional 3D Gaussian Avatars for Free-Form Virtual Try-on

3DV 2026 Paper

*Equal contribution

TL;DR


Project Teaser Image

Gaussian Wardrobe digitizes compositional 3D Gaussian avatars from multi-view videos. The learned neural garments are subject-agnostic and can therefore be stored, reused, and seamlessly recombined with new subjects. Leveraging Gaussian Wardrobe, we realize a practical 3D avatar virtual try-on application that demonstrates modeling the dynamics of challenging free-form clothing such as skirts, dresses, and open jackets.

Abstract

We introduce Gaussian Wardrobe, a novel framework to digitize compositional 3D neural avatars from multi-view videos. Existing methods for 3D neural avatars typically treat the human body and clothing as an inseparable entity. However, this paradigm fails to capture the dynamics of complex free-form garments and limits the reuse of clothing across different individuals. To overcome these problems, we develop a novel, compositional 3D Gaussian representation that builds avatars from multiple layers of free-form garments. The core of our method is decomposing neural avatars into bodies and layers of shape-agnostic neural garments. To achieve this, our framework learns to disentangle each garment layer from multi-view videos and canonicalizes it into a shape-independent space. In experiments, our method models photorealistic avatars with high-fidelity dynamics, achieving new state-of-the-art performance on novel pose synthesis benchmarks. In addition, we demonstrate that the learned compositional garments contribute to a versatile digital wardrobe, enabling a practical virtual try-on application where clothing can be freely transferred to new subjects.

Method

Method Architecture Diagram

Our framework canonicalizes an input template mesh into a zero-shaped space and segments it into distinct body and garment components. During training, layer-specific U-Nets predict the parameters of 3D Gaussian primitives from pose-conditioned positional maps. Finally, we composite the 3D Gaussians from all layers to render the final RGB images and segmentation masks.
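The layer-wise design described above can be sketched in a few lines: each layer (body plus one or more garments) contributes its own set of 3D Gaussian primitives, and rendering simply merges all layers into a single Gaussian set before rasterization. This is a hypothetical illustration with invented names and fields, not the actual implementation; the per-layer parameters here stand in for the U-Net outputs.

```python
import numpy as np

def make_layer(n, rng):
    """One avatar layer as a dict of per-Gaussian parameters (illustrative)."""
    return {
        "xyz": rng.normal(size=(n, 3)),                # 3D means
        "scale": np.abs(rng.normal(size=(n, 3))),      # per-axis scales
        "rot": np.tile([1.0, 0.0, 0.0, 0.0], (n, 1)),  # unit quaternions
        "opacity": rng.uniform(0, 1, size=(n, 1)),
        "rgb": rng.uniform(0, 1, size=(n, 3)),
    }

def composite_layers(layers):
    """Merge per-layer Gaussians into one set for joint rasterization."""
    return {k: np.concatenate([layer[k] for layer in layers], axis=0)
            for k in layers[0]}

rng = np.random.default_rng(0)
body, skirt = make_layer(100, rng), make_layer(50, rng)
avatar = composite_layers([body, skirt])
print(avatar["xyz"].shape)  # (150, 3)
```

Because compositing is a plain concatenation, swapping the `skirt` layer for a garment learned on a different subject leaves the body layer untouched, which is what makes the try-on application possible.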

    Technical Contributions:

  • Compositional Representation: Separating the body and garments into zero-shape canonical templates for cross-subject compatibility.
  • Garment Learning: Joint optimization of separate neural garments using photometric, segmentation, and inter-layer penetration losses.
  • Penetration-Aware Rendering: An online correction algorithm during rendering to resolve interpenetration artifacts in novel poses.
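The penetration-aware correction could look roughly like the following sketch. It assumes (hypothetically) that penetration of an inner layer through an outer garment is detected via a signed distance to the outer surface, and that offending Gaussian means are pushed back along the surface normal; the function names and the margin parameter are illustrative, not the paper's exact algorithm.

```python
import numpy as np

def resolve_penetration(inner_xyz, signed_dist_fn, normal_fn, margin=0.002):
    """Push inner-layer Gaussian means back under the outer garment surface.

    signed_dist_fn: signed distance to the outer layer (>0 means the point
    pokes outside the garment); normal_fn: outward unit normal at the
    closest surface point. Both are assumed to come from the garment model.
    """
    d = signed_dist_fn(inner_xyz)     # (N,)
    n = normal_fn(inner_xyz)          # (N, 3)
    offending = d > -margin           # penetrating or too close to the surface
    corrected = inner_xyz.copy()
    # Move each offending point inward along the normal until it sits
    # `margin` below the outer surface.
    corrected[offending] -= (d[offending] + margin)[:, None] * n[offending]
    return corrected

# Toy example: the outer garment is a unit sphere centered at the origin.
sd = lambda p: np.linalg.norm(p, axis=1) - 1.0
nrm = lambda p: p / np.linalg.norm(p, axis=1, keepdims=True)
pts = np.array([[1.2, 0.0, 0.0],   # pokes through the garment
                [0.5, 0.0, 0.0]])  # safely inside
out = resolve_penetration(pts, sd, nrm)
```

In the toy example the penetrating point is projected to radius 0.998 (just under the surface), while the interior point is left unchanged.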

Results on 4D-DRESS

Qualitative comparisons on two sequences: input images alongside results from Animatable Gaussians, LayGA, and our method.

Virtual Try-On Applications

Virtual Try-On 1
Virtual Try-On 2
Virtual Try-On 3
Virtual Try-On 4

Citation

@inproceedings{chen2026gaussian,
  title     = {Gaussian Wardrobe: Compositional 3D Gaussian Avatars for Free-Form Virtual Try-on},
  author    = {Chen, Zhiyi and Ho, Hsuan-I and Jiang, Tianjian and Song, Jie and Kaufmann, Manuel and Guo, Chen},
  booktitle = {Thirteenth International Conference on 3D Vision},
  year      = {2026},
  url       = {https://openreview.net/forum?id=sncanvgvUn}
}