Gaussian Haircut reconstructs strand-based hair geometry from a monocular video in a few hours on a single GPU.

Abstract

We introduce a new hair modeling method that uses a dual representation of classical hair strands and 3D Gaussians to produce accurate and realistic strand-based reconstructions from multi-view data. In contrast to recent approaches that leverage unstructured Gaussians to model human avatars, our method reconstructs the hair using 3D polylines, or strands. This fundamental difference allows the use of the resulting hairstyles out-of-the-box in modern computer graphics engines for editing, rendering, and simulation. Our 3D lifting method relies on unstructured Gaussians to generate multi-view ground truth data to supervise the fitting of hair strands. The hairstyle itself is represented in the form of the so-called strand-aligned 3D Gaussians. This representation allows us to combine strand-based hair priors, which are essential for realistic modeling of the inner structure of hairstyles, with the differentiable rendering capabilities of 3D Gaussian Splatting. Our method, named Gaussian Haircut, is evaluated on synthetic and real scenes and demonstrates state-of-the-art performance in the task of strand-based hair reconstruction.

Video Presentation

Main idea

Gaussian Haircut works with multi-view images and uses strand-aligned 3D Gaussians to reconstruct a hairstyle. In the first stage, 3D lifting, we reconstruct the scene using unstructured primitives in the form of Gaussians. These unstructured primitives are then used in the second stage, strand fitting, to supervise our dual hair representation, which consists of 3D Gaussians attached to hair strands. As a result, we produce a realistic strand-based hairstyle that can be rendered, edited, and animated using classical computer graphics techniques.
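To make the dual representation concrete, the sketch below shows one plausible way to attach an anisotropic 3D Gaussian to each strand segment: the mean sits at the segment midpoint, the dominant axis follows the segment direction, and the cross-section scale is kept thin. The function name, array layout, and `thickness` value are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def strands_to_gaussians(strands, thickness=1e-3):
    """Attach one anisotropic 3D Gaussian to each strand segment.

    `strands` has shape (S, P, 3): S polylines with P points each.
    `thickness` is a hypothetical cross-section scale, chosen here
    only for illustration.
    """
    starts, ends = strands[:, :-1], strands[:, 1:]        # (S, P-1, 3)
    means = 0.5 * (starts + ends)                         # segment midpoints
    dirs = ends - starts
    lengths = np.linalg.norm(dirs, axis=-1, keepdims=True)
    dirs = dirs / np.clip(lengths, 1e-8, None)            # unit tangent per segment
    # Scales: half the segment length along the tangent, thin elsewhere,
    # so each Gaussian hugs its segment of the strand.
    scales = np.concatenate(
        [0.5 * lengths,
         np.full_like(lengths, thickness),
         np.full_like(lengths, thickness)], axis=-1)      # (S, P-1, 3)
    return means, dirs, scales
```

Because the Gaussian parameters are derived from the polyline vertices, gradients from a differentiable Gaussian renderer flow back to the strand geometry itself, which is the key property the dual representation relies on.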

(a) In our work, we utilize both structured and unstructured sets of Gaussians. The former are attached to the strands and thus cover the entire hair volume, while the latter concentrate only on the visible part of the hair surface. (b) During coarse strand-based fitting, we decode only a set of guiding strands from the latent hair map at each training step, since generating a full hairstyle is computationally and memory expensive. We then convert these guiding strands into a dense hair map suitable for rendering by interpolating their 3D coordinates. (c) Finally, we conduct a fine strand-based optimization step: we decode a dense set of strands from the latent map and directly optimize their 3D coordinates instead of the latent representation.
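The interpolation in step (b) can be sketched as blending the 3D coordinates of nearby guiding strands on the scalp texture. The snippet below uses inverse-distance weighting over the k nearest guides; the paper's actual interpolation scheme may differ, and all names and shapes here are assumptions for illustration.

```python
import numpy as np

def densify_strands(guide_uv, guide_pts, query_uv, k=3):
    """Interpolate dense strands from sparse guiding strands.

    guide_uv:  (G, 2) scalp-texture coordinates of the guiding strands.
    guide_pts: (G, P, 3) guiding strand polylines (P points each).
    query_uv:  (Q, 2) texel positions where dense strands are wanted.
    Returns (Q, P, 3) interpolated strands. Inverse-distance weighting
    over the k nearest guides is an illustrative choice, not
    necessarily the scheme used in the paper.
    """
    d = np.linalg.norm(query_uv[:, None] - guide_uv[None], axis=-1)  # (Q, G)
    idx = np.argsort(d, axis=1)[:, :k]                               # k nearest guides
    w = 1.0 / (np.take_along_axis(d, idx, 1) + 1e-8)
    w = w / w.sum(axis=1, keepdims=True)                             # normalized weights
    # Blend the guides' 3D coordinates point-by-point along the strand.
    return np.einsum('qk,qkpd->qpd', w, guide_pts[idx])
```

Interpolating in the space of 3D coordinates, as described above, keeps the dense hair map consistent with the decoded guides while avoiding a full decoder pass per strand at every training step.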

Results

Comparison

Physics simulation and relighting

Unreal Engine demo

Hairstyles produced by our method can be readily imported into virtual environments with physically based rendering and simulation.

BibTeX

@article{zakharov2024gh,
    title={Human Hair Reconstruction with Strand-Aligned 3D Gaussians},
    author={Zakharov, Egor and Sklyarova, Vanessa and Black, Michael J and Nam, Giljoo and Thies, Justus and Hilliges, Otmar},
    journal={arXiv preprint},
    month={Sep},
    year={2024}
}