TempCLR: Reconstructing Hands via Time-Coherent Contrastive Learning

ETH Zürich, Switzerland · Max Planck Institute for Intelligent Systems, Tübingen, Germany

Accurate 3D hand annotations for in-the-wild images are scarce. TempCLR therefore leverages large amounts of unlabelled in-the-wild monocular RGB video via contrastive learning, encouraging similar hand poses to map to similar embeddings.

[Qualitative comparison: reconstructions with TempCLR vs. without TempCLR]

Abstract

We introduce TempCLR, a new time-coherent contrastive learning approach for the structured regression task of 3D hand reconstruction. Unlike previous time-contrastive methods for hand pose estimation, our framework considers temporal consistency in its augmentation scheme and accounts for differences in hand pose along the temporal direction. Our data-driven method leverages unlabelled videos and a standard CNN, without relying on synthetic data, pseudo-labels, or specialized architectures. Our approach improves the performance of fully-supervised hand reconstruction methods by 15.9% and 7.6% in PA-V2V on the HO-3D and FreiHAND datasets, respectively, thus establishing new state-of-the-art performance. Finally, we demonstrate that our approach produces smoother hand reconstructions through time and is more robust to heavy occlusions than the previous state-of-the-art, as we show both quantitatively and qualitatively.
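
For intuition, below is a minimal PyTorch sketch of the kind of time-contrastive objective the abstract describes: two temporally nearby, independently augmented frames from the same video form a positive pair, while frames from other clips in the batch act as negatives. This is an illustrative approximation under our own assumptions (function name, temperature, InfoNCE formulation), not the paper's exact loss or augmentation pipeline.

import torch
import torch.nn.functional as F

def time_contrastive_loss(z_a: torch.Tensor, z_b: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE-style loss over a batch of positive pairs.

    z_a, z_b: (N, D) embeddings of two temporally nearby frames per clip,
              produced by the same CNN backbone under different augmentations.
    """
    # Cosine similarities between all pairs in the batch.
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature  # (N, N)

    # Row i's positive is column i (the nearby frame of the same clip);
    # all other columns (frames of other clips) serve as negatives.
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

In such a scheme, geometric augmentations would typically be kept consistent within a clip so that the temporal relationship between the paired frames is preserved, which is the kind of temporal consistency the augmentation scheme above alludes to.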

Video

BibTeX


@inProceedings{ziani2022tempclr,
  title={TempCLR: Reconstructing Hands via Time-Coherent Contrastive Learning},
  author={Ziani, Andrea and Fan, Zicong and Kocabas, Muhammed and Christen, Sammy and Hilliges, Otmar},
  booktitle={International Conference on 3D Vision (3DV)},
  year={2022}
}