Reconstructing Action-Conditioned Human-Object Interactions Using Commonsense Knowledge Priors

Xi Wang, Gen Li, Yen-Ling Kuo, Muhammed Kocabas, Emre Aksan, Otmar Hilliges
1ETH Zurich, Switzerland
2Massachusetts Institute of Technology, USA
3Max Planck Institute for Intelligent Systems, Tübingen, Germany
*Equal Contribution
In Proceedings of the International Conference on 3D Vision (3DV), 2022.
Overview

Abstract

We present a method for inferring diverse 3D models of human-object interactions from images. Reasoning about how humans interact with objects in complex scenes from a single 2D image is a challenging task, given the ambiguities arising from the loss of information through projection. In addition, modeling 3D interactions requires the ability to generalize to diverse object categories and interaction types. We propose an action-conditioned modeling of interactions that allows us to infer diverse 3D arrangements of humans and objects without supervision on contact regions or 3D scene geometry. Our method extracts high-level commonsense knowledge from large language models (such as GPT-3) and applies it to perform 3D reasoning about human-object interactions. Our key insight is that priors extracted from large language models can help in reasoning about human-object contacts from textual prompts only. We quantitatively evaluate the inferred 3D models on a large human-object interaction dataset and show how our method leads to better 3D reconstructions. We further qualitatively evaluate the effectiveness of our method on real images and demonstrate its generalizability to diverse interaction types and object categories.
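To make the contact-prior idea concrete, below is a minimal sketch of how commonsense contact knowledge might be queried from GPT-3 through a text prompt. It assumes the pre-1.0 openai Python client; the prompt template, the contact_prior helper, and the body-part vocabulary are illustrative assumptions, not the paper's exact implementation.

# Minimal sketch: querying GPT-3 for commonsense contact priors via a text prompt.
# Assumes the pre-1.0 `openai` Python client (openai.api_key must be set);
# the prompt wording and body-part vocabulary are illustrative assumptions.
import openai

# Hypothetical body-part vocabulary; the paper's actual contact representation may differ.
BODY_PARTS = ["head", "torso", "hands", "arms", "legs", "feet"]

def contact_prior(action: str, obj: str) -> list[str]:
    """Return the body parts the LLM says likely touch `obj` during `action`."""
    prompt = (
        f"A person is {action} a {obj}. "
        f"Which of the following body parts are in contact with the {obj}? "
        f"Options: {', '.join(BODY_PARTS)}. Answer:"
    )
    response = openai.Completion.create(
        model="text-davinci-002",  # a GPT-3 model
        prompt=prompt,
        max_tokens=32,
        temperature=0.0,           # deterministic output for a stable prior
    )
    answer = response.choices[0].text.lower()
    # Keep only the body parts the model actually named.
    return [part for part in BODY_PARTS if part in answer]

# Example: contact_prior("riding", "bicycle") might yield ["hands", "legs", "feet"]

In such a setup, the returned body parts could then act as soft contact constraints when optimizing the relative 3D placement of the human and the object.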

Video

We present a method for inferring diverse 3D models of human-object interactions from images using priors extracted from large language models.

Poster

BibTeX

@inproceedings{wang2022reconstruction,
  title={Reconstructing Action-Conditioned Human-Object Interactions Using Commonsense Knowledge Priors},
  author={Wang, Xi and Li, Gen and Kuo, Yen-Ling and Kocabas, Muhammed and Aksan, Emre and Hilliges, Otmar},
  booktitle={International Conference on 3D Vision (3DV)},
  year={2022}
}