Given a set of calibrated images of a scene, we present an approach that produces a simple, compact, and actionable 3D world representation by means of 3D primitives. While many approaches focus on recovering high-fidelity 3D scenes, we focus on parsing a scene into mid-level 3D representations made of a small set of textured primitives. Such representations are interpretable, easy to manipulate, and suited for physics-based simulations. Moreover, unlike existing primitive decomposition methods that rely on 3D data, our approach operates directly on images through differentiable rendering. Specifically, we model primitives as superquadric meshes and optimize their parameters from scratch with an image rendering loss. We highlight the importance of modeling transparency for each primitive, which is critical for optimization and also enables handling varying numbers of primitives. We show that the resulting textured primitives faithfully reconstruct the input images and accurately model the visible 3D points, while providing amodal shape completions of unseen object regions. We compare our approach to the state of the art on diverse scenes from DTU, and demonstrate its robustness on real-life captures from BlendedMVS and Nerfstudio. We also showcase how our results can be used to effortlessly edit a scene or perform physical simulations.
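To give a concrete sense of the superquadric primitives mentioned above, here is a minimal NumPy sketch of the standard superquadric surface parameterization (two scale-and-exponent-controlled angles). This is only an illustration of the shape family, not the paper's optimization pipeline; the function names are ours, and the actual method optimizes these parameters (plus pose, texture, and transparency) through a differentiable renderer.

```python
import numpy as np

def fexp(x, p):
    # Signed power function used in the superquadric equations:
    # preserves the sign of x while raising its magnitude to p.
    return np.sign(x) * np.abs(x) ** p

def superquadric_points(scale=(1.0, 1.0, 1.0), eps=(1.0, 1.0), n=32):
    """Sample an (n, n, 3) grid of points on a superquadric surface.

    scale: per-axis half-extents (a1, a2, a3).
    eps:   shape exponents (eps1, eps2); (1, 1) gives an ellipsoid,
           values near 0 approach a box, values near 2 a diamond/octahedron.
    """
    eta = np.linspace(-np.pi / 2, np.pi / 2, n)   # latitude angle
    omega = np.linspace(-np.pi, np.pi, n)         # longitude angle
    eta, omega = np.meshgrid(eta, omega, indexing="ij")
    a1, a2, a3 = scale
    e1, e2 = eps
    x = a1 * fexp(np.cos(eta), e1) * fexp(np.cos(omega), e2)
    y = a2 * fexp(np.cos(eta), e1) * fexp(np.sin(omega), e2)
    z = a3 * fexp(np.sin(eta), e1)
    return np.stack([x, y, z], axis=-1)
```

With `eps=(1, 1)` the sampled points lie on an ellipsoid, so a handful of continuous parameters smoothly interpolates between boxes, spheres, and cylinders; connecting the grid into triangles yields the kind of watertight primitive mesh that a differentiable mesh renderer can consume.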
@inproceedings{monnier2023dbw,
  title={{Differentiable Blocks World: Qualitative 3D Decomposition by Rendering Primitives}},
  author={Monnier, Tom and Austin, Jake and Kanazawa, Angjoo and Efros, Alexei A. and Aubry, Mathieu},
  booktitle={{NeurIPS}},
  year={2023}
}
We thank Cyrus Vachha for help with the physics-based simulations; Antoine Guédon and Romain Loiseau for visualization insights; and François Darmon, Romain Loiseau, and Elliot Vincent for manuscript feedback. This work was supported in part by the European Research Council (ERC project DISCOVER, number 101076028), ANR project EnHerit ANR-17-CE23-0008, gifts from Adobe, and HPC resources from GENCI-IDRIS (2022-AD011011697R2, 2022-AD011013538).
If you like this project, see related works from our group:
© You are welcome to copy the code; simply attribute the source with a link to this page and remove the analytics.