Research Post
A key requirement for leveraging supervised deep learning methods is the availability of large, labeled datasets. Unfortunately, in the context of RGB-D scene understanding, very little data is available – current datasets cover a small range of scene views and have limited semantic annotations. To address this issue, we introduce ScanNet, an RGB-D video dataset containing 2.5M views in 1513 scenes annotated with 3D camera poses, surface reconstructions, and semantic segmentations. To collect this data, we designed an easy-to-use and scalable RGB-D capture system that includes automated surface reconstruction and crowdsourced semantic annotation. We show that using this data helps achieve state-of-the-art performance on several 3D scene understanding tasks, including 3D object classification, semantic voxel labeling, and CAD model retrieval.
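To make the dataset's structure concrete, here is a minimal sketch of the kind of per-scene record the abstract describes: RGB-D frames with 6-DoF camera poses, plus a reconstructed surface mesh with per-vertex semantic labels. This is a hypothetical illustration, not the official ScanNet toolkit; the `Frame`, `Scene`, and `backproject` names and the pinhole-intrinsics convention are assumptions made for this example.

```python
# Hypothetical sketch (not the official ScanNet API): a minimal data model for
# one RGB-D scene -- per-frame color/depth images with camera poses, plus a
# reconstructed mesh carrying per-vertex semantic labels.
from dataclasses import dataclass, field
from typing import List, Optional
import numpy as np


@dataclass
class Frame:
    color: np.ndarray  # H x W x 3 uint8 RGB image
    depth: np.ndarray  # H x W float32 depth map, in meters
    pose: np.ndarray   # 4 x 4 camera-to-world transform


@dataclass
class Scene:
    scene_id: str
    frames: List[Frame] = field(default_factory=list)
    mesh_vertices: Optional[np.ndarray] = None  # V x 3 surface reconstruction
    vertex_labels: Optional[np.ndarray] = None  # V integer semantic class ids


def backproject(frame: Frame, intrinsics: np.ndarray) -> np.ndarray:
    """Lift a depth map into a world-space point cloud using the frame's pose.

    intrinsics: 3 x 3 pinhole camera matrix (an assumed convention here).
    Returns an N x 3 array of 3D points for all pixels with valid depth.
    """
    h, w = frame.depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = frame.depth > 0
    z = frame.depth[valid]
    x = (u[valid] - intrinsics[0, 2]) * z / intrinsics[0, 0]
    y = (v[valid] - intrinsics[1, 2]) * z / intrinsics[1, 1]
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=0)  # 4 x N homogeneous
    pts_world = frame.pose @ pts_cam  # apply camera-to-world transform
    return pts_world[:3].T
```

Accumulating `backproject` outputs over a scene's frames yields the fused point cloud from which tasks like semantic voxel labeling can be set up; per-voxel labels would then be transferred from the annotated mesh vertices.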
Acknowledgments
This project is funded by Google Tango, Intel, NSF (IIS-1251217 and VEC 1539014/1539099), and a Stanford Graduate Fellowship. We also thank Occipital for donating Structure sensors and Nvidia for hardware donations, as well as support from the Max Planck Center for Visual Computing and the Stanford CURIS program. Further, we thank Toan Vuong, Joseph Chang, and Helen Jiang for help on the mobile scanning app and the scanning process, and Hope Casey-Allen and Duc Nguyen for early prototypes of the annotation interfaces. Last but not least, we would like to thank all the volunteers who helped with scanning and getting us access to scanning spaces.