Binh-Son Hua1, Quang-Hieu Pham2, Duc Thanh Nguyen3, Minh-Khoi Tran2, Lap-Fai Yu4, and Sai-Kit Yeung5
1The University of Tokyo 2Singapore University of Technology and Design 3Deakin University 4George Mason University 5The Hong Kong University of Science and Technology
We introduce an RGB-D scene dataset consisting of more than 100 indoor scenes.
Our scenes are captured at various places, e.g., offices, dormitories, classrooms, and pantries,
at the University of Massachusetts Boston and the Singapore University of Technology and Design.
All scenes are reconstructed into triangle meshes and have
per-vertex and per-pixel annotations. We further enrich
the dataset with fine-grained information such as
axis-aligned bounding boxes, oriented bounding boxes, and object poses.
CVPR 2018 (evaluating semantic segmentation with NYU-D v2 40 classes)
3DV 2016 (original instance annotation of 100+ scenes)
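A minimal loading sketch in Python (an illustration only, not the official toolkit): it assumes each scene ships as a PLY triangle mesh whose per-vertex annotation is stored in a vertex property named "label", and it uses the third-party plyfile package. The file name, property names, and layout are assumptions; please check the released data and tools for the exact format.

import numpy as np
from plyfile import PlyData  # pip install plyfile

def load_scene(ply_path):
    # Read vertices, triangle faces, and (if present) per-vertex labels from a PLY mesh.
    ply = PlyData.read(ply_path)
    vert = ply["vertex"]
    vertices = np.stack([vert["x"], vert["y"], vert["z"]], axis=1)
    face = ply["face"].data
    index_prop = face.dtype.names[0]  # usually "vertex_indices" or "vertex_index"
    faces = np.vstack(face[index_prop])
    # "label" is an assumed property name; it may differ in the released files.
    labels = np.asarray(vert["label"]) if "label" in vert.data.dtype.names else None
    return vertices, faces, labels

vertices, faces, labels = load_scene("scene.ply")  # hypothetical file name
print(vertices.shape, faces.shape, None if labels is None else np.unique(labels))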
Real-time Progressive 3D Semantic Segmentation for Indoor Scenes
Quang-Hieu Pham, Binh-Son Hua, Duc Thanh Nguyen, and Sai-Kit Yeung
WACV 2019
Project
Pointwise Convolutional Neural Networks
Binh-Son Hua, Minh-Khoi Tran, and Sai-Kit Yeung
Computer Vision and Pattern Recognition (CVPR) 2018
Project Paper Code Bibtex
@inproceedings{hua-pointwise-cvpr18,
title = {Pointwise Convolutional Neural Networks},
author = {Binh-Son Hua and Minh-Khoi Tran and Sai-Kit Yeung},
booktitle = {Computer Vision and Pattern Recognition (CVPR)},
year = {2018}
}
SceneNN: A Scene Meshes Dataset with aNNotations
Binh-Son Hua, Quang-Hieu Pham, Duc Thanh Nguyen, Minh-Khoi Tran, Lap-Fai Yu, and Sai-Kit Yeung
International Conference on 3D Vision (3DV) 2016. Best Paper Honorable Mention.
Paper Supplemental Slides Poster Bibtex
@inproceedings{scenenn-3dv16,
author = {Binh-Son Hua and Quang-Hieu Pham and Duc Thanh Nguyen and Minh-Khoi Tran and Lap-Fai Yu and Sai-Kit Yeung},
title = {SceneNN: A Scene Meshes Dataset with aNNotations},
booktitle = {International Conference on 3D Vision (3DV)},
year = {2016}
}
A Robust 3D-2D Interactive Tool for Scene Segmentation and Annotation
Duc Thanh Nguyen, Binh-Son Hua, Lap-Fai Yu, and Sai-Kit Yeung
IEEE Transactions on Visualization and Computer Graphics (TVCG) 2017
(presented at Pacific Graphics 2017)
Paper Video Bibtex
@article{anno-tvcg17,
author = {Duc Thanh Nguyen and Binh-Son Hua and Lap-Fai Yu and Sai-Kit Yeung},
title = {A Robust 3D-2D Interactive Tool for Scene Segmentation and Annotation},
journal = {IEEE Transactions on Visualization and Computer Graphics (TVCG)},
year = {2017}
}
SHREC'17: RGB-D to CAD Retrieval with ObjectNN Dataset
Organizers: Binh-Son Hua, Quang-Trung Truong, Minh-Khoi Tran, Quang-Hieu Pham, Lap-Fai Yu, Duc Thanh Nguyen, and Sai-Kit Yeung
Eurographics Workshop on 3D Object Retrieval 2017
Paper Slides Homepage Bibtex
@inproceedings{objectnn-shrec17,
author = {Binh-Son Hua and Quang-Trung Truong and Minh-Khoi Tran and Quang-Hieu Pham and
Asako Kanezaki and Tang Lee and HungYueh Chiang and Winston Hsu and
Bo Li and Yijuan Lu and Henry Johan and Shoki Tashiro and Masaki Aono and
Minh-Triet Tran and Viet-Khoi Pham and Hai-Dang Nguyen and Vinh-Tiep Nguyen and
Quang-Thang Tran and Thuyen V. Phan and Bao Truong and Minh N. Do and Anh-Duc Duong and
Lap-Fai Yu and Duc Thanh Nguyen and Sai-Kit Yeung},
title = {SHREC'17: RGB-D to CAD Retrieval with ObjectNN Dataset},
booktitle = {Eurographics Workshop on 3D Object Retrieval},
year = {2017}
}
Please email us at scenenn [at] gmail.com with any inquiries. You can also post to the discussion board below.
We are grateful to the anonymous reviewers for their constructive comments. We thank Fangyu Lin for his assistance with the data capture and development of the WebGL viewer, and Guoxuan Zhang for his help with the early version of the annotation tool.
Lap-Fai Yu is supported by the University of Massachusetts Boston StartUp Grant P20150000029280 and by the Joseph P. Healey Research Grant Program provided by the Office of the Vice Provost for Research and Strategic Initiatives & Dean of Graduate Studies of the University of Massachusetts Boston. This research is supported by the National Science Foundation under award number 1565978. We also acknowledge NVIDIA Corporation for graphics card donation.
Sai-Kit Yeung is supported by Singapore MOE Academic Research Fund MOE2013-T2-1-159 and SUTD-MIT International Design Center Grant IDG31300106. We acknowledge the support of the SUTD Digital Manufacturing and Design (DManD) Centre which is supported by the National Research Foundation (NRF) of Singapore. This research is also supported by the National Research Foundation, Prime Minister's Office, Singapore under its IDM Futures Funding Initiative.