A new approach is proposed for recovering the 3D layout and clutter of an indoor scene from RGB-D images. A robust method is also introduced for obtaining scene coordinates from depth images by solving a restricted quantization problem over normal vectors. Using this quantization together with the 3D positions of pixels, the scene is decomposed into planar surfaces and an orientation labeling is obtained. The segmented image is used to extract features for layout candidates generated by rays sampled from vanishing points, and these features are fed to a structured learning algorithm that ranks the candidates. Clutter is recovered by distinguishing different layers of parallel surfaces. Experimental results on the challenging NYU-Depth V2 dataset show that the approach outperforms state-of-the-art methods in both accuracy and computational cost.
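The orientation labeling described above can be illustrated with a minimal sketch: each pixel's surface normal (estimated from depth) is matched against the dominant scene directions implied by the vanishing points, and the pixel receives the label of the best-aligned direction. The function name, the fixed-axis toy input, and the `min_dot` threshold below are illustrative assumptions, not the paper's actual formulation of the restricted quantization problem.

```python
import numpy as np

def orientation_labels(normals, axes, min_dot=0.8):
    """Assign each pixel the dominant scene direction whose absolute
    dot product with the pixel's surface normal is largest.

    normals: (H, W, 3) unit surface normals estimated from the depth image
    axes:    (3, 3) rows holding three dominant (Manhattan-style) directions
    min_dot: pixels aligning with no axis above this threshold get label -1
             (hypothetical threshold, not from the paper)
    """
    # |n . a_k| for every pixel against every candidate axis
    dots = np.abs(np.einsum('hwc,kc->hwk', normals, axes))
    labels = dots.argmax(axis=-1)
    # mark poorly aligned pixels as unassigned (candidate clutter/noise)
    labels[dots.max(axis=-1) < min_dot] = -1
    return labels

# Toy example: top row is a wall facing +x, bottom row a floor facing +z.
axes = np.eye(3)
normals = np.zeros((2, 2, 3))
normals[0] = [1.0, 0.0, 0.0]
normals[1] = [0.0, 0.0, 1.0]
print(orientation_labels(normals, axes))
```

Grouping pixels that share a label and lie on a common plane then yields the planar-surface decomposition that the layout features are computed from.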
[1] M. Noroozi, M. K. Tabrizi, and S. R. Moghadasi. Indoor Scene 3D Layout and Clutter Estimation from RGB-D Images. In 3DV, 2014.