Viewpoint invariant semantic object and scene categorization with RGB-D sensors

Bibliographic Details
Main Authors: Mohd Zaki, Hasan Firdaus; Shafait, Faisal; Mian, Ajmal
Format: Article
Language: English
Published: Springer New York LLC 2019
Subjects:
Online Access:http://irep.iium.edu.my/64696/
http://irep.iium.edu.my/64696/20/64696_Viewpoint%20invariant%20semantic%20object%20and%20scene_complete.pdf
http://irep.iium.edu.my/64696/19/64696_Viewpoint%20invariant%20semantic%20object%20and%20scene_scopus.pdf
http://irep.iium.edu.my/64696/31/64696_Viewpoint%20invariant%20semantic%20object%20and%20scene%20categorization%20with%20RGB-D%20sensors_WOS.pdf
Description
Summary: Understanding the semantics of objects and scenes using multi-modal RGB-D sensors serves many robotics applications. Key challenges for accurate RGB-D image recognition are the scarcity of training data, variations due to viewpoint changes, and the heterogeneous nature of the data. We address these problems and propose a generic deep learning framework based on a pre-trained convolutional neural network as a feature extractor for both the colour and depth channels. We propose a rich multi-scale feature representation, referred to as the convolutional hypercube pyramid (HP-CNN), that encodes discriminative information from the convolutional tensors at different levels of detail. We also present a technique to fuse the proposed HP-CNN with the activations of fully connected neurons using an extreme learning machine classifier in a late fusion scheme, which leads to a highly discriminative and compact representation. To further improve performance, we devise HP-CNN-T, a view-invariant descriptor extracted from a multi-view 3D object pose (M3DOP) model. M3DOP is learned from over 140,000 RGB-D images that are synthetically generated by rendering CAD models from different viewpoints. Extensive evaluations on four RGB-D object and scene recognition datasets demonstrate that our HP-CNN and HP-CNN-T consistently outperform state-of-the-art methods on several recognition tasks by a significant margin.
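
The sketch below illustrates the multi-scale "hypercube pyramid" idea described in the summary: convolutional feature maps tapped at several depths of a pre-trained CNN are pooled to a common grid and concatenated into one descriptor per modality, after which the colour and depth descriptors are fused. It is a minimal, hypothetical reading of the abstract, not the authors' implementation: the use of torchvision's VGG-16, the tapped layer indices in TAP_LAYERS, the pooling grid size, and the concatenation-based late_fusion stand-in (the paper instead uses an extreme learning machine classifier) are all assumptions introduced for illustration.

```python
# Hypothetical sketch of a convolutional hypercube-pyramid descriptor with
# naive late fusion of the RGB and depth streams. Layer choices and sizes
# are illustrative assumptions, not the configuration from the paper.
import torch
import torch.nn.functional as F
from torchvision import models

cnn = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()

# Indices in vgg16.features after which we tap feature maps (an assumption):
# here, the output of each of the five convolutional blocks.
TAP_LAYERS = {4, 9, 16, 23, 30}

def hypercube_pyramid(image, grid=4):
    """Return a multi-scale descriptor for one image tensor (3 x H x W)."""
    feats = []
    x = image.unsqueeze(0)                      # add a batch dimension
    with torch.no_grad():
        for i, layer in enumerate(cnn.features):
            x = layer(x)
            if i in TAP_LAYERS:
                # Pool each convolutional tensor to a fixed spatial grid so
                # that different depths yield descriptors of compatible size.
                pooled = F.adaptive_max_pool2d(x, grid)
                feats.append(pooled.flatten(1))
    return torch.cat(feats, dim=1).squeeze(0)   # concatenated pyramid

def late_fusion(rgb_desc, depth_desc):
    """Fuse the two modality descriptors by concatenation (a stand-in for
    the extreme learning machine based late fusion used in the paper)."""
    return torch.cat([rgb_desc, depth_desc])

# Example with random stand-ins for an RGB image and a colourised depth map.
rgb = torch.rand(3, 224, 224)
depth = torch.rand(3, 224, 224)
descriptor = late_fusion(hypercube_pyramid(rgb), hypercube_pyramid(depth))
print(descriptor.shape)
```

In this reading, depth maps are encoded with the same pre-trained colour network after being rendered as three-channel images, and view invariance (HP-CNN-T) would come from fine-tuning the feature extractor on the synthetically rendered multi-view M3DOP data rather than from the descriptor construction itself.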