Going From RGB to RGBD Saliency: A Depth-Guided Transformation Model.

Depth information has been demonstrated to be useful for saliency detection. However, existing methods for RGBD saliency detection mainly focus on designing straightforward and comprehensive models, while ignoring the transferability of existing RGB saliency detection models. In this article, we propose a novel depth-guided transformation model (DTM) that goes from RGB saliency to RGBD saliency. The proposed model includes three components: 1) multilevel RGBD saliency initialization; 2) depth-guided saliency refinement; and 3) saliency optimization with depth constraints. The explicit depth feature is first utilized in the multilevel RGBD saliency model to initialize the RGBD saliency by combining the global compactness saliency cue and the local geodesic saliency cue. The depth-guided saliency refinement then further highlights the salient objects and suppresses the background regions by introducing prior depth domain knowledge and a refined depth shape prior. Benefiting from the consistency of the entire object in the depth map, we formulate an optimization model to attain more consistent and accurate saliency results via an energy function that integrates a unary data term, a color smoothness term, and a depth consistency term. Experiments on three public RGBD saliency detection benchmarks demonstrate the effectiveness of the proposed DTM and its performance improvement from RGB to RGBD saliency.
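The energy function itself is not reproduced in this listing. Purely as an illustrative sketch of the kind of optimization the abstract describes, combining a unary data term with pairwise color-smoothness and depth-consistency terms over neighboring regions, it could take a form such as

E(S) = \sum_{i} \left( S_i - \bar{S}_i \right)^2
     + \lambda_1 \sum_{(i,j) \in \mathcal{N}} w^{c}_{ij} \left( S_i - S_j \right)^2
     + \lambda_2 \sum_{(i,j) \in \mathcal{N}} w^{d}_{ij} \left( S_i - S_j \right)^2,

w^{c}_{ij} = \exp\left( -\frac{\lVert c_i - c_j \rVert^2}{2\sigma_c^2} \right),
\qquad
w^{d}_{ij} = \exp\left( -\frac{(d_i - d_j)^2}{2\sigma_d^2} \right),

where S_i is the saliency of region i, \bar{S}_i its initialized and refined RGBD saliency, c_i and d_i its mean color and depth, \mathcal{N} the set of neighboring region pairs, and \lambda_1, \lambda_2, \sigma_c, \sigma_d weighting parameters. All of these symbols are assumptions made for illustration and are not the paper's actual notation or formulation. A quadratic energy of this form can be minimized by solving a sparse linear system over the region graph, which would be one way to obtain the more consistent and accurate saliency maps the abstract reports.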
Reference Key cong2019goingieee
Authors Cong, Runmin; Lei, Jianjun; Fu, Huazhu; Hou, Junhui; Huang, Qingming; Kwong, Sam
Journal IEEE Transactions on Cybernetics
Year 2019
DOI 10.1109/TCYB.2019.2932005
