This paper presents a method for segmenting depth images into superpixels without requiring color images. Superpixel methods typically cluster pixels based on proximity in a multidimensional color space. However, building superpixels from time-of-flight depth images poses a number of new challenges: depth pixels lack color channels for similarity comparisons, the resolution of depth cameras is low compared to color cameras, and depth measurements are significantly noisy. To address these challenges we propose a superpixel method that approximates a depth image with a set of planar facets. Facets are grown from seed points to cover the scene. Facet boundaries tend to coincide with high-curvature regions and depth discontinuities, typically giving an over-segmentation of the scene. This work is motivated by automated foliage modeling, and the data we consider are of dense 3D foliage. Superpixel results are shown on foliage and are quantified using labeled data.
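The paper's exact energy-minimization formulation is not reproduced in this abstract. As a rough illustration of the idea of growing planar facets from seed points over a back-projected depth image, here is a minimal sketch; the function names, the greedy 4-connected growth rule, the point-to-plane distance threshold, and the periodic refit schedule are all hypothetical choices, not the authors' method.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit: returns a unit normal n and centroid c
    such that n . (p - c) is approximately 0 for points p on the plane."""
    c = points.mean(axis=0)
    # The smallest right singular vector of the centered points is the normal.
    _, _, vt = np.linalg.svd(points - c)
    return vt[-1], c

def grow_facet(points3d, seed, dist_thresh=0.01):
    """Greedy facet growing (illustrative sketch): starting from a seed
    pixel, repeatedly add 4-connected neighbors whose point-to-plane
    distance is below dist_thresh, refitting the plane as the facet grows.
    points3d: (H, W, 3) array of back-projected depth pixels."""
    h, w, _ = points3d.shape
    label = np.zeros((h, w), dtype=bool)
    label[seed] = True
    frontier = [seed]
    # Initialize the plane from a small window around the seed.
    y0, x0 = seed
    win = points3d[max(0, y0 - 1):y0 + 2, max(0, x0 - 1):x0 + 2].reshape(-1, 3)
    n, c = fit_plane(win)
    while frontier:
        y, x = frontier.pop()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not label[ny, nx]:
                p = points3d[ny, nx]
                if abs(np.dot(n, p - c)) < dist_thresh:
                    label[ny, nx] = True
                    frontier.append((ny, nx))
        # Periodically refit so the plane tracks the grown facet.
        if label.sum() % 50 == 0:
            n, c = fit_plane(points3d[label])
    return label
```

In a full pipeline, facets would be seeded across the image and grown until the scene is covered; the boundary between two facets then falls where the point-to-plane error exceeds the threshold, i.e. at depth discontinuities and high-curvature regions.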
Title of host publication: Proceedings - 2016 13th Conference on Computer and Robot Vision, CRV 2016
Number of pages: 4
State: Published - Dec 28 2016
Event: 13th Conference on Computer and Robot Vision, CRV 2016 - Victoria, Canada
Duration: Jun 1 2016 → Jun 3 2016
Name: Proceedings - 2016 13th Conference on Computer and Robot Vision, CRV 2016
Conference: 13th Conference on Computer and Robot Vision, CRV 2016
Period: 6/1/16 → 6/3/16
Bibliographical note (Funding Information):
This research was supported by an MSU start-up grant, the U.S. Department of Energy, Office of Science, Basic Energy Sciences [award number DE-FG02-91ER20021], the National Science Foundation [award number 1458556] and the MSU Center for Advanced Algal and Plant Phenotyping.
© 2016 IEEE.
Keywords
- Depth image
- Energy minimization
- Time-of-flight camera
ASJC Scopus subject areas
- Computer Vision and Pattern Recognition
- Signal Processing