In this paper we present the geometry and the algorithms for organizing a viewer-centered representation of the occluding contour of a polyhedron. The contour is computed from a polyhedral boundary model as it would appear under orthographic projection into the image plane from every viewpoint on the view sphere. Using this representation, we show how to derive constraints on regions in viewpoint space from the relationship between detected image features and our precomputed contour model. Such constraints are based on both qualitative (viewpoint extent) and quantitative (angle measurements and relative geometry) information that has been precomputed about how the contour appears in the image plane as a set of projected curves and T-junctions from self-occlusion. The results we show from an experimental system demonstrate that features of the occluding contour can be computed in a model-based framework, and that their geometry constrains the viewpoints from which a model will project to a set of occluding contour features in an image.
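The core geometric test behind an occluding-contour representation can be illustrated with a minimal sketch (not the paper's implementation): under orthographic projection, an edge of a polyhedron lies on the occluding contour exactly when one of its two adjacent faces faces the viewer and the other does not. The cube data, face names, and `contour_edges` function below are illustrative assumptions, not from the paper.

```python
# Hedged sketch: find occluding-contour (silhouette) edges of a convex
# polyhedron under orthographic projection along a given view direction.
# An edge is on the contour when exactly one adjacent face is front-facing.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Axis-aligned unit cube: outward normal of each face, keyed by a face name.
FACE_NORMALS = {
    "+x": (1, 0, 0), "-x": (-1, 0, 0),
    "+y": (0, 1, 0), "-y": (0, -1, 0),
    "+z": (0, 0, 1), "-z": (0, 0, -1),
}

# The cube's 12 edges, each identified by the pair of faces meeting along it.
EDGES = [
    ("+x", "+y"), ("+x", "-y"), ("+x", "+z"), ("+x", "-z"),
    ("-x", "+y"), ("-x", "-y"), ("-x", "+z"), ("-x", "-z"),
    ("+y", "+z"), ("+y", "-z"), ("-y", "+z"), ("-y", "-z"),
]

def contour_edges(view_dir):
    """Edges whose two adjacent faces differ in front-facing status.

    view_dir points from the object toward the viewer, so a face is
    front-facing when its outward normal has a positive dot product
    with view_dir.
    """
    front = {f: dot(n, view_dir) > 0 for f, n in FACE_NORMALS.items()}
    return [e for e in EDGES if front[e[0]] != front[e[1]]]
```

Viewing the cube head-on along an axis, e.g. `contour_edges((0, 0, 1))`, yields the four edges bounding the facing square; a generic viewpoint such as `(1, 1, 1)` yields six edges, the hexagonal silhouette. Tabulating this edge set over a partition of the view sphere is one way such a viewer-centered contour model could be organized.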
Number of pages: 14
Journal: CVGIP: Image Understanding
State: Published, March 1992