Hard-to-see targets are generally detected by human observers only once they have been fixated. Understanding how the human visual system allocates fixations is therefore necessary for predicting target detectability. Visual search experiments were conducted in which observers searched for military vehicles in cluttered terrain, and instantaneous eye position was recorded with an eye tracker. The resulting data were partitioned into fixations and saccades and analyzed for correlation with various image properties. The fixation data were used to validate our model for predicting fixation locations. The model generates a saliency map from bottom-up image features, such as local contrast. To account for top-down scene-understanding effects, a separate cognitive bias map is generated. Combining these two maps yields a fixation probability map, from which sequences of fixation points are generated.
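The abstract outlines the model's pipeline: a bottom-up saliency map (e.g., local contrast) is combined with a top-down cognitive bias map to form a fixation probability map, from which fixation sequences are sampled. The sketch below illustrates one way such a pipeline could be assembled; the local-contrast measure, the multiplicative combination of the two maps, and the independent sampling of fixations are assumptions made for illustration, not the paper's stated method.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast_saliency(image, window=16):
    """Bottom-up saliency as local standard deviation of intensity.
    The window size and contrast measure are illustrative assumptions."""
    mean = uniform_filter(image, size=window)
    mean_sq = uniform_filter(image ** 2, size=window)
    return np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))

def fixation_probability_map(image, bias_map, window=16):
    """Combine the bottom-up saliency map with a top-down cognitive bias map.
    The multiplicative combination is an assumption; the paper only states
    that the two maps are combined into a fixation probability map."""
    saliency = local_contrast_saliency(image, window)
    combined = saliency * bias_map
    return combined / (combined.sum() + 1e-12)

def sample_fixations(prob_map, n_fixations, seed=None):
    """Draw a sequence of fixation points (row, col) from the probability map,
    sampled independently for simplicity."""
    rng = np.random.default_rng(seed)
    flat_idx = rng.choice(prob_map.size, size=n_fixations, p=prob_map.ravel())
    return np.column_stack(np.unravel_index(flat_idx, prob_map.shape))

# Example: a uniform bias map reduces the model to the bottom-up saliency term alone.
image = np.random.rand(256, 256)
bias = np.ones_like(image)
prob = fixation_probability_map(image, bias)
fixations = sample_fixations(prob, n_fixations=10, seed=0)
```

With a non-uniform bias map (for example, higher weight on terrain regions where vehicles are plausible), the same sampling step would shift fixations toward cognitively favored areas while still respecting local contrast.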
Analysis and modeling of fixation point selection for visual search in cluttered backgrounds
2000
11 pages, 10 references
Conference paper
English
Modeling cognitive effects on visual search for targets in cluttered backgrounds [3375-11]
British Library Conference Proceedings | 1998
Hyperspectral Target Detection in Cluttered Backgrounds
British Library Conference Proceedings | 2005
Real-time viewpoint-invariant hand localization with cluttered backgrounds
British Library Online Contents | 2012
Computer-augmented detection of targets in cluttered and low-contrast backgrounds
Tema Archive | 1997
A Comparison of Measures for Detecting Natural Shapes in Cluttered Backgrounds
British Library Online Contents | 1999