A summary statistic representation in peripheral vision explains visual search

Ruth Rosenholtz, Jie Huang, Alvin Raj, Benjamin J. Balas, and Livia Ilie


Abstract

Vision is an active process: We repeatedly move our eyes to seek out objects of interest and explore our environment. Visual search experiments capture aspects of this process by having subjects look for a target within a background of distractors. Search speed often correlates with target–distractor discriminability; search is faster when the target and distractors look quite different. However, there are notable exceptions. A given discriminability can yield efficient searches (where the target seems to “pop out”) as well as inefficient ones (where additional distractors make search significantly slower and more difficult). Search is often more difficult when finding the target requires distinguishing a particular configuration or conjunction of features. Search asymmetries abound. These puzzling results have fueled three decades of theoretical and experimental studies. We argue that the key issue in search is the processing of image patches in the periphery, where visual representation is characterized by summary statistics computed over a sizable pooling region. By quantifying these statistics, we predict a set of classic search results, as well as peripheral discriminability of crowded patches such as those found in search displays.
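To make the central idea concrete, here is a minimal sketch of "summary statistics over a pooling region": each peripheral patch is reduced to a small statistic vector (here just mean, variance, and a coarse orientation-energy histogram), and two patches that yield similar vectors are predicted to be hard to distinguish in the periphery. This is an illustrative toy, not the authors' actual model, which uses a much richer set of texture statistics; the statistic choices and the two synthetic bar patches below are assumptions for demonstration only.

```python
import numpy as np

def pooling_statistics(patch):
    """Reduce one pooling region to a small summary-statistic vector:
    luminance mean, luminance variance, and a 4-bin histogram of
    gradient orientations weighted by gradient magnitude."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)                 # local edge energy
    ori = np.arctan2(gy, gx) % np.pi       # orientation folded into [0, pi)
    bins = np.linspace(0, np.pi, 5)        # 4 orientation bins
    hist, _ = np.histogram(ori, bins=bins, weights=mag)
    hist = hist / (hist.sum() + 1e-12)     # normalize orientation energy
    return np.concatenate(([patch.mean(), patch.var()], hist))

# Two synthetic patches: vertical bars vs. horizontal bars.
vertical = np.zeros((32, 32))
vertical[:, 8::8] = 1.0
horizontal = vertical.T

s_v = pooling_statistics(vertical)
s_h = pooling_statistics(horizontal)

# A simple proxy for peripheral discriminability: distance between
# the two statistic vectors. Identical patches give distance zero;
# bars differing in orientation give a clearly nonzero distance.
discriminability = np.linalg.norm(s_v - s_h)
```

Under this toy measure, the vertical and horizontal patches share the same mean and variance but differ sharply in the orientation histogram, so they remain discriminable; two patches with matched statistics would collapse to (near-)zero distance, modeling the crowding-limited confusions the paper attributes to peripheral pooling.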

Information

title:
A summary statistic representation in peripheral vision explains visual search
author:
R. Rosenholtz,
J. Huang,
A. Raj,
B. J. Balas,
and L. Ilie
citation:
Journal of Vision, 12(4):14, 1–17
shortcite:
Journal of Vision
year:
2012
created:
2012-04-22
summary:
searchjov2012
keyword:
rosenholtz,
visstat,
search
pdf:
http://www.journalofvision.org/content/12/4/14
type:
publication