Revisiting the variable memory model of visual search

Abstract:

How much memory does visual search have? A number of recent papers have explored this question from various points of view. In this paper, I propose a formal framework for comparing answers across different experimental paradigms. This framework is based on the "variable memory model" (Arani, Karwan, & Drury, 1984). The model has three parameters: encoding probability (p), recall probability (φ), and target identification probability (p′). It can be used to generate cumulative distribution functions for reaction time (RT) or saccades. I compare the model to a dataset of RTs collected in a standard inefficient search for block 2s among block 5s. Assuming perfect identification (p′ = 1), I found that mean encoding probability was .33 and mean recall probability was .71. The variable memory model provides a common metric for characterizing the behaviour of observers in different laboratories, in terms that are easy to relate to the memory literature.
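
To make the roles of the three parameters concrete, the following is a minimal Monte Carlo sketch (in Python) of one possible sampling interpretation of a variable memory search: on each glance a random item is inspected, a previously encoded distractor is skipped only if it is recalled, and the search ends when the target is inspected and identified. The function names, the display size of 12 items, and the sampling scheme are illustrative assumptions, not the paper's implementation; only the parameters p, φ (phi), and p′ (p_prime) come from the abstract.

    # Illustrative Monte Carlo sketch of a variable-memory search process.
    # The sampling scheme is an assumption; only the parameters
    # p (encoding), phi (recall), and p_prime (identification) follow the abstract.
    import random

    def simulate_search(n_items, p=0.33, phi=0.71, p_prime=1.0, rng=random):
        """Return the number of item inspections needed to find the target."""
        target = 0                 # index of the target item
        encoded = set()            # distractors currently held in memory
        inspections = 0
        while True:
            item = rng.randrange(n_items)
            if item in encoded and rng.random() < phi:
                continue           # remembered distractor is recalled and skipped
            inspections += 1
            if item == target:
                if rng.random() < p_prime:
                    return inspections   # target inspected and identified
            elif rng.random() < p:
                encoded.add(item)        # rejected distractor is encoded

    def empirical_cdf(samples):
        """Cumulative distribution over inspection counts (a proxy for RT or saccade count)."""
        ordered = sorted(samples)
        n = len(ordered)
        return [(x, (i + 1) / n) for i, x in enumerate(ordered)]

    if __name__ == "__main__":
        runs = [simulate_search(n_items=12) for _ in range(10000)]
        for x, cum in empirical_cdf(runs)[::2000]:
            print("<= %d inspections: %.2f" % (x, cum))

Raising p or φ toward 1 shifts the simulated distribution toward the "search with full memory" limit (no re-inspections), while setting both near 0 approaches memoryless sampling with replacement; the empirical CDF of inspection counts is the quantity the abstract describes fitting to RT data.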

Document Type: Research Article

DOI: http://dx.doi.org/10.1080/13506280500193958

Affiliations: Visual Attention Laboratory, Brigham & Women's Hospital and Harvard Medical School, Cambridge, MA, USA

Publication date: August 1, 2006
