Revisiting the variable memory model of visual search

How much memory does visual search have? A number of recent papers have explored this question from various points of view. In this paper, I propose a formal framework for comparing answers across different experimental paradigms. This framework is based on the “variable memory model” (Arani, Karwan, & Drury, 1984). The model has three parameters: encoding probability (), recall probability (φ), and target identification probability (p′). It can be used to generate cumulative distribution functions for reaction time (RT) or saccades. I compare the model to a dataset of RTs collected on a standard inefficient search for block 2s among block 5s. Assuming perfect identification (p′ = 1), I found that mean encoding probability was .33 and mean recall probability was .71. The variable memory model provides a common metric for characterizing the behaviour of observers in different laboratories, in terms that are easy to relate to the memory literature.
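One plausible reading of such a model can be sketched as a Monte Carlo simulation. The sketch below is an illustration, not the authors' fitting procedure: it assumes that on each inspection a randomly chosen unremembered item is fixated, that an inspected item is encoded into memory with some encoding probability, that each stored memory independently survives to the next inspection with a recall probability, and that a fixated target is identified with probability p′. All parameter names and the per-step forgetting scheme are assumptions for illustration.

```python
import random

def simulate_search(n_items=16, p_encode=0.33, p_recall=0.71, p_ident=1.0,
                    rng=None, max_steps=10_000):
    """Simulate one search trial under a variable-memory scheme (illustrative).

    Each step: fixate a random item not currently held in memory; a fixated
    target is identified with probability p_ident; a fixated item is encoded
    into memory with probability p_encode; each stored memory independently
    survives (is recalled) with probability p_recall.
    Returns the number of inspections until the target is identified.
    """
    rng = rng or random.Random(0)
    target = 0  # arbitrarily label item 0 as the target
    memory = set()
    for step in range(1, max_steps + 1):
        candidates = [i for i in range(n_items) if i not in memory]
        if not candidates:
            # Display exhausted without identification (only reachable when
            # p_ident < 1): reset memory and search again.
            memory.clear()
            candidates = list(range(n_items))
        item = rng.choice(candidates)
        if item == target and rng.random() < p_ident:
            return step
        if rng.random() < p_encode:
            memory.add(item)
        # Each stored memory is independently retained with probability p_recall.
        memory = {i for i in memory if rng.random() < p_recall}
    return max_steps
```

With p_encode = p_recall = 1 and p′ = 1 this reduces to sampling without replacement (perfect memory, at most n_items inspections); with p_encode = 0 it reduces to memoryless sampling with replacement. Running many trials and accumulating the step counts yields the kind of cumulative distribution function over fixations that the model is used to generate.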

Document Type: Research Article

Affiliations: Visual Attention Laboratory, Brigham & Women's Hospital and Harvard Medical School, Cambridge, MA, USA

Publication date: 01 August 2006
