Quantifying the contribution of low‐level saliency to human eye movements in dynamic scenes

We investigated the contribution of low‐level saliency to human eye movements in complex dynamic scenes. Eye movements were recorded while naive observers viewed a heterogeneous collection of 50 video clips (46,489 frames; 4–6 subjects per clip), yielding 11,916 saccades of amplitude ≥2°. A model of bottom‐up visual attention computed instantaneous saliency at the moment each saccade started and at its future endpoint location. Median model‐predicted saliency was 45% of the maximum saliency, a significant factor of 2.03 greater than expected by chance. Motion and temporal change were stronger predictors of human saccades than colour, intensity, or orientation features, with the best predictor being the sum of all features. There was no significant correlation between model‐predicted saliency and duration of fixation. A majority of saccades were directed to a minority of locations reliably marked as salient by the model, suggesting that bottom‐up saliency may provide a set of candidate saccade target locations, with the final choice of which location to fixate being more strongly determined top‐down.
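The core analysis described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a per-frame saliency map (here a NumPy array), reads the map value at each saccade's future endpoint at saccade onset, normalizes by the map maximum, and compares the observed median against a chance baseline sampled at random locations. The function name `endpoint_saliency_ratio` and all data structures are hypothetical.

```python
import numpy as np

def endpoint_saliency_ratio(saliency_maps, saccades, rng, n_random=100):
    """Compare model saliency at saccade endpoints against chance.

    saliency_maps: list of 2-D arrays, one per video frame.
    saccades: list of (frame_index, (row, col)) endpoint locations,
              sampled at the instant each saccade starts.
    Returns (median observed saliency, median chance saliency), both
    expressed as a fraction of the map maximum.
    """
    observed, chance = [], []
    for frame_idx, (y, x) in saccades:
        smap = saliency_maps[frame_idx]
        norm = smap / smap.max()           # saliency as fraction of map max
        observed.append(norm[y, x])
        # Chance baseline: mean saliency at uniformly random locations
        h, w = smap.shape
        ys = rng.integers(0, h, n_random)
        xs = rng.integers(0, w, n_random)
        chance.append(norm[ys, xs].mean())
    return np.median(observed), np.median(chance)

# Toy demo: one noisy map with a single bright peak, and saccades
# that both land on the peak (hypothetical data, not the study's).
rng = np.random.default_rng(0)
smap = rng.random((60, 80)) * 0.2
smap[30, 40] = 1.0                          # the salient location
saccades = [(0, (30, 40)), (0, (30, 40))]
obs, base = endpoint_saliency_ratio([smap], saccades, rng)
print(obs, base, obs / base)
```

In the study, an observed-to-chance factor of 2.03 was reported; in this toy demo the ratio is much larger only because the saccades are constructed to land exactly on the one salient pixel.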
Document Type: Research Article

Affiliations: Departments of Computer Science, Psychology, and Neuroscience Graduate Program, University of Southern California, Los Angeles, USA

Publication date: 2005-08-01
