Crossmodal binding in localizing objects outside the field of view

Using virtual reality techniques, we created a virtual room within which participants could orient themselves by means of a head-mounted display. Participants were required to search for an object, not immediately visible, attached to different parts of the virtual room's walls. The search could be guided by a light and/or a sound emitted by the object. When the object was found, participants engaged it with a sighting circle. The time taken by participants to initiate the search and to engage the target object was measured. Results from three experiments suggest that (1) advantages in starting the search, finding, and engaging the object emerged when the object emitted both light and sound; (2) these advantages disappeared when the visual and auditory information emitted by the object was separated in time by more than 150 ms; (3) misleading visual information produced greater interference than misleading auditory information (e.g., sound from one part of the room, light from the object).

Document Type: Research Article


Affiliations: Dipartimento di Psicologia Generale, Università di Padova, Padova, Italy

Publication date: 2006-01-01
