Crossmodal binding in localizing objects outside the field of view
Using virtual reality techniques, we created a virtual room within which participants could orient themselves by means of a head-mounted display. Participants were required to search for an object, attached to different parts of the virtual room's walls, that was not immediately visible. The search could be guided by a light and/or a sound emitted by the object. When the object was found, participants engaged it with a sighting circle. The time taken by participants to initiate the search and to engage the target object was measured. Results from three experiments suggest that (1) the object was searched for, found, and engaged faster when it emitted both light and sound; (2) these advantages disappeared when the visual and auditory information emitted by the object was separated in time by more than 150 ms; and (3) misleading visual information produced greater interference than misleading auditory information (e.g., light from one part of the room, sound from the object).
Document Type: Research Article
Affiliations: Dipartimento di Psicologia Generale, Università di Padova, Padova, Italy
Publication date: 01 January 2006