Enabling collaborative geoinformation access and decision-making through a natural, multimodal interface

Current computing systems do not support human work effectively. They restrict human-computer interaction to one mode at a time and are designed on the assumption that they will be used by individuals (rather than groups) directing (rather than interacting with) the system. To support the ways in which humans work and interact, a new paradigm for computing is required that is multimodal rather than unimodal, collaborative rather than personal, and dialogue-enabled rather than unidirectional. To address this challenge, we have developed an approach for designing natural, multimodal, multiuser, dialogue-enabled interfaces to geographic information systems that make use of large-screen displays and integrated speech-gesture interaction. After outlining our goals and providing a brief overview of relevant literature, we introduce the Dialogue-Assisted Visual Environment for Geoinformation (DAVE_G). DAVE_G is being developed using a human-centred systems approach that contextualizes development and assessment in the current practice of potential users. In keeping with this human-centred approach, we outline a user task analysis and the associated scenario development that the implementation is designed to support (grounded in the context of emergency response); review our own precursors to the current prototype system and discuss how the current prototype extends past work; provide a detailed description of the architecture that underlies the current system; and introduce the approach implemented for enabling mixed-initiative human-computer dialogue. We conclude with a discussion of goals for future research.
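The abstract does not detail how speech and gesture are integrated in DAVE_G, but the general pattern of resolving a deictic utterance (e.g. "zoom to this area") against a co-occurring pointing gesture can be sketched as temporal fusion of two event streams. The following is a minimal, hypothetical illustration; the names `SpeechEvent`, `GestureEvent`, and `fuse`, and the fixed time window, are assumptions, not the system's actual design.

```python
from dataclasses import dataclass

@dataclass
class SpeechEvent:
    text: str         # recognized utterance, e.g. "zoom to this area"
    timestamp: float  # seconds

@dataclass
class GestureEvent:
    x: float          # screen/map coordinates of a pointing gesture
    y: float
    timestamp: float

def fuse(speech, gestures, window=2.0):
    """Pair a deictic utterance ("this ...") with the nearest-in-time
    gesture within the window to form a complete map command.
    Returns None when the reference cannot be resolved, which is where
    a mixed-initiative dialogue manager would ask for clarification."""
    if "this" not in speech.text:
        # No deictic reference: the utterance is a complete command on its own.
        return {"command": speech.text, "target": None}
    candidates = [g for g in gestures
                  if abs(g.timestamp - speech.timestamp) <= window]
    if not candidates:
        return None
    g = min(candidates, key=lambda g: abs(g.timestamp - speech.timestamp))
    return {"command": speech.text, "target": (g.x, g.y)}
```

In a real multimodal system the fusion step would also handle multiple users, richer gesture types (areas, paths), and confidence scores from the recognizers; the sketch only shows the timing-based pairing idea.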

Keywords: GIS; decision-making; emergency response; geocollaboration; geoinformation; multimodal interface

Document Type: Research Article

Affiliations: School of Information Sciences and Technology, Penn State University, PA 16802, USA

Publication date: 2005-03-01
