Acquisition of joint attention through natural interaction utilizing motion cues
Joint attention is one of the most important cognitive functions for the emergence of communication, not only between humans but also between humans and robots. In previous work, we demonstrated how a robot can acquire primary joint attention behavior (gaze following) without external evaluation. However, that method requires the human to tell the robot when to shift its gaze. This paper presents a method that removes this constraint by introducing an attention selector based on a measure combining the saliencies of object features and motion cues. To realize natural interaction, a self-organizing map for real-time face pattern separation and contingency learning for gaze following without external evaluation are employed. The attention selector controls the robot's gaze so that it switches frequently between the human face and an object, and pairs of a face pattern and a gaze motor command are fed to the contingency learning. The motion cues are expected to reduce the number of incorrect training pairs caused by asynchronous interaction, which impairs the convergence of the contingency learning. Experimental results show that gaze shift utilizing motion cues enables the robot to synchronize its own motion with the human's and to learn joint attention efficiently, in about 20 min.
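The attention-selection idea in the abstract can be sketched as a simple winner-take-all over combined saliencies. This is a minimal illustration, not the paper's implementation: the function name, the linear combination, and the motion weight `w_motion` are assumptions introduced here for clarity.

```python
import numpy as np

def select_attention_target(feature_saliency, motion_saliency, w_motion=2.0):
    """Pick the gaze target with the highest combined saliency.

    feature_saliency: per-target static feature saliencies (e.g. color, shape)
    motion_saliency:  per-target motion-cue saliencies (recent movement)
    w_motion:         illustrative weight boosting moving targets
    """
    combined = np.asarray(feature_saliency) + w_motion * np.asarray(motion_saliency)
    return int(np.argmax(combined))

# Targets: 0 = human face, 1 = object A, 2 = object B.
# In a static scene, the most feature-salient target wins ...
print(select_attention_target([0.6, 0.8, 0.3], [0.0, 0.0, 0.0]))  # -> 1
# ... but a motion cue (e.g. the human shaking object B) redirects gaze,
# helping the robot's gaze shifts synchronize with the human's actions.
print(select_attention_target([0.6, 0.8, 0.3], [0.0, 0.0, 0.5]))  # -> 2
```

Weighting motion cues this way means a moving object (or a turning head) captures attention even when its static features are less salient, which is how the motion cues can keep robot and human gaze shifts synchronized during learning.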
Document Type: Research Article
Department of Adaptive Machine Systems, Graduate School of Engineering, Osaka University, 2-1 Yamadaoka, Suita, Osaka 565-0871, Japan
Department of Adaptive Machine Systems, Graduate School of Engineering, Osaka University, 2-1 Yamadaoka, Suita, Osaka 565-0871, Japan; ERATO, JST, 2-1 Yamadaoka, Suita, Osaka 565-0871, Japan
ERATO, JST, 2-1 Yamadaoka, Suita, Osaka 565-0871, Japan
Publication date: 2007-09-01