Learning behavior fusion from demonstration

Abstract:

A critical challenge in robot learning from demonstration is mapping the behavior of the trainer onto a robot's existing repertoire of basic/primitive capabilities. In part, this problem arises because the observed behavior of the teacher may consist of a combination (or superposition) of the robot's individual primitives. The problem becomes more complex when the task involves temporal sequences of goals. We introduce an autonomous control architecture that allows for learning of hierarchical task representations, in which: (1) every goal is achieved through a linear superposition (or fusion) of robot primitives and (2) sequencing across goals is achieved through arbitration. We treat learning of the appropriate superposition as a state estimation problem over the space of possible linear fusion weights, inferred through a particle filter. We validate our approach in both simulated and real-world environments with a Pioneer 3DX mobile robot.
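The paper's own implementation is not reproduced here, but the core idea — treating the fusion weights as a hidden state estimated by a particle filter from demonstrated commands — can be sketched as follows. This is a minimal illustration, not the authors' method: the primitive behaviors (`goto_goal`, `avoid_obstacle`), the noise levels, and the constraint that weights lie on a simplex are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical primitive behaviors: each maps a robot state to a 2-D velocity command.
# (Stand-ins for the robot's actual primitive repertoire.)
def goto_goal(state):
    return np.array([1.0, 0.0])

def avoid_obstacle(state):
    return np.array([0.0, 1.0])

primitives = [goto_goal, avoid_obstacle]

def estimate_fusion_weights(states, observations, n_particles=500, obs_noise=0.1):
    """Particle filter over linear fusion weights of the primitives.

    Each particle is a candidate weight vector; particles are weighted by how
    well their fused command predicts the demonstrated command, then resampled.
    """
    d = len(primitives)
    # Assumed: weights are non-negative and sum to 1 (a simplex prior).
    particles = rng.dirichlet(np.ones(d), size=n_particles)
    for state, obs in zip(states, observations):
        basis = np.stack([p(state) for p in primitives])      # (d, 2) primitive outputs
        preds = particles @ basis                             # (n, 2) fused predictions
        # Gaussian likelihood of the observed demonstrated command.
        ll = np.exp(-np.sum((preds - obs) ** 2, axis=1) / (2 * obs_noise ** 2))
        ll /= ll.sum()
        # Resample and jitter (diffusion step) to avoid particle degeneracy.
        idx = rng.choice(n_particles, size=n_particles, p=ll)
        particles = particles[idx] + rng.normal(0.0, 0.02, size=(n_particles, d))
        particles = np.clip(particles, 0.0, None)
        particles /= particles.sum(axis=1, keepdims=True)
    return particles.mean(axis=0)

# Synthetic demonstration generated by a true fusion of 0.7*goto_goal + 0.3*avoid_obstacle.
true_w = np.array([0.7, 0.3])
states = [None] * 50
observations = [
    true_w @ np.stack([p(s) for p in primitives]) + rng.normal(0.0, 0.05, 2)
    for s in states
]
w_hat = estimate_fusion_weights(states, observations)
```

Run on the synthetic demonstration above, the posterior mean `w_hat` should concentrate near the true weights `[0.7, 0.3]`; with real demonstrations the primitives would be state-dependent and the weights would be re-estimated per goal in the task hierarchy.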

Keywords: CONTROL ARCHITECTURES; HUMAN–ROBOT INTERACTION; LEARNING FROM DEMONSTRATION; PARTICLE FILTERS

Document Type: Research Article

DOI: https://doi.org/10.1075/is.9.2.09nic

Publication date: 2008-05-01

More about this publication?
  • Social Behaviour and Communication in Biological and Artificial Systems