Learning from demonstration (LfD) aims to enable robots to learn skills from human-demonstrated tasks. Ideally, robots should be able to learn at all levels of abstraction. Unlike learning at the level of motor primitives, high-level LfD requires symbolic representations and thus faces the classical symbol grounding problem. Furthermore, it requires the robot to interpret human-demonstrated actions at a higher, conceptual level of abstraction. We present a method that enables a robot to recognize the goals of human-demonstrated pick-and-place tasks on an object-relational abstraction layer and to reproduce these goals in new situations using a symbolic planner. We show that, in a robotic context, conceptual spaces can serve as a means both for symbol grounding at the object-relational level and for recognizing conceptual similarities in the effects of human-demonstrated actions. The method is evaluated in experiments on a real robot.
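The paper combines conceptual spaces with subspace clustering to recognize task goals; the abstract gives no algorithmic detail, so the following minimal Python sketch illustrates only the underlying intuition, not the authors' method. It treats each demonstrated effect as a point in a conceptual space and marks the near-invariant dimensions across demonstrations as goal-relevant (the "subspace" encoding the task goal). The function name relevant_subspace, the variance threshold, and the toy features (rel_x, rel_y, on_top) are all illustrative assumptions.

```python
import numpy as np

def relevant_subspace(effects, var_threshold=0.05):
    """Find conceptual-space dimensions that stay (nearly) constant
    across demonstrated effects; treat those as the task goal.

    effects: (n_demos, n_dims) array of object-relational features
             measured after each demonstrated pick-and-place action.
    Returns a boolean mask of goal-relevant dimensions and the mean
    (prototype) value per dimension.
    """
    effects = np.asarray(effects, dtype=float)
    variances = effects.var(axis=0)
    mask = variances < var_threshold   # low variance -> goal-relevant
    goal = effects.mean(axis=0)        # prototype value per dimension
    return mask, goal

# Toy demonstrations: object always ends up on top of another,
# while its relative x/y placement varies freely between demos.
demos = [
    [0.60, 0.10, 1.0],   # [rel_x, rel_y, on_top]
    [-0.50, 0.40, 1.0],
    [0.10, -0.30, 1.0],
]
mask, goal = relevant_subspace(demos)
print(mask)   # [False False  True] -> only 'on_top' encodes the goal
print(goal)
```

A symbolic planner, as described in the abstract, could then be given only the masked dimensions (here, on_top = 1) as the goal to reproduce in a new situation.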
High-level learning from demonstration with conceptual spaces and subspace clustering
2015-05-01
1751071 bytes
Conference paper
Electronic Resource
English
Reweighted sparse subspace clustering
British Library Online Contents | 2015
A Robust Subspace Clustering Algorithm
British Library Online Contents | 2011
Conceptual Design Optimization of High Altitude Airship in Concurrent Subspace Optimization
British Library Conference Proceedings | 2012