This paper proposes a method that estimates the relationships between a learner's behaviors and those of other agents in the environment through interactions (observation and action), using system identification. To identify a model of each agent, Akaike's Information Criterion is applied to the result of Canonical Variate Analysis, which relates the observed action data to future observations. Reinforcement learning based on the estimated state vectors is then performed to obtain the optimal behavior. The proposed method is applied to a soccer-playing situation, in which a rolling ball and other moving agents are well modeled and the learner's behaviors are successfully acquired. Computer simulations and real experiments are presented and discussed.
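The abstract describes a pipeline of Canonical Variate Analysis between past action/observation data and future observations, with the state dimension selected by Akaike's Information Criterion, followed by reinforcement learning on the estimated state vectors. The following is a minimal sketch of the first two steps under stated assumptions, not the paper's implementation: the Hankel lag length, the simplified AIC-style penalty term, the toy data, and the names block_hankel and cva_states are all illustrative choices, and the reinforcement-learning stage is omitted.

```python
import numpy as np


def block_hankel(data, lag):
    """Stack `lag` consecutive samples of `data` (T x d) into rows of a
    (T - lag + 1) x (lag * d) matrix."""
    T = data.shape[0]
    n = T - lag + 1
    return np.hstack([data[i:i + n] for i in range(lag)])


def cva_states(actions, observations, lag=4, max_order=None):
    """Estimate a low-dimensional state sequence from action/observation data.

    Canonical correlations between the stacked past (actions + observations)
    and the stacked future observations are computed via whitening and an SVD;
    a simplified AIC-style score chooses how many canonical variates to keep,
    and those variates of the past serve as the estimated state vectors."""
    T = observations.shape[0]
    past = np.hstack([block_hankel(actions[:T - lag], lag),
                      block_hankel(observations[:T - lag], lag)])
    future = block_hankel(observations[lag:], lag)
    N = min(past.shape[0], future.shape[0])
    past, future = past[:N], future[:N]

    def whiten(X):
        Xc = X - X.mean(axis=0)
        U, s, _ = np.linalg.svd(Xc, full_matrices=False)
        return U[:, s > 1e-10 * s[0]]

    Up, Uf = whiten(past), whiten(future)
    L, rho, _ = np.linalg.svd(Up.T @ Uf, full_matrices=False)
    rho = np.clip(rho, 0.0, 1.0 - 1e-12)   # canonical correlations, largest first

    p, q = Up.shape[1], Uf.shape[1]
    max_order = max_order or min(p, q)
    best_n, best_aic = 1, np.inf
    for n in range(1, max_order + 1):
        # Residual dependence left after keeping n variates, penalized by an
        # assumed parameter count of n * (p + q - n) (a simplification).
        fit = -N * np.sum(np.log(1.0 - rho[n:] ** 2))
        aic = fit + 2.0 * n * (p + q - n)
        if aic < best_aic:
            best_n, best_aic = n, aic

    states = Up @ L[:, :best_n]            # estimated state vectors x_t
    return states, best_n, rho


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T = 500
    a = rng.normal(size=(T, 1))             # the learner's motor command (toy data)
    x = np.zeros((T, 2))
    for t in range(1, T):                   # a toy two-state linear plant
        x[t] = 0.9 * x[t - 1] + np.array([0.5, -0.3]) * a[t - 1, 0]
    y = x + 0.05 * rng.normal(size=(T, 2))  # noisy "visual" observation
    states, order, rho = cva_states(a, y, lag=4)
    print("selected order:", order,
          "leading canonical correlations:", np.round(rho[:4], 3))
```

The estimated state sequence produced this way would then feed a discrete reinforcement-learning stage (e.g. tabular Q-learning over a quantization of the state vectors), which is the part of the method the sketch leaves out.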
State space construction for behavior acquisition in multi agent environments with vision and action
1998-01-01
824231 bytes
Conference paper
Electronic Resource
English
State Space Construction for Behavior Acquisition in Multi Agent Environments with Vision and Action (British Library Conference Proceedings, 1998)
State and Action Space Construction Using Vision Information (British Library Online Contents, 2000)
Action-based state space construction for robot learning (British Library Online Contents, 1997)
Vision-Based Robot Learning for Behavior Acquisition (British Library Conference Proceedings, 1995)