Final degree project carried out in collaboration with Aalto University, School of Science and Technology, Faculty of Information and Natural Sciences.

To interact with real environments and perform everyday tasks, autonomous agents (such as machines or robots) cannot be hard-coded: given all the possible scenarios and, within each scenario, all the possible variations, it is impossible to account for every single situation the agent may encounter. Humans interact with a changing world by using the perceived sensory input as guidance, and autonomous agents likewise need to be able to adapt to a changing environment. This work proposes a biologically inspired solution that lets the agent autonomously learn representations and skills that prepare it for future learning tasks. The proposed solution, called a cognitive architecture, follows the hierarchical organization found in the cerebral cortex. This model lets the autonomous agent extract useful information from the sensory input it receives; the information is coded as abstractions, which are invariant features found within the input patterns. The cognitive architecture uses slowness as the principle for extracting features: unsupervised learning algorithms based on slowness look for relevant, slowly changing components of the data, information that can also be useful for self-evaluation. The agent then learns to manipulate the sensory abstractions by linking them to motor abstractions, which lets the robot find the mapping between the motor actions it takes and the changes it is able to produce in the surrounding environment. An example built on the cognitive architecture is implemented: an agent that knows nothing about the environment it is placed in learns to move towards different places in space in an efficient (non-random) way. Starting from random movements and the captured sensory input, it learns concepts such as place and distance, which allows it to move towards a target efficiently.
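The slowness principle described above is commonly realized with algorithms in the Slow Feature Analysis family. The record does not give the exact formulation used in the thesis, so the following Python sketch is only a minimal, linear illustration under that assumption; the function name slow_features and its parameters are hypothetical, not taken from the work. It projects a time series of sensory inputs onto the directions whose outputs change most slowly, the kind of invariant, slowly varying abstraction the architecture is said to extract.

    import numpy as np

    def slow_features(x, n_features=2):
        # x: (T, D) time series of sensory inputs, one row per time step.
        # Returns the n_features projections whose outputs change most slowly.

        # Centre and whiten the input so every projection has unit variance.
        x = x - x.mean(axis=0)
        eigval, eigvec = np.linalg.eigh(np.cov(x, rowvar=False))
        keep = eigval > 1e-10                      # drop near-singular directions
        whiten = eigvec[:, keep] / np.sqrt(eigval[keep])
        z = x @ whiten

        # Slowness objective: minimise the variance of the temporal derivative.
        dz = np.diff(z, axis=0)
        d_eigval, d_eigvec = np.linalg.eigh(np.cov(dz, rowvar=False))

        # Eigenvectors with the smallest eigenvalues are the slowest features.
        w = whiten @ d_eigvec[:, :n_features]
        return x @ w

On a toy input where a slowly drifting signal is mixed into fast noise, the first returned feature recovers the slow component up to sign and scale, which is the kind of abstraction (for example, place) the agent is described as learning from random movements.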


    Reinforcement learning using sensorimotor traces

    Li, Jingxian | BASE | 2013

    A sensorimotor learning framework for object categorization

    Högman, Virgile / Björkman, Mårten / Maki, Atsuto et al. | BASE | 2016

    Rover sensorimotor control systems

    Ellery, Alex | Springer Verlag | 2015


    Robotic sensorimotor interaction strategies

    Kettukangas, T. (Teemu) / Talukder, R. (Rafiqul) | BASE | 2023

    Sensorimotor Adaptation, Including SMS

    Seidler, Rachael D. / Mulavara, Ajitkumar P. | Springer Verlag | 2021