Ensuring the stability of the guidance law for quadrotor-type Urban Air Mobility (UAM) vehicles is important because they are expected to operate over urban areas. Model-free reinforcement learning has been applied intensively to this problem in recent studies, and the training environment is a key component of such learning. Proximal Policy Optimization (PPO) is the algorithm most widely used for reinforcement learning of quadrotors; however, PPO tends to fail to guarantee the stability of the guidance law as the search space of the environment grows. In this work, we show improved stability in a multi-agent quadrotor-type UAM environment by applying the Soft Actor-Critic (SAC) reinforcement learning algorithm. The simulations were performed in Unity. Our approach achieves three times the reward obtained with the PPO algorithm in the UAM environment, and it also trains faster than PPO.
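
As a rough illustration of the kind of comparison described in the abstract, the sketch below trains SAC and PPO with the Stable-Baselines3 library on a generic continuous-control environment. The environment, step budget, and single-agent setup are stand-ins for the authors' multi-agent Unity UAM simulation, which is not reproduced here.

    # Minimal sketch, assuming Stable-Baselines3 and Gymnasium are installed.
    # "Pendulum-v1" is only a continuous-control placeholder for the paper's
    # Unity UAM environment; hyperparameters are library defaults, not the authors'.
    import gymnasium as gym
    from stable_baselines3 import SAC, PPO

    env = gym.make("Pendulum-v1")  # placeholder environment

    # Off-policy SAC: entropy-regularized actor-critic with a replay buffer.
    sac_model = SAC("MlpPolicy", env, verbose=1)
    sac_model.learn(total_timesteps=50_000)

    # On-policy PPO baseline for comparison.
    ppo_model = PPO("MlpPolicy", env, verbose=1)
    ppo_model.learn(total_timesteps=50_000)

SAC's off-policy updates and entropy regularization are commonly cited reasons for better sample efficiency than on-policy PPO as the search space grows, which is consistent with the comparison reported in the abstract.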


    Title:

    Traffic Navigation for Urban Air Mobility with Reinforcement Learning


    Additional title information:

    Lect. Notes Electrical Eng.


    Contributors:
    Lee, Sangchul (Editor) / Han, Cheolheui (Editor) / Choi, Jeong-Yeol (Editor) / Kim, Seungkeun (Editor) / Kim, Jeong Ho (Editor) / Lee, Jaeho (Author) / Lee, Hohyeong (Author) / Noh, Junyoung (Author) / Bang, Hyochoong (Author)

    Conference:

    Asia-Pacific International Symposium on Aerospace Technology ; 2021 ; Korea (Republic of), November 15-17, 2021



    Publication date:

    2022-09-30


    Format / Extent:

    12 pages





    Media type:

    Article/Chapter (Book)


    Format:

    Electronic resource


    Language:

    English