In this work, we optimize the 3D trajectory of an unmanned aerial vehicle (UAV)-based portable access point (PAP) that provides wireless services to a set of ground nodes (GNs). Moreover, in accordance with the Peukert effect, we consider a pragmatic non-linear discharge model for the UAV's battery. We formulate the problem as the maximization of a novel fairness-based energy-efficiency metric, named fair energy efficiency (FEE), which places importance on both per-user service fairness and the PAP's energy efficiency. The formulated problem is non-convex with intractable constraints. To obtain a solution, we represent the problem as a Markov Decision Process (MDP) with continuous state and action spaces. Considering the complexity of the solution space, we use the twin delayed deep deterministic policy gradient (TD3) actor-critic deep reinforcement learning (DRL) framework to learn a policy that maximizes the FEE of the system. We perform two types of RL training to demonstrate the effectiveness of our approach: the first (offline) approach keeps the positions of the GNs fixed throughout the training phase, whereas the second changes the positions of the GNs after each training episode to generalize the learned policy to any arrangement of GNs. Numerical evaluations show that neglecting the Peukert effect overestimates the air-time of the PAP, a mismatch that can be addressed by optimally selecting the PAP's flying speed. Moreover, the user fairness, the energy efficiency, and hence the FEE value of the system can be improved by efficiently moving the PAP above the GNs. We observe FEE improvements over baseline scenarios of up to 88.31%, 272.34%, and 318.13% for suburban, urban, and dense urban environments, respectively.
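
The abstract's two quantitative ingredients lend themselves to a brief illustration: Peukert's law, under which the battery's discharge time shrinks non-linearly as the discharge current grows, and the FEE metric, which couples per-user fairness with the PAP's energy efficiency. The Python sketch below assumes the textbook form of Peukert's law and, since the abstract does not give the exact FEE definition, models FEE as Jain's fairness index multiplied by throughput per joule; all function names and numeric values are illustrative and not taken from the paper.

    import numpy as np

    # Peukert's law (textbook form): a pack rated at capacity_ah over rated_hours
    # delivers less than the linear estimate when drained faster than its rated
    # current, because the exponent peukert_k exceeds 1 for real batteries.
    def peukert_discharge_time(capacity_ah, current_a, rated_hours=20.0, peukert_k=1.3):
        return rated_hours * (capacity_ah / (current_a * rated_hours)) ** peukert_k

    # Hypothetical FEE surrogate (assumption, not the paper's definition):
    # Jain's fairness index over per-GN throughputs, scaled by total bits per joule.
    def fair_energy_efficiency(per_gn_bits, energy_joules):
        x = np.asarray(per_gn_bits, dtype=float)
        jain = x.sum() ** 2 / (len(x) * (x ** 2).sum())
        return jain * x.sum() / energy_joules

    # A 20 Ah pack drained at 10 A: a linear model predicts 2 h of air-time,
    # while Peukert's law with k = 1.3 predicts roughly half of that (~1 h).
    print(peukert_discharge_time(capacity_ah=20.0, current_a=10.0))
    print(fair_energy_efficiency(per_gn_bits=[1e6, 8e5, 9e5], energy_joules=5e4))

The gap between the two air-time predictions is the overestimation the paper attributes to ignoring the Peukert effect.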


    Title:

    Fairness Based Energy-Efficient 3D Path Planning of a Portable Access Point: A Deep Reinforcement Learning Approach


    Contributors:

    Babu, N. / Donevski, I. / Valcarce, A. / Popovski, P. / Nielsen, J. J. / Papadias, C.

    Publication date:

    2022-09-01


    Notes:

    Babu, N., Donevski, I., Valcarce, A., Popovski, P., Nielsen, J. J. & Papadias, C. 2022, 'Fairness Based Energy-Efficient 3D Path Planning of a Portable Access Point: A Deep Reinforcement Learning Approach', IEEE Open Journal of the Communications Society, vol. 3, pp. 1487-1500. https://doi.org/10.1109/OJCOMS.2022.3201292



    Media type:

    Article (journal)


    Format:

    Electronic resource


    Language:

    English



    Classification:

    DDC: 629



    UAV Coverage Path Planning Based on Deep Reinforcement Learning

    Wang, Yuehai / Wang, Zixin / Xing, Na et al. | IEEE | 2023


    Supervised-Reinforcement Learning (SRL) Approach for Efficient Modular Path Planning

    Hebaish, Marawan Azmy / Hussein, Ahmed / El-Mougy, Amr | IEEE | 2022


    Mobile Robot Path Planning Using Deep Reinforcement Learning

    Abedi, Ali / Anari, Reza Ghaderizadeh / Mohammadi, Hossein | IEEE | 2023


    A UAV Path Planning Method Based on Deep Reinforcement Learning

    Li, Yibing / Zhang, Sitong / Ye, Fang et al. | IEEE | 2020


    Deep Reinforcement Learning for Image-Based Multi-Agent Coverage Path Planning

    Xu, Meng / She, Yechao / Jin, Yang et al. | IEEE | 2023