Hydrobatic autonomous underwater vehicles (AUVs) can be efficient in speed and range as well as agile in maneuvering, thereby enabling new use cases in ocean production, environmental sensing, and security. However, such robots are underactuated, exhibit highly nonlinear dynamics at high angles of attack, and will be used in applications with stringent robustness requirements. This paper explores the use of reinforcement learning (RL) to control hydrobatic AUVs, using the agile SAM AUV as a case study. The focus is on controlling depth and pitch simultaneously, since these states are tightly coupled. This maneuver offers a simple yet interesting test case for comparing different control strategies. The twin-delayed deep deterministic policy gradient (TD3) algorithm is applied to this AUV control problem. The resulting trained RL controller is robust to noise and performs at a level comparable to a Proportional-Integral-Derivative (PID) controller within the Stonefish simulation environment. The agent is also deployed and run on the robot hardware, where it exhibits high overshoot. While the RL agent performs well in simulation, the transfer from simulation to reality still leaves open questions.
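As a rough, hedged illustration of the approach the abstract describes, the sketch below trains a TD3 agent on a toy coupled depth-and-pitch environment with Stable-Baselines3. The environment name (DepthPitchEnv), its simplified dynamics, the reward weights, and the training hyperparameters are assumptions made for illustration only; the paper itself uses the Stonefish simulator and the SAM AUV's actual actuator suite rather than this toy model.

    import numpy as np
    import gymnasium as gym
    from gymnasium import spaces
    from stable_baselines3 import TD3
    from stable_baselines3.common.noise import NormalActionNoise

    class DepthPitchEnv(gym.Env):
        # Toy longitudinal-plane model (illustrative, not the paper's formulation):
        # observation = [depth error, pitch error, pitch rate],
        # action = [elevator command, buoyancy/trim command], both normalised to [-1, 1].
        def __init__(self, dt=0.1, target_depth=5.0, target_pitch=0.0):
            super().__init__()
            self.dt, self.target_depth, self.target_pitch = dt, target_depth, target_pitch
            self.action_space = spaces.Box(-1.0, 1.0, shape=(2,), dtype=np.float32)
            self.observation_space = spaces.Box(-np.inf, np.inf, shape=(3,), dtype=np.float32)

        def reset(self, *, seed=None, options=None):
            super().reset(seed=seed)
            self.depth, self.pitch, self.q, self.t = 0.0, 0.0, 0.0, 0
            return self._obs(), {}

        def _obs(self):
            return np.array([self.target_depth - self.depth,
                             self.target_pitch - self.pitch,
                             self.q], dtype=np.float32)

        def step(self, action):
            elevator, trim = float(action[0]), float(action[1])
            # Crude coupling: both actuators drive the pitch rate, and pitch in turn drives the depth rate.
            self.q += self.dt * (2.0 * elevator + 0.5 * trim - 0.5 * self.q)
            self.pitch += self.dt * self.q
            self.depth += self.dt * (np.sin(self.pitch) + 0.2 * trim)
            obs = self._obs()
            # Quadratic penalty on tracking errors and control effort (weights are illustrative).
            reward = -(obs[0] ** 2 + 5.0 * obs[1] ** 2 + 0.01 * (elevator ** 2 + trim ** 2))
            self.t += 1
            truncated = self.t >= 200  # fixed-length episodes so off-policy training proceeds
            return obs, float(reward), False, truncated, {}

    env = DepthPitchEnv()
    action_noise = NormalActionNoise(mean=np.zeros(2), sigma=0.1 * np.ones(2))
    model = TD3("MlpPolicy", env, action_noise=action_noise, learning_rate=1e-3, verbose=1)
    model.learn(total_timesteps=50_000)
    model.save("td3_depth_pitch")

In a real setup, the step dynamics would presumably be replaced by an interface to the Stonefish simulation, and the trained policy would be evaluated against the PID baseline before being deployed on the robot hardware.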


    Title: Using Reinforcement Learning for Hydrobatic Maneuvering with Autonomous Underwater Vehicles
    Contributors:
    Type of media: Paper
    Type of material: Electronic Resource
    Language: English
    Classification: DDC 629



    Self-Propelled Maneuvering Underwater Vehicles

    McDonald, H. / Whitfield, D. / United States; Office of Naval Research; Mechanics and Energy Conversion S&T Division et al. | British Library Conference Proceedings | 1997


    Maneuvering and control simulator for underwater vehicles

    Kleinmann, Roger J. / Tsarev, Alexander S. / Hugh, Jeeven B. et al. | European Patent Office | 2023
