An online updating framework for the energy management system (EMS) of a multimode hybrid electric powertrain is proposed via cooperation between an asynchronous advantage actor–critic (A3C)-based deep reinforcement learning (DRL) agent and a Markov chain model (MCM). In the overall framework, the DRL agent periodically updates the energy management policy. The MCM expedites the policy-update process by generating a large set of probable future drive cycles from recent historical driving data and supplying them to the training process. Assisted by the MCM, the proposed A3C-based energy management framework can yield a near-optimal policy for any type of unknown drive cycle in the near future. Two types of unknown drive cycles are chosen to demonstrate the efficacy of the proposed framework. Type I unknown drive cycles are generated from the same recent historical driving data but are not included in the training dataset. Type II drive cycles are neither known to the framework nor generated from the same historical data. On type I unknown drive cycles, the trained A3C-based EMS achieves 99% of the fuel economy obtained by the global-optimal EMS with a 0.12% deviation from charge sustainability. On type II unknown drive cycles, the trained A3C-based EMS consumes 6%–12% more fuel than the global-optimal EMS.
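The core mechanism in the abstract, an MCM that synthesizes probable future drive cycles from recent historical driving data, can be illustrated with a minimal sketch. The paper does not publish code; the first-order speed-state chain below and every name in it (SPEED_BINS, estimate_transition_matrix, sample_drive_cycle) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Discretize vehicle speed into 1 km/h states (assumed range 0-130 km/h).
SPEED_BINS = np.arange(0, 131, 1.0)


def estimate_transition_matrix(speed_trace: np.ndarray) -> np.ndarray:
    """Count first-order state transitions in a historical speed trace
    and normalize each row into a probability distribution."""
    states = np.digitize(speed_trace, SPEED_BINS) - 1
    n = len(SPEED_BINS)
    counts = np.zeros((n, n))
    for s, s_next in zip(states[:-1], states[1:]):
        counts[s, s_next] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    matrix = np.where(row_sums > 0, counts / np.maximum(row_sums, 1), 0.0)
    # States never observed in the data fall back to self-transition.
    for i in range(n):
        if row_sums[i, 0] == 0:
            matrix[i, i] = 1.0
    return matrix


def sample_drive_cycle(matrix: np.ndarray, start_speed: float, length: int,
                       rng: np.random.Generator) -> np.ndarray:
    """Generate one synthetic drive cycle by random-walking the chain."""
    state = int(np.digitize(start_speed, SPEED_BINS) - 1)
    cycle = [SPEED_BINS[state]]
    for _ in range(length - 1):
        state = rng.choice(len(SPEED_BINS), p=matrix[state])
        cycle.append(SPEED_BINS[state])
    return np.array(cycle)


# Example: fit the chain on a recorded trace, then sample many candidate
# cycles to feed a policy-training loop (as the abstract describes).
# rng = np.random.default_rng(0)
# P = estimate_transition_matrix(recorded_speeds)
# cycles = [sample_drive_cycle(P, recorded_speeds[-1], 1800, rng)
#           for _ in range(100)]
```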


    Title:

    Real-Time Optimal Energy Management of Multimode Hybrid Electric Powertrain With Online Trainable Asynchronous Advantage Actor–Critic Algorithm


    Contributors:
    Biswas, Atriya (author) / Anselma, Pier Giuseppe (author) / Emadi, Ali (author)


    Publication date:

    2022-06-01


    Format / Extent:

    15858964 bytes




    Media type:

    Journal article


    Format:

    Electronic resource


    Language:

    English