In common cloud and edge computing scenarios, jobs are divided into tasks with dependencies, and tasks are scheduled and executed in containers. However, container cold starts significantly impede the efficiency of short tasks. Existing research on cold starts struggles to address the scheduling of dependent tasks and may not fully exploit the distinct advantages of cloud and edge servers. Moreover, there is limited research on using Deep Reinforcement Learning (DRL) to mitigate container cold starts, and existing DRL-based task scheduling algorithms often struggle to handle multiple jobs simultaneously. To reduce the system's job completion time, this paper introduces a DRL-based task scheduling algorithm. By intelligently reusing containers to minimize cold starts, the algorithm jointly considers computing and communication resources and effectively leverages the complementary strengths of cloud and edge servers to speed up job completion. The proposed architecture, comprising an Agent and a Scheduler, reduces the action space and improves the ability to handle multiple jobs. Simulation results show that, compared with common existing algorithms, the proposed algorithm reduces average job completion time by approximately 30%.
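For intuition, the sketch below illustrates the kind of Agent/Scheduler split and container-reuse logic the abstract describes. It is a minimal toy, not the paper's method: the class names, the epsilon-greedy value learner standing in for the learned DRL policy, the cold-start penalties, and the learning signal (observed startup delay) are all assumptions, since the abstract does not specify the state encoding, network architecture, or reward design.

```python
import random

# Hypothetical sketch: an Agent picks a compact action (a server index),
# and a Scheduler maps it onto a concrete placement, so the Agent never
# learns over the full placement space. Reusing a warm container on a
# server avoids its cold-start penalty.

class Server:
    def __init__(self, name, cold_start=2.0):
        self.name = name
        self.cold_start = cold_start   # assumed cold-start penalty (seconds)
        self.warm_images = set()       # container images currently kept warm

    def startup_delay(self, image):
        # A warm container is reused at zero extra cost.
        return 0.0 if image in self.warm_images else self.cold_start

class Scheduler:
    """Translates the Agent's small action (server index) into a placement,
    shrinking the action space the Agent must explore."""
    def __init__(self, servers):
        self.servers = servers

    def dispatch(self, task_image, server_idx):
        server = self.servers[server_idx]
        delay = server.startup_delay(task_image)
        server.warm_images.add(task_image)   # container stays warm for reuse
        return delay

class Agent:
    """Toy epsilon-greedy value learner over (image, server) pairs,
    standing in for the paper's DRL policy."""
    def __init__(self, n_servers, eps=0.1, lr=0.5):
        self.q = {}                    # estimated startup delay per pair
        self.n_servers = n_servers
        self.eps, self.lr = eps, lr

    def act(self, image):
        if random.random() < self.eps:
            return random.randrange(self.n_servers)
        # Prefer the server with the lowest estimated startup delay.
        return min(range(self.n_servers),
                   key=lambda s: self.q.get((image, s), 0.0))

    def learn(self, image, server_idx, delay):
        key = (image, server_idx)
        old = self.q.get(key, 0.0)
        self.q[key] = old + self.lr * (delay - old)

if __name__ == "__main__":
    servers = [Server("edge", cold_start=1.0), Server("cloud", cold_start=3.0)]
    sched, agent = Scheduler(servers), Agent(len(servers))
    tasks = ["img-a", "img-b", "img-a", "img-a", "img-b"] * 20
    total = 0.0
    for image in tasks:
        a = agent.act(image)
        delay = sched.dispatch(image, a)
        agent.learn(image, a, delay)
        total += delay
    print(f"cumulative startup delay: {total:.1f}s")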





    Title:

    A Time-Saving Task Scheduling Algorithm Based on Deep Reinforcement Learning for Edge Cloud Collaborative Computing


    Contributors:
    Zou, Wenhao (author) / Zhang, Zongshuai (author) / Wang, Nina (author) / Tan, Xiaochen (author) / Tian, Lin (author)


    Publication date:

    24.06.2024


    Format / extent:

    1954413 bytes





    Media type:

    Article (conference)


    Format:

    Electronic resource


    Language:

    English