In common cloud and edge computing scenarios, jobs are divided into tasks with dependencies, and task scheduling and computation are performed through containers. However, container cold starts significantly impede the efficiency of short tasks. Existing research on cold starts struggles to handle the scheduling of dependent tasks and often fails to fully exploit the distinct advantages of cloud and edge servers. Moreover, little work has applied Deep Reinforcement Learning (DRL) to optimizing container cold starts, and existing DRL-based task scheduling algorithms often struggle with scenarios involving multiple concurrent jobs. To reduce the system's job completion time, this paper introduces a DRL-based task scheduling algorithm that intelligently reuses containers to minimize cold starts, jointly considers computing and communication resources, and leverages the complementary strengths of cloud and edge servers to accelerate job completion. The proposed architecture, comprising both Agent and Scheduler components, reduces the action space and improves the ability to handle multiple jobs. Simulation results demonstrate that, compared with existing common algorithms, the proposed algorithm reduces average job completion time by approximately 30%.
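
This record gives only the abstract, so the authors' DRL policy, state encoding, and Agent/Scheduler protocol are not available here. The sketch below is a minimal, hypothetical illustration of the core idea the abstract describes: placing tasks on cloud or edge servers while reusing warm containers to skip cold starts. The `Server`/`Task` fields, the cost model, and the greedy `schedule_task` stand-in for the learned agent are all assumptions, not the paper's implementation.

```python
# Minimal illustrative sketch of container reuse in edge-cloud task placement.
# All names (Server, Task, schedule_task) and all numbers are hypothetical;
# the paper's actual DRL Agent/Scheduler design is not described in this record.
from dataclasses import dataclass, field

@dataclass
class Server:
    name: str
    cpu_speed: float                               # relative compute speed
    link_delay: float                              # data-transfer delay (s)
    cold_start: float                              # container cold-start time (s)
    warm_images: set = field(default_factory=set)  # warm container pool

@dataclass
class Task:
    image: str        # container image the task needs
    work: float       # compute demand (normalized units)

def completion_time(task: Task, server: Server) -> float:
    """Estimated finish time: transfer + (cold start, unless a warm
    container with the right image can be reused) + compute."""
    startup = 0.0 if task.image in server.warm_images else server.cold_start
    return server.link_delay + startup + task.work / server.cpu_speed

def schedule_task(task: Task, servers: list[Server]) -> Server:
    """Greedy stand-in for the learned policy: pick the server with the
    lowest estimated completion time, then keep its container warm so
    later tasks with the same image avoid another cold start."""
    best = min(servers, key=lambda s: completion_time(task, s))
    best.warm_images.add(task.image)
    return best

if __name__ == "__main__":
    cloud = Server("cloud", cpu_speed=4.0, link_delay=0.50, cold_start=1.5)
    edge = Server("edge", cpu_speed=1.0, link_delay=0.02, cold_start=1.5)
    for t in [Task("img-a", 0.5), Task("img-a", 0.5), Task("img-b", 4.0)]:
        chosen = schedule_task(t, [cloud, edge])
        print(t.image, "->", chosen.name,
              f"({completion_time(t, chosen):.2f}s with warm container)")
```

In this toy run the short tasks stay on the nearby edge server, the second "img-a" task finishes much faster because the warm container removes the cold-start term, and the compute-heavy "img-b" task is offloaded to the faster cloud server. A learned policy would make the same reuse-versus-placement trade-off from a richer state.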


    Title:
    A Time-Saving Task Scheduling Algorithm Based on Deep Reinforcement Learning for Edge Cloud Collaborative Computing

    Contributors:
    Zou, Wenhao (author) / Zhang, Zongshuai (author) / Wang, Nina (author) / Tan, Xiaochen (author) / Tian, Lin (author)

    Publication date:
    2024-06-24

    Size:
    1954413 bytes

    Type of media:
    Conference paper

    Type of material:
    Electronic Resource

    Language:
    English