In urban rail transit systems, migrating existing rail transit services to cloud computing systems can effectively relieve the pressure caused by data sharing and excessive loads. Allocating computing resources reasonably to guarantee the Quality of Service (QoS) of urban rail transit services is therefore crucial. Traditional resource allocation methods mostly rely on predefined policies: on-demand policies struggle to utilize the total resources efficiently, and threshold-based policies make it hard to set an appropriate threshold for each service. As an autonomous decision-making method, Reinforcement Learning (RL) has been applied in many fields to solve resource allocation problems. However, a complete urban rail transit cloud resource allocation scenario usually has high-dimensional action and state spaces. In this paper, we use Deep Reinforcement Learning (DRL) to allocate resources, since its function approximation mitigates the curse of dimensionality. Several urban rail related services are selected as cloud computing users, and the resource allocation among these services is formulated as a Deep Q-Network (DQN) problem. We run both the predefined policy and the DQN-based resource allocation policy in a simulated cloud system. Our simulation results show that the DQN-based policy achieves better QoS for all selected rail transit services.
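
Below is a minimal PyTorch sketch of the kind of DQN-based allocation the abstract describes: a Q-network maps a state (current allocation plus per-service load) to Q-values over which service receives the next resource unit. The service count, load model, reward definition and hyper-parameters are illustrative assumptions, not details taken from the paper.

```python
# Minimal DQN resource-allocation sketch (illustrative only; not the paper's implementation).
# State = [current allocation per service, offered load per service]; an action assigns one
# resource unit to a service; the reward penalises unmet load as a crude QoS proxy (assumed).
import random
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim

N_SERVICES = 4      # e.g. signalling, PIS, CCTV, ticketing (assumed services)
TOTAL_UNITS = 20    # total resource units available per episode (assumed)

class QNet(nn.Module):
    """Q(s, a) approximator; function approximation sidesteps a huge tabular state space."""
    def __init__(self, state_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )
    def forward(self, x):
        return self.net(x)

def step(alloc, loads, action):
    """Give one unit to the chosen service; the episode ends when all units are placed."""
    alloc = alloc.copy()
    alloc[action] += 1.0
    reward = -float(np.clip(loads - alloc, 0, None).sum())   # penalty for unmet demand
    done = alloc.sum() >= TOTAL_UNITS
    state = np.concatenate([alloc, loads]).astype(np.float32)
    return alloc, state, reward, done

def train(episodes=200, gamma=0.95, eps=0.1, lr=1e-3, batch_size=32):
    qnet = QNet(2 * N_SERVICES, N_SERVICES)
    opt = optim.Adam(qnet.parameters(), lr=lr)
    replay = []
    for _ in range(episodes):
        loads = np.random.randint(1, 8, size=N_SERVICES).astype(np.float32)
        alloc = np.zeros(N_SERVICES, dtype=np.float32)
        state = np.concatenate([alloc, loads]).astype(np.float32)
        done = False
        while not done:
            # epsilon-greedy action selection
            if random.random() < eps:
                action = random.randrange(N_SERVICES)
            else:
                with torch.no_grad():
                    action = int(qnet(torch.from_numpy(state)).argmax())
            alloc, next_state, reward, done = step(alloc, loads, action)
            replay.append((state, action, reward, next_state, done))
            state = next_state
            # one TD update on a sampled mini-batch (standard DQN target)
            s, a, r, s2, d = map(np.array, zip(*random.sample(replay, min(batch_size, len(replay)))))
            s, s2 = torch.from_numpy(s.astype(np.float32)), torch.from_numpy(s2.astype(np.float32))
            q = qnet(s).gather(1, torch.from_numpy(a).long().unsqueeze(1)).squeeze(1)
            with torch.no_grad():
                target = (torch.from_numpy(r.astype(np.float32))
                          + gamma * qnet(s2).max(1).values * (1.0 - torch.from_numpy(d.astype(np.float32))))
            loss = nn.functional.mse_loss(q, target)
            opt.zero_grad(); loss.backward(); opt.step()
    return qnet

if __name__ == "__main__":
    train(episodes=50)
```

A production setup would add a target network and experience-replay limits, but this illustrates the core idea: the learned Q-values replace a hand-tuned threshold or on-demand rule when deciding where each resource unit goes.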



    Title:
    A Deep Reinforcement Learning based Resource Allocation Method for Urban Rail Transit Cloud Systems

    Contributors:
    Li, Ziheng (author) / Zhu, Li (author) / Li, Yang (author) / Liang, Hao (author) / Wang, Hao (author)

    Publication date:
    2021-09-19

    Size:
    547419 bytes

    Type of media:
    Conference paper

    Type of material:
    Electronic Resource

    Language:
    English