Recent years have witnessed a surge in the deployment of Deep Neural Network (DNN)-based services, which drives the development of emerging intelligent transportation systems (ITSs). However, enabling efficient and reliable DNN inference in Vehicular Edge Computing (VEC) environments remains challenging due to resource constraints and system dynamics. In view of this, this work investigates a DNN inference partition and offloading scenario with environmental uncertainties in VEC, which motivates the need to strike a balance between inference delay and the success ratio of receiving the offloading outputs. Then, by incorporating communication and computation overheads as well as failed offloading conditions into an analytical model, we propose an Adaptive Splitting, Partitioning, and Merging (ASPM) strategy that reduces inference delay while maintaining a decent offloading success ratio. Specifically, ASPM first splits and partitions the DNN model recursively to find the optimal split blocks that minimize inference delay. On this basis, it further merges DNN blocks greedily to reduce the number of blocks to be offloaded, thus enhancing the offloading success ratio of the whole DNN inference. Finally, we conduct comprehensive performance evaluations to demonstrate the superiority of our design.
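
The abstract describes ASPM's greedy merging step only at a high level. Purely as an illustration, and under the simplifying assumption that each offloaded block's output is received independently with probability (1 - p_fail), a merging pass of this kind could look like the Python sketch below; the function name greedy_merge, the delay_of estimator, and the p_fail/target_success parameters are hypothetical placeholders rather than the authors' model or notation.

    # Minimal sketch of a greedy block-merging pass (illustrative only).
    # Assumptions: each offloaded block succeeds independently with
    # probability (1 - p_fail); delay_of(blocks) is a caller-supplied
    # estimate of end-to-end inference delay for a given partition.
    def greedy_merge(blocks, delay_of, p_fail, target_success):
        def success_ratio(n_blocks):
            # Fewer offloaded blocks -> fewer chances to lose an output.
            return (1.0 - p_fail) ** n_blocks

        while len(blocks) > 1 and success_ratio(len(blocks)) < target_success:
            # Among all adjacent pairs, keep the merge that adds the least delay.
            best = None
            for i in range(len(blocks) - 1):
                candidate = blocks[:i] + [blocks[i] + blocks[i + 1]] + blocks[i + 2:]
                d = delay_of(candidate)
                if best is None or d < best[0]:
                    best = (d, candidate)
            blocks = best[1]
        return blocks

    # Toy usage: blocks are lists of layer names; the toy delay model charges
    # one unit per layer plus a fixed per-block offloading overhead.
    blocks = [["conv1"], ["conv2", "conv3"], ["fc1"], ["fc2"]]
    delay_of = lambda bs: sum(len(b) for b in bs) + 0.5 * len(bs)
    print(greedy_merge(blocks, delay_of, p_fail=0.05, target_success=0.9))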





    Title: ASPM: Reliability-Oriented DNN Inference Partition and Offloading in Vehicular Edge Computing

    Contributors: Yan, Guozhi (author) / Liu, Chunhui (author) / Liu, Kai (author)

    Publication date: 2023-09-24

    Size: 712083 bytes

    Type of media: Conference paper

    Type of material: Electronic Resource

    Language: English



    Similar titles:

    Computation Offloading with Reliability Guarantee in Vehicular Edge Computing Systems

    He, Zhongjie / Shan, Hangguan / Bi, Yuanguo et al. | IEEE | 2020


    A Collaborative Task Offloading Scheme in Vehicular Edge Computing

    Bute, Muhammad Saleh / Fan, Pingzhi / Liu, Gang et al. | IEEE | 2021


    Intelligent Offloading Balance for Vehicular Edge Computing and Networks

    Wu, Yu / Fang, Xuming / Min, Geyong et al. | IEEE | 2025


    Scalable Modulation based Computation Offloading in Vehicular Edge Computing System

    Li, Wenjie / Zhang, Ning / Liu, Qiuyan et al. | IEEE | 2020


    Joint Offloading and Resource Allocation for Scalable Vehicular Edge Computing

    Wu, Wei / Wang, Qie / Wu, Xuanli et al. | IEEE | 2020