Years of research on transport protocols have not settled the tussle between in-network and end-to-end congestion control. The debate persists because conditions and assumptions vary widely across network scenarios, e.g., cellular versus data center networks. Recently, the community has proposed a few transport protocols driven by machine learning, yet these remain limited to end-to-end approaches. In this paper, we present Owl, a transport protocol based on reinforcement learning whose goal is to select the proper congestion window by learning from end-to-end features and, when available, in-network signals. We show that our solution converges to a fair resource allocation after the initial learning phase. Our kernel implementation, deployed over emulated and large-scale virtual network testbeds, outperforms all benchmark solutions based on end-to-end or in-network congestion control.
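
As a rough illustration of the approach described in the abstract, the sketch below shows a toy epsilon-greedy Q-learning loop that nudges a congestion window up or down from coarse end-to-end features (RTT, delivery rate) and an optional in-network signal (an ECN mark). All names, the state encoding, and the reward weights are assumptions made for illustration only; this is not the actual Owl design or its kernel implementation.

    import random
    from collections import defaultdict

    ACTIONS = [-1, 0, +1]          # decrease, hold, or increase cwnd (in segments)
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

    q_table = defaultdict(float)   # (state, action) -> estimated value

    def encode_state(rtt_ms, delivery_rate, ecn_mark=None):
        """Bucket end-to-end features; the in-network signal may be absent (None)."""
        rtt_bucket = min(int(rtt_ms // 20), 9)
        rate_bucket = min(int(delivery_rate // 10), 9)
        return (rtt_bucket, rate_bucket, ecn_mark)

    def choose_action(state):
        """Epsilon-greedy selection over the cwnd adjustment actions."""
        if random.random() < EPSILON:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: q_table[(state, a)])

    def reward_fn(delivery_rate, rtt_ms, loss):
        """Illustrative reward: favor throughput, penalize latency and loss."""
        return delivery_rate - 0.05 * rtt_ms - 50.0 * loss

    def update(state, action, reward, next_state):
        """One-step Q-learning update."""
        best_next = max(q_table[(next_state, a)] for a in ACTIONS)
        q_table[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                             - q_table[(state, action)])

    # Drive the agent with synthetic measurements; in a real deployment these
    # values would come from the transport stack instead.
    cwnd, state = 10, encode_state(rtt_ms=40, delivery_rate=30, ecn_mark=0)
    for _ in range(1000):
        action = choose_action(state)
        cwnd = max(1, cwnd + action)
        rtt, rate, loss, ecn = 40 + cwnd, min(cwnd * 5, 100), 0.0, 0
        next_state = encode_state(rtt, rate, ecn)
        update(state, action, reward_fn(rate, rtt, loss), next_state)
        state = next_state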


    Title :

    Owl: Congestion Control with Partially Invisible Networks via Reinforcement Learning


    Contributors :

    Publication date :

    2021-01-01



    Type of media :

    Conference paper


    Type of material :

    Electronic Resource


    Language :

    English



    Classification :

    DDC: 629



    Similar titles :

    Partially Oblivious Congestion Control for the Internet via Reinforcement Learning
    Alessio Sacco / Matteo Flocco / Flavio Esposito et al. | BASE | 2023 | Free access

    ReCoCo: Reinforcement learning-based Congestion control for Real-time applications
    Markudova, Dena / Meo, Michela | BASE | 2023 | Free access

    Freeway Congestion Management With Reinforcement Learning Headway Control of Connected and Autonomous Vehicles
    Elmorshedy, Lina / Smirnov, Ilia / Abdulhai, Baher | Transportation Research Record | 2023 | Free access