Deep-Reinforcement-Learning-based Scheduler for Time-Aware Shaper in In-Vehicle Networks

Mohammadparsa Karimi, Majid Nabi, Andrew Nelson, Kees Goossens, Twan Basten (Eindhoven University of Technology)
As vehicles evolve into software-defined platforms with advanced automated driving and driver-assistance capabilities, their in-vehicle networks become significantly more complex. Time-Sensitive Networking (TSN), and in particular the Time-Aware Shaper (TAS), is a key technology for providing deterministic, low-latency communication for critical traffic in such settings. However, existing TAS scheduling techniques struggle to adapt schedules to dynamically shifting traffic patterns and changing operating conditions. This paper presents an adaptive scheduler based on Deep Reinforcement Learning (DRL) that aims to meet strict deadlines while reducing latency and achieving near-optimal resource usage. Experimental results for different vehicular scenarios show that our DRL-based scheduler outperforms state-of-the-art heuristics such as Earliest Deadline First (EDF) scheduling in terms of success rate, latency, and overall network performance.
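As a point of reference for the baseline mentioned in the abstract, the sketch below shows a minimal Earliest Deadline First (EDF) ordering of frames within a single TAS gate window. It is an illustration only, not the evaluated implementation; the Frame fields, the 1 Gbit/s link rate, and the gate-window parameters are assumptions made for the example.

"""Illustrative sketch (not the paper's implementation): an EDF baseline
that orders pending frames at a TAS egress port within one gate window.
Frame fields and the 1 Gbit/s link rate are assumed for the example."""

from dataclasses import dataclass, field
import heapq

LINK_RATE_BPS = 1_000_000_000  # assumed 1 Gbit/s automotive Ethernet link


@dataclass(order=True)
class Frame:
    deadline_us: float                       # absolute deadline; EDF sorts on this field
    size_bytes: int = field(compare=False)
    flow_id: str = field(compare=False)


def edf_schedule(frames, gate_open_us, gate_close_us):
    """Greedily transmit pending frames in deadline order while the gate
    for their traffic class is open; return (sent, deadline_misses)."""
    heap = list(frames)
    heapq.heapify(heap)                      # min-heap keyed on deadline
    t = gate_open_us
    sent, misses = [], []
    while heap and t < gate_close_us:
        frame = heapq.heappop(heap)          # earliest-deadline frame first
        tx_us = frame.size_bytes * 8 / LINK_RATE_BPS * 1e6
        if t + tx_us > gate_close_us:        # frame does not fit in this gate window
            heapq.heappush(heap, frame)
            break
        t += tx_us
        (misses if t > frame.deadline_us else sent).append(frame.flow_id)
    return sent, misses


if __name__ == "__main__":
    pending = [
        Frame(deadline_us=150.0, size_bytes=1500, flow_id="camera"),
        Frame(deadline_us=80.0, size_bytes=256, flow_id="brake-control"),
        Frame(deadline_us=120.0, size_bytes=512, flow_id="lidar"),
    ]
    print(edf_schedule(pending, gate_open_us=0.0, gate_close_us=100.0))

In contrast to such a fixed greedy rule, the DRL-based scheduler described in the abstract learns to adapt the schedule to shifting traffic patterns and operating conditions.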