In this paper, we investigate the problem of age-of-information (AoI)-aware dynamic user scheduling in vehicular networks based on soft reinforcement learning, where multiple vehicle-to-infrastructure (V2I) downlinks share the spectrum resources. To address the slow convergence and local-optimality problems often faced by traditional reinforcement learning, we formulate the AoI-aware user scheduling problem as a sequential decision-making problem and then apply the soft actor-critic (SAC) reinforcement learning algorithm to solve it. The agent, i.e., the road side unit (RSU), selects at each slot an appropriate V2I link to occupy the spectrum and transmit data, thereby decreasing the AoI outage probability of the status update system in vehicular networks. By maximizing both the expected return and the policy entropy, the agent converges quickly to an efficient solution while retaining robustness to differences between the actual and training environments. Simulation results validate the proposed scheme in terms of convergence, effectiveness, and robustness.
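For reference, the entropy-regularized (soft RL) objective that SAC maximizes can be written in its standard form; the temperature $\alpha$ and entropy notation $\mathcal{H}$ follow the common SAC convention and are not taken from the paper itself:

$$ J(\pi) = \sum_{t} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi} \Big[ r(s_t, a_t) + \alpha \, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \Big], $$

where, in this setting, $s_t$ would be the network state observed by the RSU at slot $t$, $a_t$ the scheduled V2I link, and $r(s_t, a_t)$ a reward reflecting the AoI outage behaviour of the status update system. The entropy term is what distinguishes soft RL from a purely return-maximizing formulation and underlies the robustness claim above.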
AoI-Aware Dynamic User Scheduling in Vehicular Networks Based on Soft Reinforcement Learning
10.10.2023
988665 bytes
Article (Conference)
Electronic resource
English
IEEE | 2023