Due to the complexity and uncertainty of traffic environments and the fragmented structure of traditional planning and control strategies, conventional planning and control algorithms suffer from low and unstable efficiency. To improve planning and control efficiency while ensuring the validity of the planned trajectories for autonomous vehicles, a reinforcement learning and optimal control integrated method (RL-OCIM) for planning and control is proposed. The proposed method ensures obstacle avoidance by integrating lane-change decision-making based on proximal policy optimization (PPO) with the optimal control method (OCM). The trained PPO agent takes structured information about the ego vehicle and the surrounding obstacles as input and generates the trajectory terminal state for the ego vehicle. Subsequently, the continuous trajectory-solving problem is transformed into an optimal control problem (OCP), yielding a feasible trajectory, including vehicle steering and longitudinal acceleration, based on the vehicle kinematics model. In addition, to improve the efficiency of agent training, a pre-trained neural network is employed to generate trajectory feature points from the trajectory terminal states; during agent training, the trajectory feature points are obtained directly from this network once an action is selected. Experiments covering various driving scenarios, including static and dynamic obstacle avoidance, are conducted on a hybrid Lincoln MKZ vehicle to demonstrate and validate the effectiveness and efficiency of the proposed RL-OCIM.
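The pipeline described above decomposes planning into a learned terminal-state decision followed by an OCP solve over a kinematic vehicle model. Below is a minimal sketch of that two-stage structure, not the authors' implementation: the policy stub, state layout, horizon length, cost weights, and actuator bounds are all illustrative assumptions, and the trained PPO agent is replaced by a hand-written placeholder.

# Minimal sketch (not the paper's code) of the two-stage pipeline: a learned
# policy proposes a trajectory terminal state, and a direct single-shooting
# OCP over a kinematic bicycle model turns it into steering/acceleration.
import numpy as np
from scipy.optimize import minimize

DT, N, WHEELBASE = 0.1, 30, 2.85  # step [s], horizon steps, wheelbase [m] (assumed)

def policy_terminal_state(obs):
    """Stand-in for the trained PPO agent: maps structured observations of the
    ego vehicle and obstacles to a terminal state (x, y, heading, speed)."""
    ego_x, ego_y, ego_v = obs["ego"]
    lane_shift = 3.5 if obs["obstacle_ahead"] else 0.0  # assumed lane width [m]
    return np.array([ego_x + ego_v * N * DT, ego_y + lane_shift, 0.0, ego_v])

def rollout(state0, controls):
    """Integrate the kinematic bicycle model over the horizon."""
    x, y, yaw, v = state0
    traj = []
    for a, steer in controls.reshape(N, 2):
        x += v * np.cos(yaw) * DT
        y += v * np.sin(yaw) * DT
        yaw += v / WHEELBASE * np.tan(steer) * DT
        v += a * DT
        traj.append((x, y, yaw, v))
    return np.array(traj)

def solve_ocp(state0, terminal):
    """Direct single-shooting OCP: penalize terminal-state error plus control
    effort, subject to box bounds on acceleration and steering."""
    def cost(u):
        traj = rollout(state0, u)
        return np.sum((traj[-1] - terminal) ** 2) + 0.1 * np.sum(u ** 2)
    bounds = [(-3.0, 2.0), (-0.5, 0.5)] * N  # accel [m/s^2], steer [rad] (assumed)
    res = minimize(cost, np.zeros(2 * N), bounds=bounds, method="L-BFGS-B")
    return res.x.reshape(N, 2)

obs = {"ego": (0.0, 0.0, 10.0), "obstacle_ahead": True}
controls = solve_ocp(np.array([0.0, 0.0, 0.0, 10.0]), policy_terminal_state(obs))
print("first command (accel, steer):", controls[0])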
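The abstract's training-efficiency idea, replacing the online OCP solve with a pre-trained network that maps terminal states to trajectory feature points, can be sketched as a simple supervised surrogate. The snippet below uses scikit-learn's MLPRegressor and a synthetic stand-in for the OCP-generated labels; the feature-point definition, sampling ranges, and network architecture are assumptions, not the paper's specification.

# Sketch of the surrogate idea: fit a small network offline on
# (terminal state -> trajectory feature points) pairs produced by the OCP
# solver, then query it during RL training instead of solving the OCP.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def ocp_feature_points(terminal):
    """Placeholder for the expensive OCP solve; returns five (x, y) waypoints
    along the trajectory (here a smooth synthetic lane-change profile)."""
    x_f, y_f = terminal[:2]
    s = np.linspace(0.0, 1.0, 5)
    return np.column_stack([s * x_f, y_f * (3 * s**2 - 2 * s**3)]).ravel()

# Offline: sample terminal states (x, y, heading, speed) and label them.
terminals = rng.uniform([20.0, -3.5, -0.2, 5.0], [60.0, 3.5, 0.2, 20.0], (2000, 4))
features = np.array([ocp_feature_points(t) for t in terminals])

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0)
surrogate.fit(terminals, features)

# Online (inside the RL training loop): one cheap forward pass per action.
query = np.array([[40.0, 3.5, 0.0, 12.0]])
print(surrogate.predict(query).reshape(5, 2))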
An Optimal Obstacle Avoidance Method Using Reinforcement Learning-Based Decision Parameterization for Autonomous Vehicles
IEEE Transactions on Intelligent Transportation Systems; 26(7); 10552-10566
2025-07-01
Article (Journal)
Electronic Resource
English
IEEE | 2025