Cooperative perception algorithms that fuse sensing data across multiple connected automated vehicles (CAVs) have shown promising performance in enhancing individual perception algorithms for object detection and tracking. However, existing cooperative perception algorithms have been developed only offline, given the constraints of data sharing and computational resources, and none of them have been verified under real-time conditions. In this work, we propose a real-time cooperative perception framework called CooperFuse, which achieves cooperative perception through a late fusion scheme. Building on the object detection and tracking results of individual vehicles, the late fusion cooperative perception algorithm considers the object detection confidence score, the kinematic and dynamic consistency of detected objects, and their scale consistency. The algorithm computes kinematic and dynamic consistency by solving for the energy consumption of inter-frame trajectories, and determines scale consistency by calculating inter-frame scale changes, enabling feature-based bounding box fusion. Experimental results demonstrate the real-time performance of the proposed algorithm and its effective improvements in feature fusion and object detection accuracy when dealing with heterogeneous detection models across different cooperative intelligent agents.
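The abstract does not give the exact energy formulation or weighting scheme, so the following Python sketch illustrates one plausible reading of the described late fusion step: trajectory "energy" approximated by the sum of squared inter-frame accelerations, scale consistency by the mean absolute log of inter-frame scale ratios, and a confidence-and-consistency-weighted average of candidate bounding boxes. All function names (kinematic_energy, scale_consistency, fusion_weight, fuse_boxes) and the exponential weighting with parameters alpha and beta are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def kinematic_energy(track):
    # track: (T, 2) array of per-frame box centers, T >= 3.
    # Assumed energy proxy: sum of squared second finite differences
    # (per-frame accelerations). Smooth, physically plausible tracks
    # score low; jittery ones score high.
    accel = np.diff(track, n=2, axis=0)
    return float(np.sum(accel ** 2))

def scale_consistency(sizes):
    # sizes: (T, 2) array of per-frame box (width, height).
    # Assumed deviation measure: mean absolute log of inter-frame
    # scale ratios, penalizing abrupt size changes.
    ratios = sizes[1:] / np.clip(sizes[:-1], 1e-6, None)
    return float(np.mean(np.abs(np.log(ratios))))

def fusion_weight(conf, energy, scale_dev, alpha=1.0, beta=1.0):
    # Combine the three cues into one weight: high detection
    # confidence, low trajectory energy, and low scale deviation
    # all raise it. alpha/beta are hypothetical tuning parameters.
    return conf * np.exp(-alpha * energy) * np.exp(-beta * scale_dev)

def fuse_boxes(candidates):
    # candidates: list of dicts, one per CAV's detection of the same
    # object, each with 'box' (x, y, w, h), 'conf' (float),
    # 'track' (T, 2), and 'sizes' (T, 2).
    boxes = np.array([c["box"] for c in candidates], dtype=float)
    weights = np.array([
        fusion_weight(c["conf"],
                      kinematic_energy(c["track"]),
                      scale_consistency(c["sizes"]))
        for c in candidates
    ])
    weights /= weights.sum()
    # Weighted average of box parameters; a simplification of the
    # paper's feature-based bounding box fusion.
    return (weights[:, None] * boxes).sum(axis=0)
```

Under this reading, a detection with a jittery trajectory or erratic inter-frame scale changes is exponentially down-weighted relative to smooth, confident detections before the boxes are averaged.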
CooperFuse: A Real-Time Cooperative Perception Fusion Framework
2024-06-02
Conference paper
Electronic Resource
English