Vehicle location prediction, or vehicle tracking, is a significant topic in connected vehicles. The task is difficult, however, when only a single data modality is available, which can introduce bias and limit accuracy. With the development of sensor networks in connected vehicles, multimodal data are becoming accessible. We therefore propose a framework for vehicle tracking with multimodal data fusion. Specifically, we fuse the results of two modalities, images and velocities, in our vehicle-tracking task. Images, processed in the vehicle detection module, provide visual information about vehicle features, whereas velocity estimation narrows down the possible locations of the target vehicles, reducing the number of candidates to be compared and thus the time and computational cost. Our vehicle detection model is a color-faster R-CNN, whose inputs are both the texture and the color of the vehicles. Velocity estimation is performed with a Kalman filter, a classical tracking method. Finally, a multimodal data fusion method integrates these outcomes to accomplish the vehicle-tracking task. Experimental results demonstrate the efficiency of our methods, which can track vehicles across a series of surveillance cameras in urban areas.
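The abstract does not include code; as a rough illustration of the velocity-estimation step it describes, the sketch below shows a constant-velocity Kalman filter that predicts a vehicle's next location from noisy position detections. The state layout, noise parameters, and all variable names are assumptions for illustration, not the authors' implementation.

```python
# Minimal constant-velocity Kalman filter sketch (illustrative only).
# State x = [px, py, vx, vy]; measurements are noisy (px, py) positions
# obtained from successive camera detections.
import numpy as np

dt = 1.0                                   # time step between detections (assumed)
F = np.array([[1, 0, dt, 0],               # state transition: constant velocity
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],                # only position is observed
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.01                       # process noise (assumed)
R = np.eye(2) * 1.0                        # measurement noise (assumed)

x = np.zeros(4)                            # initial state
P = np.eye(4) * 10.0                       # initial uncertainty

def kalman_step(x, P, z):
    """One predict/update cycle given a new position measurement z."""
    # Predict: propagate state and covariance through the motion model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: correct the prediction with the measurement.
    y = z - H @ x_pred                     # innovation
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

# Feed a few detected positions, then predict the next location; such a
# prediction could restrict the set of candidate vehicles to compare.
for z in [np.array([0.0, 0.0]), np.array([1.1, 0.9]), np.array([2.0, 2.1])]:
    x, P = kalman_step(x, P, z)
predicted_next = F @ x                     # predicted state one step ahead
print(predicted_next[:2])                  # predicted (px, py)
```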
Vehicle Tracking Using Surveillance With Multimodal Data Fusion
IEEE Transactions on Intelligent Transportation Systems, vol. 19, no. 7, pp. 2353-2361, July 2018