Traffic sign detection is important for robotic vehicles that drive autonomously on roads. In this paper, an efficient novel approach inspired by the human visual process is proposed to achieve automatic traffic sign detection. The detection method combines bottom-up extraction of traffic-sign saliency regions with a learning-based top-down search guided by traffic-sign features. The bottom-up stage obtains traffic-sign saliency regions and achieves computational parsimony using an improved Model of Saliency-Based Visual Attention. The top-down stage searches for traffic signs within these saliency regions using Histogram of Oriented Gradients (HOG) features and a Support Vector Machine (SVM) classifier. Experimental results show that the proposed approach is robust to changes in illumination, scale, pose, and viewpoint, and even to partial occlusion. On the test image data set, the smallest detected traffic-sign size is 14×14 pixels, the average detection rate is 98.3%, and the false positive rate is 5.09%.
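The two-stage pipeline summarized above can be illustrated with a minimal sketch (not the authors' implementation): a bottom-up saliency map proposes candidate regions, and a HOG feature plus a pre-trained SVM then verifies traffic signs inside them. The sketch assumes Python with OpenCV (contrib modules) and substitutes OpenCV's spectral-residual saliency for the paper's improved Model of Saliency-Based Visual Attention; the 64×64 window size, the HOG parameters, and the caller-supplied cv2.ml SVM are assumptions, not details taken from the paper.

import cv2
import numpy as np

WIN = 64  # assumed canonical window size; the paper reports signs detected down to 14x14

# HOG descriptor over a 64x64 window (block/cell sizes are OpenCV defaults, assumed here)
hog = cv2.HOGDescriptor(_winSize=(WIN, WIN),
                        _blockSize=(16, 16),
                        _blockStride=(8, 8),
                        _cellSize=(8, 8),
                        _nbins=9)

def detect_signs(image_bgr, svm):
    """Return bounding boxes of salient regions that a pre-trained cv2.ml SVM accepts as signs."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)

    # Bottom-up stage: spectral-residual saliency map (requires opencv-contrib),
    # thresholded with Otsu into candidate regions
    saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
    _, sal_map = saliency.computeSaliency(image_bgr)
    sal_map = (sal_map * 255).astype(np.uint8)
    _, mask = cv2.threshold(sal_map, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    detections = []
    for cnt in contours:
        x, y, w, h = cv2.boundingRect(cnt)
        if w < 14 or h < 14:                 # skip regions below the paper's smallest sign size
            continue
        roi = cv2.resize(gray[y:y + h, x:x + w], (WIN, WIN))

        # Top-down stage: HOG feature of the candidate region, classified by the SVM
        feat = hog.compute(roi).reshape(1, -1)
        if int(svm.predict(feat)[1][0][0]) == 1:  # label 1 assumed to mean "traffic sign"
            detections.append((x, y, w, h))
    return detections

In this arrangement, the bottom-up stage restricts the comparatively expensive HOG + SVM evaluation to a small number of salient regions, which reflects the computational parsimony the paper attributes to its bottom-up stage.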
Unifying visual saliency with HOG feature learning for traffic sign detection
01.06.2009
9431717 bytes
Conference Paper
Electronic Resource
English
Unifying Visual Saliency with HOG Feature Learning for Traffic Sign Detection
British Library Conference Proceedings | 2009
Feature combination strategies for saliency-based visual attention systems
British Library Online Contents | 2001