Neural network pruning has become a de facto component for deploying deep networks on resource-constrained devices, as it reduces memory requirements and computation costs. In particular, channel pruning has gained popularity due to its structured nature and the direct savings it yields on general-purpose hardware. However, most existing pruning approaches rely on importance measures that are not directly related to task utility, and few works in the literature focus on visual detection models. To fill these gaps, we propose a novel gradient-based saliency measure for visual detection and use it to guide our channel pruning. Experiments on the KITTI and COCO_traffic datasets demonstrate the efficacy of our pruning method and its superiority over competing state-of-the-art approaches; it can even achieve better performance than the original model with fewer parameters. Our pruning approach also shows strong potential for handling small-scale objects.
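The abstract does not spell out the saliency criterion, so the following is only a minimal sketch of how a gradient-based channel-saliency score might guide channel pruning, assuming a Taylor-expansion-style score of |activation × gradient| per output channel under the task loss. The model, loss, and all names below are illustrative stand-ins, not the paper's actual method.

```python
# Sketch (assumption): Taylor-style gradient-times-activation channel saliency.
# NOT the paper's exact measure; model, loss, and names are stand-ins.
import torch
import torch.nn as nn

def channel_saliency(activation: torch.Tensor, grad: torch.Tensor) -> torch.Tensor:
    """Score each output channel by |activation * gradient|, summed over batch and space."""
    scores = (activation * grad).abs().sum(dim=(0, 2, 3))   # shape: (C_out,)
    return scores / (scores.sum() + 1e-12)                  # normalize across channels

acts, grads = {}, {}

def capture(name):
    # Forward hook: store the activation and attach a hook to grab its gradient on backward.
    def hook(_module, _inp, out):
        acts[name] = out.detach()
        out.register_hook(lambda g: grads.__setitem__(name, g.detach()))
    return hook

model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())  # stand-in backbone layer
model[0].register_forward_hook(capture("conv0"))

x = torch.randn(2, 3, 64, 64)
loss = model(x).mean()      # stand-in for the detection (task) loss
loss.backward()

saliency = channel_saliency(acts["conv0"], grads["conv0"])
to_prune = saliency.argsort()[: int(0.25 * saliency.numel())]  # 25% least-salient channels
print(to_prune)
```

In an actual pruning pipeline, such scores would be accumulated over many batches before the lowest-saliency channels are removed and the detector fine-tuned.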
Visual-Saliency-Guided Channel Pruning for Deep Visual Detectors in Autonomous Driving
04.06.2023
6268682 bytes
Conference paper
Electronic resource
English
AUTONOMOUS VEHICLE OPERATIONAL MANAGEMENT WITH VISUAL SALIENCY PERCEPTION CONTROL
Europäisches Patentamt | 2023
Autonomous vehicle operational management with visual saliency perception control
Europäisches Patentamt | 2021
Visual Routines for Autonomous Driving
British Library Conference Proceedings | 1998