Vehicle overloading not only causes serious damage to public infrastructure but also poses a grave threat to citizens' safety, so efficient overload detection technology is critical. However, current overload detection methods cannot recognize the cargo status of trucks, and it is difficult to screen suspicious vehicles out of large volumes of overload data, so the damage caused by overloaded vehicles on roads cannot be effectively prevented. To address this, this study improves truck load-status recognition based on the YOLOv5 network by adding a parameter-free attention mechanism to the backbone and neck networks, ensuring detection accuracy while reducing missed and false detections. Experimental results show that the proposed method achieves both a high recognition rate and fast processing speed, meeting the needs of practical applications.
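The record does not name the specific parameter-free attention mechanism. SimAM is the most widely used parameter-free attention module in YOLOv5 variants, so the sketch below assumes a SimAM-style energy function; the class name, epsilon value, and insertion point are illustrative assumptions, not taken from the paper. It shows how such a module could be dropped into a backbone or neck feature map without adding any learnable weights:

```python
# Hypothetical sketch of a SimAM-style parameter-free attention module
# (the paper's exact mechanism is not specified in this record).
import torch
import torch.nn as nn


class SimAM(nn.Module):
    """Weights each activation by an energy function computed from the
    feature map itself; no learnable parameters are introduced."""

    def __init__(self, eps: float = 1e-4):
        super().__init__()
        self.eps = eps  # numerical-stability term (lambda in the SimAM paper)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width)
        _, _, h, w = x.shape
        n = h * w - 1
        # squared deviation of each neuron from its channel's spatial mean
        d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)
        # per-channel spatial variance
        v = d.sum(dim=(2, 3), keepdim=True) / n
        # inverse energy: more distinctive neurons receive larger weights
        e_inv = d / (4 * (v + self.eps)) + 0.5
        return x * torch.sigmoid(e_inv)


if __name__ == "__main__":
    attn = SimAM()
    feat = torch.randn(1, 64, 80, 80)  # a typical YOLOv5 feature-map shape
    assert attn(feat).shape == feat.shape
```

Because such a module learns no parameters, it could be appended after existing convolutional blocks in YOLOv5's backbone and neck without enlarging the model, which is one plausible way a parameter-free mechanism keeps processing speed high.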


    Title:
    Vehicle State Recognition Based on Improved YOLOv5

    Contributors:
    Ma, Jun (author) / Li, Ming (author) / Yu, Fengjun (author) / He, Peixin (author) / Tan, Pengliu (author)

    Publication date:
    2025-04-12

    Size:
    1625836 bytes

    Type of media:
    Conference paper

    Type of material:
    Electronic Resource

    Language:
    English



    Similar titles:

    Traffic Sign Recognition Algorithm Based on Improved YOLOv5

    Sang, Zhengxiao / Xia, Fuming / Huang, Han et al. | IEEE | 2022


    Vehicle Recognition under Autonomous Driving Based on YOLOv5

    Zhou, Xiaozhou / Song, Hongwei / Gao, Jiaxing | IEEE | 2024



    Controller fatigue state detection based on improved YOLOv5

    Wu, Shanshan / Zhang, Jianping / Zhong, Yiqian et al. | IEEE | 2024


    STD-Yolov5: a ship-type detection model based on improved Yolov5

    Ning, Yue / Zhao, Lining / Zhang, Can et al. | Taylor & Francis Verlag | 2024