Accurate ship detection is a key technology for maritime transportation and navigation safety. However, the diversity of ship types and locations, coupled with interference from complex environments, makes accurately detecting multi-scale vessels a persistent challenge. The YOLO (You Only Look Once) framework has demonstrated exceptional accuracy in automated ship detection tasks. In this paper, we propose a novel AE-YOLO architecture that integrates the EfficientViT feature extraction network, the Progressive Feature Pyramid Network (AFPN), and a Slim-neck module with a mixed convolutional structure into YOLOv11. AE-YOLO leverages AFPN to fuse multi-scale high-level semantic features with spatial details, thereby enriching the feature representation. It also employs a large selective kernel attention mechanism that dynamically adjusts its large receptive field to focus on critical vessel features, mitigating interference from complex environments and enhancing the distinctive feature representation of ships. This study further investigates the impact of various attention mechanisms on ship detection accuracy. Experimental results indicate that, by combining the outputs of these modules, the model improves target classification and localization at different stages. Compared with YOLOv11, AE-YOLO achieves relative increases of 0.91% and 0.93% in mean Average Precision (mAP@0.50) on the SeaShips and SMD datasets, respectively. Under various evaluation metrics, the overall performance of AE-YOLO surpasses that of existing ship detection approaches.
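The abstract does not include implementation details, so the following is only a minimal PyTorch sketch of how a large-selective-kernel-style attention block can dynamically weight branches with different receptive fields, in the spirit of the mechanism described above. The class name, kernel sizes, dilation, and channel reduction are illustrative assumptions, not the paper's exact AE-YOLO module.

```python
# Minimal sketch of a large-selective-kernel-style attention block (assumed design,
# not the paper's exact module): two depth-wise branches with different effective
# receptive fields are fused through a spatial selection map and used to modulate
# the input features.
import torch
import torch.nn as nn


class LargeSelectiveKernelAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Depth-wise branch with a moderate receptive field (5x5).
        self.dw_small = nn.Conv2d(channels, channels, 5, padding=2, groups=channels)
        # Depth-wise dilated branch with a large effective receptive field (7x7, dilation 3).
        self.dw_large = nn.Conv2d(channels, channels, 7, padding=9, groups=channels, dilation=3)
        self.squeeze_small = nn.Conv2d(channels, channels // 2, 1)
        self.squeeze_large = nn.Conv2d(channels, channels // 2, 1)
        # Spatial selection: turn pooled statistics into per-branch attention weights.
        self.select = nn.Conv2d(2, 2, 7, padding=3)
        self.proj = nn.Conv2d(channels // 2, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a = self.squeeze_small(self.dw_small(x))                 # (N, C/2, H, W)
        b = self.squeeze_large(self.dw_large(self.dw_small(x)))  # (N, C/2, H, W)
        feats = torch.cat([a, b], dim=1)
        avg = feats.mean(dim=1, keepdim=True)                    # channel-average map
        mx, _ = feats.max(dim=1, keepdim=True)                   # channel-max map
        weights = torch.sigmoid(self.select(torch.cat([avg, mx], dim=1)))
        fused = a * weights[:, 0:1] + b * weights[:, 1:2]        # select between branches
        return x * self.proj(fused)                              # modulate input features


# Usage example: a 64-channel neck feature map passes through with unchanged shape.
feat = torch.randn(1, 64, 40, 40)
attn = LargeSelectiveKernelAttention(64)
out = attn(feat)  # (1, 64, 40, 40)
```

In a YOLO-style detector, such a block would typically sit on the neck feature maps before the detection heads, so that the spatial selection can emphasize vessel regions against complex background clutter.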
AE-YOLO: Ship Detection Based on Multi-Scale Fusion Attention and EfficientViT Lightweight YOLOv11
2025-05-16
2217105 bytes
Conference paper
Electronic Resource
English
A Lightweight Anti-Unmanned Aerial Vehicle Detection Method Based on Improved YOLOv11 | DOAJ | 2024
Traffic accident detection based on YOLOv11 | IEEE | 2024