An autonomous vehicle requires a reliable system for precise estimation of its own state and of its surroundings to ensure the safety of humans while the vehicle moves autonomously in an environment dominated by human drivers. Such systems operate in a complex setting and rely on multiple sensors (e.g. vision modules, Global Navigation Satellite System (GNSS), LiDAR, RADAR). This paper proposes an environment perception stack for self-driving cars that improves decision-making intelligence and strengthens safety measures. Semantic image segmentation based on a Fully Convolutional Network (FCN) architecture is implemented, and the model's output is then used for 3D space estimation and lane estimation. Because real-time cooperation is required between the autonomous vehicle and the other vehicles in the frame, a 2D object detector is added to the stack to detect different classes of objects and to compute their relative distances. The proposed system is implemented in the CARLA simulation software, and the generated outcomes are discussed further in the paper.
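The sketch below illustrates the kind of camera-only pipeline the abstract describes: per-pixel semantic segmentation followed by 2D object detection with a pinhole-camera distance estimate for each detection. It is a minimal illustration, not the authors' implementation: off-the-shelf torchvision models (fcn_resnet50, fasterrcnn_resnet50_fpn) stand in for the paper's networks, and the focal length and assumed object height are placeholder CARLA-like values.

# Minimal sketch of a camera-based perception step, assuming stand-in
# torchvision models and illustrative camera parameters.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

FOCAL_PX = 1000.0        # assumed horizontal focal length of the camera, in pixels
ASSUMED_HEIGHT_M = 1.5   # assumed real-world height of a detected vehicle, in metres

# Stand-in models: an FCN for semantic segmentation, Faster R-CNN for 2D detection.
seg_model = torchvision.models.segmentation.fcn_resnet50(weights="DEFAULT").eval()
det_model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

@torch.no_grad()
def perceive(image):
    """Run segmentation and detection on one RGB frame (PIL image)."""
    x = to_tensor(image)

    # 1) Semantic segmentation: per-pixel class map, used downstream for
    #    drivable-space and lane estimation.
    seg_logits = seg_model(x.unsqueeze(0))["out"]      # (1, C, H, W)
    class_map = seg_logits.argmax(dim=1).squeeze(0)    # (H, W) class ids

    # 2) 2D object detection: boxes, labels and confidence scores.
    det = det_model([x])[0]

    # 3) Relative distance from the pinhole-camera model:
    #    distance ~= focal_length * real_height / pixel_height.
    detections = []
    for box, label, score in zip(det["boxes"], det["labels"], det["scores"]):
        if score < 0.5:
            continue
        pixel_height = float(box[3] - box[1])
        distance_m = FOCAL_PX * ASSUMED_HEIGHT_M / max(pixel_height, 1.0)
        detections.append({"label": int(label),
                           "box": [round(float(v), 1) for v in box],
                           "distance_m": round(distance_m, 2)})
    return class_map, detections

In the actual stack the segmentation output also feeds the 3D space and lane estimation steps, and the models would be trained on driving data rather than generic pretrained weights.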
Visual Perception Stack for Autonomous Vehicle Using Semantic Segmentation and Object Detection
2021-08-27
3872125 bytes
Conference paper
Electronic Resource
English
VISUAL PERCEPTION ASSISTANCE SYSTEM AND VISUAL-PERCEPTION TARGET OBJECT DETECTION SYSTEM
European Patent Office | 2018
VISUAL PERCEPTION ASSISTANCE SYSTEM AND VISUAL-PERCEPTION TARGET OBJECT DETECTION SYSTEM
European Patent Office | 2017