In this paper we present our research work on self-localization of the vehicle (ego-state) aided by visual features and maps. This approach requires only the last received GPS location, odometry estimates as the vehicle moves, a database of visual features around the last received GPS estimate, and a Standard Definition (SD) map. Towards this goal, we extract Oriented FAST and Rotated BRIEF (ORB) descriptors from the images collected during a drive and use a Bag of Words (BoW) approach to create a vocabulary of visual words. We also use an inverted file index for fast querying of the descriptors of the image currently seen by the ego vehicle against the database of all collected descriptors, yielding the possible locations of the vehicle. The locations of the corresponding matches are then used to update the measurement model. With this approach we localize within 3 seconds, with average position and orientation errors of 0.8 m and 0.38 degrees, respectively, across all sequences.
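The pipeline the abstract describes (ORB extraction, BoW vocabulary, inverted file index, retrieval of candidate locations) can be sketched in a few lines of Python with OpenCV. This is a minimal illustration under stated assumptions, not the authors' implementation: the image paths, the 500-word vocabulary size, clustering via float-cast k-means (binary ORB descriptors are usually clustered with Hamming distance, as in libraries like DBoW2), and the plain vote-counting query are all assumptions for illustration.

    import cv2
    import numpy as np

    # Placeholder paths to the images collected during a drive (assumed names).
    image_paths = ["drive/frame_%04d.png" % i for i in range(100)]

    orb = cv2.ORB_create(nfeatures=1000)

    # 1. Extract ORB descriptors per image and train a BoW vocabulary.
    bow_trainer = cv2.BOWKMeansTrainer(500)  # vocabulary size is an assumption
    per_image_desc = []
    for path in image_paths:
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = orb.detectAndCompute(img, None)
        per_image_desc.append(desc)
        if desc is not None:
            bow_trainer.add(np.float32(desc))  # OpenCV k-means requires float32
    vocabulary = bow_trainer.cluster()  # (500, 32) matrix of visual-word centers

    def quantize(desc, vocab):
        # Assign each descriptor to its nearest visual word (Euclidean here;
        # binary-descriptor libraries such as DBoW2 use Hamming distance).
        dists = np.linalg.norm(np.float32(desc)[:, None, :] - vocab[None, :, :], axis=2)
        return np.unique(dists.argmin(axis=1))

    # 2. Inverted file index: visual word -> ids of images containing that word.
    inverted_index = {}
    for image_id, desc in enumerate(per_image_desc):
        if desc is None:
            continue
        for word in quantize(desc, vocabulary):
            inverted_index.setdefault(word, []).append(image_id)

    # 3. Query: vote for database images that share visual words with the
    # image currently seen by the ego vehicle.
    def query(current_img, top_k=5):
        _, desc = orb.detectAndCompute(current_img, None)
        votes = np.zeros(len(image_paths))
        for word in quantize(desc, vocabulary):
            for image_id in inverted_index.get(word, []):
                votes[image_id] += 1
        return np.argsort(votes)[::-1][:top_k]  # candidate database image ids

The returned image ids would map to the poses stored with the database images; those poses are what would feed the measurement-model update mentioned in the abstract.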
Improved Localization Using Visual Features and Maps for Autonomous Cars
01.06.2018
2,590,818 bytes
Conference paper
Electronic resource
English
British Library Conference Proceedings, 2018