To predict the evolution of the driving context, estimate expected risks, and plan future behavior alternatives, it is crucial to know where traffic participants can go and where they will most likely go. Consequently, precise, lane-accurate localization of the ego vehicle as well as of other vehicles is a key technology for future Advanced Driver Assistance Systems (ADAS). Standard solutions for lane-accurate localization are usually based on expensive, detailed 3D maps and precise absolute positioning sensors. In contrast, in this paper we propose to improve the localization of scene items based on state-of-the-art map data combined with a coarse and cheap position estimate, as provided, e.g., by standard GNSS. From the map data, we infer the structure of the contextual road geometry and align it with the road view(s) provided by a front camera. This yields an improved relative positioning of the sensed items on the map structures, allowing a better scene interpretation. The alignment is performed by a best-match search that compares features of the real road view from the camera with those of virtually generated road views rendered from the map under different assumed ego-vehicle positions. We validate the approach on standard road scenes and show that it can serve as a cheap means to support intelligent ADAS and improve localization.
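The abstract describes the alignment as a best-match search over hypothesized ego poses: render a virtual road view from the map at each candidate pose, compare its features against the real camera view, and keep the highest-scoring pose. The following is a minimal Python sketch of that idea only; the paper's actual feature descriptors, rendering, and matching score are not given here, so render_virtual_view, extract_features, and the cosine-similarity score below are hypothetical stand-ins.

```python
import numpy as np

def feature_similarity(a, b):
    """Cosine similarity between feature vectors (one plausible matching score)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def align_to_map(camera_view, candidate_poses, render_virtual_view, extract_features):
    """Best-match search: score each hypothesized ego pose by comparing features of
    the real camera view with a virtual road view rendered from map data."""
    cam_feat = extract_features(camera_view)
    scores = [feature_similarity(cam_feat, extract_features(render_virtual_view(p)))
              for p in candidate_poses]
    best = int(np.argmax(scores))
    return candidate_poses[best], scores[best]

# --- toy stand-ins, purely illustrative ---
rng = np.random.default_rng(0)
true_pose = np.array([12.0, 3.5])   # hypothetical (along-track, lateral) offset in metres

def render_virtual_view(pose):
    # stand-in for rendering the contextual road geometry from the map at `pose`
    return np.sin(np.linspace(0.0, 4.0, 64) + 0.3 * pose[0]) + 0.5 * pose[1]

def extract_features(view):
    # stand-in descriptor: raw profile plus its gradient
    return np.concatenate([view, np.diff(view)])

# simulated camera observation of the road at the (unknown) true pose
camera_view = render_virtual_view(true_pose) + 0.05 * rng.standard_normal(64)

# candidate ego poses sampled on a grid around a coarse GNSS fix
gnss_fix = np.array([11.0, 3.0])
candidates = [gnss_fix + np.array([dx, dy])
              for dx in np.linspace(-3.0, 3.0, 13)
              for dy in np.linspace(-2.0, 2.0, 9)]

pose, score = align_to_map(camera_view, candidates, render_virtual_view, extract_features)
print("estimated pose:", pose, "score:", round(score, 3))
```

Sampling candidates around the GNSS fix mirrors the paper's premise: the coarse, cheap absolute position only needs to bound the search region, and the camera-to-map comparison then refines the pose to lane-level accuracy.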
Camera to map alignment for accurate low-cost lane-level scene interpretation
01.11.2016
2,809,018 bytes
Conference paper
Electronic resource
English
Automatic accurate lane changing system and lane changing method for double-vehicle scene
Europäisches Patentamt | 2023
Expressway traffic lane level accurate guidance system and method
Europäisches Patentamt | 2022