State-of-the-art advanced driver assistance systems (ADAS) typically focus on single tasks and therefore have clearly defined functionalities. Although such ADAS functions (e.g. lane departure warning) show good performance, they lack the general ability to extract spatial relations from the environment. These spatial relations are required for scene analysis on a higher layer of abstraction, providing a new quality of scene understanding, e.g. for inner-city crash prevention when trying to detect a Stop sign violation in a complex situation. Without them, it is difficult for an ADAS to deal with complex scenes and situations in a generic way. This contribution presents a novel task-dependent generation of spatial representations, allowing task-specific extraction of knowledge from the environment based on our biologically motivated ADAS. Additionally, the hierarchy of the approach provides advantages when dealing with heterogeneous processing modules, a large number of tasks, and additional new input cues. First results demonstrate the reliability of the approach.
Towards a task dependent representation generation for scene analysis
2010 IEEE Intelligent Vehicles Symposium, pp. 731-737
01.06.2010
Conference paper
Electronic resource
English
Towards a Task Dependent Representation Generation for Scene Analysis, pp. 731-737 (British Library Conference Proceedings, 2010)
Task-dependent scene interpretation in driver assistance (Tema Archiv, 2010)
Task-dependent scene interpretation in driver assistance (TIBKAT, 2010)