Modern automobiles are increasingly equipped with Advanced Driver Assistance Systems (ADAS), which improve passenger safety and comfort. Each of these ADAS can rely on a multitude of environmental sensors, which creates the need to cross-link ADAS with respect to their access to environment information. This idea is reflected in the so-called environment model (EM), which centralizes the access of all ADAS functions to the perceived but not yet interpreted scene in the vehicle environment. In this paper, we propose an architectural framework that concretizes the EM in terms of separate modules for signal processing and sensor data fusion, as well as well-defined generic input and output interfaces. Moreover, we extend the EM by including additional components and output interfaces for centralized scene interpretation (CSI). The architectural framework allows the development process to be streamlined and computational efficiency to be increased, as software modules for scene interpretation can be shared among multiple ADAS. In addition, the consistency of the overall ADAS behavior is improved, as all ADAS functions rely on the same scene information and scene interpretation. We illustrate our concept of centralized scene interpretation using the case study of a module for the detection of directional roadways, a generalization of motorways. The algorithm we present analyzes multiple perceptual indicators in favor of or against a directional roadway and computes an overall decision, along with a foresight value for the expected duration of the directional roadway, to be used in various ADAS functions.
Cross-Linking Driver Assistance Systems via Centralized Scene Interpretation Using the Example of Directional Roadway Detection
2012
12 pages
Conference paper
English
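As a rough illustration of the decision scheme described in the abstract, the sketch below combines weighted evidence from several perceptual indicators into an overall decision plus a foresight value. This is a minimal sketch under assumptions: the indicator names, weights, the weighted-sum combination rule, and the minimum-based foresight estimate are all hypothetical and are not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """One perceptual indicator for or against a directional roadway.

    `evidence` is a hypothetical score in [-1, 1]: positive values speak
    in favor of a directional roadway, negative values against it.
    """
    name: str
    evidence: float   # in [-1, 1]
    weight: float     # assumed relative reliability of this indicator

def detect_directional_roadway(indicators, remaining_lengths_m, threshold=0.0):
    """Hypothetical fusion of indicator evidence into an overall decision.

    Returns (is_directional_roadway, foresight_m), where the foresight value
    is a conservative estimate (minimum) of how far ahead the directional
    roadway is expected to continue. Both the weighted-sum rule and the
    minimum-based foresight are illustrative assumptions only.
    """
    total_weight = sum(ind.weight for ind in indicators)
    if total_weight == 0.0:
        return False, 0.0
    # Normalized weighted sum of the per-indicator evidence.
    score = sum(ind.weight * ind.evidence for ind in indicators) / total_weight
    is_directional = score > threshold
    foresight_m = min(remaining_lengths_m) if (is_directional and remaining_lengths_m) else 0.0
    return is_directional, foresight_m

# Example usage with made-up indicator values:
indicators = [
    Indicator("structural_lane_separation", evidence=0.9, weight=2.0),
    Indicator("no_oncoming_traffic_detected", evidence=0.7, weight=1.0),
    Indicator("map_road_class_motorway_like", evidence=0.8, weight=1.5),
]
decision, foresight = detect_directional_roadway(indicators, remaining_lengths_m=[1800.0, 2500.0])
print(decision, foresight)  # e.g. True 1800.0
```

In such a scheme, the decision and foresight value would be published once by the CSI module and consumed by all ADAS functions, rather than being recomputed per function.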