Autonomous Vehicles (AVs) have the potential to reduce car accidents and increase access to transportation, but they must be rigorously tested before deployment. Scenario-based testing offers a set of approaches for designing high-risk tests for AVs at low cost. Since AVs need to be tested against a large number of scenarios, automated generation approaches are needed. Pre-trained Large Language Models (LLMs) are open-input, general-purpose data generators with strong learning and reasoning abilities. However, due to the black-box nature of these systems, it is difficult to obtain direct evidence of these abilities. In this paper, we address the open question of the reasoning capabilities of pre-trained LLMs specifically in the context of scenario-based testing of AVs. Inspired by QA benchmarks for evaluating LLMs on commonsense reasoning, science reasoning, and related tasks, we present our main contribution, ScenarioQA. The benchmark is built with an LLM-based QA generation process that integrates several methods to generate questions and corresponding answers specifically in the context of scenario-based testing. We carry out a comprehensive evaluation of this process and gain valuable insights into effective QA generation. In addition, we evaluate several available pre-trained LLMs on these abilities.
ScenarioQA: Evaluating Test Scenario Reasoning Capabilities of Large Language Models
2024-09-24
369423 bytes
Conference paper
Electronic Resource
English
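The abstract describes an LLM-based process that generates questions and corresponding answers about test scenarios. As a rough illustration only, the following Python sketch shows what a single step of such a pipeline could look like: a scenario description is inserted into a prompt template and handed to a pre-trained LLM. The `Scenario` fields, the prompt wording, and the `query_llm` placeholder are assumptions for illustration, not the authors' actual implementation.

```python
# Illustrative sketch of one QA-generation step for scenario-based AV testing.
# This is NOT the ScenarioQA pipeline; `Scenario`, `query_llm`, and the prompt
# template below are hypothetical placeholders.

from dataclasses import dataclass


@dataclass
class Scenario:
    """A simplified logical test scenario for an AV under test."""
    description: str   # natural-language summary of the traffic situation
    parameters: dict   # e.g. {"ego_speed_kmh": 50, "cut_in_gap_m": 8}


QUESTION_PROMPT = (
    "You are generating evaluation questions about autonomous-driving test scenarios.\n"
    "Scenario: {description}\n"
    "Parameters: {parameters}\n"
    "Write one multiple-choice question that requires reasoning about this scenario, "
    "followed by four options (A-D) and the letter of the correct answer."
)


def query_llm(prompt: str) -> str:
    """Placeholder for a call to any pre-trained LLM (API client or local model)."""
    # Replace this stub with a real model call; a dummy string is returned so the
    # sketch runs end to end.
    return "Q: ... A) ... B) ... C) ... D) ...\nCorrect answer: A"


def generate_qa(scenario: Scenario) -> str:
    """Ask the LLM to produce a question/answer pair grounded in one scenario."""
    prompt = QUESTION_PROMPT.format(
        description=scenario.description,
        parameters=scenario.parameters,
    )
    return query_llm(prompt)


if __name__ == "__main__":
    cut_in = Scenario(
        description=("A vehicle in the adjacent lane cuts in front of the ego vehicle "
                     "while both approach a signalized intersection."),
        parameters={"ego_speed_kmh": 50, "cut_in_gap_m": 8},
    )
    print(generate_qa(cut_in))
```

In a full pipeline, the generated QA pairs would additionally be filtered and validated before being added to the benchmark; the sketch above covers only the generation prompt itself.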