Vehicle API testing verifies whether the interactions between a vehicle's internal systems and external applications meet expectations, ensuring that users can access and control vehicle functions and data. This task is inherently complex, requiring the alignment and coordination of API systems, communication protocols, and even vehicle simulation systems to develop valid test cases. In practical industrial scenarios, inconsistencies, ambiguities, and interdependencies across various documents and system specifications pose significant challenges. This paper presents a system designed for the automated testing of in-vehicle APIs. By clearly defining and segmenting the testing process, we enable Large Language Models (LLMs) to focus on specific tasks, ensuring a stable and controlled testing workflow. Experiments conducted on over 100 APIs demonstrate that our system effectively automates vehicle API testing. The results also confirm that LLMs can efficiently handle routine tasks that would otherwise require human judgment, making them suitable for complete automation in similar industrial contexts.
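As an illustration of the segmented workflow the abstract describes, the sketch below shows how each LLM call can be confined to one narrowly scoped task (here, constraint extraction and test-case generation) while deterministic code controls the order of the stages. This is a minimal sketch under assumed interfaces, not the authors' implementation: all class, function, and API names (ApiSpec, setCabinTemperature, etc.) are hypothetical, and any model client can be wrapped to match the plain prompt-in, text-out callable.

```python
"""Illustrative sketch (not the paper's implementation) of a segmented,
LLM-assisted test workflow for an in-vehicle API: each stage gives the
model one focused task, and deterministic code handles orchestration."""

from dataclasses import dataclass
from typing import Callable

# Type of an LLM call: prompt in, text out. No specific SDK is assumed;
# any model client could be wrapped to fit this signature.
LLMCall = Callable[[str], str]


@dataclass
class ApiSpec:
    """Minimal description of one vehicle API under test (hypothetical fields)."""
    name: str
    description: str
    parameters: dict[str, str]  # parameter name -> type/range notes


def extract_constraints(spec: ApiSpec, llm: LLMCall) -> str:
    """Stage 1: ask the model only to resolve preconditions and value constraints."""
    prompt = (
        "List the preconditions and value constraints for this vehicle API.\n"
        f"API: {spec.name}\nDescription: {spec.description}\n"
        f"Parameters: {spec.parameters}"
    )
    return llm(prompt)


def generate_test_case(spec: ApiSpec, constraints: str, llm: LLMCall) -> str:
    """Stage 2: ask the model only to produce one concrete test case."""
    prompt = (
        "Write one executable test case (inputs and expected response) "
        f"for API '{spec.name}' that respects these constraints:\n{constraints}"
    )
    return llm(prompt)


def run_pipeline(spec: ApiSpec, llm: LLMCall) -> str:
    """Deterministic orchestration: the workflow, not the model, controls the process."""
    constraints = extract_constraints(spec, llm)
    return generate_test_case(spec, constraints, llm)


if __name__ == "__main__":
    # Stub model for demonstration; replace with a real LLM client.
    echo_llm: LLMCall = lambda prompt: f"[model output for: {prompt[:40]}...]"
    spec = ApiSpec(
        name="setCabinTemperature",
        description="Sets the target HVAC temperature in the cabin.",
        parameters={"celsius": "float, 16.0-28.0"},
    )
    print(run_pipeline(spec, echo_llm))
```

Splitting the work into small, verifiable stages is what keeps the workflow stable: each prompt has a narrow contract, so its output can be checked or regenerated in isolation without rerunning the whole test process.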
Automating a Complete Software Test Process Using LLMs: An Automotive Case Study
26.04.2025
1951146 bytes
Conference paper
Electronic resource
English
ArXiv | 2025
Automating test driver generation (airborne software)
Tema Archiv | 1990
SYSTEMS & SOFTWARE - Automating the inspection process
Online Contents | 2006
Automating Test Case Generation for the New Generation Mission Software System
British Library Conference Proceedings | 2000