The NexusStreets dataset contains human and autonomous driving scenes, collected by monitoring a target vehicle that is either autonomous or controlled by a human driver. Data is provided as:

- sequences of JPEG images, one image per timestamp;
- target-vehicle state information for each timestamp.

The dataset was built on the CARLA simulator, using Baidu Apollo for the autonomous driving mode and a Logitech G29 steering wheel for the human driving mode. It consists of 520 scenes (260 pairs of mirrored scenarios) of 60 seconds each.

The folders are organized as follows (angle brackets denote placeholders):

.
├── <driving mode>
│   ├── <town>
│   │   ├── <trial>
│   │   │   ├── video
│   │   │   ├── state_features.csv
│   │   │   └── detection_features.csv
│   │   └── ...
│   └── ...
└── ...

- driving mode: the control modality of the target vehicle under test; either Baidu Apollo or manual driving.
- town: one of the five default maps in CARLA (e.g., Town01, Town02).
- trial: 60 trials per map (except for Town04), differing in traffic and weather conditions. Each trial records 60 seconds of simulation, logging 120 frames per video and an equal number of rows per CSV.

In particular, each trial includes:

- video: a folder grouping the JPEG images;
- state_features.csv: the state information of the target vehicle for each frame;
- detection_features.csv: the 2D bounding-box detections obtained from a pre-trained YOLOv3 detector.
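Given this layout, a trial can be loaded by pairing each frame image with the CSV row of the same index, since each video logs 120 frames and each CSV an equal number of rows. The following is a minimal Python sketch under that assumption; the example trial path apollo/Town01/trial_01 and the presence of CSV header rows are illustrative guesses, as the actual folder names and CSV schemas are not specified in this description.

```python
# Minimal loading sketch for one NexusStreets trial.
# Assumptions (not specified in the dataset description): the example
# path "apollo/Town01/trial_01" and that both CSVs carry a header row.
import csv
from pathlib import Path


def load_trial(trial_dir: Path):
    """Return (frames, states, detections) for a single 60-second trial."""
    # One JPEG per timestamp; sort so frame order matches CSV row order.
    frames = sorted(
        p for p in (trial_dir / "video").iterdir()
        if p.suffix.lower() in {".jpg", ".jpeg"}
    )
    with open(trial_dir / "state_features.csv", newline="") as f:
        states = list(csv.DictReader(f))       # target-vehicle state per frame
    with open(trial_dir / "detection_features.csv", newline="") as f:
        detections = list(csv.DictReader(f))   # YOLOv3 2D boxes per frame
    return frames, states, detections


# Hypothetical trial path: <driving mode>/<town>/<trial> under the root.
frames, states, detections = load_trial(Path("apollo/Town01/trial_01"))
# 120 frames per 60-second video => 2 fps; CSVs have matching row counts.
assert len(frames) == len(states) == len(detections) == 120
```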
NexusStreets: a dataset combining human and autonomous driving behaviours
27.02.2023
Research data
Electronic resource
English
DDC: 629
MSDAD: A Multi-Sensor Dataset for Autonomous Driving
IEEE | 2024
Europäisches Patentamt | 2024
Europäisches Patentamt | 2021
On combining Big Data and machine learning to support eco-driving behaviours
BASE | 2019
Europäisches Patentamt | 2022