📢 Code and dataset will be released upon acceptance.
Overview
The advancement of self-driving technology has become a focal point in outdoor robotics, driven by the need for robust and efficient perception systems. This paper addresses the critical role of sensor integration in autonomous vehicles, particularly emphasizing the underutilization of radar compared to cameras and LiDARs. While extensive research has been conducted on the latter two due to the availability of large-scale datasets, radar technology offers unique advantages such as all-weather sensing and occlusion penetration, which are essential for safe autonomous driving. This study presents a novel integration of a realistic radar sensor model within the CARLA simulator, enabling researchers to develop and test navigation algorithms using radar data. Using this radar sensor in simulation, we demonstrate improved performance in end-to-end driving scenarios and showcase its capabilities. Our findings aim to rekindle interest in radar-based self-driving research and promote the development of algorithms that leverage radar's strengths.
Sample Videos Collected Across Different Routes in CARLA
Example 1
In this situation, the driving agent is attempting to make a left turn at an intersection. The Camera-only model stalls at the intersection even after the vehicle from the opposing lane has passed. The other two models, thanks to enhanced spatial awareness, do not stop at the intersection because they can see farther and confirm that no vehicle is approaching from the opposite lane.
Input: Camera Only
Input: Camera + LiDAR
Input: Camera + Radar
Example 2
In this scene, the driving agent attempts to switch to the left lane. The Camera-only model struggles to make the maneuver and ends up crashing into a vehicle approaching from behind. In the other two models, LiDAR and Radar detect the car behind, and the agent accordingly increases its speed before switching lanes.
Input: Camera Only
Input: Camera + LiDAR
Input: Camera + Radar
Example 3
This is a special test scenario in CARLA where the traffic lights in the opposing lanes are turned on to test the situational awareness of the driving agent. Here the vehicle is attempting to make a right turn at the intersection while the lights in the crossing lane are on. The Camera-only model fails to stop in time and crashes into the incoming car from the crossing lane. However, the other two models, using LiDAR and Radar, avoid the crash by stopping abruptly and proceeding only when it is safe.
Input: Camera Only
Input: Camera + LiDAR
Input: Camera + Radar
High Level Implementation
The following diagram illustrates a high-level overview of our sensor integration into CARLA and the evaluation framework for end-to-end driving.
Transfuser++ is a state-of-the-art end-to-end driving model that uses camera and LiDAR sensors for perception and path planning. The model is trained on data from an expert driver provided by CARLA, and it predicts the future waypoints/direction and the velocity of the ego vehicle. We substitute the LiDAR input with our integrated C-Shenron radar sensor and retrain multiple models with varying radar views. Our results show that using radar sensors improves the driving score and overall situational awareness of the model, indicating the accuracy of our sensor.
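As a rough reference for how the sensor swap looks on the simulator side, the sketch below attaches a front camera and a radar to the ego vehicle through CARLA's Python API. It uses CARLA's built-in `sensor.other.radar` blueprint purely as a stand-in for the C-Shenron sensor, whose actual blueprint name, attributes, and output format are not shown here; the vehicle model, sensor placements, and attribute values are illustrative assumptions rather than our exact training configuration.

```python
import carla

# Connect to a running CARLA server (default host/port).
client = carla.Client('localhost', 2000)
client.set_timeout(10.0)
world = client.get_world()
bp_lib = world.get_blueprint_library()

# Spawn an ego vehicle at the first available spawn point.
vehicle_bp = bp_lib.find('vehicle.tesla.model3')
spawn_point = world.get_map().get_spawn_points()[0]
ego = world.spawn_actor(vehicle_bp, spawn_point)

# Front-facing RGB camera, roughly matching an end-to-end driving setup.
cam_bp = bp_lib.find('sensor.camera.rgb')
cam_bp.set_attribute('image_size_x', '1024')
cam_bp.set_attribute('image_size_y', '512')
camera = world.spawn_actor(
    cam_bp, carla.Transform(carla.Location(x=1.5, z=2.4)), attach_to=ego)

# Radar mounted in place of the LiDAR branch. 'sensor.other.radar' is
# CARLA's stock radar and only stands in for the C-Shenron sensor.
radar_bp = bp_lib.find('sensor.other.radar')
radar_bp.set_attribute('horizontal_fov', '90')
radar_bp.set_attribute('range', '100')
radar = world.spawn_actor(
    radar_bp, carla.Transform(carla.Location(x=2.0, z=1.0)), attach_to=ego)

# Stream both modalities; a driving agent would buffer these as model inputs.
camera.listen(lambda image: image.save_to_disk('_out/%06d.png' % image.frame))
radar.listen(lambda meas: print('radar detections:', meas.get_detection_count()))
```

In the actual pipeline, the radar callback would feed the C-Shenron processing stack and the resulting representation would replace the LiDAR branch of Transfuser++ before retraining.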
Sensor Views
Comparison of views from Camera, Semantic LiDAR, and Shenron Radar in the CARLA simulator.