The advancement of self-driving technology is driven by the need for robust perception and navigation systems. Simulators for autonomous driving facilitate the rapid development and testing of navigation algorithms; however, a key shortcoming of most is their inaccurate modeling of the radar sensor. This is a significant drawback, as radars offer robust sensing in adverse weather and through occlusions. CARLA, a widely adopted open-source simulator, provides a simplistic radar model that fails to capture the complex physical and material-dependent behavior of real-world radar, leading to a substantial gap in the realism of its simulated data. To address these limitations, we present C-Shenron, a radar simulation framework integrated into CARLA that generates realistic radar measurements by fusing LiDAR and camera data. C-Shenron also supports configurable radar parameters, multiple sensor placements, and scalable dataset generation. Our evaluations demonstrate that radar-camera fusion models trained on data generated with C-Shenron achieve performance equivalent to traditional LiDAR-camera baselines on key metrics from the CARLA leaderboard.
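As a rough illustration of the configurable parameters and multiple sensor placements, the minimal sketch below attaches two such radars to the ego vehicle through CARLA's Python API. The blueprint ID sensor.other.cshenron_radar, its attribute names, and the mounting positions are hypothetical placeholders rather than C-Shenron's actual interface; only the standard CARLA client calls are taken from the real API.

```python
# A minimal sketch, assuming a hypothetical "sensor.other.cshenron_radar"
# blueprint and illustrative attribute names; only the standard CARLA
# client calls below are part of the official Python API.
import carla

client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()
bp_lib = world.get_blueprint_library()

radar_bp = bp_lib.find("sensor.other.cshenron_radar")  # assumed blueprint ID
radar_bp.set_attribute("horizontal_fov", "90")          # assumed attribute
radar_bp.set_attribute("range", "100")                  # assumed attribute

# Multiple placements: front- and rear-facing mounts on the ego vehicle.
ego = world.get_actors().filter("vehicle.*")[0]
front_tf = carla.Transform(carla.Location(x=2.4, z=0.8))
rear_tf = carla.Transform(carla.Location(x=-2.4, z=0.8), carla.Rotation(yaw=180.0))
front_radar = world.spawn_actor(radar_bp, front_tf, attach_to=ego)
rear_radar = world.spawn_actor(radar_bp, rear_tf, attach_to=ego)

# Each sensor streams its measurements to a callback for dataset generation.
front_radar.listen(lambda data: print("front radar frame", data.frame))
rear_radar.listen(lambda data: print("rear radar frame", data.frame))
```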
Sample Videos Collected Across Different Routes in CARLA
Example 1
In this situation, the driving agent is attempting to make a left turn at an intersection. The Camera-only model remains stopped at the intersection even after the vehicle in the opposing lane has passed by. The other two models, thanks to their enhanced spatial awareness, can see farther, confirm that no vehicle is approaching from the opposite lane, and proceed through the intersection without stopping.
Input: Camera Only
Input: Camera + LiDAR
Input: Camera + Radar
Example 2
In this scene, the driving agent attempts to switch to the left lane. The Camera-only model struggles to make the lane change and ends up colliding with a vehicle approaching from behind. In the other two models, both LiDAR and Radar detect the car behind, and the agent accordingly increases the vehicle's speed before switching lanes.
Input: Camera Only
Input: Camera + LiDAR
Input: Camera + Radar
Example 3
This is a special test scenario in CARLA where the traffic lights for the opposing lanes are turned on to test the situational awareness of the driving agent. Here the vehicle is attempting to make a right turn at the intersection while the lights for the crossing lane are on. The Camera-only model fails to stop in time and crashes into the incoming car from the crossing lane. However, the other two models, using LiDAR and Radar, manage to avoid the crash by stopping abruptly and proceeding only when it is safe.
Input: Camera Only
Input: Camera + LiDAR
Input: Camera + Radar
Carla Radar vs C-Shenron Radar
The following image compares the radar sensor output from CARLA with that of C-Shenron. The camera view is from inside the ego vehicle, whereas both radar views are shown in bird's eye view. As the image shows, the CARLA radar provides only a sparse point cloud, whereas C-Shenron produces a dense Range-AoA (range versus angle-of-arrival) map.
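For a concrete sense of this gap, the sketch below (not part of C-Shenron) converts CARLA's built-in radar output into a range-angle grid. Each carla.RadarDetection carries only a depth, azimuth, altitude, and velocity, so binning the detections yields just a handful of occupied cells, in contrast to the dense Range-AoA intensity map that C-Shenron produces over the same axes.

```python
import numpy as np

def carla_radar_to_points(radar_measurement):
    """Project sparse carla.RadarDetection entries (depth, azimuth, altitude)
    into x/y coordinates in the sensor frame."""
    pts = [
        (det.depth * np.cos(det.altitude) * np.cos(det.azimuth),
         det.depth * np.cos(det.altitude) * np.sin(det.azimuth))
        for det in radar_measurement
    ]
    return np.asarray(pts)

def sparse_range_aoa_grid(points, max_range=100.0, n_range=256, n_aoa=256):
    """Bin the sparse detections into a range-vs-angle grid. The result has
    only a few non-zero cells, in contrast to the dense Range-AoA intensity
    map produced by C-Shenron."""
    rng = np.hypot(points[:, 0], points[:, 1])
    aoa = np.arctan2(points[:, 1], points[:, 0])
    grid, _, _ = np.histogram2d(
        rng, aoa,
        bins=[n_range, n_aoa],
        range=[[0.0, max_range], [-np.pi / 2, np.pi / 2]],
    )
    return grid
```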
High-Level Implementation
The following diagram gives a high-level overview of our sensor integration into CARLA and of the evaluation framework for End-to-End Driving.
The Transfuser++ model is a state-of-the-art End-to-End driving model that uses Camera and LiDAR sensors for perception and path planning. It is trained on data from an expert driver provided by CARLA and predicts the future waypoints/direction and the velocity of the ego vehicle. We substitute the LiDAR input with our integrated C-Shenron radar sensor and re-train multiple models with varying radar views. In our results, we show that using radar sensors improves the driving score and the overall situational awareness of the model, indicating the accuracy of our simulated sensor.
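As a sketch of what this substitution looks like, the snippet below mimics a CARLA leaderboard agent's sensors() configuration with the LiDAR entry replaced by a radar entry. The sensor.other.cshenron_radar type string and the placement values are illustrative assumptions, not the exact configuration used in our experiments.

```python
# A minimal sketch in the style of a CARLA leaderboard agent's sensors()
# method; the C-Shenron type string and the numeric values are assumptions
# for illustration only.
def sensors(self):
    return [
        {   # front RGB camera, kept from the Camera + LiDAR setup
            "type": "sensor.camera.rgb",
            "x": 1.3, "y": 0.0, "z": 2.3,
            "roll": 0.0, "pitch": 0.0, "yaw": 0.0,
            "width": 1024, "height": 512, "fov": 110,
            "id": "rgb_front",
        },
        # The original "sensor.lidar.ray_cast" entry is removed and replaced
        # by the C-Shenron radar below.
        {   # C-Shenron radar standing in for the LiDAR input
            "type": "sensor.other.cshenron_radar",  # assumed type string
            "x": 2.4, "y": 0.0, "z": 0.8,
            "roll": 0.0, "pitch": 0.0, "yaw": 0.0,
            "id": "radar_front",
        },
    ]
```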
Sensor Views
Comparison of views from the Camera, Semantic LiDAR, and Shenron Radar in the CARLA simulator. As in the image above, the camera view is from inside the ego vehicle, whereas the LiDAR and radar views are in bird's eye view.