Demo - C-Shenron: Radar Simulation Framework for CARLA

Pushkal Mishra
pumishra@ucsd.edu
Satyam Srivastava
f20190188@pilani.bits-pilani.ac.in
Jerry Li
jli793@ucr.edu
Kshitiz Bansal
ksbansal@ucsd.edu
Dinesh Bharadia
dineshb@ucsd.edu
In Submission @ SenSys'25
Overview
The advancement of self-driving technology is driven by the need for robust and efficient perception systems, along with frameworks for end-to-end testing such as the CARLA simulator. We introduce C-Shenron, a novel integration of a realistic radar sensor model within CARLA that enables researchers to develop and test navigation algorithms using radar data. It is the first realistic radar simulator that uses LiDAR and camera sensors to generate high-fidelity radar ADC measurements through physics-based modeling of the environment. Using this radar sensor in simulation, we demonstrate improved performance in end-to-end driving scenarios. Our setup aims to rekindle interest in radar-based self-driving research and to promote the development of algorithms that leverage radar's strengths.
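To make the sensor pipeline concrete, below is a minimal sketch of how one might attach a CARLA LiDAR sensor to an ego vehicle and forward each point cloud to a radar-synthesis step. The CARLA calls (client connection, blueprint lookup, actor spawning, sensor callbacks) follow the standard CARLA Python API; the function synthesize_radar_adc is a hypothetical placeholder standing in for the C-Shenron radar model, whose actual interface is not described here.

```python
# Sketch: feeding CARLA LiDAR returns into a (placeholder) radar synthesis step.
# `synthesize_radar_adc` is NOT part of CARLA or the published C-Shenron API;
# it only marks where the physics-based radar model would consume the scene.
import numpy as np
import carla


def synthesize_radar_adc(points_xyz_i):
    """Placeholder for the radar model: in C-Shenron, LiDAR/camera-derived
    scene geometry would drive scattering simulation to produce raw radar
    ADC samples. Here we simply pass the point cloud through unchanged."""
    return points_xyz_i


def on_lidar(measurement):
    # CARLA packs each LiDAR return as four float32 values: x, y, z, intensity.
    pts = np.frombuffer(measurement.raw_data, dtype=np.float32).reshape(-1, 4)
    adc = synthesize_radar_adc(pts)
    print(f"frame {measurement.frame}: {pts.shape[0]} returns -> {adc.shape}")


def main():
    client = carla.Client("localhost", 2000)
    client.set_timeout(10.0)
    world = client.get_world()
    blueprints = world.get_blueprint_library()

    # Spawn an ego vehicle at the first recommended spawn point.
    vehicle_bp = blueprints.filter("vehicle.*")[0]
    spawn_point = world.get_map().get_spawn_points()[0]
    vehicle = world.spawn_actor(vehicle_bp, spawn_point)
    vehicle.set_autopilot(True)

    # Attach a ray-cast LiDAR; its point cloud feeds the radar synthesis.
    lidar_bp = blueprints.find("sensor.lidar.ray_cast")
    lidar_bp.set_attribute("range", "100")
    lidar_bp.set_attribute("rotation_frequency", "20")
    lidar = world.spawn_actor(
        lidar_bp, carla.Transform(carla.Location(z=2.4)), attach_to=vehicle
    )
    lidar.listen(on_lidar)

    try:
        while True:
            world.wait_for_tick()
    except KeyboardInterrupt:
        lidar.destroy()
        vehicle.destroy()


if __name__ == "__main__":
    main()
```

In the full framework, a semantic camera would additionally be used alongside the LiDAR so that material and object labels can inform the radar reflectivity; the sketch above omits that step for brevity.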


Sample Videos Collected Across Different Routes in CARLA
Example 1
In this situation, the driving agent is attempting to make a left turn at an intersection. The camera-only model stalls at the intersection even after the vehicle in the opposing lane has passed by. The other two models, owing to their enhanced spatial awareness, do not stop at the intersection: they can see farther and confirm that no vehicle is approaching from the opposite lane.

Input: Camera Only

Camera Only

Input: Camera + LiDAR

Camera + LiDAR

Input: Camera + Radar

Camera + Radar
Example 2
In this scene, the driving agent attempts to switch to the left lane. The camera-only model struggles with the maneuver and ends up colliding with a vehicle approaching from behind. In the other two models, both LiDAR and radar detect the car behind and accordingly increase the vehicle's speed before switching lanes.

Input: Camera Only

Camera Only

Input: Camera + LiDAR

Camera + LiDAR

Input: Camera + Radar

Camera + Radar
Example 3
This is a special test scenario in CARLA in which the traffic lights for the opposing lanes are turned on to test the situational awareness of the driving agent. Here the vehicle is attempting to make a right turn at the intersection while the lights for the crossing lane are on. The camera-only model fails to stop in time and crashes into the incoming car from the crossing lane. However, the other two models, using LiDAR and radar, avoid the collision by braking abruptly and proceeding only when it is safe.

Input: Camera Only

Camera Only

Input: Camera + LiDAR

Camera + LiDAR

Input: Camera + Radar

Camera + Radar

Website template originally made by Phillip Isola and Richard Zhang for the colorful ECCV project; the code can be found here.