Abstract
The realization of robust and generalizable autonomous driving (AD) systems necessitates intelligent, human-like, and demonstrably safe decision-making behavior. Traditional methods, which face limitations in complex and novel scenarios, have been supplanted by modern end-to-end systems trained via imitation of vast amounts of expert data. Recent research, however, highlights the surprising effectiveness of (self-play) reinforcement learning (RL), either in isolation or in synergy with imitation learning (IL). This approach enables the discovery of diverse skills and robust performance in out-of-distribution settings without relying purely on data and imitation of an expert, as conventional end-to-end systems do. Furthermore, generative world models and Vision-Language-Action (VLA) models are revolutionizing the creation of closed-loop simulation environments by enabling controllable and realistic scenario generation. These advancements, coupled with scalable, GPU-accelerated multi-agent training, facilitate efficient sim-to-real transfer, paving the way for safer and more capable autonomous vehicles. This workshop brings together key researchers to highlight and discuss these critical developments, with the goal of challenging assumptions and advancing understanding toward the mission of achieving robust autonomy.
