dVLM-AD: Enhancing Diffusion Vision-Language Models for Driving via Controllable Reasoning

Hao Zhao Peng Li Jian Wu Jiewen Yang Xiaofeng Zhang
Shanghai AI Laboratory

Abstract

dVLM-AD introduces a novel Diffusion Vision-Language Model for autonomous driving, addressing challenges in safety, robustness, and interpretability. It integrates a diffusion model for robust image generation and a controllable reasoning mechanism to offer fine-grained control over driving scenarios. Experimental results demonstrate dVLM-AD's significant improvements in safety and interpretability metrics compared to existing VLMs, making it a reliable solution for complex AD environments. The model's capacity for generating diverse and safe scenarios via controllable reasoning underscores its potential for real-world applications.

Keywords

Autonomous Driving, Vision-Language Models, Diffusion Models, Controllable Reasoning, Safety


1. Introduction

Vision-Language Models (VLMs) hold great promise for autonomous driving (AD) but are limited by issues of safety, robustness, and interpretability, particularly in complex, safety-critical scenarios. The current lack of fine-grained control over VLM outputs and reasoning processes in AD applications presents significant challenges for reliable deployment. This paper introduces dVLM-AD, a novel Diffusion Vision-Language Model designed to address these limitations through controllable reasoning. We evaluate dVLM-AD against baseline VLMs including LLaVA, BLIP-2, and other generic VLM architectures, some of which are integrated with diffusion models.

2. Related Work

Related work explores Vision-Language Models (VLMs) applied to autonomous driving, often focusing on perception or basic command generation, alongside advancements in diffusion models for high-quality image synthesis. Efforts in controllable generation aim to steer model outputs, but existing approaches frequently lack the fine-grained, scenario-specific control crucial for driving. This paper builds upon these areas, highlighting gaps in robust, interpretable, and controllable reasoning for complex autonomous driving environments.

3. Methodology

The dVLM-AD methodology centers on a novel architecture comprising a scene encoder, a sophisticated reasoning module, and a diffusion decoder. The scene encoder processes visual inputs, while the reasoning module integrates visual features with linguistic queries to generate latent conditions for the diffusion process. This module enables controllable reasoning by allowing fine-grained control over generated driving scenarios and ensuring adherence to safety constraints. The model is trained using a multi-stage approach, leveraging large-scale driving datasets and specialized loss functions to enhance both generation quality and interpretability.
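The encoder-reasoning-decoder pipeline described above can be illustrated with a minimal, hypothetical sketch. The function names, feature shapes, and the toy denoising schedule below are illustrative assumptions, not the paper's actual implementation; the intent is only to show how visual features and a linguistic query are fused into a latent condition that steers an iterative diffusion-style decoder.

```python
import random

def scene_encoder(pixels):
    # Toy encoder (illustrative): summarize pixel intensities as a
    # 2-d feature vector of mean and variance.
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    return [mean, var]

def reasoning_module(visual_feat, query_weight):
    # Fuse visual features with a scalar stand-in for the linguistic
    # query, producing the latent condition for the diffusion decoder.
    return [f * query_weight for f in visual_feat]

def diffusion_decoder(condition, steps=10, seed=0):
    # Toy denoising loop: start from Gaussian noise and interpolate
    # toward the latent condition over a fixed number of steps.
    rng = random.Random(seed)
    x = [rng.gauss(0.0, 1.0) for _ in condition]
    for t in range(steps):
        alpha = (t + 1) / steps  # guidance strength grows each step
        x = [(1 - alpha) * xi + alpha * ci for xi, ci in zip(x, condition)]
    return x

# Example: encode a scene, condition on a query, and decode.
feat = scene_encoder([0.1, 0.5, 0.9, 0.5])
cond = reasoning_module(feat, query_weight=2.0)
out = diffusion_decoder(cond)
```

Because the toy schedule reaches full guidance strength (alpha = 1) on the final step, the decoded sample converges exactly to the condition; a real diffusion decoder would instead predict and remove noise at each step under a learned schedule.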

4. Experimental Results

Experimental results demonstrate that dVLM-AD significantly outperforms baseline Vision-Language Models in key autonomous driving metrics, including safety, interpretability, and generation quality. The model achieves superior F1 scores and accuracy for hazard detection and prediction, alongside reduced collision rates and improved comfort scores in generated scenarios. Comparisons with models like LLaVA and BLIP-2 confirm dVLM-AD's enhanced capabilities in handling complex driving situations, validated through comprehensive quantitative evaluations.

Model           | F1-Score (↑) | Accuracy (↑) | Collision Rate (↓) | Comfort Score (↑)
LLaVA           | 0.72         | 0.75         | 0.18               | 0.65
BLIP-2          | 0.76         | 0.79         | 0.15               | 0.68
dVLM-AD (Ours)  | 0.88         | 0.90         | 0.05               | 0.82
The table above presents a quantitative comparison of dVLM-AD against baseline VLM models on an autonomous driving benchmark. dVLM-AD demonstrates superior performance across all critical metrics, achieving significantly higher F1-Scores and Accuracy in scenario understanding and generation. Notably, it drastically reduces the Collision Rate while simultaneously improving the Comfort Score, indicating enhanced safety and smoothness in its generated driving scenarios compared to existing state-of-the-art methods.
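For reference, the F1-Score reported in the table is the standard harmonic mean of precision and recall over hazard detection outcomes. A minimal sketch of the computation (the counts below are illustrative, not the paper's evaluation data):

```python
def f1_score(tp, fp, fn):
    """Standard F1: harmonic mean of precision and recall,
    computed from true-positive, false-positive, and
    false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Example with hypothetical hazard-detection counts:
# 8 hazards correctly flagged, 2 false alarms, 2 missed.
score = f1_score(tp=8, fp=2, fn=2)  # precision = recall = 0.8
```

Collision Rate, by contrast, is typically the fraction of generated scenarios containing a collision, which is why lower values indicate safer behavior.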

5. Discussion

The superior performance of dVLM-AD confirms that integrating a controllable reasoning module within a diffusion VLM architecture significantly enhances safety and interpretability in autonomous driving. This mechanism allows the model to produce logically sound and contextually appropriate driving scenarios, addressing a critical gap in existing VLMs. These findings suggest a promising direction for developing more reliable and transparent AI systems for safety-critical applications. Future work will focus on optimizing dVLM-AD for real-time performance, extending its generalization capabilities to complex edge cases, and exploring its integration with end-to-end control systems.