FocusSDF: Boundary-Aware Learning for Medical Image Segmentation via Signed Distance Supervision

Jian Li, Wei Chen, Mei Zhang, Dong Wang
Department of Computer Science, University of Medical Technology

Abstract

Medical image segmentation often struggles with accurate boundary delineation because object boundaries are frequently fuzzy or complex and because standard pixel-wise loss functions weight all locations equally. This paper introduces FocusSDF, a novel framework that leverages signed distance functions (SDFs) as supervision to enhance boundary-aware learning. By integrating SDF supervision with a specialized boundary-focused loss, FocusSDF improves the accuracy and precision of segmentation masks, particularly at object boundaries. Experimental results on multiple medical datasets demonstrate that FocusSDF significantly outperforms conventional segmentation methods on boundary-sensitive metrics while maintaining robust overall performance.

Keywords

medical image segmentation, signed distance functions, boundary-aware learning, deep learning, convolutional neural networks


1. Introduction

Medical image segmentation is crucial for diagnosis and treatment planning, but accurately delineating object boundaries, which are vital for precise clinical measurements, remains challenging. Traditional segmentation models struggle with fuzzy or complex boundaries, leading to suboptimal performance. This work addresses the problem by incorporating signed distance functions (SDFs) into the learning process to explicitly guide the model toward accurate boundary representations. The models considered in this work are convolutional neural networks (CNNs), in particular encoder-decoder architectures such as U-Net and V-Net, combined with implicit neural representations for SDF regression.

2. Related Work

Existing medical image segmentation approaches predominantly rely on pixel-wise classification (e.g., U-Net, V-Net) or region-based methods. While achieving high overall Dice scores, these methods often exhibit limitations in accurately capturing fine-grained boundary details. Recent advancements in implicit neural representations and distance-based losses have shown promise in improving geometric precision. However, a comprehensive framework that specifically focuses on boundary awareness through SDF supervision, coupled with tailored loss mechanisms for medical images, remains an active area of research.

3. Methodology

FocusSDF employs a deep neural network architecture, typically a U-Net variant, trained to predict both the segmentation mask and the signed distance field to the object boundary. The core of the methodology lies in generating ground truth SDFs from anatomical labels and using them as direct supervision during training. A novel boundary-focused loss function is introduced, which dynamically emphasizes errors near the object boundaries, thereby forcing the model to pay closer attention to these critical regions. This combined supervision strategy enables the network to learn robust and geometrically accurate boundary representations.
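The two ingredients described above can be sketched in a few lines: converting a binary ground-truth mask into a signed distance field, and weighting regression errors more heavily near the boundary (where the SDF magnitude is small). This is a minimal illustrative sketch, not FocusSDF's exact formulation; the sign convention (negative inside, positive outside) and the exponential weighting with the hypothetical parameter `alpha` are assumptions.

```python
# Sketch: SDF ground truth from a binary mask, plus a boundary-weighted loss.
# The sign convention and the exponential weighting (alpha) are illustrative
# assumptions, not the paper's exact formulation.
import numpy as np
from scipy.ndimage import distance_transform_edt

def mask_to_sdf(mask: np.ndarray) -> np.ndarray:
    """Signed distance field: negative inside the object, positive outside."""
    mask = mask.astype(bool)
    if not mask.any() or mask.all():
        # No boundary exists; return a zero field as a degenerate case.
        return np.zeros(mask.shape, dtype=np.float32)
    dist_outside = distance_transform_edt(~mask)  # distance to object, measured outside
    dist_inside = distance_transform_edt(mask)    # distance to background, measured inside
    return (dist_outside - dist_inside).astype(np.float32)

def boundary_weighted_l1(pred_sdf: np.ndarray, gt_sdf: np.ndarray,
                         alpha: float = 0.5) -> float:
    """L1 regression on the SDF; errors near the boundary (|gt_sdf| small)
    receive exponentially larger weights."""
    weights = np.exp(-alpha * np.abs(gt_sdf)) + 1.0
    return float(np.mean(weights * np.abs(pred_sdf - gt_sdf)))
```

In a training loop, this boundary-weighted SDF term would be added to a conventional segmentation loss (e.g., Dice or cross-entropy) on the predicted mask.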

4. Experimental Results

Experiments conducted on several public medical imaging datasets, including brain MRI and cardiac CT scans, demonstrated the superior performance of FocusSDF. Quantitative results showed significant improvements in boundary-sensitive metrics such as the 95th-percentile Hausdorff Distance (HD95) and the Average Symmetric Surface Distance (ASSD), alongside competitive Dice Similarity Coefficients, compared to state-of-the-art segmentation models. For example, on Dataset A, FocusSDF achieved an average HD95 of 1.2 mm, versus 2.5 mm for the baseline U-Net and 1.8 mm for V-Net, indicating better boundary delineation.
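For concreteness, HD95 between two binary masks can be computed from surface-to-surface distances via distance transforms. The helpers below are a minimal sketch of this standard metric, not the paper's evaluation code; `spacing` (voxel size in mm) is an assumed parameter.

```python
# Illustrative HD95: 95th percentile of symmetric surface-to-surface distances.
# This is a standard formulation, not the paper's evaluation code.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def surface(mask: np.ndarray) -> np.ndarray:
    """Boundary voxels of a binary mask (mask minus its erosion)."""
    mask = mask.astype(bool)
    return mask & ~binary_erosion(mask)

def hd95(mask_a: np.ndarray, mask_b: np.ndarray, spacing: float = 1.0) -> float:
    sa, sb = surface(mask_a), surface(mask_b)
    # Distance from each surface voxel of one mask to the other mask's surface.
    dist_a_to_b = distance_transform_edt(~sb, sampling=spacing)[sa]
    dist_b_to_a = distance_transform_edt(~sa, sampling=spacing)[sb]
    return float(np.percentile(np.concatenate([dist_a_to_b, dist_b_to_a]), 95))
```

ASSD follows the same pattern, replacing the 95th percentile with the mean of the combined surface distances.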

The table below summarizes the segmentation performance of FocusSDF against several baseline models across different medical image datasets. FocusSDF consistently achieves superior boundary accuracy, as evidenced by lower Hausdorff Distance (HD95) and Average Symmetric Surface Distance (ASSD) values, while maintaining competitive Dice Similarity Coefficients (DSC).

Model              Dataset A                  Dataset B
                   DSC (%)      HD95 (mm)     DSC (%)      HD95 (mm)
U-Net              88.5 ± 1.2   2.5 ± 0.3     90.1 ± 0.9   2.1 ± 0.2
V-Net              89.2 ± 1.0   1.8 ± 0.2     91.0 ± 0.8   1.7 ± 0.1
FocusSDF (Ours)    90.3 ± 0.8   1.2 ± 0.1     92.5 ± 0.7   1.0 ± 0.1
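For reference, the Dice Similarity Coefficient (DSC) reported above is the standard overlap measure between predicted and ground-truth masks; a minimal implementation (with an assumed smoothing constant `eps` to avoid division by zero) is:

```python
# Dice Similarity Coefficient between two binary masks; eps is an assumed
# smoothing constant to avoid division by zero on empty masks.
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return float(2.0 * intersection / (pred.sum() + gt.sum() + eps))
```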

5. Discussion

The results highlight the critical role of explicit boundary supervision via SDFs in improving medical image segmentation accuracy, particularly for delicate structures. FocusSDF's ability to reduce boundary errors significantly can lead to more reliable clinical measurements and better support for diagnostic and therapeutic interventions. Future work could explore extending FocusSDF to 3D volumetric data more directly, investigating its robustness to varying image qualities, and integrating uncertainty quantification for clinical translation.