Cross-domain deep transfer learning for branching structure segmentation
Segmentation of thin, branching structures in volumetric imaging is a challenging computer vision task due to low contrast, strong class imbalance, and large variability in scale and topology. This work investigates a cross-domain deep transfer learning strategy that exploits the morphological similarity between vascular-like branching patterns across imaging modalities. Models are first pre-trained on the data-rich FIVES retinal vessel dataset and then fine-tuned on a subset of the NSCLC-Radiogenomics chest CT dataset containing annotations of branching structures. We evaluate four U-Net-based architectures (U-Net, Attention U-Net, R2 U-Net, and Dense U-Net) and compare them with DeepLabV3 models using ResNet50 and ResNet101 backbones. A unified training pipeline with multi-stage intensity and contrast normalization is employed, along with a 10-fold stratified cross-validation protocol. Performance is assessed using accuracy, precision, Dice (F1 score), and area under the ROC curve (AUC). Cross-domain transfer learning yields a substantial improvement over training from scratch: Dice scores increase from near-zero values to above 0.48 for the best-performing models. Attention U-Net achieves the highest Dice score of 0.4814, while DeepLabV3 (ResNet50) attains the highest AUC of 0.9621. Dense U-Net also provides competitive results, whereas R2 U-Net benefits less from the proposed transfer scheme. These results demonstrate that leveraging cross-domain morphological priors is an effective way to enhance segmentation of branching structures in data-scarce CT scenarios. The proposed framework provides a strong, reproducible baseline for future research on transfer learning and fine-structure segmentation in volumetric images.
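For reference, the Dice (F1) score reported above measures the overlap between a predicted and a ground-truth binary mask. The abstract does not specify an implementation, so the following is a minimal, framework-agnostic sketch using NumPy; the function name, the toy masks, and the smoothing constant `eps` are illustrative assumptions, not the paper's code.

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient (equivalent to F1 score) for binary segmentation masks.

    dice = 2 * |pred ∩ target| / (|pred| + |target|)

    A small eps keeps the score defined when both masks are empty.
    Illustrative sketch; not the paper's implementation.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# Toy 4x4 masks: 2 overlapping foreground pixels out of 3 predicted and 2 true.
pred = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
gt = np.array([[1, 1, 0, 0],
               [0, 0, 0, 0],
               [0, 0, 0, 0],
               [0, 0, 0, 0]])
print(round(dice_score(pred, gt), 3))  # 2*2 / (3+2) = 0.8
```

Dice is the metric of choice here because, unlike plain accuracy, it is insensitive to the large background class that dominates thin-structure segmentation, which is why scores "near zero" versus "above 0.48" are meaningful in this context.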