Multi-modal unsupervised domain adaptation for semantic image segmentation
Journal article, Pattern Recognition, 2023




We propose a novel multi-modal Unsupervised Domain Adaptation (UDA) method for semantic segmentation. Recently, depth has proven to be a relevant property for providing geometric cues that enhance the RGB representation. However, existing UDA methods either process RGB images alone or cultivate depth-awareness through an auxiliary depth estimation task. We argue that geometric cues crucial to semantic segmentation, such as local shape and relative position, are difficult to recover from an auxiliary depth estimation task driven by color (RGB) information alone. In this paper, we propose a novel multi-modal UDA method named MMADT, which takes both RGB and depth images as input. In particular, we design a Depth Fusion Block (DFB) to recalibrate depth information and leverage Depth Adversarial Training (DAT) to bridge the depth discrepancy between the source and target domains. In addition, we propose a self-supervised multi-modal depth estimation assistant network named Geo-Assistant (GA) to align the feature spaces of RGB and depth and to shape the sensitivity of MMADT to depth information. We observe significant performance improvements on multiple synthetic-to-real adaptation benchmarks, i.e., SYNTHIA-to-Cityscapes, GTA5-to-Cityscapes, and SELMA-to-Cityscapes. Additionally, our multi-modal UDA scheme is easy to port to other UDA methods, with a consistent performance boost.
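The abstract describes the Depth Fusion Block as recalibrating depth features before fusing them with the RGB stream. The paper's exact design is not given here, so the following is only a minimal NumPy sketch under one plausible assumption: squeeze-and-excitation-style channel gating of depth features followed by additive fusion. All function and variable names (`depth_fusion_block`, `w1`, `w2`) are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def depth_fusion_block(rgb_feat, depth_feat, w1, w2):
    """Hypothetical sketch: recalibrate depth features with per-channel
    gates (squeeze-and-excitation style, an assumption here), then fuse
    them into the RGB stream by element-wise addition."""
    # Squeeze: global average pool over spatial dims -> one value per channel
    squeezed = depth_feat.mean(axis=(1, 2))
    # Excite: a small bottleneck MLP produces per-channel gates in (0, 1)
    gates = sigmoid(w2 @ np.maximum(w1 @ squeezed, 0.0))
    # Recalibrate depth channel-wise, then fuse with the RGB features
    recalibrated = depth_feat * gates[:, None, None]
    return rgb_feat + recalibrated

# Toy example: C=4 channels, 8x8 feature maps
rng = np.random.default_rng(0)
C, H, W = 4, 8, 8
rgb = rng.standard_normal((C, H, W))
depth = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((2, C)) * 0.1   # bottleneck: C -> 2 hidden units
w2 = rng.standard_normal((C, 2)) * 0.1   # expansion: 2 -> C gates
fused = depth_fusion_block(rgb, depth, w1, w2)
print(fused.shape)
```

The gating step lets the network suppress unreliable depth channels (e.g., noisy regions) before fusion; the adversarial and self-supervised components (DAT, GA) operate on top of such fused features and are not sketched here.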

Dates and versions

hal-03948429 , version 1 (20-01-2023)



Sijie Hu, Fabien Bonardi, Samia Bouchafa, Désiré Sidibé. Multi-modal unsupervised domain adaptation for semantic image segmentation. Pattern Recognition, 2023, 137, pp.109299. ⟨10.1016/j.patcog.2022.109299⟩. ⟨hal-03948429⟩


