Informative Multimodal Unsupervised Image-to-Image Translation - Université d'Évry
Conference Papers, Year: 2021

Informative Multimodal Unsupervised Image-to-Image Translation

Abstract

We propose a new method for multimodal image translation, called InfoMUNIT, which extends the state-of-the-art method MUNIT. Our method allows controlling the style of the generated images while improving their quality and diversity. It learns to maximize the mutual information between a subset of the style code and the distribution of the output images. Experiments show that our model can not only translate one image from the source domain to multiple images in the target domain but also explore and manipulate features of the outputs without annotation. Furthermore, it achieves superior diversity and image quality competitive with state-of-the-art methods on multiple image translation tasks.
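The mutual-information objective described above is typically optimized via an InfoGAN-style variational lower bound: an auxiliary network Q predicts the informative style sub-code from the generated image, and the generator is rewarded when that prediction is easy. A minimal NumPy sketch of that bound, assuming a Gaussian Q; all names, shapes, and the toy check are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def variational_mi_lower_bound(style_sub_code, q_mean, q_log_var):
    """Gaussian variational lower bound on I(style sub-code; output),
    up to the constant entropy H(style sub-code).

    q_mean, q_log_var: the auxiliary network Q's predicted Gaussian
    over the style sub-code, computed from the generated image
    (hypothetical quantities for illustration).
    """
    # Negative log-likelihood of the true sub-code under Q's Gaussian.
    nll = 0.5 * (np.log(2 * np.pi) + q_log_var
                 + (style_sub_code - q_mean) ** 2 / np.exp(q_log_var))
    # Lower bound on MI (dropping the constant entropy term).
    return -nll.sum(axis=1).mean()

# Toy check: a Q that recovers the sub-code exactly yields a higher
# bound than a Q that ignores it entirely.
s = rng.normal(size=(64, 4))  # batch of 4-dimensional style sub-codes
good = variational_mi_lower_bound(s, s, np.zeros_like(s))
bad = variational_mi_lower_bound(s, np.zeros_like(s), np.zeros_like(s))
assert good > bad
```

In training, the generator and Q would both be updated to increase this bound, which pushes the chosen style dimensions to leave a recoverable, hence controllable, trace in the output image.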

Dates and versions

hal-04432108 , version 1 (01-02-2024)

Cite

Tien Tai Doan, Guillaume Ghyselinck, Blaise Hanczar. Informative Multimodal Unsupervised Image-to-Image Translation. 9th International Conference of Security, Privacy and Trust Management (SPTM 2021), Apr 2021, Copenhagen, Denmark. pp.37--51, ⟨10.5121/csit.2021.110503⟩. ⟨hal-04432108⟩