Visualizing Adapted Knowledge in Domain Transfer

Published in CVPR, 2021

Yunzhong Hou and Liang Zheng

A source model trained on source data and a target model learned through unsupervised domain adaptation (UDA) usually encode different knowledge. To understand the adaptation process, we portray their knowledge difference with image translation. Specifically, we feed a translated image and its original version to the two models respectively, forming two branches. By updating the translated image, we force the two branches to produce similar outputs. When this requirement is met, the difference between the two images compensates for, and hence represents, the knowledge difference between the two models. To enforce similar outputs from the two branches and depict the adapted knowledge, we propose a source-free image translation method that generates source-style images using only target images and the two models. We visualize the adapted knowledge on several datasets with different UDA methods and find that the generated images successfully capture the style difference between the two domains. As an application, we show that the generated images enable further tuning of the target model without accessing source data. Code is available.
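
As a minimal sketch of the two-branch idea described above (not the paper's full training recipe), the PyTorch snippet below directly optimizes the pixels of a translated image so that the source model's output on it matches the target model's output on the original target image. The handles `source_model` and `target_model`, the KL-divergence loss, and the Adam optimizer are all illustrative assumptions rather than the paper's exact choices.

```python
import torch
import torch.nn.functional as F

def translate_image(source_model, target_model, target_image,
                    steps=200, lr=0.01):
    """Optimize a translated image whose source-model output matches
    the target-model output on the original target image.

    Assumptions (illustrative, not the paper's exact setup):
    - `source_model` and `target_model` are frozen classifiers
      returning logits;
    - `target_image` is a (1, C, H, W) tensor with values in [0, 1].
    """
    source_model.eval()
    target_model.eval()

    # Target-branch output is fixed; no gradients needed here.
    with torch.no_grad():
        target_prob = F.softmax(target_model(target_image), dim=1)

    # Initialize the translated image from the target image and
    # update its pixels directly.
    translated = target_image.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([translated], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        log_prob = F.log_softmax(source_model(translated), dim=1)
        # Force similar outputs from the two branches.
        loss = F.kl_div(log_prob, target_prob, reduction="batchmean")
        loss.backward()
        optimizer.step()
        # Keep pixels in a valid range (assuming [0, 1] inputs).
        translated.data.clamp_(0, 1)

    return translated.detach()
```

The paper's source-free image translation additionally drives the result toward a source-style appearance; this pixel-level sketch only illustrates the output-matching constraint between the two branches.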

paper

code

Zhihu - Visualizing UDA (知乎-UDA可视化)

Zhihu - Style transfer without style images (知乎-无需风格图像的风格迁移)


Citation:

@inproceedings{hou2021visualizing,
  title={Visualizing Adapted Knowledge in Domain Transfer},
  author={Hou, Yunzhong and Zheng, Liang},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2021}
}