In this work, we leverage the power of the recently introduced Contrastive Language-Image Pre-training (CLIP) model to develop a text-based interface for StyleGAN image manipulation that does not require the manual effort of discovering editing directions by hand.
2021: Or Patashnik, Zongze Wu, E. Shechtman, D. Cohen-Or, D. Lischinski
Methods: Adaptive Instance Normalization • Convolution • Dense Connections • Feedforward Network • Leaky ReLU • R1 Regularization • StyleGAN
https://arxiv.org/abs/2103.17249v1