# GANSpace: Discovering Interpretable GAN Controls

Erik Härkönen (1, 2), Aaron Hertzmann (2), Jaakko Lehtinen (1, 3), Sylvain Paris (2)
(1) Aalto University, (2) Adobe Research, (3) NVIDIA

NeurIPS 2020. Paper: https://arxiv.org/abs/2004.02546
See the setup instructions.
This repository includes versions of BigGAN, StyleGAN, and StyleGAN2 modified to support per-layer latent vectors.
## Interactive model exploration
```bash
# Explore BigGAN-deep husky
python interactive.py --model=BigGAN-512 --class=husky --layer=generator.gen_z -n=1_000_000

# Explore StyleGAN2 ffhq in W space
python interactive.py --model=StyleGAN2 --class=ffhq --layer=style --use_w -n=1_000_000 -b=10_000

# Explore StyleGAN2 cars in Z space
python interactive.py --model=StyleGAN2 --class=car --layer=style -n=1_000_000 -b=10_000

# Apply previously saved edits interactively
python interactive.py --model=StyleGAN2 --class=ffhq --layer=style --use_w --inputs=out/directions
```
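Conceptually, applying a saved edit just shifts a latent code along a principal direction by some number of standard deviations. A minimal, hypothetical numpy sketch (variable names and dimensions are illustrative, not the repository's API):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration (not the repository's code): a GANSpace-style
# edit moves a latent code along a unit-norm principal direction v.
z = rng.standard_normal(512)   # latent code, e.g. a StyleGAN w vector
v = rng.standard_normal(512)
v /= np.linalg.norm(v)         # principal direction found by PCA

sigma = 2.0                    # edit strength, in standard deviations
z_edited = z + sigma * v

# The edit moves the code only along v; orthogonal directions are untouched.
assert np.isclose((z_edited - z) @ v, sigma)
```

The edited code is then fed through the generator as usual; the interactive UI simply exposes `sigma` as a slider per component.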
## Visualize principal components
```bash
# Visualize StyleGAN2 ffhq W principal components
python visualize.py --model=StyleGAN2 --class=ffhq --use_w --layer=style -b=10_000

# Create videos of StyleGAN wikiart components (saved to ./out)
python visualize.py --model=StyleGAN --class=wikiart --use_w --layer=g_mapping -b=10_000 --batch --video
```
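A component video is essentially a sweep of one latent code along a single principal direction over a symmetric range of standard deviations. A hypothetical sketch of that sweep (not the repository's code; names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical sketch: sweep one principal direction from -sigma to +sigma
# to produce the latent codes behind each video frame.
z = rng.standard_normal(512)
v = rng.standard_normal(512)
v /= np.linalg.norm(v)

sigma, n_frames = 2.0, 5
steps = np.linspace(-sigma, sigma, n_frames)
frame_latents = np.stack([z + s * v for s in steps])

assert frame_latents.shape == (n_frames, 512)
# The middle frame is the unedited code.
assert np.allclose(frame_latents[n_frames // 2], z)
```

Each latent in `frame_latents` would be decoded by the generator to yield one frame of the visualization.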
## Options

Command-line parameters:
```
--model    one of [ProGAN, BigGAN-512, BigGAN-256, BigGAN-128, StyleGAN, StyleGAN2]
--class    class name; leave empty to list options
--layer    layer at which to perform PCA; leave empty to list options
--use_w    treat W as the main latent space (StyleGAN / StyleGAN2)
--inputs   load previously exported edits from directory
--sigma    number of stdevs to use in visualize.py
-n         number of PCA samples
-b         override automatic minibatch size detection
-c         number of components to keep
```
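The `-n` and `-c` flags map directly onto the PCA step: sample `n` latent vectors, center them, and keep the top `c` principal directions. A small, hypothetical numpy sketch of that computation (dimensions are scaled-down stand-ins, not the repository's defaults):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sketch of the PCA step: n samples (cf. -n), c components (cf. -c).
n, dim, c = 10_000, 64, 8
Z = rng.standard_normal((n, dim))      # sampled latent vectors

Z_centered = Z - Z.mean(axis=0)
# Rows of Vt are principal directions, sorted by explained variance.
_, S, Vt = np.linalg.svd(Z_centered, full_matrices=False)
components = Vt[:c]                    # the c directions used as edit controls

assert components.shape == (c, dim)
# Principal directions are orthonormal.
assert np.allclose(components @ components.T, np.eye(c), atol=1e-8)
```

Each row of `components` is one slider in the interactive tool; `--sigma` controls how far along a row the visualization sweeps.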
## Reproducibility

All figures presented in the main paper can be recreated using the included Jupyter notebooks:
- `figure_teaser.ipynb`
- `figure_pca_illustration.ipynb`
- `figure_pca_cleanup.ipynb`
- `figure_style_content_sep.ipynb`
- `figure_supervised_comp.ipynb`
- `figure_biggan_style_resampling.ipynb`
- `figure_edit_zoo.ipynb`
## Adding a new model

To add a new model, implement a wrapper for it in `models/wrappers.py` using the `BaseModel` interface, then register the wrapper in `get_model()` in `models/wrappers.py`.
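As a rough sketch of the shape such a wrapper takes (all names and signatures below are hypothetical stand-ins; the actual interface is defined by `BaseModel` in `models/wrappers.py`):

```python
# Hypothetical sketch only: the real interface lives in models/wrappers.py.
# MyGANWrapper, latent_dim, sample_latent, and forward are illustrative
# names, not GANSpace's actual method signatures.
import numpy as np

class MyGANWrapper:
    def __init__(self, latent_dim=512):
        self.latent_dim = latent_dim

    def sample_latent(self, n, seed=None):
        """Draw n latent codes from the model's prior (standard normal here)."""
        rng = np.random.default_rng(seed)
        return rng.standard_normal((n, self.latent_dim))

    def forward(self, z):
        """Map latent codes to images; a real wrapper calls the generator here."""
        raise NotImplementedError

model = MyGANWrapper()
z = model.sample_latent(4, seed=0)
assert z.shape == (4, 512)
```

The key requirement is that the wrapper exposes latent sampling and generation in a uniform way, so PCA and the interactive tools can treat every model identically.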
## Importing StyleGAN checkpoints from TensorFlow

It is possible to import trained StyleGAN and StyleGAN2 weights from TensorFlow into GANSpace. Importing requires TensorFlow 1.x:

```bash
conda install tensorflow-gpu=1.*
```

**StyleGAN:** modify the methods `__init__()` and `load_model()` in `models/wrappers.py` under `class StyleGAN`.

**StyleGAN2:** place the converted checkpoint under `checkpoints/stylegan2/<dataset>_<resolution>.pt`, then modify the methods `__init__()` and `download_checkpoint()` in `models/wrappers.py` under `class StyleGAN2`.

## Acknowledgements

We would like to thank:
## Citation

```bibtex
@inproceedings{härkönen2020ganspace,
  title     = {GANSpace: Discovering Interpretable GAN Controls},
  author    = {Erik Härkönen and Aaron Hertzmann and Jaakko Lehtinen and Sylvain Paris},
  booktitle = {Proc. NeurIPS},
  year      = {2020}
}
```
## License

The code of this repository is released under the Apache 2.0 license. The directory `netdissect` is a derivative of the GAN Dissection project and is provided under the MIT license. The directories `models/biggan` and `models/stylegan2` are provided under the MIT license.