Implementation of a VAE with StyleGAN Architecture Achieving State-of-the-Art Reconstruction
MIT License
VAEs are among the state-of-the-art generative models, but have recently lost ground to GANs, the most prominent recent example being StyleGAN by Karras et al. Unlike StyleGAN, a VAE can encode as well as decode, an advantage that is useful in many downstream tasks. In this work we combine the style-based architecture with a VAE and achieve state-of-the-art reconstruction and generation. Following DFC-VAE (Hou et al.), we use a perceptual loss, and we compare our results against that work.
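To illustrate the encode/decode advantage mentioned above, here is a minimal NumPy sketch of a VAE's two paths; the encoder, decoder, and shapes are toy stand-ins, not this repository's model:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    # z = mu + sigma * eps: sampling written so gradients can flow through mu and sigma
    eps = rng.standard_normal(np.shape(mu))
    return mu + np.exp(0.5 * log_var) * eps

# Toy stand-ins for the encoder and decoder (the real ones are deep style-based networks)
def encode(x):
    return x.mean(axis=-1, keepdims=True), np.zeros((x.shape[0], 1))

def decode(z):
    return np.repeat(z, 4, axis=-1)

x = rng.standard_normal((2, 4))
mu, log_var = encode(x)
x_rec = decode(reparameterize(mu, log_var))   # reconstruction: encode, sample, decode
x_gen = decode(rng.standard_normal((2, 1)))   # generation: decode a prior sample, as a GAN does
```

The second path is all a pure GAN offers; the first path (reconstruction of a given input) is what the VAE side adds.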
The loss comprises two components: the standard VAE KL-divergence term and a perceptual reconstruction loss (following Hou et al.):
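A minimal NumPy sketch of how these two terms combine; the `beta` weight and the feature-list interface are assumptions for illustration, not this repository's exact API:

```python
import numpy as np

def kl_divergence(mu, log_var):
    # Closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior
    return -0.5 * np.sum(1.0 + log_var - mu ** 2 - np.exp(log_var))

def perceptual_loss(feats_x, feats_rec):
    # Mean squared error between feature maps of a fixed perceptual network
    # (e.g. VGG features, as in DFC-VAE), summed over the chosen layers
    return sum(np.mean((a - b) ** 2) for a, b in zip(feats_x, feats_rec))

def total_loss(mu, log_var, feats_x, feats_rec, beta=1.0):
    # beta weights the KL term against the reconstruction term (an assumption here)
    return perceptual_loss(feats_x, feats_rec) + beta * kl_divergence(mu, log_var)
```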
The main components of the code:
- `VaeLayers`
- `PerceptualModel`
- `StyleVae`
- `StyleVaeTrainer`
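One hypothetical way the components listed above could fit together; the stand-in method bodies are placeholders and the actual interfaces in this repository may differ:

```python
class VaeLayers:
    """Building blocks (style-based convolutions, dense mappings) used by the model."""

class PerceptualModel:
    """Fixed feature extractor providing features for the perceptual loss."""
    def features(self, images):
        return [images]  # stand-in: a real model would return deep feature maps

class StyleVae:
    """Encoder plus style-based decoder."""
    def __init__(self, perceptual):
        self.perceptual = perceptual
    def reconstruct(self, images):
        return images  # stand-in for encode -> reparameterize -> decode

class StyleVaeTrainer:
    """Drives the optimization loop."""
    def __init__(self, model):
        self.model = model
    def train_step(self, batch):
        recon = self.model.reconstruct(batch)
        fx = self.model.perceptual.features(batch)
        fr = self.model.perceptual.features(recon)
        # perceptual reconstruction term; the KL term is omitted in this stub
        return sum(float(((a - b) ** 2).mean()) for a, b in zip(fx, fr))
```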
$ python train.py --load <True|False>
$ python test.py
Checkpoints and training outputs are saved to `./train_output`; pass `--load True` to restore.

Dataset: training images are read from `/data/svae/*.png`.
Test results are shown in the Visuals section below.
We trained the provided model on the FFHQ dataset to produce 256x256 results:
Available soon...