Yet another (very simple) approach for adversarial training.
APACHE-2.0 License
GANs (generative adversarial networks) need no introduction.
MMD is maximum mean discrepancy (see e.g. this presentation by A. Smola); it is based on a very simple idea for detecting the difference between two distributions.
A decade ago MMD was very popular; a notable property of MMD (compared to other two-sample tests) is that it works well with kernels.
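The kernel form of MMD can be sketched in a few lines. This is a minimal illustration, not the repo's code: the function names, the RBF kernel, the bandwidth `sigma`, and the biased (V-statistic) estimator are all my choices here.

```python
import torch

def rbf_kernel(x, y, sigma=1.0):
    # pairwise RBF kernel values between rows of x and rows of y
    d2 = torch.cdist(x, y) ** 2
    return torch.exp(-d2 / (2 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    # biased estimate of squared MMD: E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]
    return (rbf_kernel(x, x, sigma).mean()
            + rbf_kernel(y, y, sigma).mean()
            - 2 * rbf_kernel(x, y, sigma).mean())

torch.manual_seed(0)
# samples from the same distribution vs. a mean-shifted one
same = mmd2(torch.randn(200, 2), torch.randn(200, 2))
shifted = mmd2(torch.randn(200, 2), torch.randn(200, 2) + 2.0)
```

The estimate is close to zero when the samples come from the same distribution and grows as the distributions move apart, which is exactly what makes it usable as a training signal.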
MMD has already been combined with GANs, see e.g.
In my experiments I considered something closer to traditional GANs, because the "discriminator" tries to find a mapping that maximizes MMD.
The implementation uses PyTorch and builds upon DCGAN (the official implementation from the pytorch repo).
The discriminator also contains BatchNormalization at the last layer to fix the scale of its output. That is quite critical: otherwise the discriminator could simply scale up its output to get an arbitrarily large MMD.
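A toy sketch of that setup, under my own assumptions (a small fully connected mapping `phi` rather than the DCGAN architecture, an RBF kernel with `sigma=1.0`): the final `BatchNorm1d` pins the output scale, the discriminator maximizes MMD between the mapped real and fake batches, and the generator minimizes the same quantity.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# hypothetical "discriminator" mapping; the BatchNorm1d at the end keeps the
# output at unit scale, so MMD cannot be inflated by simply scaling outputs up
phi = nn.Sequential(
    nn.Linear(2, 64), nn.ReLU(),
    nn.Linear(64, 16),
    nn.BatchNorm1d(16, affine=False),
)

def mmd2(x, y, sigma=1.0):
    # biased squared-MMD estimate with an RBF kernel
    k = lambda a, b: torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

real = torch.randn(128, 2)          # stand-ins for real samples
fake = torch.randn(128, 2) + 1.0    # stand-ins for generator output

loss_d = -mmd2(phi(real), phi(fake))  # discriminator: maximize MMD
loss_g = mmd2(phi(real), phi(fake))   # generator: minimize MMD
```

In a real training loop the two losses would be backpropagated into the discriminator's and generator's parameters in alternation, as in an ordinary GAN.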
It is probably among the shortest GAN implementations.
I haven't done much tuning, but here is what was found: