A Julia framework for invertible neural networks
Building blocks for invertible neural networks in the Julia programming language.
InvertibleNetworks is registered and can be added like any standard Julia package with the command:
] add InvertibleNetworks
Due to its favorable memory scaling, InvertibleNetworks.jl has been particularly successful at Bayesian posterior sampling with simulation-based inference. To get started with this application, refer to a simple example (conditional sampling for MNIST inpainting), but feel free to modify this script for your application, and please reach out to us for help.
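As a rough sketch of that workflow (not the full MNIST script), the example below trains a conditional Glow network on paired samples and then draws amortized posterior samples. The `NetworkConditionalGlow` constructor arguments and the forward/backward/inverse call conventions are assumptions modeled on the package's conditional sampling examples; all sizes and hyperparameters are placeholders.

```julia
using InvertibleNetworks, Flux

# Placeholder paired training data: X are model samples, Y the corresponding
# observations (e.g., masked digits in the MNIST inpainting example).
nx, ny, n_in, n_cond, batchsize = 16, 16, 1, 1, 4
X = randn(Float32, nx, ny, n_in, batchsize)
Y = randn(Float32, nx, ny, n_cond, batchsize)

# Conditional Glow with 32 hidden channels, 2 multiscale levels and 5 flow
# steps per level; constructor arguments follow the package's conditional
# examples and may need adjusting for your version.
G = NetworkConditionalGlow(n_in, n_cond, 32, 2, 5)
opt = Flux.ADAM(1f-3)

for it in 1:100
    Zx, Zy, logdet = G.forward(X, Y)
    # Data-side gradient of the objective 0.5*||Zx||^2/batchsize - logdet;
    # backward also accumulates the parameter gradients.
    G.backward(Zx / batchsize, Zx, Zy)
    for p in get_params(G)
        Flux.update!(opt, p.data, p.grad)
    end
    clear_grad!(G)
end

# Amortized posterior sampling: transform the observation once, then invert
# Gaussian latents conditioned on it.
Zx, Zy, _ = G.forward(X, Y)
X_post = G.inverse(randn(Float32, size(Zx)), Zy)
```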
The package provides the following building blocks and networks:

- 1x1 Convolutions using Householder transformations (example; see the layer sketch after this list)
- Residual block (example)
- Invertible coupling layer from Dinh et al. (2017) (example)
- Invertible hyperbolic layer from Lensink et al. (2019) (example)
- Invertible coupling layer from Putzky and Welling (2019) (example)
- Invertible recursive coupling layer HINT from Kruse et al. (2020) (example)
- Activation normalization (Kingma and Dhariwal, 2018) (example)
- Various activation functions (Sigmoid, ReLU, leaky ReLU, GaLU)
- Objective and misfit functions (mean squared error, log-likelihood)
- Dimensionality manipulation: squeeze/unsqueeze (column, patch, checkerboard), split/cat (see the squeeze sketch after this list)
- Squeeze/unsqueeze using the wavelet transform
- Invertible recurrent inference machines (Putzky and Welling, 2019) (generic example)
- Generative models with maximum likelihood via the change of variable formula (example; see the training sketch after this list)
- Glow: Generative flow with invertible 1x1 convolutions (Kingma and Dhariwal, 2018) (generic example, source)
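For a quick look at the layer-level API, here is a minimal sketch that builds the Householder-based 1x1 convolution and checks the round trip; the input sizes are arbitrary placeholders.

```julia
using InvertibleNetworks, LinearAlgebra

# Random input: width x height x channels x batchsize
X = randn(Float32, 32, 32, 8, 2)

# 1x1 convolution parameterized by Householder reflections (8 channels)
C = Conv1x1(8)

# Round trip: the inverse pass recovers the input up to Float32 round-off
Y = C.forward(X)
X_ = C.inverse(Y)
err = norm(X - X_) / norm(X)   # ≈ 0
```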
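The dimensionality utilities follow the same invertible pattern. The sketch below assumes `squeeze`/`unsqueeze` functions with a `pattern` keyword, as in the package's dimensionality utilities, with placeholder sizes.

```julia
using InvertibleNetworks

X = randn(Float32, 64, 64, 2, 4)

# Checkerboard squeeze halves each spatial dimension and quadruples the
# channels: 64 x 64 x 2 x 4  ->  32 x 32 x 8 x 4
Y = squeeze(X; pattern="checkerboard")
X_ = unsqueeze(Y; pattern="checkerboard")   # exact inverse reshuffle
```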
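For the maximum-likelihood items, a minimal training sketch under the change-of-variable formula: the per-batch negative log-likelihood is 0.5*||Z||^2/batchsize - logdet for a standard normal latent. Network sizes, the learning rate, and the data are placeholders, and the parameter update assumes the classic Flux optimiser interface (`Flux.ADAM`, `Flux.update!`) together with the package's `get_params`/`clear_grad!` helpers.

```julia
using InvertibleNetworks, Flux, LinearAlgebra

# Toy data: 16 x 16 images with 4 channels
nx, ny, n_in, batchsize = 16, 16, 4, 8
X = randn(Float32, nx, ny, n_in, batchsize)

# Glow network: 32 hidden channels, 2 multiscale levels, 5 flow steps per level
G = NetworkGlow(n_in, 32, 2, 5)
opt = Flux.ADAM(1f-3)

for it in 1:100
    Z, logdet = G.forward(X)
    # Negative log-likelihood of a standard normal latent, per sample
    nll = 0.5f0 * norm(Z)^2 / batchsize - logdet
    it % 20 == 0 && println("iteration $it, nll = $nll")
    # Data-side gradient of the objective; backward fills parameter gradients
    G.backward(Z / batchsize, Z)
    for p in get_params(G)
        Flux.update!(opt, p.data, p.grad)
    end
    clear_grad!(G)
end
```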
GPU support is provided via Flux/CuArray. To run on the GPU, move both the input and the network layer to the GPU via |> gpu:
using InvertibleNetworks, Flux

# Input dimensions
nx = 64
ny = 64
k = 10
batchsize = 4

# Input image of size nx x ny x k x batchsize, moved to the GPU
X = randn(Float32, nx, ny, k, batchsize) |> gpu

# Activation normalization with log-determinant tracking
AN = ActNorm(k; logdet=true) |> gpu

# Test invertibility: the inverse pass recovers the input
Y_, logdet = AN.forward(X)
X_ = AN.inverse(Y_)   # X_ ≈ X up to numerical precision
If you use InvertibleNetworks.jl in your research, we would be grateful if you cite us with the following BibTeX entry:
@article{Orozco2024,
  doi       = {10.21105/joss.06554},
  url       = {https://doi.org/10.21105/joss.06554},
  year      = {2024},
  publisher = {The Open Journal},
  volume    = {9},
  number    = {99},
  pages     = {6554},
  author    = {Rafael Orozco and Philipp Witte and Mathias Louboutin and Ali Siahkoohi and Gabrio Rizzuti and Bas Peters and Felix J. Herrmann},
  title     = {InvertibleNetworks.jl: A Julia package for scalable normalizing flows},
  journal   = {Journal of Open Source Software}
}
The following publications use InvertibleNetworks.jl:
"Reliable amortized variational inference with physics-based latent distribution correction"
"Learning by example: fast reliability-aware seismic imaging with normalizing flows"
"Preconditioned training of normalizing flows for variational inference in inverse problems"
"Generalized Minkowski sets for the regularization of inverse problems"
We welcome contributions and bug reports! Please see CONTRIBUTING.md for guidance.
InvertibleNetworks.jl development subscribes to the Julia Community Standards.
- Rafael Orozco, Georgia Institute of Technology [[email protected]]
- Philipp Witte, Georgia Institute of Technology (now Microsoft)
- Gabrio Rizzuti, Utrecht University
- Mathias Louboutin, Georgia Institute of Technology
- Ali Siahkoohi, Georgia Institute of Technology
This package uses functions from NNlib.jl, Flux.jl, and Wavelets.jl.