Team: robust-and-stable
This repository provides the implementation of our submission to the AAPM DL-Sparse-View CT Challenge.
More details can be found in our ICML 2022 paper *Near-Exact Recovery for Tomographic Inverse Problems via Deep Learning* (by M. Genzel, I. Gühring, J. Macdonald, and M. März) and in our short challenge submission report *Designing an Iterative Network for Fanbeam-CT with Unknown Geometry*.
The repository contains code to train the complete pipeline (Operator -> UNet -> ItNet -> ItNet-post) of our proposed reconstruction method, as well as two comparison networks (Tiramisu & Learned Primal-Dual).
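The ItNet stage of the pipeline can be thought of as an unrolled iterative scheme that alternates data-consistency gradient steps with learned denoising. The following is a conceptual PyTorch sketch only: `ToyOperator`, `ItNetSketch`, the `Linear` "denoisers", and all sizes are illustrative stand-ins, not the repository's actual implementation (which identifies the real fanbeam operator from the challenge data and uses U-Net blocks).

```python
import torch
import torch.nn as nn

class ToyOperator(nn.Module):
    """Stand-in for the learned fanbeam forward operator A (hypothetical)."""
    def __init__(self, n=16):
        super().__init__()
        self.A = nn.Parameter(torch.randn(n, n) / n, requires_grad=False)

    def forward(self, x):
        return x @ self.A.T  # A x

    def adjoint(self, y):
        return y @ self.A    # A^T y

class ItNetSketch(nn.Module):
    """Unrolled scheme: x <- denoise(x - step_k * A^T (A x - y))."""
    def __init__(self, operator, n=16, num_iter=4):
        super().__init__()
        self.operator = operator
        # the real ItNet uses U-Net blocks; Linear layers keep the sketch small
        self.denoisers = nn.ModuleList(nn.Linear(n, n) for _ in range(num_iter))
        self.step = nn.Parameter(torch.ones(num_iter))  # learned step sizes

    def forward(self, x, y):
        for step_k, denoiser in zip(self.step, self.denoisers):
            residual = self.operator(x) - y                   # data misfit A x - y
            x = x - step_k * self.operator.adjoint(residual)  # gradient step
            x = denoiser(x)                                   # learned regularization
        return x

op = ToyOperator()
net = ItNetSketch(op)
y = op(torch.randn(2, 16))   # simulated measurements
x0 = op.adjoint(y)           # crude initial reconstruction
out = net(x0, y)
print(tuple(out.shape))      # (2, 16)
```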
The challenge data is not contained in this repository and needs to be obtained directly from the challenge website.
The repository contains the following scripts and modules:

- `config.py`: specifies the directory paths for the data and results. By default, the challenge data should be stored in the subdirectory `raw_data`, and results and model weights are stored in the subdirectory `results`.
- `script_radon_indentify.py` and `script_radon_learn_inv.py`: learn the forward operator and its inverse for the unknown fanbeam geometry.
- `script_evaluate_operator.py`: evaluates the learned operator.
- `script_train_*.py`: train the individual networks of the pipeline and the comparison networks.
- `script_evaluate_test_*.py`: evaluate the trained networks on the test data.
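As an illustration, a path configuration like the one in `config.py` could look as follows. This is a hypothetical minimal sketch; the actual variable names in the repository may differ.

```python
# Hypothetical sketch of a config.py-style path configuration;
# the repository's actual variable names may differ.
import os

BASE_DIR = os.path.dirname(os.path.abspath(__file__))
DATA_PATH = os.path.join(BASE_DIR, "raw_data")    # challenge data goes here
RESULTS_PATH = os.path.join(BASE_DIR, "results")  # results and model weights

# create the results directory if it does not exist yet
os.makedirs(RESULTS_PATH, exist_ok=True)
```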
The package versions listed below are the ones we used. Other versions might work as well.

- cudatoolkit (v10.1.243)
- matplotlib (v3.1.3)
- numpy (v1.18.1)
- pandas (v1.0.5)
- python (v3.8.3)
- pytorch (v1.6.0)
- torchvision (v0.7.0)
- tqdm (v4.46.0)
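One way to set up a matching environment is via conda. The environment name and the exact channel invocation below are assumptions; adjust the cudatoolkit version to your local CUDA setup.

```shell
# Hypothetical conda setup; environment name and channels are assumptions.
conda create -n aapm-ct python=3.8.3
conda activate aapm-ct
conda install pytorch=1.6.0 torchvision=0.7.0 cudatoolkit=10.1 -c pytorch
conda install numpy=1.18.1 pandas=1.0.5 matplotlib=3.1.3 tqdm=4.46.0
```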
Our implementation of the U-Net is based on and adapted from https://github.com/mateuszbuda/brain-segmentation-pytorch/. Our implementation of the Tiramisu network is based on and adapted from https://github.com/bfortuner/pytorch_tiramisu/. Our implementation of the Learned Primal-Dual network is inspired by https://github.com/adler-j/learned_primal_dual/.
Thank you for making your code available.
This repository is MIT licensed, as found in the LICENSE file.