The one-stop shop for biomedical landmark localization: automatic configuration for non-expert users, deep and modular customization for developers and researchers.
MediMarker is an out-of-the-box automated pipeline for landmark localization. We also support uncertainty estimation for model predictions.
As a user, all you need to do is provide your data in the correct format and the pipeline will take care of the rest. You can also use our pre-trained models for inference.
The pipeline is simple: the default model is based on the U-Net architecture, and the size of the architecture is configured automatically based on the size of your images. By default, we use heatmap regression for landmark localization. We also support ensemble learning and uncertainty estimation.
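Heatmap regression means the model predicts a per-landmark probability map rather than coordinates directly; the landmark is recovered as the peak of the map. A minimal sketch of building a Gaussian target heatmap and reading a landmark back out (illustrative only, not MediMarker's internal code; `sigma` and the function name are assumptions):

```python
import numpy as np

def gaussian_heatmap(shape, landmark, sigma=2.0):
    """Build a target heatmap with a Gaussian peak at the (x, y) landmark."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    x, y = landmark
    return np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))

# The predicted landmark is the argmax of the (model's output) heatmap.
hm = gaussian_heatmap((64, 64), (20, 30), sigma=2.0)
pred_y, pred_x = np.unravel_index(hm.argmax(), hm.shape)
```

At training time the loss is computed between the predicted and target heatmaps; at inference time only the argmax step is needed.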
As a researcher/developer, you can extend this framework with your own models, loss functions, training schemes etc. by subclassing a few classes. The advantage is that the landmark-localization-specific boilerplate is already written for you, so you can concentrate on implementing the parts that matter.
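As an illustration, adding a custom loss by subclassing could look like the sketch below. The base-class name and call signature here are assumptions for the sake of the example, not MediMarker's real API; see the developer instructions for the actual extension points.

```python
# Illustrative only: `BaseLoss` and its signature are assumptions,
# not MediMarker's actual extension API.
class BaseLoss:
    def __call__(self, pred, target):
        raise NotImplementedError

class WeightedMSELoss(BaseLoss):
    """Custom loss: mean squared error scaled by a fixed weight."""
    def __init__(self, weight=1.0):
        self.weight = weight

    def __call__(self, pred, target):
        return self.weight * sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

loss = WeightedMSELoss(weight=2.0)
value = loss([1.0, 2.0], [1.0, 4.0])  # 2.0 * ((0 + 4) / 2)
```

The framework would then call your loss during training wherever the default loss is used; only the class above needs writing.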
I provide easy instructions with examples on exactly what you need to do. You can use this framework to evaluate your own models over many datasets in a controlled environment. So far, beyond U-Net we have also added PHD-Net, which follows a completely different landmark localization paradigm yet integrates seamlessly with this framework. In our gaussian_process branch, we have also added a Convolutional Gaussian Process model for landmark localization.
For advanced users, we provide additional features on separate branches: Gaussian Processes on the gaussian_process branch, and transformer and ResNet models on the tom branch.
conda create --name my_env
conda activate my_env
conda env update --name my_env --file requirements/environment.yml
That's it!
If you have the ISBI 2015 Cephalometric landmarking dataset accessible, you can run inference with the pretrained default U-Net model, or easily train your own. In MediMarker, we use .yaml files to configure the pipeline; you can find example .yaml files in the configs/examples folder.
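To give a flavour of these configs, a fragment might look like the sketch below. Only OUTPUT.OUTPUT_DIR is named elsewhere in this README; treat the value as a placeholder and consult the files in configs/examples for the real schema.

```yaml
# Sketch only: see configs/examples/*.yaml for the actual keys and values.
OUTPUT:
  OUTPUT_DIR: ./results/unet_cephalometric_fold0   # where logs and predictions are saved
```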
To run inference, run:
python main.py --cfg configs/examples/U-Net_Classic/Cephalometric/unet_cephalometric_fold0.yaml
To train a model on this dataset, you can run:
python main.py --cfg configs/examples/U-Net_Classic/Cephalometric/unet_cephalometric_fold0_train.yaml
For students at the University of Sheffield using the Bessemer HPC, you need to load conda and CUDA first. I have written a script to do so. Run the following:
cd scripts/scripts_bess
source run_train_config.sh --cfg configs/examples/U-Net_Classic/Cephalometric/unet_cephalometric_fold0.yaml
If you included a testing list in your JSON (as in the example above), inference runs automatically after training and the results are saved in OUTPUT.OUTPUT_DIR (defined in the .yaml file). If you cancel training early or want to re-run inference, change your .yaml file as follows:
If you did not include a testing list (e.g. you want to use a pre-trained model on your own data), you can run inference on a separate json file if you change your .yaml file as follows:
Please see Using Your Own Dataset for more details.