Holocron

PyTorch implementations of recent Computer Vision tricks (ReXNet, RepVGG, Unet3p, YOLOv4, CIoU loss, AdaBelief, PolyLoss, MobileOne). Other additions: AdEMAMix

Holocron - v0.2.1: Rebranded project architecture and bug fixes

Published by frgfm over 2 years ago

This patch release improves project quality while fixing several bugs.

Note: holocron 0.2.1 requires PyTorch 1.9.1 and torchvision 0.10.1 or higher.

Highlights

⚡ API improvements

When performing inference, speed is key. For this reason, the Gradio demo and the FastAPI boilerplate were updated to switch from a PyTorch backend to ONNX. What does this change?

Much lower latency and much lighter dependencies: the Docker image for the API is significantly smaller. Additionally, Poetry now handles the dependencies of the API template. For backend tasks, dependency changes can be critical, and Poetry is a great tool to manage them. This also comes with a nice Dependabot integration 🤖
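
For illustration, ONNX inference boils down to a few lines with onnxruntime (a minimal sketch; the model path and input shape are assumptions, not the template's actual values):

import numpy as np
import onnxruntime as ort

# Load the exported model once at startup (path is hypothetical)
session = ort.InferenceSession("model.onnx")
input_name = session.get_inputs()[0].name

# Run inference on a preprocessed (N, C, H, W) float32 batch
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
logits = session.run(None, {input_name: batch})[0]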

💅 Cleaning project hierarchy

Thanks to recent PEP conventions, Python projects can now keep their whole package definition in pyproject.toml using setuptools. Most configuration files were moved there, leaving the project much leaner.
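
For instance, the package metadata and dependencies can now live in a single file (an illustrative sketch, not the project's actual pyproject.toml; the metadata values are assumptions):

[build-system]
requires = ["setuptools>=61", "wheel"]
build-backend = "setuptools.build_meta"

[project]
name = "pylocron"
version = "0.2.1"
# Versions below mirror the requirement stated in the note above
dependencies = ["torch>=1.9.1", "torchvision>=0.10.1"]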

✨ PolyLoss

A new SOTA candidate for the default loss in model training was recently published, and this release comes with a clean implementation!
Get a training run started to try it out 🏃‍♂️
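
For reference, the Poly-1 variant simply adds a first-order correction on top of cross-entropy. Here is a minimal sketch of the idea (Holocron ships its own implementation; eps is a tunable coefficient, set to 2.0 here purely for illustration):

import torch
import torch.nn.functional as F

def poly1_loss(logits, targets, eps=2.0):
    # Standard cross-entropy, kept per-sample
    ce = F.cross_entropy(logits, targets, reduction="none")
    # Probability assigned to the correct class
    pt = logits.softmax(dim=1).gather(1, targets.unsqueeze(1)).squeeze(1)
    # Poly-1: CE + eps * (1 - pt)
    return (ce + eps * (1 - pt)).mean()

logits = torch.randn(4, 10)
targets = torch.randint(0, 10, (4,))
loss = poly1_loss(logits, targets)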

Full changelog

Full Changelog: https://github.com/frgfm/Holocron/compare/v0.2.0...v0.2.1

Holocron - v0.2.0: Improved performances, API boilerplate and demo app

Published by frgfm over 2 years ago

This release greatly improves classification performance and adds numerous tools to deploy or showcase your models.

Note: holocron 0.2.0 requires PyTorch 1.9.1 and torchvision 0.10.1 or newer.

Highlights

🦓 New entries in the model zoo

RepVGG joins the model zoo and brings an interesting change of pace: two architectures that are equivalent in the forward pass, one used for training and the other for inference.

This yields a very good balance between inference speed and performance for VGG-like models, as it outclasses several ResNet architectures (cf. https://github.com/frgfm/Holocron/tree/master/references/classification).
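
The trick behind this is structural re-parameterization: each training-time block (a 3x3 conv, a 1x1 conv and an identity branch) can be folded into a single 3x3 convolution for inference. A simplified sketch of the branch fusion, leaving out the batch norm folding:

import torch
import torch.nn.functional as F

c = 8
conv3x3 = torch.randn(c, c, 3, 3)
conv1x1 = torch.randn(c, c, 1, 1)

# Pad the 1x1 kernel to 3x3 and add a 3x3 identity kernel
identity = torch.zeros(c, c, 3, 3)
identity[range(c), range(c), 1, 1] = 1
fused = conv3x3 + F.pad(conv1x1, [1, 1, 1, 1]) + identity

x = torch.randn(1, c, 32, 32)
# The three branches and the single fused conv are forward-equivalent
y_branches = F.conv2d(x, conv3x3, padding=1) + F.conv2d(x, conv1x1) + x
y_fused = F.conv2d(x, fused, padding=1)
assert torch.allclose(y_branches, y_fused, atol=1e-4)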

📑 Tutorial notebooks

To reduce friction between users and domain experts, a few tutorials were added to the documentation in the form of notebooks.


Thanks to Google Colab, you can run all the commands on a GPU without owning one 👍

💻 API boilerplate

Ever dreamt of deploying a small REST API to expose your vision models?
Using the great FastAPI library, a minimal API template was implemented for you to easily deploy models in containerized environments.

Once your API is running, the following snippet:

import requests
with open('/path/to/your/img.jpeg', 'rb') as f:
    data = f.read()
response = requests.post("http://localhost:8002/classification", files={'file': data}).json()

yields:

{'value': 'French horn', 'confidence': 0.9186984300613403}

For more information, please refer to the dedicated README.

🎮 Gradio demo

To better showcase the capabilities of the pre-trained models, a small demo app was added to the project (with a live version hosted on HuggingFace Spaces).

It was built for basic image classification using Gradio.
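
A demo of this kind takes only a few lines of Gradio (a generic sketch, not the app's actual code; predict is a hypothetical function standing in for the model call):

import gradio as gr

def predict(img):
    # Hypothetical placeholder: run the model on img and return {label: confidence}
    return {"French horn": 0.92}

demo = gr.Interface(fn=predict, inputs=gr.Image(type="pil"), outputs=gr.Label(num_top_classes=3))
demo.launch()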

🤗 Integration with HuggingFace model hub

In order to have a more open way to contribute and share models, default configuration dicts are now accessible in every model. Thanks to this and the HuggingFace Hub, checkpoints can be hosted freely (cf. https://huggingface.co/frgfm/repvgg_a0), and you can instantiate models directly from the hub.

from holocron.models.utils import model_from_hf_hub

model = model_from_hf_hub("frgfm/repvgg_a0").eval()

This opens the way for external contributors to upload their own checkpoint & config, and use Holocron seamlessly.
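
Continuing from the snippet above, inference is then plain PyTorch (the preprocessing below uses the usual ImageNet statistics, an assumption here rather than values read from the model's config):

import torch
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
img = preprocess(Image.open("/path/to/your/img.jpeg")).unsqueeze(0)
with torch.no_grad():
    probs = model(img).softmax(dim=1)  # model from the snippet above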

⚡ Cutting-edge training scripts

This release comes with major upgrades for the reference scripts, in two aspects:

  • speed: added support for Automatic Mixed Precision (AMP) training (see the sketch below)
  • performance: updated the default augmentations, added new optimizers (AdamP, AdaBelief) and regularization methods (mixup)

Those should help you to reach better results with your own experiments.
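
For context, here is what an AMP training step looks like with torch.cuda.amp (a generic sketch with a dummy model, not the reference script itself):

import torch
from torch import nn

# Dummy model and batch, just to illustrate the AMP machinery
model = nn.Linear(10, 2).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler()

x = torch.randn(8, 10, device="cuda")
y = torch.randint(0, 2, (8,), device="cuda")

optimizer.zero_grad()
with torch.cuda.amp.autocast():
    # Forward pass runs in mixed precision
    loss = criterion(model(x), y)
# Scale the loss to avoid float16 gradient underflow
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()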

Breaking changes

License update

To better reflect the project's spirit of welcoming contributions from everywhere, the license was changed from MIT to Apache 2.0. This shouldn't impact your usage much, as it is one of the most commonly used open source licenses.

Deprecated features now supported by PyTorch

Since Holocron is meant as an add-on to PyTorch/Torchvision, a few features have been deprecated as they were integrated into PyTorch. Those include:

  • activations: SiLU, Mish
  • optimizer: RAdam
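
Their PyTorch counterparts can be used as drop-in replacements, e.g. (note that torch.nn.Mish requires PyTorch 1.9+ and torch.optim.RAdam requires 1.10+):

import torch
from torch import nn

act = nn.SiLU()   # replaces holocron's SiLU/Swish
act = nn.Mish()   # replaces holocron's Mish
optimizer = torch.optim.RAdam(nn.Linear(2, 2).parameters(), lr=1e-3)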

Naming of trainer's method

The trainer's method to determine the optimal learning rate was renamed from lr_find to find_lr.

0.1.3:
>>> trainer = ...
>>> trainer.lr_find()

0.2.0:
>>> trainer = ...
>>> trainer.find_lr()

Full changelog

Full Changelog: https://github.com/frgfm/Holocron/compare/v0.1.3...v0.2.0

Holocron - Task-specific model trainers and new losses & layers

Published by frgfm almost 4 years ago

This minor release introduces new losses, layers and trainer objects, on top of heavy refactoring.
Annotation typing was added to the codebase to improve CI checks.

Note: holocron 0.1.3 requires PyTorch 1.5.1 and torchvision 0.6.1 or newer.

Highlights

models

Implementations of deep learning models
New

  • Added implementations of Res2Net (#63, #91), TridentNet (#64, #82), ResNet-50D (#65), PyConvResNet & PyConvHGResNet (#66), CSPDarknet53 (#77, #87), SKNet (#96)
  • Added implementation of YOLOv4 (#78)
  • Added pretrained URLs for Darknets (#71), CSPDarknet53 (#87), ResNet50D (#87), TridentNet50 (#87), Res2Net (#92)
  • Updated pretrained URLs of ReXNet (#87)
  • Updated conv_sequence (#94)

Improvements

  • Improved pooling efficiency (#65)
  • Refactored model implementations (#67, #78, #99)

Fixes

  • Fixed pretrained URLs of ResNet, ReXNet (#61)
  • Fixed implementations of Darknet & YOLO (#69, #70, #72, #74, #75, #83)

nn

Neural networks building blocks
New

  • Added implementations of HardMish (#62), PyConv2d (#66), FReLU (#73), ClassBalancedWrapper (#76), BlurPool2d (#80), ComplementCrossEntropy (#90), SAM & SPP (#94), LambdaLayer (#95), MutualChannelLoss (#100)

optim

Optimizer and learning rate schedulers
New

  • Added implementation of AdaBelief (#93)

Improvements

  • Refactored existing optimizers (#93)

ops

High-performance batch operations
New

  • Added implementation of Generalized IoU loss and Complete IoU loss (#78, #88)
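
As a reminder of what these bring over plain IoU, here is a minimal GIoU loss sketch for aligned pairs of (x1, y1, x2, y2) boxes (a generic implementation for illustration, not Holocron's):

import torch

def giou_loss(boxes1, boxes2):
    # Intersection area
    lt = torch.max(boxes1[:, :2], boxes2[:, :2])
    rb = torch.min(boxes1[:, 2:], boxes2[:, 2:])
    inter = (rb - lt).clamp(min=0).prod(dim=1)
    area1 = (boxes1[:, 2:] - boxes1[:, :2]).prod(dim=1)
    area2 = (boxes2[:, 2:] - boxes2[:, :2]).prod(dim=1)
    union = area1 + area2 - inter
    iou = inter / union
    # Smallest enclosing box
    lt_c = torch.min(boxes1[:, :2], boxes2[:, :2])
    rb_c = torch.max(boxes1[:, 2:], boxes2[:, 2:])
    area_c = (rb_c - lt_c).prod(dim=1)
    # GIoU penalizes the empty area of the enclosing box
    giou = iou - (area_c - union) / area_c
    return (1 - giou).mean()

boxes1 = torch.tensor([[0., 0., 2., 2.]])
boxes2 = torch.tensor([[1., 1., 3., 3.]])
loss = giou_loss(boxes1, boxes2)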

trainer

Utility objects for easier training on different tasks
New

  • Added Trainer, ClassificationTrainer (#81), SegmentationTrainer and DetectionTrainer (#83)
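
A typical workflow looks like this (a sketch with toy data; treat the exact constructor arguments and method signatures as assumptions based on this release's API):

import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from holocron.trainer import ClassificationTrainer

# Toy data and model, just to illustrate the workflow
ds = TensorDataset(torch.randn(64, 3, 32, 32), torch.randint(0, 10, (64,)))
train_loader, val_loader = DataLoader(ds, batch_size=16), DataLoader(ds, batch_size=16)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

trainer = ClassificationTrainer(model, train_loader, val_loader, criterion, optimizer)
trainer.lr_find()  # renamed to find_lr in 0.2.0
trainer.fit_n_epochs(3, 1e-3)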

References

Verification of the package's well-being before release
Improvements

  • Refactored training scripts (#68, #72, #81, #83, #84)
  • Added contiguous param trick (#79)

Fixes

  • Fixed detection script (#84, #85), segmentation script (#86)

Others

Improvements

  • Optimized cache (#98, #99)
  • Added annotation typing to package (#99)

Holocron - Object detection, segmentation and new layers

Published by frgfm over 4 years ago

This minor release introduces new model tasks and training scripts.
In the release attachments, you will find ReXNet ImageNet pretrained weights remapped from https://github.com/clovaai/rexnet, and ImageNette pretrained weights trained by the repo owner.

Note: holocron 0.1.2 requires PyTorch 1.5.1 and torchvision 0.6.1 or newer.

Highlights

models

Implementations of deep learning models
New

  • Added implementations of UNet (#43), UNet++ (#46), and UNet3+ (#47)
  • Added implementation of ResNet (#55), ReXNet (#56, #58, #59, #60)

Improvements

  • Updated Darknet pretrained models (#32)
  • Improved Darknet flexibility (#45)

Fixes

  • Fixed YOLO inference and loss (#38)

nn

Neural networks building blocks
New

  • Added implementations for Add2d (#35), NormConv (#34), SlimConv (#36, #49)
  • Added Dropblock implementation (#53)
  • Added implementations of SiLU/Swish (#54, #57)

Improvements

  • Improved efficiency of ConcatDownsample2d (#48)

optim

Optimizer and learning rate schedulers
New

  • Added implementation of TAdam (#52)

Improvements

  • Added support for rendering in notebooks (#39)
  • Fixed inplace add operator usage in optimizers (#40, #42)

Documentation

Online resources for potential users
Improvements

  • Improved docstrings for better understanding (#37)

References

Verification of the package's well-being before release
New

  • Added training script for object detection (#41)
  • Added training script for semantic segmentation (#50)

Others

Improvements

  • Cleaned codebase (#44, #51)

Fixes

  • Fixed conda upload job (#33)

Holocron - Pretrained models for image classification

Published by frgfm over 4 years ago

This minor release updates some model pretrained weights and documentation.

Note: holocron 0.1.1 requires PyTorch 1.2 and torchvision 0.4 or newer.

Highlights

models

Implementations of deep learning models
Improvements

  • Added pretrained weights for Darknet-24, Darknet-19 and Darknet-53 (#29, #30)

Documentation

Online resources for potential users
Improvements

  • Updated docstring references (#31)
  • Added installation instructions (#31)
  • Cleaned documentation hierarchy (#31)
  • Added website referencing (#31)

References

Verification of the package's well-being before release
Improvements

  • Updated result reported in README (#30)

Holocron - Pretrained models for image classification

Published by frgfm over 4 years ago

This release adds implementations of both image classification and object detection models.

Note: holocron 0.1.0 requires PyTorch 1.2 and torchvision 0.4 or newer.

Highlights

models

Implementations of deep learning models
New

  • Add implementations of Darknet-24, Darknet-19 and Darknet-53 (#20, #22, #23, #24)
  • Add implementations of YOLOv1 and YOLOv2 (#22, #23).

nn

Neural networks building blocks
New

  • Add weight initialization function (#24)
  • Add mish & nl_relu activations
  • Add implementations of focal loss, multi label cross-entropy loss and label smoothing cross-entropy loss (#16, #17, #25)
  • Add mixup loss wrapper (#27)

ops

High-performance batch operations
New

  • Add implementations of distance IoU and complete IoU losses (#12)

optim

Optimizer and learning rate schedulers
New

  • Add implementations for LARS, Lamb, RAdam, and Lookahead (#6)
  • Add an implementation of OneCycle scheduler

Documentation

Online resources for potential users
New

  • Add sphinx automatic documentation build for existing features (#7, #8, #13, #21)
  • Add contribution guidelines (#1)
  • Add installation & usage instructions in readme (#1, #2)

References

Verification of the package's well-being before release
New

  • Add a training script for Imagenette (#28)

Others

Other tools and implementations

  • Add `lr_finder` to estimate the optimal starting learning rate (#26)
  • Add `mixup_collate` to use Mixup on an existing DataLoader (#27)
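
The collate wrapper plugs straight into an existing DataLoader (a sketch with toy data; the import path is an assumption based on this release's layout):

import torch
from torch.utils.data import DataLoader, TensorDataset
from holocron.utils.data import mixup_collate  # import path assumed

ds = TensorDataset(torch.randn(64, 3, 32, 32), torch.randint(0, 10, (64,)))
# Each batch now yields mixed inputs and the corresponding pair of targets
loader = DataLoader(ds, batch_size=16, collate_fn=mixup_collate)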