PyTorch implementations of recent Computer Vision tricks (ReXNet, RepVGG, Unet3p, YOLOv4, CIoU loss, AdaBelief, PolyLoss, MobileOne). Other additions: AdEMAMix
Apache-2.0 License
Published by frgfm over 2 years ago
This patch release improves project quality while fixing several bugs.
Note: holocron 0.2.1 requires PyTorch 1.9.1 and torchvision 0.10.1 or higher.
When performing inference, speed is key. For this reason, the Gradio demo and FastAPI boilerplate were updated to switch from the PyTorch backend to ONNX. What does this change?
Much lower latency, and much lighter dependencies: the Docker image for the API is significantly smaller. Additionally, Poetry is now used to handle the dependencies of the API template. For backend tasks, dependency changes can be critical, and Poetry is a great tool to manage them. This also comes with a nice Dependabot integration 🤖
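To make the switch concrete, here is a minimal sketch of the export/serve flow, using an arbitrary torchvision model rather than the demo's actual code:

import torch
from torchvision.models import mobilenet_v3_small
import onnxruntime as ort

# Export the model once with PyTorch...
model = mobilenet_v3_small().eval()
dummy = torch.rand(1, 3, 224, 224)
torch.onnx.export(model, dummy, "model.onnx", input_names=["input"], output_names=["logits"])

# ...then serve it: inference only needs the much lighter onnxruntime package
session = ort.InferenceSession("model.onnx")
logits = session.run(None, {"input": dummy.numpy()})[0]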
With the new PEP conventions, Python projects can now keep their whole package definition in pyproject.toml using setuptools. By moving most configuration files there, the project is now much leaner.
A new SOTA candidate for the default loss in model training was recently published, and this release comes with a clean implementation!
Get started with a new training run to try it out 🏃
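The new loss is PolyLoss (cf. the project description). For intuition, here is a minimal sketch of its Poly-1 formulation in plain PyTorch; Holocron's own class name and signature may differ:

import torch
import torch.nn.functional as F

def poly1_cross_entropy(logits, target, eps=2.0):
    # Poly-1: cross-entropy plus an eps * (1 - p_t) correction term
    ce = F.cross_entropy(logits, target, reduction="none")
    # p_t is the predicted probability of the true class
    pt = F.softmax(logits, dim=-1).gather(1, target.unsqueeze(1)).squeeze(1)
    return (ce + eps * (1 - pt)).mean()

logits = torch.randn(4, 10)
target = torch.randint(0, 10, (4,))
loss = poly1_cross_entropy(logits, target)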
Full Changelog: https://github.com/frgfm/Holocron/compare/v0.2.0...v0.2.1
Published by frgfm over 2 years ago
This release greatly improves classification performance and adds numerous tools to deploy or showcase your models.
Note: holocron 0.2.0 requires PyTorch 1.9.1 and torchvision 0.10.1 or newer.
RepVGG joins the model zoo and provides an interesting change of pace: two forward-equivalent architectures, a multi-branch one for training and a plain one for inference.
This strikes a very good balance between inference speed and accuracy for VGG-like models, outclassing several ResNet architectures (cf. https://github.com/frgfm/Holocron/tree/master/references/classification).
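For intuition, here is a bias-only sketch of the re-parameterization trick; real RepVGG branches also carry BatchNorm, which is folded into each convolution before the fusion below:

import torch
import torch.nn as nn
import torch.nn.functional as F

def fuse_branches(conv3x3, conv1x1, channels):
    # Pad the 1x1 kernel to 3x3 so the three branches share a kernel shape
    k1x1 = F.pad(conv1x1.weight, [1, 1, 1, 1])
    # The identity branch is a 3x3 kernel with a 1 at the center of each channel
    k_id = torch.zeros(channels, channels, 3, 3)
    for c in range(channels):
        k_id[c, c, 1, 1] = 1.0
    fused = nn.Conv2d(channels, channels, 3, padding=1)
    with torch.no_grad():
        fused.weight.copy_(conv3x3.weight + k1x1 + k_id)
        fused.bias.copy_(conv3x3.bias + conv1x1.bias)
    return fused

# The fused conv reproduces the sum of the three branches (up to float rounding)
x = torch.rand(1, 8, 14, 14)
branch3, branch1 = nn.Conv2d(8, 8, 3, padding=1), nn.Conv2d(8, 8, 1)
fused = fuse_branches(branch3, branch1, 8)
assert torch.allclose(branch3(x) + branch1(x) + x, fused(x), atol=1e-5)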
To reduce friction between users and domain experts, a few tutorials were added to the documentation in the form of notebooks.
Thanks to Google Colab, you can run all the commands on a GPU without owning one.
Ever dreamt of deploying a small REST API to expose your vision models?
Using the great FastAPI library, a minimal API template was implemented for you to easily deploy models in containerized environments.
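For intuition, such an endpoint can be as small as the following sketch (illustrative only, not the actual template; run it with uvicorn main:app --port 8002, assuming the file is saved as main.py, to match the snippet below):

from fastapi import FastAPI, File, UploadFile

app = FastAPI()

@app.post("/classification")
async def classification(file: UploadFile = File(...)):
    data = await file.read()
    # ... decode the image and run model inference here ...
    return {"value": "French horn", "confidence": 0.92}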
Once your API is running, the following snippet:
import requests

# Send the image as multipart form data to the classification route
with open('/path/to/your/img.jpeg', 'rb') as f:
    data = f.read()
response = requests.post("http://localhost:8002/classification", files={'file': data}).json()
yields:
{'value': 'French horn', 'confidence': 0.9186984300613403}
For more information, please refer to the dedicated README.
To better showcase the capabilities of the pre-trained models, a small demo app was added to the project (with a live version hosted on HuggingFace Spaces).
It was built for basic image classification using Gradio.
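A demo of this kind boils down to very little code; here is an illustrative sketch (not the project's actual app):

import gradio as gr

def predict(image):
    # ... preprocess the image, run the model, map scores to labels ...
    return {"French horn": 0.92, "tench": 0.05}

demo = gr.Interface(fn=predict, inputs=gr.Image(type="pil"), outputs=gr.Label(num_top_classes=3))
demo.launch()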
In order to have a more open way to contribute and share models, default configuration dicts are now accessible in every model. Thanks to this and the HuggingFace Hub, checkpoints can be hosted freely (cf. https://huggingface.co/frgfm/repvgg_a0), and you can instantiate models from them.
from holocron.models.utils import model_from_hf_hub
model = model_from_hf_hub("frgfm/repvgg_a0").eval()
This opens the way for external contributors to upload their own checkpoint & config, and use Holocron seamlessly.
This release comes with major upgrades to the reference scripts, which should help you reach better results with your own experiments.
To better reflect the project's spirit of welcoming contributions from everywhere, the license was changed from MIT to Apache 2.0. This shouldn't impact your usage much, as it is one of the most commonly used open source licenses.
Since Holocron is meant as an add-on to PyTorch/torchvision, a few features have been deprecated now that they are integrated into PyTorch (see the sketch after this list). Those include:
SiLU
Mish
RAdam
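Their drop-in replacements now ship with PyTorch itself:

import torch

act_silu = torch.nn.SiLU()   # in PyTorch since 1.7
act_mish = torch.nn.Mish()   # in PyTorch since 1.9
optimizer = torch.optim.RAdam(torch.nn.Linear(4, 2).parameters())  # in PyTorch since 1.10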
The trainer's method to determine the optimal learning rate was renamed from lr_find to find_lr.
| 0.1.3 | 0.2.0 |
|---|---|
| `>>> trainer = ...`<br>`>>> trainer.lr_find()` | `>>> trainer = ...`<br>`>>> trainer.find_lr()` |
Full Changelog: https://github.com/frgfm/Holocron/compare/v0.1.3...v0.2.0
Published by frgfm almost 4 years ago
This minor release introduces new losses, layers and trainer objects, on top of heavy refactoring.
Type annotations were added to the codebase to improve CI checks.
Note: holocron 0.1.3 requires PyTorch 1.5.1 and torchvision 0.6.1 or newer.
Implementations of deep learning models
New
Improvements
Fixes
Neural networks building blocks
New
Optimizer and learning rate schedulers
New
Improvements
Utility objects for easier training on different tasks
New
Verifications of the package well-being before release
Improvements
Fixes
Improvements
Published by frgfm over 4 years ago
This minor release introduces new model tasks and training scripts.
In the release attachments, you will find ReXNet ImageNet pretrained weights remapped from https://github.com/clovaai/rexnet, as well as ImageNette pretrained weights from the repo owner.
Note: holocron 0.1.2 requires PyTorch 1.5.1 and torchvision 0.6.1 or newer.
Implementations of deep learning models
New
Improvements
Fixes
Neural networks building blocks
New
Improvements
Optimizer and learning rate schedulers
New
Improvements
Online resources for potential users
Improvements
Verifications of the package well-being before release
New
Improvements
Fixes
Published by frgfm over 4 years ago
This minor release updates some model pretrained weights and documentation.
Note: holocron 0.1.1 requires PyTorch 1.2 and torchvision 0.4 or newer.
Implementations of deep learning models
Improvements
Online resources for potential users
Improvements
Verifications of the package well-being before release
Improvements
Published by frgfm over 4 years ago
This release adds implementations of both image classification and object detection models.
Note: holocron 0.1.0 requires PyTorch 1.2 and torchvision 0.4 or newer.
Implementations of deep learning models
New
Neural networks building blocks
New
mish & nl_relu activations (sketched at the end of these notes)
High-performance batch operations
New
Optimizer and learning rate schedulers
New
Online resources for potential users
New
Verifications of the package well-being before release
New
Other tools and implementations
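As an aside, the two activations introduced above can be sketched as follows (formulations from their respective papers; Holocron's implementation may differ):

import torch
import torch.nn.functional as F

def mish(x):
    # Mish: x * tanh(softplus(x))
    return x * torch.tanh(F.softplus(x))

def nl_relu(x, beta=1.0):
    # NLReLU: ln(1 + beta * relu(x))
    return torch.log1p(beta * F.relu(x))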