Published by Scitator over 2 years ago
See the `tests/pipelines` folder for more information.

- `BackwardCallback` and `BackwardCallbackOrder` added as an abstraction on top of `loss.backward`. Now you can easily log model gradients or transform them before `OptimizerCallback`.
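The hook point this callback exposes sits between `loss.backward()` and `optimizer.step()`. A minimal plain-PyTorch sketch of that idea (not Catalyst's actual `BackwardCallback` implementation; `log_and_clip_gradients` is a hypothetical helper):

```python
import torch
import torch.nn as nn

def log_and_clip_gradients(model: nn.Module, max_norm: float = 1.0) -> float:
    """Return the pre-clipping gradient norm and clip gradients in place."""
    total_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)
    return float(total_norm)

model = nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x, y = torch.randn(8, 4), torch.randn(8, 1)
loss = nn.functional.mse_loss(model(x), y)
loss.backward()                            # <- BackwardCallback's territory
grad_norm = log_and_clip_gradients(model)  # log/transform gradients here
optimizer.step()                           # <- OptimizerCallback's territory
optimizer.zero_grad()
```

The same slot can host gradient histograms, per-layer norms, or noise injection.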
- `CheckpointCallbackOrder` added for `ICheckpointCallback`.
- Minimal Python version moved to 3.7, minimal PyTorch version moved to 1.4.0.
- `examples` folder removed from the Catalyst API. The only Runner APIs that will be supported in the future are `IRunner`, `Runner`, `ISupervisedRunner`, and `SupervisedRunner`, due to their consistency. If you are interested in any other Runner API, feel free to write your own `CustomRunner` and use `SelfSupervisedRunner` as an example.
- `Runner.{global/stage}_{batch/loader/epoch}_metrics` renamed to `Runner.{batch/loader/epoch}_metrics`.
- `CheckpointCallback` rewritten from scratch.
- `IRunner` support for all `log_*` methods.
- `topk_args` renamed to `topk`.
- `catalyst.contrib` top-level imports removed; use `from catalyst.contrib.{smth} import {smth}` instead. This could be changed to full-imports-only in future versions for stability.
- Codestyle moved to an 89-character right margin. Honestly speaking, it's much easier to maintain Catalyst with an 89-character right margin on an MBP'16.
- `ITrial` removed.
- `CustomRunner` updated with the rewritten API.
- `catalyst-dl` scripts removed. Without the Config API, we don't need them anymore.
- `Nvidia Apex`, `Fairscale`, `Albumentations`, `Nifti`, and `Hydra` requirements removed.
- `OnnxCallback`, `PruningCallback`, `QuantizationCallback`, and `TracingCallback` removed from the callbacks API. These callbacks are under review now.

If you have any questions on the Catalyst 22 edition updates, please join the Catalyst slack for discussion.
Published by Scitator over 2 years ago
Beta version of Catalyst 22 edition.
Published by Scitator almost 3 years ago
Distributed engines update (multi-node support) and many other improvements.
- `num_classes` for classification metrics became optional (#1379)
- `requests` requirements for `catalyst[cv]` added (#1371)

Contributors: @bagxi @ditwoo @MrNightSky @Nimrais @y-ksenia @sergunya17 @Thiefwerty @zkid18
Published by Scitator almost 3 years ago
Framework architecture simplification and speedup + SSL & RecSys extensions.
- `resume` support - resolved #1193 (#1349)
- `profile` flag for `runner.train` (#1348)
- `SETTINGS.log_batch_metrics`, `SETTINGS.log_epoch_metrics`, and `SETTINGS.compute_per_class_metrics` for framework-wise Metric & Logger API specification (#1357)
- `log_batch_metrics` and `log_epoch_metrics` options for all available Loggers (#1357)
- `compute_per_class_metrics` option for all available multiclass/label metrics (#1357)
- `catalyst-contrib` scripts reduced to `collect-env` and `project-embeddings` only
- `catalyst-dl` scripts reduced to `run` and `tune` only
- `transforms.` prefix deprecated for Catalyst-based transforms
- `catalyst.tools` moved to `catalyst.extras`
- `catalyst.data` moved to `catalyst.contrib.data`
- `catalyst.data.transforms` moved to `catalyst.contrib.data.transforms`
- `Normalize` and `ToTensor` transforms renamed to `NormalizeImage` and `ImageToTensor`
- `catalyst.contrib.data` and `catalyst.contrib` moved to code-as-a-documentation development
- `catalyst[cv]` and `catalyst[ml]` extensions moved to a flatten architecture design; examples: `catalyst.contrib.data.dataset_cv`, `catalyst.contrib.data.dataset_ml`
- `catalyst.contrib` moved to a flatten architecture design; examples: `catalyst.contrib.data`, `catalyst.contrib.datasets`, `catalyst.contrib.layers`, `catalyst.contrib.models`, `catalyst.contrib.optimizers`, `catalyst.contrib.schedulers`
- `***._misc` modules
- `catalyst.utils.mixup` moved to `catalyst.utils.torch`
- `catalyst.utils.numpy` moved to `catalyst.contrib.utils.numpy`
You can control this behavior framework-wise:

- `SETTINGS.log_batch_metrics=True/False` or `os.environ["CATALYST_LOG_BATCH_METRICS"]`
- `SETTINGS.log_epoch_metrics=True/False` or `os.environ["CATALYST_LOG_EPOCH_METRICS"]`
- `SETTINGS.compute_per_class_metrics=True/False` or `os.environ["CATALYST_COMPUTE_PER_CLASS_METRICS"]`
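For example, via the environment-variable route (variable names taken from the list above). These must be set before `catalyst` is imported; the commented `SETTINGS` module path is an assumption:

```python
import os

# Set before importing catalyst, since SETTINGS is resolved at import time.
os.environ["CATALYST_LOG_BATCH_METRICS"] = "1"
os.environ["CATALYST_LOG_EPOCH_METRICS"] = "1"
os.environ["CATALYST_COMPUTE_PER_CLASS_METRICS"] = "0"

# Alternatively, flip the flags in code (module path is an assumption):
# from catalyst.settings import SETTINGS
# SETTINGS.log_batch_metrics = True
```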
Removed:

- `catalyst.contrib.pandas`
- `catalyst.contrib.parallel`
- `catalyst.contrib.models.cv`
- `catalyst.utils.misc` functions
- `catalyst.extras` (from the public documentation)

Contributors: @asteyo @Dokholyan @Nimrais @y-ksenia @sergunya17
Published by Scitator almost 3 years ago
Readmes and tutorials with a few ddp fixes.
- `TopKMetric` abstraction (#1330)
- `CMCMetric` renamed from `<prefix>cmc<suffix><k>` to `<prefix>cmc<k><suffix>` (#1330)

Published by Scitator about 3 years ago
- `NTXentLoss` (#1278), `SupervisedContrastiveLoss` (#1293)
- `ISelfSupervisedRunner`, `SelfSupervisedConfigRunner`, `SelfSupervisedRunner`, `SelfSupervisedDatasetWrapper` (#1278)
- `CategoricalRegressionLoss` and `QuantileRegressionLoss` added to the contrib (#1295)
- `WandbLogger` updated to support artifacts and fix logging steps (#1309)
- `Runner` cleanup, with callbacks and loaders destruction, moved to `PipelineParallelFairScaleEngine` only (#1295)
- `HuberLoss` renamed to `HuberLossV0` for PyTorch compatibility (#1295)

Contributors: @asteyo @AyushExel @bagxi @DN6 @gr33n-made @Nimrais @Podidiving @y-ksenia
Published by Scitator about 3 years ago
Hi guys, nice project!
This is the test case release to check out our updated infrastructure.
Published by Scitator about 3 years ago
- `AdaptiveHingeLoss`, `BPRLoss`, `HingeLoss`, `LogisticLoss`, `RocStarLoss`, `WARPLoss` (#1269, #1282)
- `sync_bn` support for all available engines (#1275)
- `hydra-slayer` (#1264)
- `AccumulationMetric` renamed to `AccumulativeMetric`, moved from `catalyst.metrics._metric` to `catalyst.metrics._accumulative`; `accululative_fields` renamed to `keys`

Contributors: @bagxi @Casyfill @ditwoo @Nimrais @penguinflys @sergunya17 @zkid18
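`sync_bn` converts ordinary batch-norm layers so running statistics are synchronized across distributed processes. This typically relies on PyTorch's own converter; a sketch of the conversion step (wrapping the result in `DistributedDataParallel` is still required for statistics to actually sync):

```python
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3),
    nn.BatchNorm2d(8),
    nn.ReLU(),
)

# Recursively replace every BatchNorm*d layer with SyncBatchNorm.
# The conversion runs anywhere; synchronization only happens under DDP.
sync_model = nn.SyncBatchNorm.convert_sync_batchnorm(model)
```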
Published by Scitator about 3 years ago
- `pre-commit` hook to run the codestyle checker on commit (#1257)
- `on publish` GitHub action for docker and docs added (#1260)
- `utils.mixup_batch` (#1241)
- `expdir` in `catalyst-dl run` made optional (#1249)
- `requirements-neptune.txt` (#1251)
- `BatchPrefetchLoaderWrapper` issue with batch-based PyTorch samplers fixed (#1262)

Contributors: @AlekseySh @bagxi @Casyfill @Dokholyan @leoromanovich @Nimrais @y-ksenia
Published by Scitator over 3 years ago
- `utils.ddp_sync_run` function for synchronous ddp runs
- `dataset_from_params` support in the config API (#1231)
- `utils.ddp_sync_run` for data preparation
- `predict_loader` (#1235)
- `1.1.0` version changes
- `HuberLoss` name conflict with PyTorch 1.9 hotfixed (#1239)

Contributors: @bagxi @y-ksenia @ditwoo @BorNick @Inkln
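The synchronous-run pattern behind a helper like `utils.ddp_sync_run` is: rank 0 executes a function (e.g. a dataset download) first, the remaining ranks wait at a barrier and then run it themselves. A sketch of that pattern (an assumption, not Catalyst's exact implementation) that degrades to a plain call in single-process mode:

```python
import torch.distributed as dist

def ddp_sync_run(fn) -> None:
    """Run `fn` on rank 0 first, then on the remaining ranks (sketch)."""
    distributed = dist.is_available() and dist.is_initialized()
    rank = dist.get_rank() if distributed else 0
    if rank == 0:
        fn()                # e.g. download/unpack the dataset exactly once
    if distributed:
        dist.barrier()      # all ranks wait until rank 0 is done
        if rank > 0:
            fn()            # safe now: the prepared files already exist

calls = []
ddp_sync_run(lambda: calls.append("prepared"))
```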
Published by Scitator over 3 years ago
- `tests` folder (#1208)
- `tests/pipelines` (#1215)
- `train()` notebook (#1203)

BONUS: Catalyst workshop videos!
Published by Scitator over 3 years ago
- `catalyst.contrib` module

Published by Scitator over 3 years ago
- `TensorboardLogger` switched from the `global_batch_step` counter to the `global_sample_step` one (#1174)
- `TensorboardLogger` logs loader metrics on `on_loader_end` rather than `on_epoch_end` (#1174)
- `prefix` renamed to `metric_key` for `MetricAggregationCallback` (#1174)
- `micro`, `macro`, and `weighted` aggregations renamed to `_micro`, `_macro`, and `_weighted` (#1174)
- `BatchTransformCallback` updated (#1153)
- `torch.sigmoid` usage for `metrics.AUCMetric` and `metrics.auc` (#1174)
- `ConsoleLogger` (#1142)
- `_key_value` for schedulers in case of multiple optimizers fixed (#1146)
- `Engine` logic during `runner.predict_loader` (#1134)

Published by Scitator over 3 years ago
The v20 is dead, long live the v21!

- `Engine` abstraction to support various hardware backends and accelerators: CPU, GPU, multi-GPU, distributed GPU, TPU, Apex, and AMP half-precision training.
- `Logger` abstraction to support various monitoring tools: console, tensorboard, MLflow, etc.
- `Trial` abstraction to support various hyperoptimization tools: Optuna, Ray, etc.
- `Metric` abstraction to support a variety of machine learning metrics: classification, segmentation, RecSys, and NLP.
- `Experiment` abstraction merged into the `Runner` one.
- `Runner` abstraction simplified to store only the current state of the experiment run: all validation logic was moved to the callbacks (this way, you can easily select the best model on various metrics simultaneously).
- `Runner.input` and `Runner.output` merged into a united `Runner.batch` storage for simplicity.
- `catalyst.utils.metrics` moved to `catalyst.metrics`.
- Metrics logging moved from `Callbacks` to the appropriate `Loggers`.
- `KorniaCallbacks` refactored into `BatchTransformCallback`.
- `CallbackOrder.Validation` and `CallbackOrder.Logging` removed.
Release docs, Python API minimal examples, Config/Hydra API example.
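Taken together, these abstractions factor the vanilla supervised loop into swappable pieces. A plain-PyTorch sketch (illustrative names, not Catalyst API) marking where each abstraction plugs in, including the united `Runner.batch` dict that holds inputs and outputs side by side:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
criterion = nn.CrossEntropyLoss()
loader = [(torch.randn(16, 4), torch.randint(0, 2, (16,))) for _ in range(10)]

epoch_metrics = {}
for epoch in range(3):
    running_loss = 0.0
    for features, targets in loader:
        # the united `Runner.batch` storage: inputs and outputs together
        batch = {"features": features, "targets": targets}
        batch["logits"] = model(batch["features"])        # "output" part
        loss = criterion(batch["logits"], batch["targets"])
        optimizer.zero_grad()
        loss.backward()       # an Engine would handle AMP/DDP details here
        optimizer.step()
        running_loss += loss.item()
    epoch_metrics["loss"] = running_loss / len(loader)    # a Logger's job
```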