Published by Scitator almost 4 years ago

- `reciprocal_rank` metric
- `catalyst.metrics`: `wrap_metric_fn_with_activation` for model outputs wrapping with activation
- `per_class=False` option for metrics callbacks
- `PrecisionCallback`, `RecallCallback` for multiclass problems
- `AMPOptimizerCallback` and `OptimizerCallback` were merged (#1007)
- `SchedulerCallback`
- `tensorboard`, `ipython`, `matplotlib`, `pandas`, `scikit-learn` moved to optional requirements
- `PerplexityMetricCallback` moved to `catalyst.callbacks` from `catalyst.contrib.callbacks`
- `PerplexityMetricCallback` renamed to `PerplexityCallback`
- `catalyst.contrib.utils.confusion_matrix` renamed to `catalyst.contrib.utils.torch_extra`
- `catalyst.data` moved to `catalyst.contrib.data`
- `catalyst.data.scripts` moved to `catalyst.contrib.scripts`
- `catalyst.utils`, `catalyst.data.utils` and `catalyst.contrib.utils` restructured
- `ReaderSpec` renamed to `IReader`
- `SupervisedExperiment` renamed to `AutoCallbackExperiment`
- `dcg`/`ndcg` metrics (#998)
- `catalyst[cv]`, `catalyst[dev]`, `catalyst[log]`, `catalyst[ml]`, `catalyst[nlp]`, `catalyst[tune]` extras
- `KNNMetricCallback`
- `sklearn` mode for `ConfusionMatrixLogger`
- `catalyst.data.utils`
- `catalyst.tools.meters`
Published by Scitator almost 4 years ago

- `OneOf` and `OneOfV2` batch transforms (#951)
- `precision_recall_fbeta_support` metric (#971)
- `20.10.1` for tutorials (#967)
- `IRunner` -> simplified `IRunner` (#984)
- `set_global_seed` moved from `utils.seed` to `utils.misc` (#986)

Published by Scitator almost 4 years ago
- `CHANGELOG.md` file and information about unit tests added to `PULL_REQUEST_TEMPLATE.md` ([#955](https://github.com/catalyst-team/catalyst/pull/955))
- `catalyst-dl tune` config specification: optuna params are now grouped under `study_params` (#947)
- `IRunner._prepare_for_stage` logic moved to `IStageBasedRunner.prepare_for_stage` (#947)
- `MnistMLDataset` and `MnistQGDataset` data split logic: targets of the datasets are now disjoint (#949)
- `catalyst.experiments`/`catalyst.runners`/`catalyst.callbacks` respectively
- `catalyst.tools.*` to `catalyst.*`
- `catalyst.*.utils` to `catalyst.utils`
- `catalyst.utils` (#963)

Published by Scitator almost 4 years ago
- `catalyst-dl tune` command: Optuna with Config API integration for AutoML hyperparameter optimization (#937)
- `OptunaPruningCallback` alias for `OptunaCallback` (#937)
- `catalyst.contrib.nn.criterion` (#942)
- `utils.prepare_config_api_components` (#936)

Published by Scitator almost 4 years ago
- MovieLens dataset loader (#903)
- `force` and `bert-level` keywords to `catalyst-data text2embedding` (#917)
- `OptunaCallback` to `catalyst.contrib` (#915)
- `DynamicQuantizationCallback` and `catalyst-dl quantize` script for fast quantization of your model (#890)
- `OptimizerCallback`: flag `use_fast_zero_grad` for a faster (and hacky) version of `optimizer.zero_grad()` (#927)
- `IOptimizerCallback`, `ISchedulerCallback`, `ICheckpointCallback`, `ILoggerCallback` as core abstractions for callbacks (#933)
- `USE_AMP` for PyTorch AMP usage (#933)

Published by Scitator almost 4 years ago
- `CMCScoreCallback` (#880)
- `BatchTransformCallback` (#862)
- `average_precision` and `mean_average_precision` metrics (#883)
- `MultiLabelAccuracyCallback`, `AveragePrecisionCallback` and `MeanAveragePrecisionCallback` callbacks (#883)
- `Imagenette`, `Imagewoof`, and `Imagewang` datasets (#902)
- `IMetricCallback`, `IBatchMetricCallback`, `ILoaderMetricCallback`, `BatchMetricCallback`, `LoaderMetricCallback` abstractions (#897)
- `HardClusterSampler` inbatch sampler (#888)
- `catalyst.registry` (#883)
- `mean_average_precision` logic merged with `average_precision` (#897)
- `catalyst.contrib.data` merged to `catalyst.data` (#905)
- `ToTensor` was renamed to `ImageToTensor` (#905)
- `TracerCallback` moved to `catalyst.dl` (#905)
- `ControlFlowCallback`, `PeriodicLoaderCallback` moved to `catalyst.core` (#905)

Published by Scitator almost 4 years ago
- `log` parameter to `WandbLogger` (#836)
- `WrapperCallback` and `ControlFlowCallback` (#842)
- `BatchOverfitCallback` (#869)
- `overfit` flag for Config API (#869)
- `InBatchSamplers`: `AllTripletsSampler` and `HardTripletsSampler` (#825)
- `SqueezeAndExcitation` -> `cSE`
- `ChannelSqueezeAndSpatialExcitation` -> `sSE`
- `ConcurrentSpatialAndChannelSqueezeAndChannelExcitation` -> `scSE`
- `_MetricCallback` -> `IMetricCallback`
- `dl.Experiment.process_loaders` -> `dl.Experiment._get_loaders`
- `LRUpdater` became an abstract class (#837)
- `calculate_confusion_matrix_from_arrays` changed params order (#837)
- `dl.Runner.predict_loader` uses `_prepare_inner_state` and cleans `experiment` (#863)
- `toml` added to the dependencies (#872)
- `crc32c` dependency (#872)
- `workflows/deploy_push.yml` failed to push some refs (#864)
- `.dependabot/config.yml` contained invalid details (#781)
- `LanguageModelingDataset` (#841)
- `global_*` counters in `Runner` (#858)
- `PeriodicLoaderCallback` overwrites best state (#867)
- `OneCycleLRWithWarmup` (#851)

Published by Scitator over 4 years ago
- `utils.process_components` moved from `utils.distributed` to `utils.components` (#822)
- `catalyst.core.state.State` merged to `catalyst.core.runner._Runner` (#823) (backward compatibility included)
- `catalyst.core.callback.Callback` now works directly with `catalyst.core.runner._Runner`
- `state_kwargs` renamed to `stage_kwargs`
- `CheckpointCallback`: new argument `load_on_stage_start`, which accepts `str` and `Dict[str, str]` (#797)
- `TracerCallback` (#789)
- `CheckpointCallback`: additional logic for the `load_on_stage_end` argument, which accepts `str` and `Dict[str, str]` (#797)
- `utils.trace_model`: changed logic, the `runner` argument was changed to `predict_fn` (#789)
- `contrib.data` and `contrib.datasets` (#820)
- `catalyst.utils.meters` moved to `catalyst.tools` (#820)
- `catalyst.contrib.utils.tools.tensorboard` moved to `catalyst.contrib.tools` (#820)

Published by Scitator over 4 years ago
Published by Scitator over 4 years ago
We finally organised Experiment-Runner-State-Callback as it should be.
We also have a great documentation update!
Experiment - an abstraction that contains information about the experiment: a model, a criterion, an optimizer, a scheduler, and their hyperparameters. It also contains information about the data and transformations used. In general, the Experiment knows what you would like to run.
Runner - a class that knows how to run an experiment. It contains all the logic of how to run the experiment: stages, epochs and batches.
State - some intermediate storage between Experiment and Runner that saves the current state of the Experiment: model, criterion, optimizer, schedulers, metrics, loaders, callbacks, etc.
Callback - a powerful abstraction that lets you customize your experiment run logic. To give users maximum flexibility and extensibility, we allow callback execution anywhere in the training loop:
```
on_stage_start
    on_epoch_start
        on_loader_start
            on_batch_start
                # ...
            on_batch_end
        on_loader_end
    on_epoch_end
on_stage_end
on_exception
```
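To make the nesting of these events concrete, here is a minimal sketch of how a runner could dispatch them to callbacks. `MiniRunner`, `CountingCallback`, and the loop structure below are illustrative stand-ins, not Catalyst's actual implementation.

```python
# Illustrative sketch of a runner dispatching callback events.
# MiniRunner and CountingCallback are simplified stand-ins, not Catalyst code.

class MiniRunner:
    def __init__(self, callbacks):
        self.callbacks = callbacks

    def _event(self, name):
        # Call the hook (e.g. "on_batch_end") on every callback that defines it
        for cb in self.callbacks:
            getattr(cb, name, lambda runner: None)(self)

    def run(self, stages):
        try:
            for stage in stages:
                self._event("on_stage_start")
                for epoch in range(stage["num_epochs"]):
                    self._event("on_epoch_start")
                    for loader in stage["loaders"]:
                        self._event("on_loader_start")
                        for batch in loader:
                            self._event("on_batch_start")
                            # ... forward pass, loss, backward, etc.
                            self._event("on_batch_end")
                        self._event("on_loader_end")
                    self._event("on_epoch_end")
                self._event("on_stage_end")
        except Exception:
            self._event("on_exception")
            raise


class CountingCallback:
    """Implements only the hook it cares about; all others are no-ops."""

    def __init__(self):
        self.batches = 0

    def on_batch_end(self, runner):
        self.batches += 1


cb = CountingCallback()
runner = MiniRunner([cb])
runner.run([{"num_epochs": 2, "loaders": [[1, 2, 3]]}])
print(cb.batches)  # 2 epochs * 1 loader * 3 batches = 6
```

Because every hook is optional, a callback only implements the events it needs, which is what makes the abstraction composable.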
First of all, just read the docs for the State, Experiment, Runner and Callback abstractions.
Long story short, State saves everything during the experiment and is passed to every Callback in the Experiment through `Runner.run_event`.
For example, in the usual case of implementing some custom metric, all you need to do is:

```python
from catalyst.dl import Callback, State

class MyPureMetric(Callback):
    def on_batch_end(self, state: State):
        """To store batch-based metrics"""
        state.batch_metrics[metric_name] = metric_value

    def on_loader_end(self, state: State):
        """To store loader-based metrics"""
        state.loader_metrics[metric_name] = metric_value

    def on_epoch_end(self, state: State):
        """To store epoch-based metrics"""
        state.epoch_metrics[metric_name] = metric_value
```

Here `metric_name` is a string key and `metric_value` is the computed value.
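As a concrete instance of this pattern, here is a hedged sketch of an accuracy-style metric. `FakeState` is a minimal stand-in with plain dicts (the real `catalyst.dl.State` carries far more), and predictions/targets are passed in directly instead of being read from the state, purely for illustration.

```python
# Minimal stand-in for catalyst.dl.State: just the three metric dicts.
class FakeState:
    def __init__(self):
        self.batch_metrics = {}
        self.loader_metrics = {}
        self.epoch_metrics = {}


class AccuracyMetric:
    """Accumulates per-batch accuracy and averages it over the loader."""

    def __init__(self):
        self._correct = 0
        self._total = 0

    def on_batch_end(self, state, preds, targets):
        # In Catalyst these would come from the state; passed in here for clarity.
        correct = sum(p == t for p, t in zip(preds, targets))
        self._correct += correct
        self._total += len(targets)
        state.batch_metrics["accuracy"] = correct / len(targets)

    def on_loader_end(self, state):
        state.loader_metrics["accuracy"] = self._correct / self._total


state = FakeState()
metric = AccuracyMetric()
metric.on_batch_end(state, preds=[1, 0, 1], targets=[1, 1, 1])  # 2/3 correct
metric.on_batch_end(state, preds=[0, 0], targets=[0, 0])        # 2/2 correct
metric.on_loader_end(state)
print(state.loader_metrics["accuracy"])  # 4/5 = 0.8
```

The same accumulate-then-average shape carries over to `on_epoch_end` for epoch-level metrics.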
Many more Catalyst concepts, tutorials and docs are coming in the near future.
`data time per batch`, `model time per batch`, `samples per second`
When working with Catalyst.DL, it's better to import everything in a straightforward way, like:

```python
from catalyst.dl import SomethingGreat
from catalyst.dl import utils

utils.do_something_cool()
```
- `CriterionAggregatorCallback` moved to `catalyst.contrib` and will be deprecated in the 20.04 release.
- `SchedulerCallback`: `reduce_metric` was renamed to `reduced_metric` :)
- During the 20.03 -> 20.04 releases, we are going to deprecate all `SomeContribRunner` classes and transfer them to `SomeContribLogger` as a more general-purpose solution.
Published by Scitator almost 5 years ago
NeurIPS 2019: Learn to Move - Walk Around, 2nd place
Published by Scitator about 5 years ago
NeurIPS 2019: Recursion Cellular Image Classification
Published by Scitator about 5 years ago
We are happy to announce the MLComp release: a distributed DAG (directed acyclic graph) framework for machine learning with UI. Powered by Catalyst.Team.
We also release a detailed classification tutorial and a comprehensive classification pipeline.
Slowly but surely, more and more challenges are powered by Catalyst: https://github.com/catalyst-team/catalyst/pull/302.
We also updated the license to Apache 2.0, started the Patreon and even launched the catalyst-info repo!
And finally, we have integrated wandb into Catalyst, both DL & RL!
- `parallel-gpu-run` and `catalyst-rl-run`: https://github.com/catalyst-team/catalyst/pull/288
- `state_dict` param support for all contrib encoders: https://github.com/catalyst-team/catalyst/pull/292
- `requires_grad` logic update: https://github.com/catalyst-team/catalyst/pull/346
Published by Scitator over 5 years ago
- `from catalyst.dl import utils`
- `LossCallback` replaced with `CriterionCallback`