A library for ML benchmarking. It's powerful.
Requires pytorch-metric-learning 0.9.92, which in turn requires PyTorch 1.6.
`umap.UMAP` is available under the "visualizer" type, so you can do: `--tester~APPLY~2 {visualizer: {UMAP: {}}}`. Plots will be saved in a `saved_plots` folder per split. When evaluating an ensemble, the plots will be saved in `meta_logs/saved_plots`.
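For reference, a config-file form of the flag above might look like the sketch below. The tester class name (`GlobalEmbeddingSpaceTester`) and the exact nesting are assumptions about how the tester section is laid out, and `n_neighbors` is just an example `umap.UMAP` keyword argument:

```yaml
# Hypothetical config-file equivalent of --tester~APPLY~2 {visualizer: {UMAP: {}}}
tester:
  GlobalEmbeddingSpaceTester:
    visualizer:
      UMAP:
        n_neighbors: 15  # any umap.UMAP constructor argument could go here
```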
Added the `split_to_aggregate` option for the `aggregator` config:

```yaml
aggregator:
  MeanAggregator:
    split_to_aggregate: val
```
The 0th model is saved as the "best" model before training begins, so that a "best" model always exists, even if the 0th model is never surpassed during training.
Updated the loss factory to be compatible with pytorch-metric-learning 0.9.92, so that nested objects (distances, reducers, weight regularizers, embedding regularizers, and weight init functions) can be specified in the config for the loss function.
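As an illustration of what this nesting enables, a loss config could look roughly like the sketch below. The `distance` and `reducer` keys mirror the constructor arguments of pytorch-metric-learning 0.9.92 losses, but the surrounding `loss_funcs`/`metric_loss` structure and the specific values are assumptions for illustration:

```yaml
# Hypothetical loss config with nested distance and reducer objects
loss_funcs:
  metric_loss:
    TripletMarginLoss:
      margin: 0.1
      distance:
        CosineSimilarity: {}   # nested distance object
      reducer:
        ThresholdReducer:      # nested reducer object
          low: 0
          high: 0.3
```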
Added the `api_parser` config option, which is null by default. With the default setting, `BaseAPIParser` is used, unless you use a custom trainer, in which case it will try to use `API<name_of_your_trainer>` and fall back to `BaseAPIParser` if that doesn't exist. If you set the `api_parser` option, then that parser will be used:

```yaml
api_parser:
  your_custom_parser:
```
Changed the default folder locations in run.py. Previously they were set to `/content`, which wasn't a nice experience for first-time users.
Added the `log_data_to_tensorboard` config option, which is True by default. Set it to False if you don't want to log data to TensorBoard; this can be useful if your disk I/O is slow.
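For example, to turn off TensorBoard data logging (assuming the option sits at the top level of the config, which is a guess about its placement):

```yaml
# Hypothetical: disable logging of data to TensorBoard
log_data_to_tensorboard: False
```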