- `baselogdir` with timestamp and config hash (https://github.com/catalyst-team/catalyst/pull/204/commits/a141e4c95dde81aa15d0eb0c6daaaf8648df8e20)
- `--version` feature (https://github.com/catalyst-team/catalyst/pull/188)
- `prepare_*` methods that return something are renamed to `get_*`; `Experiment._prepare_logdir` is renamed to `Experiment._get_logdir`
- `UtilsFactory.prepare_models` is renamed to `UtilsFactory.process_components` and now supports a PyTorch model, criterion, optimizer and scheduler (see the sketch after this list)
- `catalyst.contrib.models.segmentation.models`: `ResnetUnet` and `ResnetLinknet`
- `per_gpu_batch_size` is renamed to `per_gpu_scaling` and now affects both `batch_size` and `num_workers`
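A minimal sketch of the renamed helper mentioned above; the import path, keyword names, and return tuple here are assumptions for illustration, not the documented API:

```python
import torch
import torch.nn as nn

from catalyst.dl.utils import UtilsFactory  # import path is an assumption

model = nn.Linear(10, 2)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters())
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10)

# Hypothetical call shape based on the rename above: the helper now
# handles the full PyTorch component set instead of just models.
model, criterion, optimizer, scheduler, device = \
    UtilsFactory.process_components(
        model=model,
        criterion=criterion,
        optimizer=optimizer,
        scheduler=scheduler,
    )
```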
The new segmentation models are importable directly:

```python
from catalyst.contrib.models.segmentation import \
    Unet, Linknet, FPNUnet, PSPnet, \
    ResnetUnet, ResnetLinknet, ResnetFPNUnet, ResnetPSPnet
```
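A hypothetical usage sketch for the new models; the constructor argument (`num_classes`) and the 3-channel input shape are assumptions, so check the actual signatures in `catalyst.contrib.models.segmentation`:

```python
import torch

from catalyst.contrib.models.segmentation import ResnetUnet

# Hypothetical: num_classes is an assumed constructor argument.
model = ResnetUnet(num_classes=1)

# Assumed 3-channel NCHW input for the ResNet encoder.
logits = model(torch.randn(1, 3, 256, 256))
```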
`fp16` support in `SupervisedRunner.train`:

```python
runner = SupervisedRunner()
runner.train(
    model=model,
    criterion=criterion,
    optimizer=optimizer,
    scheduler=scheduler,
    fp16=True,
    ...)
```
or
```python
runner = SupervisedRunner()
runner.train(
    model=model,
    criterion=criterion,
    optimizer=optimizer,
    scheduler=scheduler,
    fp16={"opt_level": "O1"},  # and other apex.amp.initialize kwargs
    ...)
```
Distributed training example with `catalyst-dl run`:

```bash
#!/usr/bin/env bash
export MASTER_ADDR="127.0.0.1"
export MASTER_PORT=29500
export WORLD_SIZE=2  # number of gpus

RANK=0 LOCAL_RANK=0 catalyst-dl run --config=config.yml --distributed_params/rank=0:int &  # gpu 0
sleep 5
RANK=1 LOCAL_RANK=1 catalyst-dl run --config=config.yml --distributed_params/rank=1:int &  # gpu 1
```
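As a reading aid, `MASTER_ADDR`, `MASTER_PORT`, `WORLD_SIZE` and `RANK` are the standard environment variables for PyTorch's `env://` rendezvous; a minimal sketch of what each launched worker effectively does at startup (illustrative, not Catalyst's internals):

```python
import os

import torch
import torch.distributed as dist

# The default "env://" init method reads MASTER_ADDR, MASTER_PORT,
# WORLD_SIZE and RANK from the environment set by the script above.
dist.init_process_group(backend="nccl", init_method="env://")

# LOCAL_RANK picks the GPU for this worker on the current node.
torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))
```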