- fixed an issue with the log output height during training
- fixed an issue where the device_id config was ignored when using DDP (see the config sketch below)
- fixed an issue where training stopped after one "epoch" when the unit is "step" and the total number of iterations exceeds one epoch (see the config sketch below)
- added PyTorch-based warmup and decay schedulers (a sketch of the underlying PyTorch APIs follows below)
- moved the PyTorch loss, optimizer, and scheduler settings from the learner section to the trainer section of the config (see the before/after sketch below)
- resuming a run no longer overwrites newly set configurations (see the merge sketch below)
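
A minimal sketch of the two config fixes above, written as a plain dict. Only `device_id` and the `"step"` unit come from this changelog; the other key names (`unit`, `total_iters`) and the exact section layout are assumptions about the project's schema:

```python
# Illustrative config only; key names other than device_id and the
# "step" unit are assumptions, not the project's confirmed schema.
config = {
    "trainer": {
        # device_id is now respected under DDP: each process uses this
        # device instead of silently falling back to a default
        "device_id": 0,
        # with unit "step", training now runs for the full iteration
        # budget even when it spans more than one pass over the dataset
        "unit": "step",
        "total_iters": 10_000,  # hypothetical key name
    },
}
```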
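
The new warmup and decay schedulers are described as PyTorch-based; a minimal sketch of the kind of composition this likely wraps, using only stock `torch.optim.lr_scheduler` classes (the model and hyperparameters are placeholders, and this is not necessarily the library's own API):

```python
import torch
from torch.optim.lr_scheduler import LinearLR, CosineAnnealingLR, SequentialLR

model = torch.nn.Linear(16, 4)  # placeholder model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

# linear warmup from 10% of the base LR over the first 500 steps
warmup = LinearLR(optimizer, start_factor=0.1, total_iters=500)
# cosine decay over the remaining steps
decay = CosineAnnealingLR(optimizer, T_max=9_500)
# chain them: warmup first, then switch to decay at step 500
scheduler = SequentialLR(optimizer, schedulers=[warmup, decay], milestones=[500])

for step in range(10_000):
    optimizer.step()   # backward pass omitted in this sketch
    scheduler.step()
```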
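
A before/after sketch of the relocated settings; the section names `learner` and `trainer` come from this changelog, but the nested keys and values are illustrative assumptions:

```python
# before: loss/optimizer/scheduler lived under the learner section
old_config = {
    "learner": {
        "loss": "cross_entropy",
        "optimizer": {"name": "adamw", "lr": 1e-3},
        "scheduler": {"name": "cosine"},
    },
}

# after: the same keys now live under the trainer section
new_config = {
    "trainer": {
        "loss": "cross_entropy",
        "optimizer": {"name": "adamw", "lr": 1e-3},
        "scheduler": {"name": "cosine"},
    },
}
```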
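
A sketch of the resume behavior this implies: options set explicitly for the resumed run take precedence over values restored from the checkpoint. The helper and key names here are hypothetical, not the project's actual code:

```python
def merge_on_resume(checkpoint_cfg: dict, new_cfg: dict) -> dict:
    """Hypothetical merge: start from the checkpointed config, but keep
    any option the user set explicitly for the resumed run."""
    merged = dict(checkpoint_cfg)
    merged.update(new_cfg)  # newly set values win; they are not overwritten
    return merged

# e.g. lowering the learning rate for a resumed run now sticks:
resumed = merge_on_resume({"lr": 1e-3, "unit": "step"}, {"lr": 1e-4})
assert resumed["lr"] == 1e-4
```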