* Renamed an argument in `Trainer.fit()`: `save_every_ckpt` -> `save_ckpt_every_k_epochs`
* Added `params_sharded` and `opt_state_sharded` in `Trainer.__init__()`, for memory saving (see the sketch below).
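A minimal sketch of both changes, assuming a `Trainer` set up as in the project's examples; every argument other than the three named above is elided or assumed:

```python
from redco import Trainer

# Sketch only: `...` marks elided/assumed arguments and values.
trainer = Trainer(
    ...,                    # usual arguments: apply_fn, loss_fn, optimizer, etc.
    params_sharded=...,     # new in __init__(); exact expected value is an assumption here
    opt_state_sharded=...,  # new in __init__(); exact expected value is an assumption here
)

trainer.fit(
    ...,                         # usual arguments: train_examples, n_epochs, etc.
    save_ckpt_every_k_epochs=2,  # renamed from `save_every_ckpt`
)
```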
0.4.22
* Simplified argument names for the random key in `loss_fn()` and `pred_fn()` (see the sketch below):
  * `train_rng`/`pred_rng` -> `rng`
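A sketch of the renamed parameter; the parameters surrounding `rng` follow the patterns in the project's examples and are assumptions here:

```python
def loss_fn(rng, state, params, batch, is_training):
    # `rng` replaces the old `train_rng` argument; other parameters assumed.
    ...

def pred_fn(rng, params, batch):
    # `rng` replaces the old `pred_rng` argument; other parameters assumed.
    ...
```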
0.4.21
* Accelerated inference for the multi-host, purely data-parallel case.
* Added an optional argument `train_step_fn` in `Trainer` for fully customizing every training step, e.g., per-sample gradient noising for data-private training (see the sketch below).
* Slight argument name change in `Deployer.get_lr_schedule_fn()`: `warmup_rate` -> `warmup_ratio`
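A hypothetical sketch of plugging a custom step in via `train_step_fn`, plus the renamed warm-up argument; the custom step's signature, the `deployer` setup, and the schedule helper's other arguments are assumptions, not the library's documented API:

```python
from redco import Trainer

def noisy_train_step(rng, state, batch):  # hypothetical signature
    # e.g., compute per-sample gradients, add calibrated noise, then update --
    # the data-private training use case mentioned above.
    ...

trainer = Trainer(
    ...,                             # usual Trainer arguments (assumed)
    train_step_fn=noisy_train_step,  # new: fully customize each training step
)

# Renamed warm-up argument on the schedule helper:
lr_schedule_fn = deployer.get_lr_schedule_fn(
    ...,               # other schedule arguments unchanged (assumed)
    warmup_ratio=0.1,  # renamed from `warmup_rate`
)
```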
0.4.20
* Updated data example type support -- `examples` can now be a `list` of any element type, e.g., `examples=[str, str, str, ...]` or `examples=[dict, dict, dict, ...]` (see the sketch below).
* Updated mixed-precision training -- enabled by setting `compute_dtype`, e.g., `Trainer(compute_dtype=jnp.bfloat16)`.
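Two short sketches of these updates; the data values and the elided `Trainer` arguments are assumptions:

```python
import jax.numpy as jnp
from redco import Trainer

# Examples can now be a flat list of any element type:
examples = ["first text", "second text", "third text"]      # list of str
examples = [{"src": "hello", "tgt": "bonjour"},
            {"src": "bye", "tgt": "salut"}]                  # list of dict

# Mixed-precision training by setting compute_dtype:
trainer = Trainer(
    ...,                         # usual Trainer arguments (assumed)
    compute_dtype=jnp.bfloat16,  # computation dtype for mixed-precision training
)
```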