Composer


0.7.1

```bash
pip install --upgrade mosaicml==0.7.1
```

Alternatively, install Composer with Conda:

```bash
conda install -c mosaicml mosaicml=0.7.1
```


Bug Fixes

* Upgraded to `wandb>=0.12.17` to fix an incompatibility with `protobuf>=4` (https://github.com/wandb/client/pull/3709)

Changelog

https://github.com/mosaicml/composer/compare/v0.7.0...v0.7.1

0.7.0

```bash
pip install --upgrade mosaicml==0.7.0
```

Alternatively, install Composer with Conda:

```bash
conda install -c mosaicml mosaicml=0.7.0
```


New Features

1. **🏎️ FFCV Integration**

Composer supports [FFCV](https://ffcv.io/), a fast dataloader for image datasets. We've found FFCV can speed up ResNet-56 training by 16%, in addition to existing speed-ups already supported by Composer! It's easy to use FFCV with any existing image dataset:

```python
import ffcv
from ffcv.fields.decoders import IntDecoder, SimpleRGBImageDecoder
from torchvision.datasets import ImageFolder

from composer import Trainer
from composer.datasets.ffcv_utils import write_ffcv_dataset, ffcv_monkey_patches

# Convert the dataset to FFCV format
# This step needs to be done only once per dataset
dataset = ImageFolder(...)
ffcv_dataset_path = "my_ffcv_dataset.ffcv"
write_ffcv_dataset(dataset=dataset, write_path=ffcv_dataset_path)

# In FFCV v0.0.3, len(dataloader) is expensive. Fix that via a monkeypatch
ffcv_monkey_patches()

# Construct the train dataloader
train_dl = ffcv.Loader(
    ffcv_dataset_path,
    ...
)

# Construct the trainer
trainer = Trainer(
    train_dataloader=train_dl,
)

# Train using FFCV!
trainer.fit()
```


See our notebook on [training with FFCV](https://github.com/mosaicml/composer/blob/v0.7.0/notebooks/composer_with_ffcv_dataloaders.ipynb) for a full example.

1. **✅ Autoresume from Checkpoints**

When setting `autoresume=True`, Composer can automatically resume from an existing checkpoint before starting a new training run. Specifically, the trainer will look in the `save_folder` (and any loggers that save artifacts) for the latest checkpoint; if none is found, then it'll start from the beginning.

This feature does not require a different entrypoint to distinguish between starting a new training run or automatically resuming from an existing one, making it easy to use Composer on spot preemptable cloud instances. Simply set `autoresume=True`, point the instance to your training script, and Composer will handle the rest!


```python
from composer import Trainer

# When using `autoresume`, it is required to specify the
# `run_name`, so Composer will know which training run to
# resume
run_name = "my_autoresume_training_run"

trainer = Trainer(
    ...,
    run_name=run_name,
    # Specify where to save checkpoints
    save_folder="./my_autoresume_training_run",
    autoresume=True,
)

# Train! Composer will handle loading an existing
# checkpoint or starting a new training run
trainer.fit()
```


See the [Trainer API Reference](https://docs.mosaicml.com/en/v0.7.0/api_reference/composer.trainer.trainer.html#composer.trainer.trainer.Trainer) for more information.

1. **♻️ Reuse the Trainer**

Want to train on multiple dataloaders sequentially? Each trainer object now supports multiple calls to [`Trainer.fit()`](https://docs.mosaicml.com/en/v0.7.0/api_reference/composer.trainer.trainer.html#composer.trainer.trainer.Trainer.fit), so you can continue training an existing model on a new dataloader, with new schedulers, all while using the same model and trainer object.

For example:

```python
from torch.utils.data import DataLoader

from composer import Trainer

train_dl_1 = DataLoader(...)
trainer = Trainer(
    model=model,
    max_duration='5ep',
    train_dataloader=train_dl_1,
)

# Train once!
trainer.fit()

# Train again with a new dataloader for another 5 epochs
train_dl_2 = DataLoader(...)
trainer.fit(
    train_dataloader=train_dl_2,
    duration='5ep',
)
```


See the [Trainer API Reference](https://docs.mosaicml.com/en/v0.7.0/api_reference/composer.trainer.trainer.html#composer.trainer.trainer.Trainer.fit) for more information.

1. **⚖️ Eval or Predict Only? No Problem**

You can evaluate or predict on an existing model, without having to supply a train dataloader or training duration argument -- they're now optional.

```python
import torchmetrics
from torch.utils.data import DataLoader

from composer import Trainer

# Construct the trainer
trainer = Trainer(model=model)

# Evaluate!
eval_dl = DataLoader(...)
trainer.eval(
    dataloader=eval_dl,
    metrics=torchmetrics.Accuracy(),
)

# Examine evaluation metrics
print("Eval metrics", trainer.state.metrics['eval'])

# Or, predict!
predict_dl = DataLoader(...)
trainer.predict(dataloader=predict_dl)
```


See the [Trainer API Reference](https://docs.mosaicml.com/en/v0.7.0/api_reference/composer.trainer.trainer.html#composer.trainer.trainer.Trainer.eval) for more information.


1. **🛑 Early Stopper and Threshold Stopper Callbacks**

The [Early Stopper](https://docs.mosaicml.com/en/v0.7.0/api_reference/composer.callbacks.early_stopper.html#composer.callbacks.early_stopper.EarlyStopper) and [Threshold Stopper](https://docs.mosaicml.com/en/v0.7.0/api_reference/composer.callbacks.threshold_stopper.html#composer.callbacks.threshold_stopper.ThresholdStopper) callbacks end training early when the target metrics are met:

```python
from composer.callbacks.early_stopper import EarlyStopper
from torchmetrics.classification.accuracy import Accuracy

# Construct the callback
early_stopper = EarlyStopper(
    monitor="Accuracy",
    dataloader_label="eval",
    patience=2,
)

# Construct the trainer
trainer = Trainer(
    ...,
    callbacks=early_stopper,
    max_duration="100ep",
)

# Train!
# Training will end early if the accuracy does not improve
# over two epochs
trainer.fit()
```

1. **🪵 Load Checkpoints from Loggers**

It's now possible to restore checkpoints from loggers that support file artifacts (such as the [Weights & Biases Logger](https://docs.mosaicml.com/en/v0.7.0/api_reference/composer.loggers.wandb_logger.html#composer.loggers.wandb_logger.WandBLogger)). No need to download your checkpoints manually anymore.

```python
from composer import Trainer
from composer.loggers import WandBLogger

# Configure the W&B Logger
wandb_logger = WandBLogger(
    # Set to True to capture artifacts, like checkpoints
    log_artifacts=True,
    init_params={
        'project': 'my-wandb-project-name',
    },
)

# Then, to train and save checkpoints to W&B:
trainer = Trainer(
    ...,
    loggers=wandb_logger,
    save_folder="/tmp/checkpoints",
    save_interval="1ep",
    save_artifact_name="epoch{epoch}.pt",
)

# Finally, to load checkpoints from W&B:
trainer = Trainer(
    ...,
    load_object_store=wandb_logger,
    load_path="epoch1.pt:latest",
)
```



1. **⌛ Wall Clock, Evaluation, and Prediction Time Tracking**

The [timestamp](https://docs.mosaicml.com/en/v0.7.0/api_reference/composer.core.time.html#composer.core.time.Timestamp) object measures wall clock time via three new fields: `total_wct`, `epoch_wct`, and `batch_wct`. These fields track the total elapsed training time, the elapsed training time of the current epoch, and the time to train the last batch. Read the wall clock time via a callback:

```python
from composer import Callback, Trainer

class MyCallback(Callback):

    def batch_end(self, state, logger):
        print(f"Total wct: {state.timestamp.total_wct}")
        print(f"Epoch wct: {state.timestamp.epoch_wct}")
        print(f"Batch wct: {state.timestamp.batch_wct}")

# Construct the trainer with this callback
trainer = Trainer(
    ...,
    callbacks=MyCallback(),
)

# Train!
trainer.fit()
```


In addition, the training [state](https://docs.mosaicml.com/en/v0.7.0/api_reference/composer.core.state.html#composer.core.state.State) object has two new fields for tracking time during evaluation and prediction: [`eval_timestamp`](https://docs.mosaicml.com/en/v0.7.0/api_reference/composer.core.state.html#composer.core.state.State.eval_timestamp) and [`predict_timestamp`](https://docs.mosaicml.com/en/v0.7.0/api_reference/composer.core.state.html#composer.core.state.State.predict_timestamp). These fields, just like any others on the state object, are accessible to algorithms, callbacks, and loggers.
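
For example, a minimal callback sketch (the class name here is ours) that reads the evaluation timestamp after each eval batch:

```python
from composer import Callback

class EvalProgressPrinter(Callback):

    def eval_batch_end(self, state, logger):
        # `eval_timestamp` tracks progress within the current evaluation run
        print(f"Eval batch: {int(state.eval_timestamp.batch)}")
```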

1. **Training DeepLabv3+ on the ADE20k Dataset**

[DeepLabv3+](https://arxiv.org/abs/1802.02611) is a common baseline model for semantic segmentation tasks. We provide a `ComposerModel` implementation for DeepLabv3+ built using [torchvision](https://pytorch.org/vision/stable/index.html) and [mmsegmentation](https://github.com/open-mmlab/mmsegmentation) for the backbone and head, respectively.

We found the DeepLabv3+ baseline can be significantly improved using the [new PyTorch pre-trained weights](https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/). Additional gains are made through a hyperparameter sweep.

We benchmark our DeepLabv3+ model on a single 8xA100 machine using [ADE20k](https://arxiv.org/abs/1608.05442), a popular semantic segmentation dataset. The final results on ADE20k are:

| Model | mIoU | Time-to-Train |
| ---------------------- | -------------- | ------------- |
| Unoptimized DeepLabv3+ | 44.17 +/- 0.14 | 6.39 hr |
| Optimized DeepLabv3+ | 45.78 +/- 0.26 | 4.67 hr |

Check out [our documentation](https://docs.mosaicml.com/en/v0.7.0/model_cards/deeplabv3.html) for more info!

API Changes

1. **🍪 Additional Batch Type Support**

Composer v0.7.0 removed the `BatchDict` and `BatchPair` types, and now supports any batch type. We're updating our algorithms to support batches of custom formats.
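
For example, a dataloader may now yield batches in any format, such as a dictionary, as long as the model (and any algorithms in use) knows how to unpack it. A minimal sketch with an illustrative model class:

```python
from composer.models import ComposerModel

class MyModel(ComposerModel):

    def __init__(self, module, criterion):
        super().__init__()
        self.module = module        # any torch.nn.Module
        self.criterion = criterion  # e.g. torch.nn.CrossEntropyLoss()

    def forward(self, batch):
        # The batch arrives exactly as the dataloader yields it;
        # here it is assumed to be a dict with "inputs" and "targets"
        return self.module(batch["inputs"])

    def loss(self, outputs, batch):
        return self.criterion(outputs, batch["targets"])
```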

1. **🏎️ Simplified Profiling Arguments**

To simplify the Trainer constructor, the profiling arguments were replaced with a single `profiler` argument, which takes an instance of the [Profiler](https://docs.mosaicml.com/en/v0.7.0/api_reference/composer.profiler.profiler.html#composer.profiler.profiler.Profiler).

```python
from composer.trainer import Trainer
from composer.profiler import Profiler, JSONTraceHandler, cyclic_schedule

# composer_trace_dir and torch_trace_dir are paths to output folders
trainer = Trainer(
    ...,
    profiler=Profiler(
        trace_handlers=JSONTraceHandler(
            folder=composer_trace_dir,
            overwrite=True,
        ),
        schedule=cyclic_schedule(
            wait=0,
            warmup=1,
            active=4,
            repeat=1,
        ),
        torch_prof_folder=torch_trace_dir,
        torch_prof_overwrite=True,
        ...,
    ),
)
```


See the [profiling guide](https://docs.mosaicml.com/en/v0.7.0/trainer/performance_tutorials/profiling.html) for additional information.

1. **🚪 `Event.FIT_END` and `Engine.close()`**

With support for reusing the trainer for multiple calls to [`Trainer.fit`](https://docs.mosaicml.com/en/v0.7.0/api_reference/composer.trainer.trainer.html#composer.trainer.trainer.Trainer.fit), callbacks and loggers are no longer closed at the end of a training run.

Instead, [`Event.FIT_END`](https://docs.mosaicml.com/en/v0.7.0/api_reference/composer.core.event.html#composer.core.event.Event.FIT_END) was added, which can be used by callbacks for anything that should happen at the end of _each_ invocation of [`Trainer.fit`](https://docs.mosaicml.com/en/v0.7.0/api_reference/composer.trainer.trainer.html#composer.trainer.trainer.Trainer.fit). See the [Event Guide](https://docs.mosaicml.com/en/v0.7.0/trainer/events.html) for additional information.

Finally, whenever the trainer is garbage collected or [`Trainer.close`](https://docs.mosaicml.com/en/v0.7.0/api_reference/composer.trainer.trainer.html#composer.trainer.trainer.Trainer.close) is called, [`Callback.close`](https://docs.mosaicml.com/en/v0.7.0/api_reference/composer.core.callback.html#composer.core.callback.Callback.close) and [`Callback.post_close`](https://docs.mosaicml.com/en/v0.7.0/api_reference/composer.core.callback.html#composer.core.callback.Callback.post_close) are invoked, ensuring that they will be called only once per trainer.
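
A minimal sketch (the callback name is ours) showing where per-fit logic versus one-time teardown now belongs:

```python
from composer import Callback

class RunLifecyclePrinter(Callback):

    def fit_end(self, state, logger):
        # Runs at the end of *each* Trainer.fit() invocation
        print(f"Finished fit at batch {int(state.timestamp.batch)}")

    def close(self, state, logger):
        # Runs once, when the trainer is closed or garbage collected
        print("Trainer closed")
```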

1. **⌛ `State.timestamp` replaces `State.timer`**

Removed `State.timer` and replaced it with [`State.timestamp`](https://docs.mosaicml.com/en/v0.7.0/api_reference/composer.core.state.html#composer.core.state.State.timestamp), which is now a static [Timestamp](https://docs.mosaicml.com/en/v0.7.0/api_reference/composer.core.time.html#composer.core.time.Timestamp) object. The training loop replaces `State.timestamp` with a new object on each batch. See the [Time Guide](https://docs.mosaicml.com/en/v0.7.0/trainer/time.html#tracking-time) for additional information.
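
A migration sketch:

```python
# Before (v0.6.x): the timer was mutated in place
# current_batch = int(state.timer.batch)

# After (v0.7.0): state.timestamp is a static snapshot,
# replaced with a new object on each batch
current_batch = int(state.timestamp.batch)
```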

1. **💿 Data Configuration**

Two new properties, [`State.dataloader`](https://docs.mosaicml.com/en/v0.7.0/api_reference/composer.core.state.html#composer.core.state.State.dataloader) and [`State.dataloader_label`](https://docs.mosaicml.com/en/v0.7.0/api_reference/composer.core.state.html#composer.core.state.State.dataloader_label), were added to the state. These properties track the currently active dataloader (e.g. the training dataloader when training; the evaluation dataloader when evaluating).

In addition, `State.subset_num_batches` was renamed to [`State.dataloader_len`](https://docs.mosaicml.com/en/v0.7.0/api_reference/composer.core.state.html#composer.core.state.State.dataloader_len) to reflect the actual dataloader length that will be used for training and evaluation.

A helper method [`State.set_dataloader`](https://docs.mosaicml.com/en/v0.7.0/api_reference/composer.core.state.html#composer.core.state.State.set_dataloader) was added to ensure the dataloader properties are updated correctly.
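
For example, a callback sketch (the class name is ours) that inspects the active dataloader:

```python
from composer import Callback

class ActiveDataloaderPrinter(Callback):

    def epoch_start(self, state, logger):
        # "train" during training; the evaluator label during evaluation
        print(f"Active dataloader: {state.dataloader_label}")
        # dataloader_len may be None for unsized dataloaders
        if state.dataloader_len is not None:
            print(f"Batches per epoch: {int(state.dataloader_len)}")
```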


1. **⚖️ Removed the Deprecated Scale Schedule Algorithm**

The scale schedule algorithm class, deprecated in v0.4.0, has been removed. Instead, use the `scale_schedule_ratio` argument when constructing the trainer.

```python
from composer import Trainer
from composer.optim.scheduler import MultiStepScheduler

trainer = Trainer(
    ...,
    max_duration="20ep",
    schedulers=MultiStepScheduler(milestones=["10ep", "16ep"]),
    scale_schedule_ratio=0.5,
)
```


See the [Scale Schedule Method Card](https://docs.mosaicml.com/en/v0.7.0/method_cards/scale_schedule.html) for additional info.

Bug Fixes

* Fixed a bug where `Event.FIT_END` was not being called in the training loop (#1054)
* Fixed a bug where evaluation would not run at the end of training unless it aligned with the ``eval_interval`` (#1045)
* Fixed a bug where models trained with SWA could not be used with checkpoints (#1015)
* Fixed a bug where the [Speed Monitor](https://docs.mosaicml.com/en/v0.7.0/api_reference/composer.callbacks.speed_monitor.html#composer.callbacks.speed_monitor.SpeedMonitor) included validation time in the training throughput measurements, resulting in slower reported throughput measurements (#1053)
* Fixed a bug to make the [ComposerClassifier](https://docs.mosaicml.com/en/v0.7.0/api_reference/composer.models.tasks.classification.html#composer.models.tasks.classification.ComposerClassifier) compatible with TorchScript (#1036)
* Fixed a bug where fractional [Time Objects](https://docs.mosaicml.com/en/v0.7.0/api_reference/composer.core.time.html#composer.core.time.Time) were being truncated instead of raising an exception (#1038)
* Changed the defaults for [Selective Backprop](https://docs.mosaicml.com/en/v0.7.0/method_cards/selective_backprop.html) to not scale inputs, so the algorithm can work with non-vision workloads (#896)


New Contributors
* ofirpress made their first contribution in https://github.com/mosaicml/composer/pull/955
* QiyaoWei made their first contribution in https://github.com/mosaicml/composer/pull/866
* pavithranrao made their first contribution in https://github.com/mosaicml/composer/pull/879


Changelog

https://github.com/mosaicml/composer/compare/v0.6.1...v0.7.0

0.6.1

Go ahead and upgrade; it's fully backwards compatible with Composer v0.6.0.

Install via `pip`:

```bash
pip install --upgrade mosaicml==0.6.1
```


Alternatively, install Composer with Conda:

```bash
conda install -c mosaicml mosaicml=0.6.1
```


What's New?

1. **📎 Adaptive Gradient Clipping (AGC)**

[Adaptive Gradient Clipping (AGC)](https://docs.mosaicml.com/en/v0.6.1/method_cards/agc.html) clips gradients based on the ratio of the gradient norm to the weight norm. This technique helps stabilize training with large batch sizes, especially for models without batchnorm layers.
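
A minimal usage sketch, assuming AGC is exposed under `composer.algorithms` (see the method card for the exact API):

```python
from composer import Trainer
from composer.algorithms import AGC

trainer = Trainer(
    ...,
    # The threshold value here is illustrative
    algorithms=[AGC(clipping_threshold=0.01)],
)
trainer.fit()
```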

1. **🚚 Exponential Moving Average (EMA)**

[Exponential Moving Average (EMA)](https://docs.mosaicml.com/en/v0.6.1/method_cards/ema.html) is a model averaging technique that maintains an exponentially weighted moving average of the model parameters during training. The averaged parameters are used for model evaluation. EMA typically results in less noisy validation metrics over the course of training, and sometimes increased generalization.
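
A minimal usage sketch, assuming EMA is exposed under `composer.algorithms` with a `half_life` argument (see the method card for the exact API):

```python
from composer import Trainer
from composer.algorithms import EMA

trainer = Trainer(
    ...,
    # The half-life value here is illustrative
    algorithms=[EMA(half_life="50ba")],
)
trainer.fit()
```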

1. **🪵 Logger is available in the ComposerModel**

The [Logger](https://docs.mosaicml.com/en/v0.6.1/trainer/logging.html) is bound to the [ComposerModel](https://docs.mosaicml.com/en/v0.6.1/composer_model.html) via the ``self.logger`` attribute. It is available during training in all methods other than `__init__`.

For example, to log hidden activation:

```python
import torch.nn.functional as F

from composer.models import ComposerModel

class Net(ComposerModel):

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
        if self.logger:
            self.logger.data_batch({
                "hidden_activation_norm": x.norm(2).item(),
            })
        x = x.view(-1, 320)
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = self.fc2(x)
        return F.log_softmax(x, dim=1)
```


1. **🐛 Environment Collection Script**

Composer v0.6.1 includes an [environment collection script](https://docs.mosaicml.com/en/v0.6.1/api_reference/composer.utils.collect_env.html#module-composer.utils.collect_env) which generates a printout of your system configuration and python environment. If you run into a bug, the results from this script will help us debug the issue and fix Composer.

To collect your environment information:

```bash
$ pip install mosaicml  # if composer is not already installed
$ composer_collect_env
```


Then, include the output in your [GitHub Issue](https://github.com/mosaicml/composer/issues/new?assignees=&labels=bug&template=---bug-report.md&title=).

What's Improved?

1. **📜 TorchScriptable Algorithms**

[BlurPool](https://docs.mosaicml.com/en/v0.6.1/method_cards/blurpool.html), [Ghost BatchNorm](https://docs.mosaicml.com/en/v0.6.1/method_cards/ghost_batchnorm.html), and [Stochastic Depth](https://docs.mosaicml.com/en/v0.6.1/method_cards/stochastic_depth.html) are now TorchScript-compatible. Try exporting your models with these algorithms enabled!
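
For instance, a sketch of exporting a trained model with TorchScript (the path is illustrative; whether the full model scripts cleanly depends on your architecture):

```python
import torch

# After training with BlurPool (or another surgery algorithm) enabled,
# the modified model on the trainer state can be scripted and saved
scripted_model = torch.jit.script(trainer.state.model)
torch.jit.save(scripted_model, "model_torchscript.pt")
```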

1. **🏛️ ColOut on Segmentation**

[ColOut](https://docs.mosaicml.com/en/v0.6.1/method_cards/colout.html) now supports segmentation-style models.

What's Fixed?

1. **🚑️ Loggers capture the Traceback**

We fixed a bug so the [Loggers](https://docs.mosaicml.com/en/v0.6.1/trainer/logging.html), such as the [Weights & Biases Logger](https://docs.mosaicml.com/en/v0.6.1/api_reference/composer.loggers.wandb_logger.html) and the [File Logger](https://docs.mosaicml.com/en/v0.6.1/api_reference/composer.loggers.file_logger.html), will capture the traceback of any exception that crashes the training process.

1. **🏋️ Weights & Biases Logger Config**

We fixed a bug where the [Weights & Biases Logger](https://docs.mosaicml.com/en/v0.6.1/api_reference/composer.loggers.wandb_logger.html) was not properly recording the configuration.

Full Changelog

https://github.com/mosaicml/composer/compare/v0.6.0...v0.6.1

0.6.0

New Contributors
* vahidfazelrezai made their first contribution in https://github.com/mosaicml/composer/pull/781
* murthyn made their first contribution in https://github.com/mosaicml/composer/pull/789
* dlmgary made their first contribution in https://github.com/mosaicml/composer/pull/818
* IanWorley made their first contribution in https://github.com/mosaicml/composer/pull/835

**Full Changelog**: https://github.com/mosaicml/composer/compare/v0.5.0...v0.6.0

0.5

New Contributors
* nikhilsardana made their first contribution in https://github.com/mosaicml/composer/pull/433
* knighton made their first contribution in https://github.com/mosaicml/composer/pull/284

**Full Changelog**: https://github.com/mosaicml/composer/compare/v0.4.0...v0.5.0

0.5.0

We are excited to share Composer v0.5, a library of speed-up methods for efficient neural network training. This release features:
* Revamped checkpointing API based on community feedback
* New baselines: ResNet34-SSD, GPT-3, and Vision Transformers
* Additional improvements to our [documentation](https://docs.mosaicml.com/en/latest/)
* Support for `bfloat16`
* Streaming dataset support
* Unified functional API for our algorithms

Highlights

Checkpointing API

Checkpoint saving is now handled by a Callback, so users can easily write and add their own callbacks. The callback is automatically appended if a `save_folder` is provided to the Trainer.

```python
trainer = Trainer(
    model=model,
    algorithms=algorithms,
    save_folder="checkpoints",
    save_interval="1ep",
)
```

Alternatively, `CheckpointSaver` can be directly added as a callback:

```python
trainer = Trainer(..., callbacks=[
    CheckpointSaver(
        save_folder='checkpoints',
        name_format="ep{epoch}-ba{batch}/rank_{rank}",
        save_latest_format="latest/rank_{rank}",
        save_interval="1ep",
        weights_only=False,
    )
])
```


Subclass `CheckpointSaver` to add your own logic for saving the best model or saving at specific intervals. Thanks to mansheej, siriuslee, and other users for their feedback.

bfloat16

We've added experimental support for `bfloat16`, which can be provided via the `precision` argument to the Trainer:

```python
trainer = Trainer(
    ...,
    precision="bfloat16",
)
```


Streaming datasets

We've added support for fast streaming datasets. For NLP-based datasets such as C4, we use the HuggingFace datasets backend and add dataset-specific shuffling, tokenization, and grouping on-the-fly. To support data parallel training, we added specific sharding logic for efficiency. See `C4Datasets` for more details.

Vision streaming datasets are supported via a patched version of the `webdatasets` package, with added support for sharding data across workers for fast augmentations. See `composer.datasets.webdataset` for more details.

Baseline GPT-3, ResNet34-SSD, and Vision Transformer benchmarks

Configurations for GPT-3-like models ranging from 125m to 760m parameters have been released; they use DeepSpeed ZeRO Stage 0 for memory-efficient training.
* [GPT3-125m](https://github.com/mosaicml/composer/blob/v0.5.0/composer/yamls/models/gpt3_125m.yaml)
* [GPT3-350m](https://github.com/mosaicml/composer/blob/v0.5.0/composer/yamls/models/gpt3_350m.yaml)
* [GPT3-760m](https://github.com/mosaicml/composer/blob/v0.5.0/composer/yamls/models/gpt3_760m.yaml)

We've also added the Single Shot Detection (SSD) model ([Wei et al, 2016](https://arxiv.org/abs/1512.02325)) with a ResNet34 backbone, based on the MLPerf reference implementation.

Our first Vision Transformer benchmark is the ViT-S/16 model from [Touvron et al, 2021](https://arxiv.org/pdf/2012.12877.pdf), and is based on the `vit-pytorch` package.

See below for the full details:

What's Changed
* Export Transforms in `composer.algorithms` by ajaysaini725 in https://github.com/mosaicml/composer/pull/603
* Make batchnorm default for UNet by dskhudia in https://github.com/mosaicml/composer/pull/535
* Fix no_op_model algorithm by dskhudia in https://github.com/mosaicml/composer/pull/614
* Pin pre-1.0 packages by bandish-shah in https://github.com/mosaicml/composer/pull/595
* Updated dark mode composer logo, and graph by nqn in https://github.com/mosaicml/composer/pull/617
* Jenkins + Docker Improvements by ravi-mosaicml in https://github.com/mosaicml/composer/pull/621
* update README links by hanlint in https://github.com/mosaicml/composer/pull/628
* Remove all old timing calls by ravi-mosaicml in https://github.com/mosaicml/composer/pull/594
* Remove state shorthand by mvpatel2000 in https://github.com/mosaicml/composer/pull/629
* add bfloat16 support by nikhilsardana in https://github.com/mosaicml/composer/pull/433
* v0.4.0 Hotfix: Docker documentation updates by bandish-shah in https://github.com/mosaicml/composer/pull/631
* Fix wrong icons in the method cards by hanlint in https://github.com/mosaicml/composer/pull/636
* fix autocast for pytorch < 1.10 by nikhilsardana in https://github.com/mosaicml/composer/pull/639
* Add tutorial notebooks to the README by moinnadeem in https://github.com/mosaicml/composer/pull/630
* Converted Stateless Schedulers to Classes by ravi-mosaicml in https://github.com/mosaicml/composer/pull/632
* Jenkinsfile Fixes Part 2 by ravi-mosaicml in https://github.com/mosaicml/composer/pull/627
* Add C4 Streaming dataset by abhi-mosaic in https://github.com/mosaicml/composer/pull/489
* CONTRIBUTING.md additions by kobindra in https://github.com/mosaicml/composer/pull/648
* Hide showing `object` as a base class; fix skipping documentation of `forward`; fixed docutils dependency. by ravi-mosaicml in https://github.com/mosaicml/composer/pull/643
* Matthew/functional docstrings update by growlix in https://github.com/mosaicml/composer/pull/622
* docstrings improvements for core modules by dskhudia in https://github.com/mosaicml/composer/pull/598
* ssd-resnet34 on COCO map 0.23 by florescl in https://github.com/mosaicml/composer/pull/646
* Fix broken "best practices" link by growlix in https://github.com/mosaicml/composer/pull/649
* Update progressive resizing to work for semantic segmentation by coryMosaicML in https://github.com/mosaicml/composer/pull/604
* Let C4 Dataset overwrite `num_workers` if set incorrectly by abhi-mosaic in https://github.com/mosaicml/composer/pull/655
* Lazy imports for `pycocotools` by abhi-mosaic in https://github.com/mosaicml/composer/pull/656
* W&B excludes final eval metrics when plotted as a fxn of epoch or trainer/global_step by growlix in https://github.com/mosaicml/composer/pull/633
* Update GPT3-yamls for default 8xA100-40GB by abhi-mosaic in https://github.com/mosaicml/composer/pull/663
* Set WandB default to log rank zero only by abhi-mosaic in https://github.com/mosaicml/composer/pull/461
* Update schedulers guide by hanlint in https://github.com/mosaicml/composer/pull/661
* [XS] Fix a TQDM deserialization bug by jbloxham in https://github.com/mosaicml/composer/pull/665
* Add defaults to the docstrings for algorithms by hanlint in https://github.com/mosaicml/composer/pull/662
* Fix ZeRO config by jbloxham in https://github.com/mosaicml/composer/pull/667
* [XS] fix formatting for colout by hanlint in https://github.com/mosaicml/composer/pull/666
* Composer.core docstring touch-up by ravi-mosaicml in https://github.com/mosaicml/composer/pull/657
* Add Uniform bounding box sampling option for CutOut and CutMix by coryMosaicML in https://github.com/mosaicml/composer/pull/634
* Update README.md by ravi-mosaicml in https://github.com/mosaicml/composer/pull/678
* Fix bug in trainer test by hanlint in https://github.com/mosaicml/composer/pull/651
* InMemoryLogger has get_timeseries() method by growlix in https://github.com/mosaicml/composer/pull/644
* Batchwise resolution for SWA by growlix in https://github.com/mosaicml/composer/pull/654
* Fixed the conda build script so it runs on jenkins by ravi-mosaicml in https://github.com/mosaicml/composer/pull/676
* Yahp version update to 0.1.0 by Averylamp in https://github.com/mosaicml/composer/pull/674
* Streaming vision datasets by knighton in https://github.com/mosaicml/composer/pull/284
* Fix DeepSpeed checkpointing by jbloxham in https://github.com/mosaicml/composer/pull/686
* Vit by A-Jacobson in https://github.com/mosaicml/composer/pull/243
* [S] cleanup tldr; standardize `__all__` by hanlint in https://github.com/mosaicml/composer/pull/688
* Unify algorithms part 2: mixup, cutmix, label smoothing by dblalock in https://github.com/mosaicml/composer/pull/658
* `composer.optim` docstrings by jbloxham in https://github.com/mosaicml/composer/pull/653
* Fix DatasetHparams, WebDatasetHparams docstring by growlix in https://github.com/mosaicml/composer/pull/697
* Models docstrings by A-Jacobson in https://github.com/mosaicml/composer/pull/469
* docstrings improvements for composer.datasets by dskhudia in https://github.com/mosaicml/composer/pull/694
* Updated contributing.md and the style guide by ravi-mosaicml in https://github.com/mosaicml/composer/pull/670
* Ability to retry ADE20k crop transform by Landanjs in https://github.com/mosaicml/composer/pull/702
* Add mmsegmentation DeepLabv3(+) by Landanjs in https://github.com/mosaicml/composer/pull/684
* Unify functional API part 3 by dblalock in https://github.com/mosaicml/composer/pull/715
* Update example notebooks by coryMosaicML in https://github.com/mosaicml/composer/pull/707
* [Checkpointing - PR1] Store the `rank_zero_seed` on state by ravi-mosaicml in https://github.com/mosaicml/composer/pull/680
* [Checkpointing - PR2] Added in new Checkpointing Events by ravi-mosaicml in https://github.com/mosaicml/composer/pull/690
* [Checkpointing - PR3] Clean up RNG and State serialization by ravi-mosaicml in https://github.com/mosaicml/composer/pull/692
* [Checkpointing - PR4] Refactored the `CheckpointLoader` into a `load_checkpoint` function by ravi-mosaicml in https://github.com/mosaicml/composer/pull/693
* Update {blurpool,factorize,ghostbn} method cards by dblalock in https://github.com/mosaicml/composer/pull/711
* [Checkpointing - PR 5] Move the `CheckpointSaver` to a callback. by ravi-mosaicml in https://github.com/mosaicml/composer/pull/687
* Update datasets docstrings by growlix in https://github.com/mosaicml/composer/pull/709
* add notebooks and functional api by hanlint in https://github.com/mosaicml/composer/pull/714
* Migrating from PTL notebook by florescl in https://github.com/mosaicml/composer/pull/436
* Docs 0.4.1: Profiler section and tutorials by bandish-shah in https://github.com/mosaicml/composer/pull/696
* Improve datasets docstrings by knighton in https://github.com/mosaicml/composer/pull/695
* Update `C4Dataset` to repeat, handle `max_samples` safely by abhi-mosaic in https://github.com/mosaicml/composer/pull/722
* Fix docs build by ravi-mosaicml in https://github.com/mosaicml/composer/pull/773
