Optimum-graphcore

Latest version: v0.7.1


3.0

The main feature of this release is support for PopTorch version 3.0 (183), which comes with a new way of tracing PyTorch models: the PyTorch dispatcher is used instead of `torch.jit.trace`. Not only is tracing now faster, it is also much more powerful (more about it [here](https://docs.graphcore.ai/projects/poptorch-user-guide/en/latest/tracing.html#dispatcher-support)).


General

- LAMB no longer updates the bias parameters (178)
- The DistilBERT architecture is now supported for the following tasks (181):
  - Masked language modeling
  - Multiple choice
  - Question answering
  - Sequence classification
  - Token classification
- The masked language modeling task is now supported by DeBERTa
- `IPUTrainer` and `IPUTrainingArguments` were synchronized with their `transformers` counterparts (179)
- Some parameters were removed from the `IPUConfig`:
  - `use_popdist`
  - `decompose_grad_sum`
  - `profile_dir`

Bug fixes

- Documentation building fixes
- Wav2vec2 with `dataloader_mode=async_rebatched` fixed (168)

Notebooks

- Audio classification for HuBERT notebook (157)
- Language modeling finetuning notebook (161)
- Question answering notebook (163)
- Multiple choice notebook (166)
- A notebook showing how to train a model supported in the library (171)

Documentation

The documentation was updated and expanded. For instance:

- The `IPUTrainer` API is described
- The `IPUConfig` attributes are explained
- A new page explaining how to contribute by adding a new model architecture to the library

0.7.1

What's Changed
* Whisper fine-tuning is now supported, following a fix for a slice-assignment bug.
* Whisper inference can now take advantage of group quantization, where model parameters are stored in INT4 and decoded to FP16 on the fly as needed. The memory saving is estimated at 3.5x with minimal degradation in WER, and it can be enabled via the `use_group_quantized_linears` `parallelize` kwarg.
* KV caching and on-device generation are now also available for T5.
* Fixed interleaved training and validation for `IPUSeq2SeqTrainer`.
* Added notebooks for Whisper fine-tuning, Whisper group-quantized inference, embeddings models, and BART-L summarization.
* UX improvement: the `IPUTrainer` now checks that the provided dataset is large enough to fill a batch.
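The group-quantization idea above can be sketched in plain Python. This is an illustration of the INT4-plus-per-group-scale scheme only, not the optimum-graphcore kernels; the group size, helper names, and weight values are made up:

```python
# Illustrative sketch of group quantization (NOT the optimum-graphcore
# implementation): weights are split into fixed-size groups, each group is
# stored as 4-bit codes plus one scale/offset, and decoded back on demand.

def quantize_group(group):
    """Map a group of floats to INT4 codes in [0, 15] plus (scale, offset)."""
    lo, hi = min(group), max(group)
    scale = (hi - lo) / 15 or 1.0  # avoid a zero scale for constant groups
    codes = [round((w - lo) / scale) for w in group]
    return codes, scale, lo

def dequantize_group(codes, scale, lo):
    return [c * scale + lo for c in codes]

weights = [0.01 * i - 0.32 for i in range(128)]  # a toy FP weight vector
group_size = 64
restored = []
for i in range(0, len(weights), group_size):
    codes, scale, lo = quantize_group(weights[i:i + group_size])
    restored.extend(dequantize_group(codes, scale, lo))

max_err = max(abs(a - b) for a, b in zip(weights, restored))
# Storage: 4 bits per weight plus 32 bits of scale/offset overhead per
# group, compared with 16 bits per weight for FP16.
bits_quantized = 4 * len(weights) + 32 * (len(weights) // group_size)
ratio = (16 * len(weights)) / bits_quantized  # roughly 3.5x for this sizing
```

With 64-element groups the overhead is small, which is how the saving lands near the 3.5x figure quoted above while the per-weight rounding error stays bounded by half a quantization step.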

Commits
* Support C600 card by katalinic-gc in https://github.com/huggingface/optimum-graphcore/pull/446
* Remove deprecated pod_type argument by jimypbr in https://github.com/huggingface/optimum-graphcore/pull/447
* Fix inference replication factor pod type removal by katalinic-gc in https://github.com/huggingface/optimum-graphcore/pull/448
* T5 enable self-attention kv caching by kundaMwiza in https://github.com/huggingface/optimum-graphcore/pull/449
* Workflows: use explicit venv names and use --clear in creation by jimypbr in https://github.com/huggingface/optimum-graphcore/pull/452
* Workflow: add venv with clear for code quality and doc-builder workflows by jimypbr in https://github.com/huggingface/optimum-graphcore/pull/453
* Support overriding *ExampleTester class attribute values in test_examples.py by kundaMwiza in https://github.com/huggingface/optimum-graphcore/pull/439
* Adding missing license headers and copyrights by jimypbr in https://github.com/huggingface/optimum-graphcore/pull/454
* Fix shift tokens right usage which contains slice assignment by katalinic-gc in https://github.com/huggingface/optimum-graphcore/pull/451
* Base models and notebooks for general IPU embeddings model by arsalanu in https://github.com/huggingface/optimum-graphcore/pull/436
* Fix mt5 translation training ipu config by kundaMwiza in https://github.com/huggingface/optimum-graphcore/pull/456
* Add back source optimum graphcore install in embeddings notebook by arsalanu in https://github.com/huggingface/optimum-graphcore/pull/457
* Add parallelize kwargs as an IPU config entry by katalinic-gc in https://github.com/huggingface/optimum-graphcore/pull/427
* Change tests to point to MPNet ipu config by arsalanu in https://github.com/huggingface/optimum-graphcore/pull/458
* T5 enable generation optimisation by kundaMwiza in https://github.com/huggingface/optimum-graphcore/pull/459
* Fix ipus per replica check in whisper cond encoder by katalinic-gc in https://github.com/huggingface/optimum-graphcore/pull/461
* Check that the dataset has enough examples to fill a batch when creat… by katalinic-gc in https://github.com/huggingface/optimum-graphcore/pull/462
* Add notebook for whisper finetuning by katalinic-gc in https://github.com/huggingface/optimum-graphcore/pull/460
* Use index select in BART positional embedding for better tile placement by katalinic-gc in https://github.com/huggingface/optimum-graphcore/pull/463
* Add group quantization for whisper by jimypbr in https://github.com/huggingface/optimum-graphcore/pull/429
* Change max length adaption messages to debug by katalinic-gc in https://github.com/huggingface/optimum-graphcore/pull/465
* Fix finetuning whisper notebook text by katalinic-gc in https://github.com/huggingface/optimum-graphcore/pull/466
* Fix finetuning whisper notebook text v2 by katalinic-gc in https://github.com/huggingface/optimum-graphcore/pull/467
* Add BART-L text summarization notebook by jayniep-gc in https://github.com/huggingface/optimum-graphcore/pull/464
* Fix evaluate then train by katalinic-gc in https://github.com/huggingface/optimum-graphcore/pull/469
* Use token=False in whisper nb by katalinic-gc in https://github.com/huggingface/optimum-graphcore/pull/470
* Add Whisper inference with quantization notebook by jimypbr in https://github.com/huggingface/optimum-graphcore/pull/468


**Full Changelog**: https://github.com/huggingface/optimum-graphcore/compare/v0.7.0...v0.7.1

0.7.0

What's Changed

* Optimum has been updated to support Poplar SDK 3.3.
* A new feature in that SDK is the `poptorch.cond` operation, which enables conditional compute. This allowed us to implement some new optimisations.
* Using the new `cond` operation, the Whisper-tiny encoder and decoder now fit on a single IPU. To enable this, pass the `use_cond_encoder` option to Whisper's `parallelize` method.
* Added the option of cross-attention KV caching in Whisper, also using the `cond` op. To enable it, pass the `use_cross_cache` option to Whisper's `parallelize` method.
* We added support for the MT5 model for summarisation and translation tasks.
* The version of `transformers` has been updated to 4.29. One of the things this enables in Optimum is Whisper timestamp decoding.
* Added `optimum.graphcore.models.whisper.WhisperProcessorTorch` - a faster, drop-in replacement for `transformers.WhisperProcessor`.
* The `pod_type` argument, which was deprecated in 0.6.1, has been removed.


Commits

* Fixing links to API references by jayniep-gc in https://github.com/huggingface/optimum-graphcore/pull/391
* Do not override replicated_tensor_sharding in the IPUConfig by kundaMwiza in https://github.com/huggingface/optimum-graphcore/pull/393
* Preserve the set padding idx in SerializedEmbedding by kundaMwiza in https://github.com/huggingface/optimum-graphcore/pull/395
* Add MT5 by kundaMwiza in https://github.com/huggingface/optimum-graphcore/pull/392
* deberta/translation/summarization notebook fixes by kundaMwiza in https://github.com/huggingface/optimum-graphcore/pull/396
* MT5 notebooks: prefix exec cache with mt5 by kundaMwiza in https://github.com/huggingface/optimum-graphcore/pull/397
* Flan-T5 Notebook Formatting Tweaks by HMellor in https://github.com/huggingface/optimum-graphcore/pull/398
* Add cross KV caching by katalinic-gc in https://github.com/huggingface/optimum-graphcore/pull/329
* Beam search adjustment by katalinic-gc in https://github.com/huggingface/optimum-graphcore/pull/394
* Updating Whisper notebook so it uses new SDK and new features by lukem-gc in https://github.com/huggingface/optimum-graphcore/pull/399
* Add `padding_idx` to appropriate embedding split by HMellor in https://github.com/huggingface/optimum-graphcore/pull/403
* Bump transformers to 4.29.2 by katalinic-gc in https://github.com/huggingface/optimum-graphcore/pull/389
* Fix Whisper processor torch with transformers 4.29.2 bump by katalinic-gc in https://github.com/huggingface/optimum-graphcore/pull/405
* Fix Stable Diffusion notebooks by HMellor in https://github.com/huggingface/optimum-graphcore/pull/408
* Add IPU support for HF pipelines to Whisper by paolot-gc in https://github.com/huggingface/optimum-graphcore/pull/368
* Throw error is kwargs isn't empty by end of init by HMellor in https://github.com/huggingface/optimum-graphcore/pull/406
* Add Whisper pipeline tests by katalinic-gc in https://github.com/huggingface/optimum-graphcore/pull/409
* Enable fine-tuning of `whisper-tiny` by HMellor in https://github.com/huggingface/optimum-graphcore/pull/400
* Fix issue where exe cache dir was set too late by HMellor in https://github.com/huggingface/optimum-graphcore/pull/411
* Enable generation tests by kundaMwiza in https://github.com/huggingface/optimum-graphcore/pull/407
* Add Seq2Seq trainer test by kundaMwiza in https://github.com/huggingface/optimum-graphcore/pull/404
* Use the generation config to control generation by katalinic-gc in https://github.com/huggingface/optimum-graphcore/pull/410
* Add support for Whisper timestamp decoding with on-device generation by katalinic-gc in https://github.com/huggingface/optimum-graphcore/pull/413
* Fix IPUWhisperTimeStampLogitsProcessor for beam search by katalinic-gc in https://github.com/huggingface/optimum-graphcore/pull/414
* Remove usage of deprecated config: `pod_type` by HMellor in https://github.com/huggingface/optimum-graphcore/pull/416
* Fix `matmul_proportion` `ManagedAttribute` usage by HMellor in https://github.com/huggingface/optimum-graphcore/pull/415
* Enable Whisper encoder and decoder to run on 1 IPU by katalinic-gc in https://github.com/huggingface/optimum-graphcore/pull/418
* Enable replication with on device text generation by katalinic-gc in https://github.com/huggingface/optimum-graphcore/pull/420
* Update doc workflows by regisss in https://github.com/huggingface/optimum-graphcore/pull/417
* Update whisper pipeline example for latest features by katalinic-gc in https://github.com/huggingface/optimum-graphcore/pull/421
* Fix text encoder for SD with 4.29 bump by katalinic-gc in https://github.com/huggingface/optimum-graphcore/pull/424
* Use the faster whisper feature extractor in whisper pipelines by katalinic-gc in https://github.com/huggingface/optimum-graphcore/pull/423
* Remove engine references from SD pipelines by katalinic-gc in https://github.com/huggingface/optimum-graphcore/pull/422
* Add support for `whisper-small` fine-tuning by HMellor in https://github.com/huggingface/optimum-graphcore/pull/426
* Use index select for whisper position embedding for better tile utili… by katalinic-gc in https://github.com/huggingface/optimum-graphcore/pull/435
* Print execution time of each example test by kundaMwiza in https://github.com/huggingface/optimum-graphcore/pull/440
* SplitProjection layer: Add output channels serialization mode by kundaMwiza in https://github.com/huggingface/optimum-graphcore/pull/438
* 3.3 Examples CI Fixes by jimypbr in https://github.com/huggingface/optimum-graphcore/pull/443
* Support T5EncoderModel for t5-based embedding models by alex-coniasse in https://github.com/huggingface/optimum-graphcore/pull/437
* Integrate whisper large into the existing notebook by alex-coniasse in https://github.com/huggingface/optimum-graphcore/pull/441
* Bump SDK version to 3.3 in the github workflows by jimypbr in https://github.com/huggingface/optimum-graphcore/pull/444
* Update examples requirements for sdk3.3 by jimypbr in https://github.com/huggingface/optimum-graphcore/pull/434


**Full Changelog**: https://github.com/huggingface/optimum-graphcore/compare/v0.6.1...v0.7.0

0.6.1

Faster Text Generation

0.6.1 provides significant speed-ups of up to 9x for Whisper and BART text generation! We have moved the entire text generation loop onto the IPU and enabled KV caching for the self-attention layers.
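The caching speed-up comes from projecting keys and values only for the newest token and reusing everything already computed. A toy sketch of the bookkeeping (illustrative Python only, not the IPU implementation; the class and "projection" stand-ins are made up):

```python
# Toy sketch of self-attention KV caching (illustration only, not the
# optimum-graphcore kernels): without a cache, step t re-projects keys and
# values for all t positions; with a cache, each step projects one token.

class KVCache:
    def __init__(self):
        self.keys, self.values = [], []
        self.projections = 0  # how many K/V projections were computed

    def step(self, token):
        # Project only the newest token, then attend over the whole cache.
        k, v = token * 2, token * 3  # stand-ins for Wk @ x and Wv @ x
        self.projections += 1
        self.keys.append(k)
        self.values.append(v)
        return len(self.keys)  # positions attended over at this step

def uncached_projections(n_steps):
    # Re-projecting the whole prefix at every step costs 1 + 2 + ... + n.
    return sum(range(1, n_steps + 1))

cache = KVCache()
for t in range(10):
    cache.step(t)
```

For 10 generated tokens the cached loop performs 10 projections versus 55 without a cache, and the gap grows quadratically with sequence length.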

* Use buffers to cache the encoder hidden states in decoder wrapper by jimypbr in https://github.com/huggingface/optimum-graphcore/pull/285
* Move whisper decoder projection to IPU 0 since there is weight tying by katalinic-gc in https://github.com/huggingface/optimum-graphcore/pull/309
* move the IndexedInputLinear out of the decoder wrapper by katalinic-gc in https://github.com/huggingface/optimum-graphcore/pull/319
* Add generic KV caching support, use it with Whisper by katalinic-gc in https://github.com/huggingface/optimum-graphcore/pull/307
* On device text generation POC for greedy search by katalinic-gc in https://github.com/huggingface/optimum-graphcore/pull/357
* Add on device beam search by katalinic-gc in https://github.com/huggingface/optimum-graphcore/pull/370
* Add attention serialization to the attention mixin and enable it with Whisper by katalinic-gc in https://github.com/huggingface/optimum-graphcore/pull/372
* BART KV-caching + on-device by jimypbr in https://github.com/huggingface/optimum-graphcore/pull/363
* Fix cached_beam_idx check for non on device generation by katalinic-gc in https://github.com/huggingface/optimum-graphcore/pull/378
* Attn mixin improvements by katalinic-gc in https://github.com/huggingface/optimum-graphcore/pull/381
* Add a faster torch based version of the whisper feature extractor by katalinic-gc in https://github.com/huggingface/optimum-graphcore/pull/376
* Fix BART Positional embeddings for generation without caching by jimypbr in https://github.com/huggingface/optimum-graphcore/pull/386

New Models

Fine-tuning of text generation models

Text generation with `IPUSeq2SeqTrainer` is now enabled.

* Fix IPUSeq2SeqTrainer for models that have persistent buffers by kundaMwiza in https://github.com/huggingface/optimum-graphcore/pull/337
* Enable generation in notebooks that use IPUSeq2SeqTrainer by kundaMwiza in https://github.com/huggingface/optimum-graphcore/pull/341
* Fix: reparallelize for training after generation by kundaMwiza in https://github.com/huggingface/optimum-graphcore/pull/387

Wav2vec2 Large
* Adding Wav2vec2 Large pretraining and fine-tuning by atsyplikhin in https://github.com/huggingface/optimum-graphcore/pull/323

Flan-T5

Added support for Flan-T5 inference. This comes with numerical fixes to T5 for running in float16.

* Enable Flan-T5 inference in `float16` by HMellor in https://github.com/huggingface/optimum-graphcore/pull/296
* Add Flan-T5 notebook by HMellor in https://github.com/huggingface/optimum-graphcore/pull/318
* T5 revert fp16 clamping removal by kundaMwiza in https://github.com/huggingface/optimum-graphcore/pull/332
* Skip equal check for denormals in known T5 layer by HMellor in https://github.com/huggingface/optimum-graphcore/pull/383

MT5

Added the MT5 model, `MT5ForConditionalGeneration`. To support this, two new options were added to `IPUConfig`:
* `serialized_projection_splits_per_ipu` (`List[int]`, *optional*, defaults to `None`):
  Specifies the number of splits of the embedding layer that will be put on each IPU for pipelined execution.
  The format has to be the same as that for `layers_per_ipu`, however wildcards are not supported.
  For instance, `[3, 1, 0, 0]` specifies how to place an embedding layer serialized into
  4 sub-embedding layers across a 4-IPU pipeline: IPU-1 has 3 splits and IPU-2 has 1 split.
* `projection_serialization_factor` (`int`, *optional*, defaults to 1 if `serialized_projection_splits_per_ipu` is `None`):
  The factor to use to either serialize the matmuls performed in the linear projection layer, or to
  serialize the projection layer into a set of individual linear layers that can optionally be placed on different IPUs.
  Nothing happens if `projection_serialization_factor = 1`.
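The placement rule for `serialized_projection_splits_per_ipu` can be illustrated with a small helper. This is hypothetical code, not from the library; the function name and checks are made up:

```python
# Illustrative interpretation of `serialized_projection_splits_per_ipu`:
# one entry per IPU, entries sum to the number of sub-layers, and
# wildcards (-1) are not allowed for this option.

def place_projection_splits(splits_per_ipu):
    if any(s < 0 for s in splits_per_ipu):
        raise ValueError("wildcards (-1) are not supported here")
    factor = sum(splits_per_ipu)  # total number of sub-projection layers
    placement = []  # split index -> IPU id (0-based)
    for ipu, n_splits in enumerate(splits_per_ipu):
        placement.extend([ipu] * n_splits)
    return factor, placement

# `[3, 1, 0, 0]`: four sub-layers, three on the first IPU of the
# pipeline and one on the second; the last two IPUs hold none.
factor, placement = place_projection_splits([3, 1, 0, 0])
```

Reading the list this way also makes clear why the serialization factor can be inferred from the splits when both options are not given: it is simply the sum of the entries.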

PRs:
* Support sharding serialized layers across ipus by kundaMwiza in https://github.com/huggingface/optimum-graphcore/pull/355
* Add MT5 model and fine-tuning notebook by kundaMwiza in https://github.com/huggingface/optimum-graphcore/pull/392

HubertForCTC
* Add support for HubertForCTC by jimypbr in https://github.com/huggingface/optimum-graphcore/pull/347
* Change hyper-parameters to fix Hubert for CTC CI by jimypbr in https://github.com/huggingface/optimum-graphcore/pull/390

User Experience

The `pod_type` argument to `IPUTrainingArguments` has now been deprecated and replaced by `n_ipu`. Consequently, `pod_type` dictionary values of `IPUConfig` are no longer supported.
* Pod type sets replication factor by rahult-graphcore in https://github.com/huggingface/optimum-graphcore/pull/271


`IPUConfig` now supports `inference_`-prefixed versions of the following parameters:
* `layers_per_ipu`
* `ipus_per_replica`
* `matmul_proportion`
* `serialized_embedding_splits_per_ipu`
* `projection_serialization_factor`
* `serialized_projection_splits_per_ipu`
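A plausible resolution rule for these `inference_`-prefixed parameters can be sketched as follows. This is illustrative only; the real `IPUConfig` logic may differ:

```python
# Sketch of resolving `inference_`-prefixed overrides (hypothetical
# logic, not the actual IPUConfig implementation): at inference time an
# `inference_<name>` entry wins when present, otherwise the shared
# training value is used.

def resolve(config: dict, name: str, for_inference: bool):
    if for_inference and f"inference_{name}" in config:
        return config[f"inference_{name}"]
    return config[name]

config = {
    "layers_per_ipu": [6, 6, 6, 6],          # training pipeline: 4 IPUs
    "inference_layers_per_ipu": [12, 12],    # inference pipeline: 2 IPUs
    "matmul_proportion": 0.2,                # shared, no inference override
}

train_layers = resolve(config, "layers_per_ipu", for_inference=False)
infer_layers = resolve(config, "layers_per_ipu", for_inference=True)
infer_matmul = resolve(config, "matmul_proportion", for_inference=True)
```

The appeal of this scheme is that a single `IPUConfig` can describe both executables: inference commonly needs fewer IPUs per replica than training, since there are no optimizer states or gradients to hold.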

PRs:
* Enable training and inference specific configurations using a single `IPUConfig` by HMellor in https://github.com/huggingface/optimum-graphcore/pull/308
* Matmul proportion support float or len(List[float]) == ipus_per_replica by kundaMwiza in https://github.com/huggingface/optimum-graphcore/pull/375
* Refactor: prefix IPUConfig `ManagedAttribute`s instead of overloading user provided attributes by kundaMwiza in https://github.com/huggingface/optimum-graphcore/pull/366
* Add attribute validation by kundaMwiza in https://github.com/huggingface/optimum-graphcore/pull/371
* Refactor SerializedEmbedding to use to/from_model by kundaMwiza in https://github.com/huggingface/optimum-graphcore/pull/382

Notebooks

* Add narrative to the whisper notebook by payoto in https://github.com/huggingface/optimum-graphcore/pull/312
* Add Flan-T5 notebook by HMellor in https://github.com/huggingface/optimum-graphcore/pull/318
* Deberta notebook to accompany blog post by lukem-gc in https://github.com/huggingface/optimum-graphcore/pull/369
* Add MT5 model and fine-tuning notebook by kundaMwiza in https://github.com/huggingface/optimum-graphcore/pull/392


New Contributors
* atsyplikhin made their first contribution in https://github.com/huggingface/optimum-graphcore/pull/323
* lukem-gc made their first contribution in https://github.com/huggingface/optimum-graphcore/pull/369

**Full Changelog**: https://github.com/huggingface/optimum-graphcore/compare/v0.6.0...v0.6.1

0.6.0

Text Generation
This release comes with full support for text generation for GPT2, BART, T5, and Whisper!
* Add text generation support by jimypbr in https://github.com/huggingface/optimum-graphcore/pull/253
* Run encoder on IPU for encoder-decoder text-gen models by jimypbr in https://github.com/huggingface/optimum-graphcore/pull/283
* Efficient decoder text generation wrapper by jimypbr in https://github.com/huggingface/optimum-graphcore/pull/273
* Text Gen slice decoder projection optimisation by jimypbr in https://github.com/huggingface/optimum-graphcore/pull/295
* Add text generation prediction support for IPUSeq2SeqTrainer by kundaMwiza in https://github.com/huggingface/optimum-graphcore/pull/284

IPU pipelined models can call `.generate()`. Text generation can also be done with `pipelines`.

Stable Diffusion

* Stable Diffusion inference pipelines from Diffusers are now supported in `optimum/graphcore/diffusers` by katalinic-gc in https://github.com/huggingface/optimum-graphcore/pull/300
* Much improved performance on IPU by running all SD modules on IPU by katalinic-gc in https://github.com/huggingface/optimum-graphcore/pull/274
* Add file for SD ipu configs by katalinic-gc in https://github.com/huggingface/optimum-graphcore/pull/301


UX Improvements
We've improved the usability of `IPUConfig`.

* You no longer need to specify both `ipus_per_replica` and `layers_per_ipu`. You can specify just one and the other will be inferred from it: HMellor in https://github.com/huggingface/optimum-graphcore/pull/282
* `layers_per_ipu` can support a combination of integers and wildcards (`-1`), e.g. `[1, 1, -1, -1]` will put 1 layer each on IPU0 and IPU1, and split the remaining layers evenly between IPU2 and IPU3. If there is an odd number of layers, the extra layer is placed on the last wildcard IPU. rahult-graphcore in https://github.com/huggingface/optimum-graphcore/pull/275
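The wildcard behaviour described above can be sketched in a few lines. This is an illustration of the stated rule, not the library's exact algorithm:

```python
# Sketch of wildcard expansion for a layers_per_ipu-style spec
# (illustrative only): -1 entries share the remaining layers evenly,
# and any leftover layer goes to the last wildcard IPU.

def expand_layers_per_ipu(spec, num_layers):
    fixed = sum(n for n in spec if n != -1)
    wildcards = [i for i, n in enumerate(spec) if n == -1]
    out = list(spec)
    if wildcards:
        remaining = num_layers - fixed
        share, extra = divmod(remaining, len(wildcards))
        for i in wildcards:
            out[i] = share
        out[wildcards[-1]] += extra  # odd layer lands on the last wildcard
    return out

# [1, 1, -1, -1] with a 12-layer model: 1 layer each on IPU0/IPU1,
# then 5 layers each on IPU2/IPU3.
```

With 13 layers the same spec would yield `[1, 1, 5, 6]`, the extra layer going to the last wildcard IPU as described.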

New Models
* Add GroupBERT model (https://arxiv.org/abs/2106.05822) by ivansche in https://github.com/huggingface/optimum-graphcore/pull/139
* Add Whisper model for inference by paolot-gc in https://github.com/huggingface/optimum-graphcore/pull/262

Notebooks
* Packed bert notebook by alex-coniasse in https://github.com/huggingface/optimum-graphcore/pull/222
* Name Entity Extraction notebook by anjleeg-gcai in https://github.com/huggingface/optimum-graphcore/pull/237
* Whisper notebook for inference by paolot-gc in https://github.com/huggingface/optimum-graphcore/pull/262

Bugfixes
* SerializedEmbedding: override default freeze=True on deserialization by kundaMwiza in https://github.com/huggingface/optimum-graphcore/pull/304
* Fix training mode outputs for roberta and distilbert mlm models by jimypbr in https://github.com/huggingface/optimum-graphcore/pull/254
* Remove the work-around for rebuilding BaseModelOutput in BART and T5 by jimypbr in https://github.com/huggingface/optimum-graphcore/pull/297
* Populate `_hooks` for T5 and BART by HMellor in https://github.com/huggingface/optimum-graphcore/pull/291
* Instantiate optimizer in compile only mode by kundaMwiza in https://github.com/huggingface/optimum-graphcore/pull/292
* Add back removed variables from the wav2vec2 pretraining forward sig by katalinic-gc in https://github.com/huggingface/optimum-graphcore/pull/259
* Pipeline bug fixes by jimypbr in https://github.com/huggingface/optimum-graphcore/pull/260
* Fix PR doc build when the PR comes from a clone with a different name by regisss in https://github.com/huggingface/optimum-graphcore/pull/281

Misc
* Bump transformers to 4.25.1 by katalinic-gc in https://github.com/huggingface/optimum-graphcore/pull/247
* Bump diffusers to 0.12.1 by katalinic-gc in https://github.com/huggingface/optimum-graphcore/pull/302
* Remove deprecated IPU config arguments by katalinic-gc in https://github.com/huggingface/optimum-graphcore/pull/250
* Remove the custom layernorm for convnext by jimypbr in https://github.com/huggingface/optimum-graphcore/pull/255
* Updates for SDK 3.2 by jimypbr in https://github.com/huggingface/optimum-graphcore/pull/256
* Add PopART option that enables the use of models with weights exceeding ~2GB by HMellor in https://github.com/huggingface/optimum-graphcore/pull/277
* Pin the optimum version requirement by jimypbr in https://github.com/huggingface/optimum-graphcore/pull/293
* More concise dev install instructions by HMellor in https://github.com/huggingface/optimum-graphcore/pull/294


New Contributors
* alex-coniasse made their first contribution in https://github.com/huggingface/optimum-graphcore/pull/222
* ivansche made their first contribution in https://github.com/huggingface/optimum-graphcore/pull/139
* evawGraphcore made their first contribution in https://github.com/huggingface/optimum-graphcore/pull/264
* arsalanu made their first contribution in https://github.com/huggingface/optimum-graphcore/pull/266
* HMellor made their first contribution in https://github.com/huggingface/optimum-graphcore/pull/279
* kundaMwiza made their first contribution in https://github.com/huggingface/optimum-graphcore/pull/292
* paolot-gc made their first contribution in https://github.com/huggingface/optimum-graphcore/pull/262

**Full Changelog**: https://github.com/huggingface/optimum-graphcore/compare/v0.5.0...v0.6.0

0.5.0

Changes
- This release makes `optimum-graphcore` compatible with the latest Poplar SDK 3.1 (https://github.com/huggingface/optimum-graphcore/pull/239).
- Please see [Poplar SDK3.1 release notes](https://docs.graphcore.ai/projects/release-notes/en/3.1.0/release_overview.html)
- PopTorch is now compatible with PyTorch 1.13 (upgraded from 1.10); all requirements in `optimum-graphcore` have been updated accordingly.
- Small behaviour change: the `IPUTrainingArgs` `report_to` default is now `"none"` instead of `None` (which meant `"all"`). Reporting is therefore now opt-in instead of opt-out. (239)
