LightSeq

Latest version: v3.0.1


3.0.1

What's Changed
* compatible gcq params by HandH1998 in https://github.com/bytedance/lightseq/pull/409
* Fix gpu name by godweiyang in https://github.com/bytedance/lightseq/pull/415


**Full Changelog**: https://github.com/bytedance/lightseq/compare/v3.0.0...v3.0.1

3.0.0

It's been a long time since our last release (v2.2.0). For the past year, we have focused on **int8 quantization**.

In this release, LightSeq supports **int8 quantized training and inference**. Compared with PyTorch QAT, LightSeq int8 training achieves a speedup of 3x without any performance loss. Compared with the previous LightSeq fp16 inference, the int8 engine achieves a speedup of up to 1.7x.

The LightSeq int8 engine supports multiple models, such as Transformer, BERT, GPT, etc. For int8 training, users only need to apply quantization mode to the model using `model.apply(enable_quant)`. For int8 inference, users only need to use `QuantTransformer` instead of the fp16 `Transformer`.
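A minimal sketch of both paths follows. Only `model.apply(enable_quant)` and the `QuantTransformer` name come from this release note; the import paths, checkpoint file name, and placeholder variables (`model`, `input_ids`) are assumptions for illustration.

```python
import lightseq.inference as lsi
# Import path for enable_quant is an assumption, not a confirmed API.
from lightseq.training.ops.pytorch.quantization import enable_quant

# int8 training: switch a previously built LightSeq training model
# (placeholder `model`) into quantization mode.
model.apply(enable_quant)

# int8 inference: load a quantized checkpoint with QuantTransformer in place
# of the fp16 lsi.Transformer. File name and batch size are hypothetical.
quant_model = lsi.QuantTransformer("quant_transformer.hdf5", max_batch_size=8)
output = quant_model.infer(input_ids)  # placeholder batch of token ids
```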

Other changes include support for models like MoE, bug fixes, performance improvements, etc.

2.2.0

Inference
Support more multilingual models (#209)

Fixes
Fix inference error on HDF5 (#208)
Fix training error when batch_size=1 (#192)
Other minor fixes (#205, #202, #193)

2.1.3

This version contains several features and bug fixes.

Training
Relax restriction on layer norm hidden size (#137, #161)
Support inference during training for Transformer (#141, #146, #147)

Inference
Add inference support and examples for BERT (#145)

Fixes
Fix save/load for training with PyTorch (#139)
Fix positional embedding index bug (#144)

2.1.0

This version contains several features and bug fixes.

Training
Support BertEncoder (#116)
Support torch amp and apex amp (#100); see the amp sketch below
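For reference, a standard torch amp training step might look like the sketch below. The `Linear` module is a hypothetical stand-in (in practice it would be a LightSeq layer); the amp calls themselves are standard PyTorch.

```python
import torch
from torch.cuda.amp import autocast, GradScaler

# Hypothetical stand-in module; in practice this would be a LightSeq layer.
model = torch.nn.Linear(512, 512).cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scaler = GradScaler()

x = torch.randn(8, 512, device="cuda")
optimizer.zero_grad()
with autocast():                   # run the forward pass in mixed precision
    loss = model(x).float().mean()
scaler.scale(loss).backward()      # scale the loss to avoid fp16 underflow
scaler.step(optimizer)             # unscale gradients, then take the step
scaler.update()
```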

Inference
Support big models like gpt2-large and bart-large (#82)

Fixes
Fix Adam bug when param size < 1024 (#98)
Fix training compilation failure on CUDA < 11 (#80)

2.0.2

[inference] Fix warp reduce bug (#74)
