compressed-tensors

Latest version: v0.8.0


0.8.0

What's Changed
* [Observer Restructure]: Separate out scale/zp and observer init; separate out calibration from forward pass by dsikka in https://github.com/neuralmagic/compressed-tensors/pull/188
* Fix device allocation for MSE observer by anmarques in https://github.com/neuralmagic/compressed-tensors/pull/190
* drop 3.8 and add 3.12 to testing by dhuangnm in https://github.com/neuralmagic/compressed-tensors/pull/196
* Fix test which required accelerate, apply style by kylesayrs in https://github.com/neuralmagic/compressed-tensors/pull/194
* [Bugfix] Move observer and g_idx until after module is onloaded by kylesayrs in https://github.com/neuralmagic/compressed-tensors/pull/195
* Add sparsity structure enum by rahul-tuli in https://github.com/neuralmagic/compressed-tensors/pull/197
* Observer Restructure: Remove Observers, `calibration`, and applying `frozen` steps from lifecycle by dsikka in https://github.com/neuralmagic/compressed-tensors/pull/189
* Clean up observer defaulting logic, better error message by kylesayrs in https://github.com/neuralmagic/compressed-tensors/pull/200
* apply style and quality by kylesayrs in https://github.com/neuralmagic/compressed-tensors/pull/201
* [BugFix] Fix Marlin24 Bug by dsikka in https://github.com/neuralmagic/compressed-tensors/pull/203
* Bump version to v0.8.0 by dsikka in https://github.com/neuralmagic/compressed-tensors/pull/204

New Contributors
* anmarques made their first contribution in https://github.com/neuralmagic/compressed-tensors/pull/190

**Full Changelog**: https://github.com/neuralmagic/compressed-tensors/compare/0.7.1...0.8.0
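
The headline change in 0.8.0 is the observer restructure: scale/zero-point initialization and calibration are pulled out of the module's forward pass into explicit, separate steps. The sketch below is not taken from compressed-tensors; it is a minimal, hypothetical illustration of that pattern (the `CalibratedLinear` name and its methods are invented for this example).

```python
import torch
from torch import nn


class CalibratedLinear(nn.Module):
    """Hypothetical module illustrating calibration decoupled from forward()."""

    def __init__(self, linear: nn.Linear, num_bits: int = 8):
        super().__init__()
        self.linear = linear
        self.qmin, self.qmax = 0, 2**num_bits - 1
        # Scale and zero-point are created up front, not lazily inside forward().
        self.register_buffer("scale", torch.ones(1))
        self.register_buffer("zero_point", torch.zeros(1, dtype=torch.int32))

    @torch.no_grad()
    def calibrate(self, sample: torch.Tensor) -> None:
        # Calibration is an explicit step driven by the caller,
        # not a side effect of every forward pass.
        x_min = torch.clamp(sample.min(), max=0.0)
        x_max = torch.clamp(sample.max(), min=0.0)
        scale = (x_max - x_min).clamp(min=1e-8) / (self.qmax - self.qmin)
        self.scale.fill_(scale.item())
        self.zero_point.fill_(int((self.qmin - x_min / scale).round().item()))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # forward() only applies the already-calibrated quantization parameters.
        q = torch.clamp(torch.round(x / self.scale) + self.zero_point, self.qmin, self.qmax)
        return self.linear((q - self.zero_point) * self.scale)
```

A caller would run `calibrate` over a few sample batches as its own pass, which mirrors the separation of calibration from the forward pass described in #188 and #189.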

0.7.1

What's Changed
* [Observer Restructure]: Remove MemoryLess Observer; use helper function for dynamic quantization by dsikka in https://github.com/neuralmagic/compressed-tensors/pull/187
* bump up to 0.7.1 for patch release by dhuangnm in https://github.com/neuralmagic/compressed-tensors/pull/192


**Full Changelog**: https://github.com/neuralmagic/compressed-tensors/compare/0.7.0...0.7.1
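
PR #187 drops the MemoryLess observer in favor of a helper function for dynamic quantization, i.e. quantization parameters derived from each tensor at call time rather than tracked by a stateful observer. The helper below is an illustrative assumption of what such a function does, not the library's code:

```python
import torch


def dynamic_qparams(x: torch.Tensor, num_bits: int = 8):
    """Hypothetical helper: derive scale/zero-point from the tensor itself,
    so no observer state has to be kept between calls."""
    qmin, qmax = 0, 2**num_bits - 1
    x_min = torch.clamp(x.min(), max=0.0)  # keep zero inside the representable range
    x_max = torch.clamp(x.max(), min=0.0)
    scale = (x_max - x_min).clamp(min=1e-8) / (qmax - qmin)
    zero_point = torch.clamp((qmin - x_min / scale).round(), qmin, qmax).to(torch.int32)
    return scale, zero_point


# Parameters are recomputed for every activation batch instead of being calibrated once.
x = torch.randn(4, 16)
scale, zp = dynamic_qparams(x)
x_q = torch.clamp(torch.round(x / scale) + zp, 0, 255)
```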

0.7.0

What's Changed
* Make INT8 activation PRESET_SCHEMES explicit by mgoin in https://github.com/neuralmagic/compressed-tensors/pull/158
* Write the current version into model configs by mgoin in https://github.com/neuralmagic/compressed-tensors/pull/160
* [KV-Cache] Make k_scale, v_scale as attributes of self_attn using HFCache by horheynm in https://github.com/neuralmagic/compressed-tensors/pull/148
* [Bugfix] Fix quant config parsing by kylesayrs in https://github.com/neuralmagic/compressed-tensors/pull/162
* Ignore Dense sparsity config by rahul-tuli in https://github.com/neuralmagic/compressed-tensors/pull/169
* fix bug by horheynm in https://github.com/neuralmagic/compressed-tensors/pull/170
* Replace `compression_config` to be `quantization_config` for `HFQuantizer` support by dsikka in https://github.com/neuralmagic/compressed-tensors/pull/164
* ignore list by horheynm in https://github.com/neuralmagic/compressed-tensors/pull/171
* switch default to release and disable pushing to pypi for now by dhuangnm in https://github.com/neuralmagic/compressed-tensors/pull/175
* Fix missing quant_method value by kylesayrs in https://github.com/neuralmagic/compressed-tensors/pull/174
* Fix ModelCompressor parsing in HF Quantizer case by kylesayrs in https://github.com/neuralmagic/compressed-tensors/pull/176
* Calibration Code Clarity by kylesayrs in https://github.com/neuralmagic/compressed-tensors/pull/168
* Add: base sparsity/quantization compressors by rahul-tuli in https://github.com/neuralmagic/compressed-tensors/pull/165
* Update compressors folder structure by rahul-tuli in https://github.com/neuralmagic/compressed-tensors/pull/166
* Update number of groups by dsikka in https://github.com/neuralmagic/compressed-tensors/pull/178
* Bring nightly build/test back by dhuangnm in https://github.com/neuralmagic/compressed-tensors/pull/179
* Remove unused function by kylesayrs in https://github.com/neuralmagic/compressed-tensors/pull/156
* Revert "Ignore Dense sparsity config (#169)" by rahul-tuli in https://github.com/neuralmagic/compressed-tensors/pull/181
* Workaround HF Quantizer `apply_quantization_config` misuse by kylesayrs in https://github.com/neuralmagic/compressed-tensors/pull/180
* bump up version to 0.7.0 by dhuangnm in https://github.com/neuralmagic/compressed-tensors/pull/186


**Full Changelog**: https://github.com/neuralmagic/compressed-tensors/compare/0.6.0...0.7.0
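
For the HFQuantizer integration, 0.7.0 serializes compression details under `quantization_config` instead of `compression_config` (#164), ensures the `quant_method` value is written (#174), and stamps the library version into the model config (#160). A checkpoint's config.json then carries an entry roughly like the dictionary below; the concrete keys and values are illustrative assumptions, not copied from a real checkpoint:

```python
# Hedged sketch of a `quantization_config` entry as it might appear in config.json
# after these changes (values chosen for illustration only):
quantization_config = {
    "quant_method": "compressed-tensors",  # lets HFQuantizer dispatch to this library (#164, #174)
    "version": "0.7.0",                    # library version written into the config (#160)
    "format": "pack-quantized",
    "quantization_status": "compressed",
    "ignore": ["lm_head"],
    "config_groups": {
        "group_0": {
            "targets": ["Linear"],
            "weights": {
                "num_bits": 4,
                "type": "int",
                "symmetric": True,
                "strategy": "group",
                "group_size": 128,
            },
        }
    },
}
```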

0.6.0

What's Changed
* Add simple GHA workflow to run tests by dbogunowicz in https://github.com/neuralmagic/compressed-tensors/pull/2
* Define BaseModels for Quantization by Satrat in https://github.com/neuralmagic/compressed-tensors/pull/3
* Quantization refactor by horheynm in https://github.com/neuralmagic/compressed-tensors/pull/5
* Apply quantization config implementation by bfineran in https://github.com/neuralmagic/compressed-tensors/pull/4
* decorate fake quant with torch.no_grad by bfineran in https://github.com/neuralmagic/compressed-tensors/pull/8
* fix observer bugs by bfineran in https://github.com/neuralmagic/compressed-tensors/pull/9
* [lifecycle] docstrings + ux update to work with torch.apply by bfineran in https://github.com/neuralmagic/compressed-tensors/pull/11
* Fix Device Mismatch by Satrat in https://github.com/neuralmagic/compressed-tensors/pull/12
* Serialize Config from Model by Satrat in https://github.com/neuralmagic/compressed-tensors/pull/7
* [Observers] pull shared logic into a helper function by bfineran in https://github.com/neuralmagic/compressed-tensors/pull/13
* Rename the repo to `compressed-tensors` by dbogunowicz in https://github.com/neuralmagic/compressed-tensors/pull/14
* fix style post rename PR by bfineran in https://github.com/neuralmagic/compressed-tensors/pull/25
* Quantization Examples and Correctness Fixes by Satrat in https://github.com/neuralmagic/compressed-tensors/pull/26
* Fix failing GHA by dbogunowicz in https://github.com/neuralmagic/compressed-tensors/pull/29
* Pretrained Model Reload + SparseGPT Support by Satrat in https://github.com/neuralmagic/compressed-tensors/pull/31
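
Several of these early PRs (#3, #4, #7) establish the pattern of defining a quantization config and applying it to a model by walking its modules. The function below is a loose sketch of that idea under invented names; it is not the library's implementation:

```python
from torch import nn


def apply_config_sketch(model: nn.Module, targets=("Linear",), ignore=("lm_head",)):
    """Loose sketch: match modules against a config's targets and tag them with a scheme."""
    for name, module in model.named_modules():
        if any(name.endswith(skip) for skip in ignore):
            continue
        if type(module).__name__ in targets:
            # A real implementation would initialize scales/zero-points here and
            # wrap the module's forward pass; this sketch only records the scheme.
            module.quantization_scheme = {"weights": {"num_bits": 8, "symmetric": True}}
    return model


model = apply_config_sketch(nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 2)))
```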


0.4.0

New Features:
* Scheme alias support in quant config (#40)
* New compressors: packed int4 (#47), Marlin 2:4 (#77); a packing sketch follows these notes

Changes:
* None

Resolved Issues:
* Fixed the group-size quantization implementation to ensure correctness (#60)

Known Issues:
* None
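
The packed int4 compressor noted above stores two 4-bit values per byte. The functions below sketch only that packing idea; they are not the compressed-tensors implementation and ignore details such as tensor shapes, scales, and zero points:

```python
import torch


def pack_int4(values: torch.Tensor) -> torch.Tensor:
    """Illustrative packing: two 4-bit values per uint8 byte."""
    assert values.numel() % 2 == 0
    v = values.to(torch.uint8) & 0x0F          # keep the low nibble of each value
    return (v[0::2] | (v[1::2] << 4)).contiguous()


def unpack_int4(packed: torch.Tensor) -> torch.Tensor:
    """Recover the interleaved 4-bit values from each byte."""
    low = packed & 0x0F
    high = (packed >> 4) & 0x0F
    return torch.stack([low, high], dim=-1).flatten()


vals = torch.randint(0, 16, (8,), dtype=torch.uint8)
assert torch.equal(unpack_int4(pack_int4(vals)), vals)
```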
