llmcompressor

Latest version: v0.5.0


0.1.0

What's Changed
* Address Test Failures by Satrat in https://github.com/vllm-project/llm-compressor/pull/1
* Remove SparseZoo Usage by Satrat in https://github.com/vllm-project/llm-compressor/pull/2
* SparseML Cleanup by markurtz in https://github.com/vllm-project/llm-compressor/pull/6
* Remove all references to Neural Magic copyright within LLM Compressor by markurtz in https://github.com/vllm-project/llm-compressor/pull/7
* Add FP8 Support by Satrat in https://github.com/vllm-project/llm-compressor/pull/4
* Fix Weekly Test Failure by Satrat in https://github.com/vllm-project/llm-compressor/pull/8
* Add Scheme UX for QuantizationModifier by Satrat in https://github.com/vllm-project/llm-compressor/pull/9
* Add Group Quantization Test Case by Satrat in https://github.com/vllm-project/llm-compressor/pull/10
* Loguru logging standardization for LLM Compressor by markurtz in https://github.com/vllm-project/llm-compressor/pull/11
* Clarify Function Names for Logging by Satrat in https://github.com/vllm-project/llm-compressor/pull/12
* [ Examples ] E2E Examples by robertgshaw2-neuralmagic in https://github.com/vllm-project/llm-compressor/pull/5
* Update setup.py by robertgshaw2-neuralmagic in https://github.com/vllm-project/llm-compressor/pull/15
* SmoothQuant Mapping Defaults by Satrat in https://github.com/vllm-project/llm-compressor/pull/13
* Initial README by bfineran in https://github.com/vllm-project/llm-compressor/pull/3
* [Bug] Fix validation errors for smoothquant modifier + update examples by rahul-tuli in https://github.com/vllm-project/llm-compressor/pull/19
* [MOE Quantization] Warn against "undercalibrated" modules by dbogunowicz in https://github.com/vllm-project/llm-compressor/pull/20
* Port SparseML Remote Code Fix by Satrat in https://github.com/vllm-project/llm-compressor/pull/21
* Update Quantization Save Defaults by Satrat in https://github.com/vllm-project/llm-compressor/pull/22
* [Bugfix] Add fix to preserve modifier order when passed as a list by rahul-tuli in https://github.com/vllm-project/llm-compressor/pull/26
* GPTQ - move calibration of quantization params to after hessian calibration by bfineran in https://github.com/vllm-project/llm-compressor/pull/25
* Fix typos by eldarkurtic in https://github.com/vllm-project/llm-compressor/pull/31
* Remove ceiling from `datasets` dep by mgoin in https://github.com/vllm-project/llm-compressor/pull/27
* Revert naive compression format by Satrat in https://github.com/vllm-project/llm-compressor/pull/32
* Fix layerwise targets by Satrat in https://github.com/vllm-project/llm-compressor/pull/36
* Move Weight Update Out Of Loop by Satrat in https://github.com/vllm-project/llm-compressor/pull/40
* Fix End Epoch Default by Satrat in https://github.com/vllm-project/llm-compressor/pull/39
* Fix typos in example for w8a8 quant by eldarkurtic in https://github.com/vllm-project/llm-compressor/pull/38
* Model Offloading Support Pt 2 by Satrat in https://github.com/vllm-project/llm-compressor/pull/34
* set version to 1.0.0 for release by bfineran in https://github.com/vllm-project/llm-compressor/pull/44
* Update version for first release by markurtz in https://github.com/vllm-project/llm-compressor/pull/50
* BugFix: Update TRL example scripts to point to the right SFTTrainer by rahul-tuli in https://github.com/vllm-project/llm-compressor/pull/51
* Update examples/quantization_24_sparse_w4a16 README by dbarbuzzi in https://github.com/vllm-project/llm-compressor/pull/52
* Fix Failing Transformers Tests by Satrat in https://github.com/vllm-project/llm-compressor/pull/53
* Offloading Bug Fix by Satrat in https://github.com/vllm-project/llm-compressor/pull/58

New Contributors
* markurtz made their first contribution in https://github.com/vllm-project/llm-compressor/pull/6
* bfineran made their first contribution in https://github.com/vllm-project/llm-compressor/pull/3
* dbogunowicz made their first contribution in https://github.com/vllm-project/llm-compressor/pull/20
* eldarkurtic made their first contribution in https://github.com/vllm-project/llm-compressor/pull/31
* mgoin made their first contribution in https://github.com/vllm-project/llm-compressor/pull/27
* dbarbuzzi made their first contribution in https://github.com/vllm-project/llm-compressor/pull/52

**Full Changelog**: https://github.com/vllm-project/llm-compressor/commits/0.1.0
