Torch-Pruning

Latest version: v1.5.0


1.5.0

What's Changed
* Merge by VainF in https://github.com/VainF/Torch-Pruning/pull/434
* Add SliceOp; Add Phi-3 & Qwen-2 by VainF in https://github.com/VainF/Torch-Pruning/pull/435

**Full Changelog**: https://github.com/VainF/Torch-Pruning/compare/v1.4.3...v1.5.0

1.4.3

What's Changed
* Fixed some issues in GQA Pruning
* [fix] Clarify variable naming in linear_scheduler function, add typing by janumiko in https://github.com/VainF/Torch-Pruning/pull/423

New Contributors
* janumiko made their first contribution in https://github.com/VainF/Torch-Pruning/pull/423

**Full Changelog**: https://github.com/VainF/Torch-Pruning/compare/v1.4.2...v1.4.3

1.4.2

What's Changed
* fixed a bug in attention head pruning
* fixed potentially buggy typo by Alejandro-Casanova in https://github.com/VainF/Torch-Pruning/pull/405

New Contributors
* Alejandro-Casanova made their first contribution in https://github.com/VainF/Torch-Pruning/pull/405

**Full Changelog**: https://github.com/VainF/Torch-Pruning/compare/v1.4.1...v1.4.2

1.4.1

What's Changed
* Add Isomorphic Pruning, an improved algorithm for global pruning.
* Unify local/global/isomorphic pruning with ``Scope`` for importance ranking
* Allow a user-defined scope for importance ranking. For example, the key-value pair ``(model.layer1, model.layer2): 0.4`` performs global ranking only within layer1 and layer2, with a pruning ratio of 40%.
```python
pruner = tp.pruner.MetaPruner(
    ...
    global_pruning=True,
    pruning_ratio=0.5,  # default pruning ratio
    pruning_ratio_dict={(model.layer1, model.layer2): 0.4, model.layer3: 0.2},
    # global pruning will be performed on layer1 and layer2
)
```

* Bug fixes

New Contributors
* Miocio-nora made their first contribution in https://github.com/VainF/Torch-Pruning/pull/380

**Full Changelog**: https://github.com/VainF/Torch-Pruning/compare/v1.4.0...v1.4.1

1.4.0

What's Changed
* Add support for Grouped Query Attention (GQA) in Huggingface transformers.
* Include [minimal examples](https://github.com/VainF/Torch-Pruning/tree/master/examples/LLMs) for Large Language Models (LLaMA-2 & LLaMA-3).
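For context on why GQA needs dedicated support: in Grouped Query Attention, each key/value head serves a fixed group of query heads, so head pruning must remove whole groups to keep the query-to-KV mapping consistent. A minimal, library-free sketch of that grouping constraint (function names and head counts are illustrative, not Torch-Pruning's API):

```python
# GQA: n_q query heads share n_kv key/value heads. Each KV head serves a
# fixed, contiguous group of n_q // n_kv query heads, so pruning a KV head
# implies pruning its entire query-head group.

def gqa_groups(n_q_heads: int, n_kv_heads: int) -> dict:
    """Map each KV head index to the query-head indices it serves."""
    assert n_q_heads % n_kv_heads == 0, "query heads must divide evenly"
    group = n_q_heads // n_kv_heads
    return {kv: list(range(kv * group, (kv + 1) * group))
            for kv in range(n_kv_heads)}

def prune_kv_heads(n_q_heads: int, n_kv_heads: int, kv_to_remove: set):
    """Removing a KV head removes its whole query-head group."""
    groups = gqa_groups(n_q_heads, n_kv_heads)
    kept_q = [q for kv, qs in groups.items()
              if kv not in kv_to_remove for q in qs]
    kept_kv = [kv for kv in range(n_kv_heads) if kv not in kv_to_remove]
    return kept_q, kept_kv

# LLaMA-3-8B-style attention: 32 query heads, 8 KV heads (group size 4).
kept_q, kept_kv = prune_kv_heads(32, 8, kv_to_remove={0, 7})
```

Removing KV heads 0 and 7 here drops query heads 0-3 and 28-31 along with them, which is the consistency that naive per-head pruning would break.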

**Full Changelog**: https://github.com/VainF/Torch-Pruning/compare/v1.3.7...v1.4.0

1.3.7

* Add more docstrings and comments
* Minor bug fixing

**Full Changelog**: https://github.com/VainF/Torch-Pruning/compare/v1.3.6...v1.3.7

