intel-npu-acceleration-library

Latest version: v1.4.0

1.4.0

Please update the NPU driver to the latest version to fully utilize the library features:
- Windows: [link](https://www.intel.com/content/www/us/en/download/794734/intel-npu-driver-windows.html)
- Linux: [link](https://github.com/intel/linux-npu-driver)

PIP package: https://pypi.org/project/intel-npu-acceleration-library/1.4.0/
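To install this release from PyPI, pin the version shown above (a minimal sketch; the package name and version come from the PyPI link, and a supported Intel NPU driver is assumed to be installed separately):

```shell
# Install the 1.4.0 release of the library from PyPI
pip install intel-npu-acceleration-library==1.4.0
```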

What's Changed
* Add doc for implementing new operations by SarahByrneIntel in https://github.com/intel/intel-npu-acceleration-library/pull/79
* Adding power and log softmax operations by SarahByrneIntel in https://github.com/intel/intel-npu-acceleration-library/pull/80
* Adding support for operations on tensors by SarahByrneIntel in https://github.com/intel/intel-npu-acceleration-library/pull/81
* Add c++ examples by alessandropalla in https://github.com/intel/intel-npu-acceleration-library/pull/86
* NPU compilation tutorial by alessandropalla in https://github.com/intel/intel-npu-acceleration-library/pull/87
* Fix ops and r_ops in case of float and int by alessandropalla in https://github.com/intel/intel-npu-acceleration-library/pull/88
* Adding support and testing for chunk tensor operation by SarahByrneIntel in https://github.com/intel/intel-npu-acceleration-library/pull/90
* Make matmul op () torch compliant by alessandropalla in https://github.com/intel/intel-npu-acceleration-library/pull/91
* Update scikit-learn requirement from <=1.5.0 to <=1.5.1 by dependabot in https://github.com/intel/intel-npu-acceleration-library/pull/93
* Support for Phi-3 MLP layer by SarahByrneIntel in https://github.com/intel/intel-npu-acceleration-library/pull/84
* Fix OpenSSF scan by alessandropalla in https://github.com/intel/intel-npu-acceleration-library/pull/99
* Enable npu compile in compiler.py by xduzhangjiayu in https://github.com/intel/intel-npu-acceleration-library/pull/100
* Dtype mismatch fix for model training by SarahByrneIntel in https://github.com/intel/intel-npu-acceleration-library/pull/104
* Add the position_imbeddings param to LlamaAttention.forward by Nagico2 in https://github.com/intel/intel-npu-acceleration-library/pull/105
* add param in profile_mlp.py to enable graph mode or not by xduzhangjiayu in https://github.com/intel/intel-npu-acceleration-library/pull/106
* Add prelu and normalize ops by alessandropalla in https://github.com/intel/intel-npu-acceleration-library/pull/107
* qwen2_math_7b.py to support Qwen Math 7b LLM network by andyyeh75 in https://github.com/intel/intel-npu-acceleration-library/pull/119
* Update scikit-learn requirement from <=1.5.1 to <=1.5.2 by dependabot in https://github.com/intel/intel-npu-acceleration-library/pull/123
* Fix some issues on CI by alessandropalla in https://github.com/intel/intel-npu-acceleration-library/pull/130
* Model compiling demo by SarahByrneIntel in https://github.com/intel/intel-npu-acceleration-library/pull/115
* 'Audio-Spectrogram-Transformer' example added by sbasia in https://github.com/intel/intel-npu-acceleration-library/pull/134
* Building on Ubuntu 24.04 by ytxmobile98 in https://github.com/intel/intel-npu-acceleration-library/pull/129
* Add turbo mode by alessandropalla in https://github.com/intel/intel-npu-acceleration-library/pull/140
* Reinstate llama tests by alessandropalla in https://github.com/intel/intel-npu-acceleration-library/pull/141

New Contributors
* Nagico2 made their first contribution in https://github.com/intel/intel-npu-acceleration-library/pull/105
* andyyeh75 made their first contribution in https://github.com/intel/intel-npu-acceleration-library/pull/119
* sbasia made their first contribution in https://github.com/intel/intel-npu-acceleration-library/pull/134
* ytxmobile98 made their first contribution in https://github.com/intel/intel-npu-acceleration-library/pull/129

**Full Changelog**: https://github.com/intel/intel-npu-acceleration-library/compare/v1.3.0...v1.4.0

1.3.0

Please update the NPU driver to the latest version to fully utilize the library features:
- Windows: [link](https://www.intel.com/content/www/us/en/download/794734/intel-npu-driver-windows.html)
- Linux: [link](https://github.com/intel/linux-npu-driver)

PIP package: https://pypi.org/project/intel-npu-acceleration-library/1.3.0/

What's Changed
* Fix export error with trust_remote_code by alessandropalla in https://github.com/intel/intel-npu-acceleration-library/pull/43
* Create warnings if driver is old by alessandropalla in https://github.com/intel/intel-npu-acceleration-library/pull/46
* Fix int4 quantization for llama and gemma by alessandropalla in https://github.com/intel/intel-npu-acceleration-library/pull/47
* Add C++ example by alessandropalla in https://github.com/intel/intel-npu-acceleration-library/pull/54
* adding new operations by SarahByrneIntel in https://github.com/intel/intel-npu-acceleration-library/pull/50
* Fix for NaNs in LLM inference by alessandropalla in https://github.com/intel/intel-npu-acceleration-library/pull/58
* Change function generate_with_static_shape by xduzhangjiayu in https://github.com/intel/intel-npu-acceleration-library/pull/60
* Native convolution and dw convolution by alessandropalla in https://github.com/intel/intel-npu-acceleration-library/pull/61
* Sarah/feature/constant operation support by SarahByrneIntel in https://github.com/intel/intel-npu-acceleration-library/pull/62
* Add memory operation and tensor class by alessandropalla in https://github.com/intel/intel-npu-acceleration-library/pull/63
* Adding support for L2 normalisation operation by SarahByrneIntel in https://github.com/intel/intel-npu-acceleration-library/pull/65
* Better torch integration by alessandropalla in https://github.com/intel/intel-npu-acceleration-library/pull/66
* Add torch.nn.functional.conv2d by alessandropalla in https://github.com/intel/intel-npu-acceleration-library/pull/70
* fix BatchNorm layer by alessandropalla in https://github.com/intel/intel-npu-acceleration-library/pull/71
* Sarah/feature/operations by SarahByrneIntel in https://github.com/intel/intel-npu-acceleration-library/pull/68
* Add torch NPU device by alessandropalla in https://github.com/intel/intel-npu-acceleration-library/pull/72
* Automatic handling of output layers by alessandropalla in https://github.com/intel/intel-npu-acceleration-library/pull/73
* Sarah/feature/reduce ops by SarahByrneIntel in https://github.com/intel/intel-npu-acceleration-library/pull/74
* Hotfix for module by alessandropalla in https://github.com/intel/intel-npu-acceleration-library/pull/76
* Fix SDPA in case attn_mask == None by alessandropalla in https://github.com/intel/intel-npu-acceleration-library/pull/78

New Contributors
* SarahByrneIntel made their first contribution in https://github.com/intel/intel-npu-acceleration-library/pull/50
* xduzhangjiayu made their first contribution in https://github.com/intel/intel-npu-acceleration-library/pull/60

**Full Changelog**: https://github.com/intel/intel-npu-acceleration-library/compare/v1.2.0...v1.3.0

1.2.0

**Please use the latest driver to take full advantage of the new features** [link](https://www.intel.com/content/www/us/en/download/794734/intel-npu-driver-windows.html)

What's Changed
* Update scikit-learn requirement from <1.5.0 to <1.6.0 by dependabot in https://github.com/intel/intel-npu-acceleration-library/pull/31
* Add int4 support by alessandropalla in https://github.com/intel/intel-npu-acceleration-library/pull/32
* Backend performance optimization by alessandropalla in https://github.com/intel/intel-npu-acceleration-library/pull/35
* Implement SDPA (Scalar dot product attention) NPU kernel by alessandropalla in https://github.com/intel/intel-npu-acceleration-library/pull/36
* Persistent compilation by alessandropalla in https://github.com/intel/intel-npu-acceleration-library/pull/39

New Contributors
* dependabot made their first contribution in https://github.com/intel/intel-npu-acceleration-library/pull/31

**Full Changelog**: https://github.com/intel/intel-npu-acceleration-library/compare/v1.1.0...v1.2.0

1.1.0

**Be sure to use the latest NPU driver to fully exploit the latest features!** [link](https://www.intel.com/content/www/us/en/download/794734/intel-npu-driver-windows.html)


What's Changed
* Alessandro/feature/better compilation by alessandropalla in https://github.com/intel/intel-npu-acceleration-library/pull/11
* Add Conv2D support by alessandropalla in https://github.com/intel/intel-npu-acceleration-library/pull/18
* Add attribute to conv by alessandropalla in https://github.com/intel/intel-npu-acceleration-library/pull/19
* Add function to explicitly clean model cache to improve tests and avoid OOM errors by alessandropalla in https://github.com/intel/intel-npu-acceleration-library/pull/21
* Add driver versioning script for windows by alessandropalla in https://github.com/intel/intel-npu-acceleration-library/pull/28
* Driver support for true quantization in eager mode by alessandropalla in https://github.com/intel/intel-npu-acceleration-library/pull/20


**Full Changelog**: https://github.com/intel/intel-npu-acceleration-library/compare/v1.0.0...v1.1.0

v1.0.0
Intel NPU Acceleration Library release!

New Contributors
* alessandropalla made their first contribution in https://github.com/intel/intel-npu-acceleration-library/pull/1

**Full Changelog**: https://github.com/intel/intel-npu-acceleration-library/commits/v1.0.0
