e2eAIOK

Latest version: v1.2.0

1.2

Highlights
-----------

This release introduces three new capabilities: RecDP-AutoFE, RecDP-LLM, and DeltaTuner.
* RecDP-AutoFE provides automatic feature engineering, generating new features for any tabular dataset; it has been shown to achieve accuracy competitive with, or better than, hand-crafted data scientist solutions.
* RecDP-LLM is a one-stop solution for LLM data preparation, providing a Ray- and Spark-accelerated parallel data pipeline for pretraining data cleaning, RAG text extraction/splitting/indexing, and fine-tuning data quality evaluation and enhancement.
* DeltaTuner extends [Peft](https://github.com/huggingface/peft) to speed up LLM fine-tuning through multiple optimizations, including the compact model constructor DE-NAS, which constructs and modifies compact delta layers in a hardware-aware, train-free way, plus additional delta-tuning algorithms.
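To make the RecDP-LLM pipeline stages concrete, here is a plain-Python sketch of two of the preparation steps it automates, hash-based deduplication and overlapping chunking for RAG indexing. The function names and parameters below are hypothetical, illustrating the idea only, not pyrecdp's actual API:

```python
import hashlib

def dedup(docs):
    """Drop documents whose normalized content hashes to an already-seen digest."""
    seen, out = set(), []
    for doc in docs:
        digest = hashlib.sha256(doc.strip().lower().encode()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            out.append(doc)
    return out

def split(text, chunk_size=20, overlap=5):
    """Cut text into overlapping chunks, as done before indexing for RAG."""
    step = chunk_size - overlap
    return [text[i:i + chunk_size]
            for i in range(0, max(len(text) - overlap, 1), step)]

docs = ["Hello world.", "hello world.", "Another document."]
print(len(dedup(docs)))                     # 2: case-insensitive duplicate dropped
print(split("abcdefghijklmnopqrstuvwxyz"))  # overlapping 20-character chunks
```

A production pipeline runs steps like these in parallel over Ray or Spark partitions; the logic per document is the same.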

This release provides the following major features:
* [RecDP-AutoFE](https://github.com/intel/e2eAIOK/tree/main/RecDP/pyrecdp/autofe)
* [RecDP-LLM](https://github.com/intel/e2eAIOK/tree/main/RecDP/pyrecdp/LLM)
* [DeltaTuner](https://github.com/intel/e2eAIOK/tree/main/e2eAIOK/deltatuner)
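The "compact delta layers" DeltaTuner adds can be pictured with a small NumPy sketch of the low-rank adaptation idea such adapters build on (a generic illustration, not DeltaTuner's actual code): the base weight stays frozen and only two small factors train.

```python
import numpy as np

rng = np.random.default_rng(0)

W = rng.standard_normal((64, 128))   # frozen base weight (d_out x d_in)

r = 4                                # delta rank: trainables drop from 64*128 to r*(64+128)
A = np.zeros((64, r))                # zero init so the delta starts as a no-op
B = rng.standard_normal((r, 128)) * 0.01

def forward(x):
    # Base projection plus the compact low-rank delta A @ B.
    return x @ (W + A @ B).T

x = rng.standard_normal((2, 128))
y = forward(x)
print(y.shape)                  # (2, 64)
print(np.allclose(y, x @ W.T))  # True: zero-initialized delta changes nothing
```

A hardware-aware, train-free constructor like DE-NAS chooses where to place such deltas and at what rank, rather than training to find out.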

Papers and Blogs
-------------------

* [Enhance Productivity with Auto Feature Engineering Workflow](https://www.intel.com/content/www/us/en/developer/articles/technical/productivity-auto-feature-engineering-workflow.html)
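The auto feature engineering workflow described in the blog above can be sketched as a toy search loop (illustrative only, not RecDP-AutoFE's API): generate candidate interaction features and keep those that predict the target better than any raw column.

```python
import numpy as np

rng = np.random.default_rng(1)

X = rng.standard_normal((200, 3))
y = X[:, 0] * X[:, 1] + 0.1 * rng.standard_normal(200)   # hidden interaction

def corr(a, b):
    # Absolute Pearson correlation as a cheap feature-usefulness score.
    return abs(np.corrcoef(a, b)[0, 1])

# How well does the best raw column do on its own?
baseline = max(corr(X[:, j], y) for j in range(X.shape[1]))

# Candidate features: all pairwise products of raw columns.
candidates = {(i, j): X[:, i] * X[:, j]
              for i in range(3) for j in range(i + 1, 3)}
kept = {k: corr(f, y) for k, f in candidates.items() if corr(f, y) > baseline}
print(max(kept, key=kept.get))   # (0, 1): the hidden interaction is recovered
```

Real AutoFE explores a far larger space of transformations, but the generate-score-keep loop is the core mechanic.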


Versions and Components
-----------------------------
* PyTorch >= 1.13.1
* Python 3.10
* Peft 0.4.0
* PySpark 3.4.1
* Ray 2.7.1


Links
------

* https://github.com/intel/e2eAIOK
* https://pypi.org/project/e2eAIOK-deltatuner/1.2.0/
* https://pypi.org/project/e2eAIOK-recdp/1.2.0/


Full Changelog: https://github.com/intel/e2eAIOK/commits/v1.2

1.1

Highlights
-----------
This release introduces a new component: Model Adaptor. It adopts transfer-learning methodologies to reduce training time, improve inference throughput, and reduce data labeling by taking advantage of public pretrained models and datasets. The three methods in [Model Adaptor](https://github.com/intel/e2eAIOK/tree/main/e2eAIOK/ModelAdapter) are Finetuner, Distiller, and Domain Adapter. Currently, Model Adaptor supports ResNet, BERT, GPT-2, and 3D U-Net models, covering the image classification, natural language processing, and medical segmentation domains.
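The Distiller method transfers knowledge from a large teacher to a compact student. Its core objective can be sketched in NumPy using the standard Hinton-style formulation (a generic illustration, not Model Adaptor's code):

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, T=2.0):
    # KL(teacher || student) on temperature-softened distributions,
    # scaled by T^2 so gradients stay comparable across temperatures.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float((p * (np.log(p) - np.log(q))).sum(axis=-1).mean() * T * T)

teacher = np.array([[2.0, 0.5, -1.0]])
student = np.array([[0.0, 2.0, -1.0]])
print(distill_loss(teacher, teacher))      # 0.0: matching logits incur no loss
print(distill_loss(student, teacher) > 0)  # True: divergence is penalized
```

In practice this soft-label term is mixed with the ordinary hard-label cross-entropy on the student.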


This release provides the following major features:
* Model Adaptor Finetuner
* Model Adaptor Distiller
* Model Adaptor Domain Adapter
* Support for Hugging Face models in training-free NAS

Improvements
----------------
* Updated demo with colab click-to-run support
* Updated docker with jupyter support


Papers and Blogs
-------------------
* [The Parallel Universe Magazine - Accelerate AI Pipelines with New End-to-End AI Kit](https://www.intel.com/content/www/us/en/developer/articles/technical/accelerate-ai-with-intel-e2e-ai-optimization-kit.html)
* [Multi-Model, Hardware-Aware Train-Free Neural Architecture Search](https://community.intel.com/t5/Blogs/Tech-Innovation/Artificial-Intelligence-AI/Multi-Model-Hardware-Aware-Train-Free-Neural-Architecture-Search/post/1479863)
* [SigOpt Blog - Enhance Multi-Model Hardware-Aware Train-Free NAS with SigOpt](https://sigopt.com/blog/enhance-multi-model-hardware-aware-train-free-nas-with-sigopt)
* [The Intel® SIHG4SR Solution for the ACM RecSys Challenge 2022](https://www.intel.com/content/www/us/en/developer/articles/technical/sihg4sr-graph-solution-for-recsys-challenge-2022.html)


Versions and Components
-----------------------------
* TensorFlow 2.10.0
* PyTorch 1.5, 1.12
* Intel® Extension for TensorFlow 2.10.x
* Intel® Extension for PyTorch 0.2, 1.12.x
* Horovod 0.26
* Python 3.9.12

Links
------
* [https://github.com/intel/e2eAIOK](https://github.com/intel/e2eAIOK)
* [https://pypi.org/project/e2eAIOK](https://pypi.org/project/e2eAIOK)


Full Changelog: [https://github.com/intel/e2eAIOK/commits/v1.1](https://github.com/intel/e2eAIOK/commits/v1.1)

1.0

Highlights
-----------
This release introduces a new component: DE-NAS, a multi-model, hardware-aware, training-free neural architecture search module that extends model optimization to more domains. DE-NAS supports CNN, ViT, NLP, and ASR models, and leverages training-free scores to construct compact models directly on CPU clusters.

This release provides the following major features:
* Multi-model, hardware-aware, training-free NAS framework
* Pluggable search strategy
* Training-free scoring for candidate evaluation
* CNN, ViT, NLP, ASR DE-NAS recipes
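The idea behind training-free candidate scoring can be sketched with a NASWOT-style proxy (an illustration in the spirit of DE-NAS, not its actual scoring code): rank architectures by how well a randomly initialized ReLU layer separates inputs into distinct activation patterns, with no gradient steps at all.

```python
import numpy as np

rng = np.random.default_rng(42)

def trainfree_score(width, n_samples=32, d_in=16):
    # Random init only -- the network is never trained.
    W = rng.standard_normal((width, d_in)) / np.sqrt(d_in)
    X = rng.standard_normal((n_samples, d_in))
    codes = (X @ W.T > 0).astype(float)                 # binary activation codes
    # Kernel counting agreements between activation patterns; a larger
    # log-determinant means more distinguishable patterns.
    K = codes @ codes.T + (1 - codes) @ (1 - codes).T
    _, logdet = np.linalg.slogdet(K + 1e-3 * np.eye(n_samples))
    return logdet

candidates = [8, 32, 128]   # hypothetical layer widths to search over
scores = {w: trainfree_score(w) for w in candidates}
best = max(scores, key=scores.get)
print(best)
```

Because each score is one forward pass, thousands of candidates can be ranked cheaply on CPU, which is what makes search on CPU clusters practical.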

Improvements
----------------
* New Dockerfile with PyTorch 1.12 support
* New CI/CD workflows support
* Updated data processing with RecDP for DLRM
* Automated packaging and delivery

Versions and Components
-----------------------------
* TensorFlow 2.10
* PyTorch 1.5, 1.10, 1.12
* Intel® Extension for TensorFlow 2.10.x
* Intel® Extension for PyTorch 0.2, 1.10.x, 1.12.x
* Horovod 0.26
* Spark 3.1
* Python 3.x

Links
------
* [https://github.com/intel/e2eAIOK](https://github.com/intel/e2eAIOK)
* [https://pypi.org/project/e2eAIOK](https://pypi.org/project/e2eAIOK)
* [https://hub.docker.com/repository/docker/e2eaiok/e2eaiok-tensorflow](https://hub.docker.com/repository/docker/e2eaiok/e2eaiok-tensorflow)
* [https://hub.docker.com/repository/docker/e2eaiok/e2eaiok-pytorch](https://hub.docker.com/repository/docker/e2eaiok/e2eaiok-pytorch)


Full Changelog: [https://github.com/intel/e2eAIOK/commits/v1.0](https://github.com/intel/e2eAIOK/commits/v1.0)

0.2

Intel® End-to-End AI Optimization Kit is a composable toolkit for E2E AI optimization that delivers high-performance, lightweight networks/models efficiently on commodity hardware such as CPUs, aiming to make E2E AI pipelines faster, easier, and more accessible.

Highlights
-----------
This release introduces four new deeply optimized end-to-end AI workflows that deliver optimized performance on CPU: the computer vision model ResNet, the speech recognition model RNN-T, the NLP model BERT, and the reinforcement learning model MiniGo. The major optimizations are improved scale-out capability on distributed CPU nodes, plus built-in model optimization and automatic hyperparameter tuning with the Smart Democratization Advisor (SDA).

This release provides the following highlighted features:
* Single click AI solution deployment in distributed CPU clusters
* Enhanced Smart Democratization Advisor (SDA)
* Optimized popular models ResNet, RNN-T, BERT, and MiniGo on CPU
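Automatic hyperparameter tuning of the kind SDA provides boils down to a search loop over a configuration space. A minimal random-search sketch (illustrative only; the names, values, and objective below are made up, and SDA's real advisor is far richer):

```python
import random

random.seed(0)

# Hypothetical search space for the demo.
space = {
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "batch_size": [64, 128, 256],
}

def objective(cfg):
    # Stand-in for a real training run: a made-up score peaking at
    # lr=1e-3, batch_size=128 (both values arbitrary for the demo).
    return -abs(cfg["learning_rate"] - 1e-3) - abs(cfg["batch_size"] - 128) / 1000

# Sample configurations, evaluate each, keep the best seen so far.
best_cfg, best_score = None, float("-inf")
for _ in range(20):
    cfg = {k: random.choice(v) for k, v in space.items()}
    score = objective(cfg)
    if score > best_score:
        best_cfg, best_score = cfg, score
print(best_cfg, best_score)
```

An advisor improves on this loop by modeling the objective (e.g., Bayesian optimization) so fewer expensive training runs are needed.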

Improvements
----------------
* Easy cluster deployment script
* Click-to-run optimized AI pipelines
* Updated data processing with RecDP for DLRM
* Step-by-step guides and demos

Versions and Components
-----------------------------
* TensorFlow 2.5, 2.10
* PyTorch 1.10
* Horovod 0.23, 0.26
* Spark 3.1
* Python 3.x


Links
-----
* https://pypi.org/project/e2eAIOK
* https://github.com/intel/e2eAIOK


**Full Changelog**: https://github.com/intel/e2eAIOK/commits/v0.2

0.1

Highlights
------
* First release of Smart Democratization Advisor (SDA)
* End-to-end AI pipelines for three recommender system models: DLRM, DIEN, WnD

Contributors
----------------
* xuechendi made their first contribution in https://github.com/intel/e2eAIOK/pull/3
* Jian-Zhang made their first contribution in https://github.com/intel/e2eAIOK/pull/6
* zigzagcai made their first contribution in https://github.com/intel/e2eAIOK/pull/7
* csdingbin made their first contribution in https://github.com/intel/e2eAIOK/pull/8
* XinyaoWa made their first contribution in https://github.com/intel/e2eAIOK/pull/12
* Peach-He made their first contribution in https://github.com/intel/e2eAIOK/pull/2
* tianyil1 made their first contribution in https://github.com/intel/e2eAIOK/pull/18

**Full Changelog**: https://github.com/intel/e2eAIOK/commits/v0.1
