What's Changed
* Enable jit load module and cpu_adagrad op by jinyouzhi in https://github.com/intel/intel-extension-for-deepspeed/pull/1 (see the JIT-load sketch after this list)
* Remove dependency on builder_names by delock in https://github.com/intel/intel-extension-for-deepspeed/pull/4
* Change interface according to latest deepspeed upstream by delock in https://github.com/intel/intel-extension-for-deepspeed/pull/5
* Support oneAPI 2023.0 basekit by jinyouzhi in https://github.com/intel/intel-extension-for-deepspeed/pull/3
* Map MPICH env variables for multi-node training by inkcherry in https://github.com/intel/intel-extension-for-deepspeed/pull/6
* Add empty InferenceBuilder for XPU inference by dc3671 in https://github.com/intel/intel-extension-for-deepspeed/pull/9
* Add -fsycl flag to support icpx for jit_load by ys950902 in https://github.com/intel/intel-extension-for-deepspeed/pull/7
* Fix transformer.py so the jit load UTs pass by ys950902 in https://github.com/intel/intel-extension-for-deepspeed/pull/10
* Update README to support jit-load by ys950902 in https://github.com/intel/intel-extension-for-deepspeed/pull/11
* Add mapping rules for pmix launcher by YizhouZ in https://github.com/intel/intel-extension-for-deepspeed/pull/12
* Update kernels for the new compiler by ys950902 in https://github.com/intel/intel-extension-for-deepspeed/pull/14
* builder.py: fix Python dict query code by YizhouZ in https://github.com/intel/intel-extension-for-deepspeed/pull/29
* Update the pypi package info by rogerxfeng8 in https://github.com/intel/intel-extension-for-deepspeed/pull/30
* Add _vector_add kernel by baodii in https://github.com/intel/intel-extension-for-deepspeed/pull/32
* Fix bugs by baodii in https://github.com/intel/intel-extension-for-deepspeed/pull/33
* Update README.md by rogerxfeng8 in https://github.com/intel/intel-extension-for-deepspeed/pull/34
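Several of the items above add or fix JIT (just-in-time) loading of XPU ops such as cpu_adagrad. Below is a minimal sketch of how such an op could be loaded through DeepSpeed's accelerator abstraction; the builder name and exact interface are assumptions and changed across the versions covered here:

```python
# Minimal sketch: JIT-load an op via DeepSpeed's accelerator abstraction.
# Assumes intel-extension-for-deepspeed is installed so the XPU accelerator is picked up;
# builder names and interfaces changed across the versions covered by this release.
from deepspeed.accelerator import get_accelerator

accelerator = get_accelerator()
builder = accelerator.create_op_builder("CPUAdagradBuilder")  # builder name is an assumption
cpu_adagrad_op = builder.load()  # triggers JIT compilation (e.g. icpx with -fsycl) on first use
```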
New Contributors
* jinyouzhi made their first contribution in https://github.com/intel/intel-extension-for-deepspeed/pull/1
* delock made their first contribution in https://github.com/intel/intel-extension-for-deepspeed/pull/4
* inkcherry made their first contribution in https://github.com/intel/intel-extension-for-deepspeed/pull/6
* dc3671 made their first contribution in https://github.com/intel/intel-extension-for-deepspeed/pull/9
* ys950902 made their first contribution in https://github.com/intel/intel-extension-for-deepspeed/pull/7
* rogerxfeng8 made their first contribution in https://github.com/intel/intel-extension-for-deepspeed/pull/30
**Full Changelog**: https://github.com/intel/intel-extension-for-deepspeed/compare/rel_2022.4...v0.9.4
rel_2022.4
Initial preproduction release of Intel Extension for DeepSpeed, supporting training of a GPT-2 large language model with 3.6 billion parameters. The following features were verified on Intel GPUs (see the config sketch after this list):
* DeepSpeed ZeRO stage 2
* Model parallel size of 1
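The ZeRO stage 2 feature above corresponds to a setting in the DeepSpeed config. A minimal sketch, assuming a standard DeepSpeed dict config; every value except `zero_optimization.stage` is an illustrative placeholder:

```python
# Minimal sketch of a DeepSpeed config enabling ZeRO stage 2.
# All values except zero_optimization.stage are illustrative placeholders.
ds_config = {
    "train_micro_batch_size_per_gpu": 4,   # placeholder batch size
    "zero_optimization": {
        "stage": 2,                        # ZeRO stage 2, as verified in this release
    },
    "fp16": {"enabled": True},             # optional; adjust for your hardware
}

# Depending on the DeepSpeed version, pass the dict to deepspeed.initialize()
# as config=ds_config (newer releases) or config_params=ds_config (older ones).
```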