Liger-Kernel

Latest version: v0.5.2


0.3.1

Summary
This patch release brings important updates and fixes to Liger-Kernel. Notable changes include:

* **KLDiv calculation fix:** KLDiv now functions correctly with larger vocab sizes.
* **SwiGLU/GeGLU casting fix:** Program IDs are now cast to int64 in the SwiGLU/GeGLU kernels to prevent memory errors with larger dimensions.
* **AutoLigerKernelForCausalLM fix:** The model now properly passes through all original keyword arguments (see the sketch below).
* **Post-init model patching fix:** Post-init model patching now works correctly with the Hugging Face Trainer integration.
* **Relaxed transformers dependency:** The `transformers` version requirement has been relaxed to improve compatibility with a broader range of versions.
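
As an illustration of the AutoLigerKernelForCausalLM fix, here is a minimal sketch of a call whose keyword arguments are now forwarded intact to the underlying Hugging Face `from_pretrained`; the model path and the `torch_dtype` choice are placeholders, not taken from the release notes.

```python
# Minimal sketch: keyword arguments such as torch_dtype are now passed through
# to the underlying Hugging Face from_pretrained call (model path is a placeholder).
import torch

from liger_kernel.transformers import AutoLigerKernelForCausalLM

model = AutoLigerKernelForCausalLM.from_pretrained(
    "path/to/your/model",  # placeholder
    torch_dtype=torch.bfloat16,
)
```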

What's Changed
* Remove debug print statement by EdoardoLuciani in https://github.com/linkedin/Liger-Kernel/pull/247
* [Easy] Cast program_id to int64 in SwiGLU/GeGLU kernels by hansonw in https://github.com/linkedin/Liger-Kernel/pull/251
* Fix a comment typo in flce by Tcc0403 in https://github.com/linkedin/Liger-Kernel/pull/256
* Fix AutoLigerKernelForCausalLM to pass through original kwargs by shimizust in https://github.com/linkedin/Liger-Kernel/pull/263
* Update contributing guide for adding a new model by shivam15s in https://github.com/linkedin/Liger-Kernel/pull/260
* chore: Add Qwen2.5 and Phi3.5 to Readme by tyler-romero in https://github.com/linkedin/Liger-Kernel/pull/265
* rename cuda mode to gpu mode by msaroufim in https://github.com/linkedin/Liger-Kernel/pull/267
* Fix sharing a ResBlock layer for each head in Medusa example by chiwanpark in https://github.com/linkedin/Liger-Kernel/pull/269
* Fix/kldiv by S1ro1 in https://github.com/linkedin/Liger-Kernel/pull/262
* Post-init model patching fix by shimizust in https://github.com/linkedin/Liger-Kernel/pull/280
* Relaxed transformers dependency by shimizust in https://github.com/linkedin/Liger-Kernel/pull/270
* Disable gemma2 and qwen2_vl tests by shimizust in https://github.com/linkedin/Liger-Kernel/pull/288
* Release version 0.3.1 by shimizust in https://github.com/linkedin/Liger-Kernel/pull/286

New Contributors
* EdoardoLuciani made their first contribution in https://github.com/linkedin/Liger-Kernel/pull/247
* msaroufim made their first contribution in https://github.com/linkedin/Liger-Kernel/pull/267

**Full Changelog**: https://github.com/linkedin/Liger-Kernel/compare/v0.3.0...v0.3.1

0.3.0

Opening Thoughts
Thank you, everyone! Your overwhelming support continues to fuel our passion for innovation. With your engagement, we've pushed the boundaries further in this release!




**We are hosting our 1st IRL event, 'Scaling AI Infra - GPUs, Kernels, LLMs and More'. We will discuss Liger-Kernel and have invited speakers covering DeepSpeed, SGLang, and the TensorCore team. Please RSVP at [our event page](https://scalingaiinfragpuskernelsllmsa.splashthat.com).**



What's New

🌐 Large Vision Language Model Support
Welcome Qwen2-VL, our first venture into large vision-language models! This expansion brings more versatility in applying our kernels across different AI domains.

✨ Patch Kernels on Model Instances
Enhancing flexibility, our latest API update accepts either a model name string or a model instance as input, streamlining integration with Hugging Face's SFT trainer. This ensures you can easily patch Liger kernels into your models, whether you're starting from scratch or adapting an existing model setup.
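
Below is a minimal sketch of the two patching flows this enables, using Llama as an example; the `model=` keyword for patching an existing instance is an assumption based on the post-init patching support in this release, so check the API guide for the exact signature.

```python
# Sketch of pre-init vs. post-init patching (Llama as an example; the model=
# keyword for instance patching is an assumption, not a confirmed signature).
from transformers import AutoModelForCausalLM
from liger_kernel.transformers import apply_liger_kernel_to_llama

# Option 1: patch the modeling code before the model is instantiated.
apply_liger_kernel_to_llama()
model = AutoModelForCausalLM.from_pretrained("path/to/llama-checkpoint")  # placeholder

# Option 2: patch an already-instantiated model in place (post-init patching),
# which is what streamlines the Hugging Face SFT trainer integration.
model = AutoModelForCausalLM.from_pretrained("path/to/llama-checkpoint")  # placeholder
apply_liger_kernel_to_llama(model=model)
```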

🚀 SWIFT Trainer Integration
We're excited to be integrated into the [SWIFT Trainer Framework](https://github.com/modelscope/ms-swift). This integration reflects our commitment to delivering cutting-edge tools that help the community improve training efficiency across all supported models.

🔧 New Kernels and Features
* **KL Divergence Kernel**: Dive deeper into model behaviors with our new KL divergence kernel, perfect for model distillation, alignment, and beyond.
* **Experimental Embedding Kernel**: Explore acceleration possibilities with our experimental kernel that optimizes embedding operations.
* **Extended Cross Entropy Functionality**: Cross entropy now supports label smoothing and sum reduction, enabling more robust training and more flexible loss calculations (see the sketch after this list).
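
A minimal sketch of the new loss features follows; the class name `LigerKLDIVLoss` and the `label_smoothing`/`reduction` arguments are assumed to be exposed under `liger_kernel.transformers`, so verify them against the API guide.

```python
# Sketch of the new loss features; exact class and argument names are
# assumptions to be checked against the API guide. Requires a GPU.
import torch

from liger_kernel.transformers import LigerCrossEntropyLoss, LigerKLDIVLoss

vocab = 32000
logits = torch.randn(8, vocab, device="cuda")
labels = torch.randint(0, vocab, (8,), device="cuda")

# Cross entropy with the newly added label smoothing and sum reduction.
ce = LigerCrossEntropyLoss(label_smoothing=0.1, reduction="sum")
ce_loss = ce(logits, labels)

# KL divergence kernel, following torch.nn.KLDivLoss conventions
# (log-probabilities vs. probabilities), e.g. for distillation.
kl = LigerKLDIVLoss(reduction="batchmean")
student_log_probs = torch.log_softmax(logits, dim=-1)
teacher_probs = torch.softmax(torch.randn(8, vocab, device="cuda"), dim=-1)
kl_loss = kl(student_log_probs, teacher_probs)
```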

Get Involved and Stay Tuned
Join us on our journey! Connect with us in our Discord channel on the CUDA MODE server, and don't forget to follow our official account on X for the latest updates: https://x.com/liger_kernel.

A Look Ahead
We're not stopping here! Looking forward, we plan to expand our support to include even more model families and to explore further optimizations and innovative features. Your feedback is invaluable, so please keep it coming as we shape the future of Liger together!

🌟 Acknowledgments
Your contributions make a difference! Thanks to everyone who has starred, contributed, and provided feedback. Each contribution enriches our community and helps us grow stronger together.





What's Changed
* Skip Tests for GPUs Not Supporting `bf16` by austin362667 in https://github.com/linkedin/Liger-Kernel/pull/159
* [Operators] LayerNorm Kernels + LigerLayerNorm by AndreSlavescu in https://github.com/linkedin/Liger-Kernel/pull/169
* README: ensure modeling code is patched before model instantiation by tmm1 in https://github.com/linkedin/Liger-Kernel/pull/170
* Updated wave snippet to use AutoLigerKernelForCausalLM by shimizust in https://github.com/linkedin/Liger-Kernel/pull/181
* [Documentation] LayerNorm added to README by AndreSlavescu in https://github.com/linkedin/Liger-Kernel/pull/180
* Remove torch compile from benchmark scripts by shimizust in https://github.com/linkedin/Liger-Kernel/pull/183
* Update release guide by yundai424 in https://github.com/linkedin/Liger-Kernel/pull/167
* Extract forward/backward core computation bits outside of torch autograd context for easy reuse by qingquansong in https://github.com/linkedin/Liger-Kernel/pull/178
* custom Embedding kernel by AndreSlavescu in https://github.com/linkedin/Liger-Kernel/pull/135
* Feat/functional api by S1ro1 in https://github.com/linkedin/Liger-Kernel/pull/172
* [feat] FusedLinearCrossEntropy support for Mixtral by ryankert01 in https://github.com/linkedin/Liger-Kernel/pull/136
* [Docs] Update README to include LigerEmbedding by AndreSlavescu in https://github.com/linkedin/Liger-Kernel/pull/186
* compute quantiles for memory usage by kvignesh1420 in https://github.com/linkedin/Liger-Kernel/pull/187
* TypoFixed repo_foward -> rope_forward by LucioPalmucci in https://github.com/linkedin/Liger-Kernel/pull/191
* Switch Lightning 1 GPU example to Qwen2 0.5B instruct model with 1024 max seq length by qingquansong in https://github.com/linkedin/Liger-Kernel/pull/193
* [BUILD] Add pyproject.toml by AndreSlavescu in https://github.com/linkedin/Liger-Kernel/pull/150
* ci fix by AndreSlavescu in https://github.com/linkedin/Liger-Kernel/pull/202
* Update the casting logic of RMSNorm by lancerts in https://github.com/linkedin/Liger-Kernel/pull/201
* Update test_rms_norm.py by lancerts in https://github.com/linkedin/Liger-Kernel/pull/203
* Refactored benchmark tests by shimizust in https://github.com/linkedin/Liger-Kernel/pull/196
* Update layer_norm.py by lancerts in https://github.com/linkedin/Liger-Kernel/pull/207
* Uplift kernel APIs to top level by austin362667 in https://github.com/linkedin/Liger-Kernel/pull/210
* Feat: Kl Divergence kernel by S1ro1 in https://github.com/linkedin/Liger-Kernel/pull/194
* minor refactor of rms and layernorm by lancerts in https://github.com/linkedin/Liger-Kernel/pull/213
* Fix compatibility issue on triton=2.3.1 by Tcc0403 in https://github.com/linkedin/Liger-Kernel/pull/219
* Elaborate ack section by ByronHsu in https://github.com/linkedin/Liger-Kernel/pull/222
* Add license in ack section by ByronHsu in https://github.com/linkedin/Liger-Kernel/pull/224
* Reference Unsloth in header by momochen in https://github.com/linkedin/Liger-Kernel/pull/216
* Add label smoothing for cross entropy by Tcc0403 in https://github.com/linkedin/Liger-Kernel/pull/198
* Added HF use-case benchmark script by shimizust in https://github.com/linkedin/Liger-Kernel/pull/223
* (fix) fix pyproject.toml by wizyoung in https://github.com/linkedin/Liger-Kernel/pull/218
* Update swiglu and geglu forward: zeros_like -> empty_like by IvanYashchuk in https://github.com/linkedin/Liger-Kernel/pull/217
* add repr infomation for layer_norm and rms_norm by wizyoung in https://github.com/linkedin/Liger-Kernel/pull/220
* (fix) fix pyproject.toml by wizyoung in https://github.com/linkedin/Liger-Kernel/pull/226
* Refactor/benchmarking visualizer by S1ro1 in https://github.com/linkedin/Liger-Kernel/pull/212
* Feat: add kl div to readme by S1ro1 in https://github.com/linkedin/Liger-Kernel/pull/229
* Monkeypatch for Qwen2-VL by tyler-romero in https://github.com/linkedin/Liger-Kernel/pull/175
* Optimize fused_linear_cross_entropy when weight does not require grads by hansonw in https://github.com/linkedin/Liger-Kernel/pull/237
* SWIFT Trainer Integration by tastelikefeet in https://github.com/linkedin/Liger-Kernel/pull/240
* Add label smoothing to FLCE and unit tests by Tcc0403 in https://github.com/linkedin/Liger-Kernel/pull/244
* Restore monkey patched modules by austin362667 in https://github.com/linkedin/Liger-Kernel/pull/232
* Support for patching post-model initialization by shimizust in https://github.com/linkedin/Liger-Kernel/pull/199
* Reduction support for CrossEntropy and Division by 0 Fix by shivam15s in https://github.com/linkedin/Liger-Kernel/pull/153
* Release Liger-Kernel version 0.3.0 by qingquansong in https://github.com/linkedin/Liger-Kernel/pull/246

New Contributors
* austin362667 made their first contribution in https://github.com/linkedin/Liger-Kernel/pull/159
* tmm1 made their first contribution in https://github.com/linkedin/Liger-Kernel/pull/170
* S1ro1 made their first contribution in https://github.com/linkedin/Liger-Kernel/pull/172
* ryankert01 made their first contribution in https://github.com/linkedin/Liger-Kernel/pull/136
* kvignesh1420 made their first contribution in https://github.com/linkedin/Liger-Kernel/pull/187
* LucioPalmucci made their first contribution in https://github.com/linkedin/Liger-Kernel/pull/191
* momochen made their first contribution in https://github.com/linkedin/Liger-Kernel/pull/216
* wizyoung made their first contribution in https://github.com/linkedin/Liger-Kernel/pull/218
* IvanYashchuk made their first contribution in https://github.com/linkedin/Liger-Kernel/pull/217
* hansonw made their first contribution in https://github.com/linkedin/Liger-Kernel/pull/237
* tastelikefeet made their first contribution in https://github.com/linkedin/Liger-Kernel/pull/240

**Full Changelog**: https://github.com/linkedin/Liger-Kernel/compare/v0.2.1...v0.3.0

0.2.1

Patch Release
Fixes a bug in the Gemma patch function where FLCE and CE were both enabled by default (ruh roh).

What's Changed
* Bug fix for gemma: fused_linear_cross_entropy flag and cross_entropy flag are mutual exclusive by JasonZhu1313 in https://github.com/linkedin/Liger-Kernel/pull/168
* Add gemma 7b it benchmark by JasonZhu1313 in https://github.com/linkedin/Liger-Kernel/pull/166
* bump patch ver by yundai424 in https://github.com/linkedin/Liger-Kernel/pull/171


**Full Changelog**: https://github.com/linkedin/Liger-Kernel/compare/v0.2.0...v0.2.1

0.2.0

Opening Thoughts 🫶
Thank You!
We'd love to take this chance to express our sincere gratitude to the community! **2500+ ⭐ , 10+ new contributors, 50+ PRs**, plus integration into [Hugging Face 🤗](https://github.com/huggingface/transformers/tree/main), [axolotl](https://github.com/axolotl-ai-cloud/axolotl) and [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) in **less than one week** since going open source is far beyond our expectations. Working together with all the cool people in the community is a joy, and we can't wait for further collaborations down the road!

Looking Ahead
We look forward to further enhancing our collaboration with the community and working together on a lot of cool stuff: support for more model families, squeezing every optimization opportunity out of our kernels, and, why not, [**llama.triton**](https://github.com/linkedin/Liger-Kernel/issues/119)? 😉

Get Involved and Stay Tuned
Please feel free to join our [discord channel](https://discord.com/channels/1189498204333543425/1275130785933951039) hosted in CUDA MODE server, and follow our repo's official account on X: https://x.com/liger_kernel !

Welcome Phi3 and Qwen2 🚀
This release ships with support for more popular models, including Phi3 and Qwen2. All existing kernels in the Liger repo can now be leveraged to boost training with models from these families. Please refer to our [API guide](https://github.com/linkedin/Liger-Kernel/tree/v0.1.2?tab=readme-ov-file#apis) for usage details.
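
For instance, patching one of the newly supported families stays a one-liner; the keyword flags below are illustrative rather than the library's confirmed defaults, so check the API guide before relying on them.

```python
# Patch the newly supported model families with Liger kernels; the keyword
# flags are illustrative and should be checked against the API guide.
from liger_kernel.transformers import (
    apply_liger_kernel_to_phi3,
    apply_liger_kernel_to_qwen2,
)

apply_liger_kernel_to_qwen2()   # use the default kernel selection
apply_liger_kernel_to_phi3(     # or pick kernels explicitly
    rope=True,
    rms_norm=True,
    swiglu=True,
    fused_linear_cross_entropy=True,
)
```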

Even Easier API ❤️
Experimenting with different model families and tired of having if-else everywhere just to switch between kernel patching functions? You can now try out our new **model-agnostic** API to apply Liger kernels. Still a one-liner, but more elegant :) For example:
```python
from liger_kernel.transformers import AutoLigerKernelForCausalLM

# This AutoModel wrapper class automatically monkey-patches the
# model with the optimized Liger kernels if the model is supported.
model = AutoLigerKernelForCausalLM.from_pretrained(...)
```


More Features
* Support for an optional bias term in FusedLinearCrossEntropy (#144); see the sketch after this list
* Mistral is now equipped with the humongous memory reduction from FusedLinearCrossEntropy (#93)
* Gemma is now equipped with the humongous memory reduction from FusedLinearCrossEntropy (#111)
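
A minimal sketch of FusedLinearCrossEntropy with the new optional bias term; the argument order (lm_head weight, hidden states, labels, bias) is an assumption based on the module-style API and should be verified against the docs.

```python
# Sketch of FLCE with the new optional bias; argument order is an assumption.
# Requires a GPU.
import torch

from liger_kernel.transformers import LigerFusedLinearCrossEntropyLoss

hidden_dim, vocab = 4096, 32000
hidden = torch.randn(16, hidden_dim, device="cuda", requires_grad=True)     # flattened tokens
weight = torch.randn(vocab, hidden_dim, device="cuda", requires_grad=True)  # lm_head weight
bias = torch.randn(vocab, device="cuda", requires_grad=True)                # the new optional bias
labels = torch.randint(0, vocab, (16,), device="cuda")

loss_fn = LigerFusedLinearCrossEntropyLoss()
loss = loss_fn(weight, hidden, labels, bias)  # full logits are never materialized
loss.backward()
```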

Bug Fixes
* Fixed an import error when using `triton>=3.0.0` on NGC containers (#79)
* Fixed the missing offset in Gemma RMSNorm (#85) oops
* Added back missing dataclass entries in the efficiency callback (#116)
* There was some confusion about which Gemma models we support; we now support all of them! (#125)
* Fall back to torch-native linear + CrossEntropy when no labels are provided (#128)
* Matched the exact dtype up- and down-casting in Llama & Gemma for RMSNorm (#92)
* Addressed a bug where RoPE became very slow with dynamic sequence lengths (#149)

What's Changed
* Updated test tolerances for H100 by shimizust in https://github.com/linkedin/Liger-Kernel/pull/55
* Update README.md by lancerts in https://github.com/linkedin/Liger-Kernel/pull/58
* Update benchmark result of Medusa for batch size = 6 setup by JasonZhu1313 in https://github.com/linkedin/Liger-Kernel/pull/59
* Add star graph by shivam15s in https://github.com/linkedin/Liger-Kernel/pull/60
* Add monkey patch for Qwen2 models by chiwanpark in https://github.com/linkedin/Liger-Kernel/pull/69
* Add pytest and datasets to dev dependencies by chiwanpark in https://github.com/linkedin/Liger-Kernel/pull/68
* Fix typos by pchng in https://github.com/linkedin/Liger-Kernel/pull/77
* Remove unused images in `examples/medusa/docs/images/` by pchng in https://github.com/linkedin/Liger-Kernel/pull/78
* chore: update cross_entropy.py by eltociear in https://github.com/linkedin/Liger-Kernel/pull/84
* Fix incorrect import for triton 3 by arvindsun in https://github.com/linkedin/Liger-Kernel/pull/79
* update install from source guide by yundai424 in https://github.com/linkedin/Liger-Kernel/pull/86
* Fix Gemma RMSNorm by davidgonmar in https://github.com/linkedin/Liger-Kernel/pull/85
* Fix example bugs by qingquansong in https://github.com/linkedin/Liger-Kernel/pull/88
* Make tests passing on AMD GPU with 24GB ram by helloworld1 in https://github.com/linkedin/Liger-Kernel/pull/90
* modified: README.md by leaf-soba in https://github.com/linkedin/Liger-Kernel/pull/91
* pytest without need to dealing with PYTHONPATH by helloworld1 in https://github.com/linkedin/Liger-Kernel/pull/95
* Update test_cross_entropy.py by lancerts in https://github.com/linkedin/Liger-Kernel/pull/94
* Add FusedLinerCrossEntropy support for Mistral by Tcc0403 in https://github.com/linkedin/Liger-Kernel/pull/93
* Remove duplicate images by qingquansong in https://github.com/linkedin/Liger-Kernel/pull/107
* Add Qwen benchmarks by shivam15s in https://github.com/linkedin/Liger-Kernel/pull/108
* Fix Mixtral typo by Tcc0403 in https://github.com/linkedin/Liger-Kernel/pull/109
* Explicitly add dependencies in req.txt for medusa example by JasonZhu1313 in https://github.com/linkedin/Liger-Kernel/pull/110
* Add convergence tests and trainer integration test for Qwen2 by Tcc0403 in https://github.com/linkedin/Liger-Kernel/pull/105
* [Bug fix] Efficiency callback missing dataclass entries by tyler-romero in https://github.com/linkedin/Liger-Kernel/pull/116
* Monkeypatch for Phi3 by tyler-romero in https://github.com/linkedin/Liger-Kernel/pull/76
* Add FusedLinearCrossEntropy to Gemma by Luke-Chesley in https://github.com/linkedin/Liger-Kernel/pull/111
* Makefile command for env-report by tyler-romero in https://github.com/linkedin/Liger-Kernel/pull/114
* [WIP] Fix confusion on Gemma by yundai424 in https://github.com/linkedin/Liger-Kernel/pull/121
* [tiny] reformat code by tyler-romero in https://github.com/linkedin/Liger-Kernel/pull/122
* Revert "[WIP] Fix confusion on Gemma (121)" by yundai424 in https://github.com/linkedin/Liger-Kernel/pull/123
* Fix gemma 1 and 2 support by yundai424 in https://github.com/linkedin/Liger-Kernel/pull/125
* Adding AutoLigerKernelForCausalLM by shimizust in https://github.com/linkedin/Liger-Kernel/pull/115
* fallback to torch native linear+CE when without label by yundai424 in https://github.com/linkedin/Liger-Kernel/pull/128
* Add code to save medusa heads and model by JasonZhu1313 in https://github.com/linkedin/Liger-Kernel/pull/130
* Add FusedLinerCrossEntropy support for Phi3 by tyler-romero in https://github.com/linkedin/Liger-Kernel/pull/103
* Add GPU CI support by helloworld1 in https://github.com/linkedin/Liger-Kernel/pull/134
* Make GPU CI optional until it is more stable by helloworld1 in https://github.com/linkedin/Liger-Kernel/pull/141
* Add gemma lightning example for single L40 GPU by qingquansong in https://github.com/linkedin/Liger-Kernel/pull/120
* feat: correct casts in RMSNorm to match references by davidgonmar in https://github.com/linkedin/Liger-Kernel/pull/92
* Bias for fused linear cross entropy by davidgonmar in https://github.com/linkedin/Liger-Kernel/pull/144
* Rerun FLCE benchmark after bias added by ByronHsu in https://github.com/linkedin/Liger-Kernel/pull/148
* updated sl to be non-constexpr by AndreSlavescu in https://github.com/linkedin/Liger-Kernel/pull/149
* update readme to use absolute paths by shaoruu in https://github.com/linkedin/Liger-Kernel/pull/157
* fix convergence test, phi3 import and update benchmark by yundai424 in https://github.com/linkedin/Liger-Kernel/pull/155
* bump lowest HF version by yundai424 in https://github.com/linkedin/Liger-Kernel/pull/158
* Add missing tf_keras to req.txt by JasonZhu1313 in https://github.com/linkedin/Liger-Kernel/pull/161
* Re-enable GPU CI enforce by helloworld1 in https://github.com/linkedin/Liger-Kernel/pull/142
* Bump package ver by yundai424 in https://github.com/linkedin/Liger-Kernel/pull/163
* Update version in setup.py to 0.2.0 by yundai424 in https://github.com/linkedin/Liger-Kernel/pull/164

New Contributors
* chiwanpark made their first contribution in https://github.com/linkedin/Liger-Kernel/pull/69
* pchng made their first contribution in https://github.com/linkedin/Liger-Kernel/pull/77
* eltociear made their first contribution in https://github.com/linkedin/Liger-Kernel/pull/84
* arvindsun made their first contribution in https://github.com/linkedin/Liger-Kernel/pull/79
* davidgonmar made their first contribution in https://github.com/linkedin/Liger-Kernel/pull/85
* leaf-soba made their first contribution in https://github.com/linkedin/Liger-Kernel/pull/91
* Tcc0403 made their first contribution in https://github.com/linkedin/Liger-Kernel/pull/93
* tyler-romero made their first contribution in https://github.com/linkedin/Liger-Kernel/pull/116
* Luke-Chesley made their first contribution in https://github.com/linkedin/Liger-Kernel/pull/111
* AndreSlavescu made their first contribution in https://github.com/linkedin/Liger-Kernel/pull/149
* shaoruu made their first contribution in https://github.com/linkedin/Liger-Kernel/pull/157

**Full Changelog**: https://github.com/linkedin/Liger-Kernel/compare/v0.1.1...v0.2.0

0.1.1

What's Changed
* Fix unwanted scale/bias while testing and simplify _test_memory function by shivam15s in https://github.com/linkedin/Liger-Kernel/pull/50
* Update README by JacobHelwig in https://github.com/linkedin/Liger-Kernel/pull/44
* Added metadata for PyPI and bumped version by shimizust in https://github.com/linkedin/Liger-Kernel/pull/52
* Replace model / data with public HF path, update readme by JasonZhu1313 in https://github.com/linkedin/Liger-Kernel/pull/53

New Contributors
* JacobHelwig made their first contribution in https://github.com/linkedin/Liger-Kernel/pull/44

**Full Changelog**: https://github.com/linkedin/Liger-Kernel/compare/v0.1.0...v0.1.1

0.1.0

What's Changed
* Update PR template and contribution guide by lancerts in https://github.com/linkedin/Liger-Kernel/pull/20
* Add GeGLU and updage readme by yundai424 in https://github.com/linkedin/Liger-Kernel/pull/3
* Added CI workflow with checkstyle job by shimizust in https://github.com/linkedin/Liger-Kernel/pull/27
* Create bug_report.yaml and feature_request.yaml by lancerts in https://github.com/linkedin/Liger-Kernel/pull/29
* Update feature_request.yaml and bug_report.yaml by lancerts in https://github.com/linkedin/Liger-Kernel/pull/30
* update gif by zain-merchant in https://github.com/linkedin/Liger-Kernel/pull/31
* Add lightning trainer and HF trainer fine-tuning example by yundai424 in https://github.com/linkedin/Liger-Kernel/pull/17
* use correct fsdp act ckpt & redo benchmark by ByronHsu in https://github.com/linkedin/Liger-Kernel/pull/32
* Update README.md by ByronHsu in https://github.com/linkedin/Liger-Kernel/pull/33
* Update README.md with Kernel descriptions by qingquansong in https://github.com/linkedin/Liger-Kernel/pull/34
* remove mfu and non used methods by zain-merchant in https://github.com/linkedin/Liger-Kernel/pull/35
* Byhsu/readme 3 by ByronHsu in https://github.com/linkedin/Liger-Kernel/pull/37
* Zain/singletest by zain-merchant in https://github.com/linkedin/Liger-Kernel/pull/38
* Add deepspeed to lightning example by yundai424 in https://github.com/linkedin/Liger-Kernel/pull/36
* Update README.md by lancerts in https://github.com/linkedin/Liger-Kernel/pull/39
* improve rms norm code quality by ByronHsu in https://github.com/linkedin/Liger-Kernel/pull/43
* Refactored convergence tests to be portable by shimizust in https://github.com/linkedin/Liger-Kernel/pull/41
* Added more generic monkey patch function by shimizust in https://github.com/linkedin/Liger-Kernel/pull/42
* Remove override dependency by shivam15s in https://github.com/linkedin/Liger-Kernel/pull/45
* Changed pointer variable names for clarity for SwiGLU by zain-merchant in https://github.com/linkedin/Liger-Kernel/pull/46
* Update CONTRIBUTING.md by lancerts in https://github.com/linkedin/Liger-Kernel/pull/47
* Release version 0.1.0 by shimizust in https://github.com/linkedin/Liger-Kernel/pull/49

New Contributors
* shimizust made their first contribution in https://github.com/linkedin/Liger-Kernel/pull/27
* shivam15s made their first contribution in https://github.com/linkedin/Liger-Kernel/pull/45

**Full Changelog**: https://github.com/linkedin/Liger-Kernel/compare/v0.0.1...v0.1.0
