Llama-recipes

Latest version: v0.0.4.post1

0.0.4.post1

This release includes bug fixes and some documentation changes.

What's Changed
* Improve discoverability of 3.2 recipes by subramen in https://github.com/meta-llama/llama-recipes/pull/684
* fix readme by wukaixingxp in https://github.com/meta-llama/llama-recipes/pull/679
* fix AutoModel and bump transformers version to 4.45 by wukaixingxp in https://github.com/meta-llama/llama-recipes/pull/686
* post1 release version bump by mreso in https://github.com/meta-llama/llama-recipes/pull/687

0.0.4

This release accompanies the release of [Llama 3.2](https://llama.meta.com/), which introduced new Llama models in sizes of 1B, 3B, 11B and 90B. To get started with the new models, you can find information in the [official documentation](https://llama.meta.com/docs/overview) or on the [Hugging Face hub](https://huggingface.co/collections/meta-llama/llama-32-66f448ffc8c32f949b04c8cf). Further details can also be found in the [model card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/MODEL_CARD.md) and [The Llama 3 Herd of Models](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/) paper. For this release we updated the documentation and made sure all components work with the new models, including multimodal finetuning.
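
As a quick orientation (a hedged sketch, not part of these release notes), the new text models can be used directly through transformers once you have been granted access to the checkpoints on the Hugging Face hub. The model id below is a placeholder for whichever Llama 3.2 checkpoint you use, and the 0.0.4.post1 notes above bump the transformers requirement to 4.45.

```python
# Hedged example, not taken from the recipes: generate text with a Llama 3.2
# model via transformers. The model id is a placeholder for whichever 3.2
# checkpoint you have access to; device_map="auto" assumes accelerate is installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.2-1B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

inputs = tokenizer("Give me three ideas for a first llama-recipes experiment.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```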

What's Changed
* Upstream merge by albertodepaola in https://github.com/meta-llama/llama-recipes/pull/677

New and updated recipes
* Adding end-to-end llama chatbot recipe using Retrieval Augmented Fine Tuning (RAFT) by wukaixingxp in https://github.com/meta-llama/llama-recipes/pull/569
* [WIP] adding chatbot-e2e by HamidShojanazeri in https://github.com/meta-llama/llama-recipes/pull/462
* [Azure] Update Azure API usage example to 3.1 by WuhanMonkey in https://github.com/meta-llama/llama-recipes/pull/615
* Corrected wrong order of commands by BakungaBronson in https://github.com/meta-llama/llama-recipes/pull/602
* Fill in one sentence in the prompt guard tutorial. by cynikolai in https://github.com/meta-llama/llama-recipes/pull/609
* Llamaguard notebook colab link fix by tryrobbo in https://github.com/meta-llama/llama-recipes/pull/619
* Updating llama 3 references to 3.1 model by init27 in https://github.com/meta-llama/llama-recipes/pull/632
* recipes/quickstart/Getting_to_know_Llama.ipynb, typo fix lama -> llama line 127 by cselip in https://github.com/meta-llama/llama-recipes/pull/635
* Update hello_llama_cloud.ipynb by MrDlt in https://github.com/meta-llama/llama-recipes/pull/584
* Update hello_llama_cloud.ipynb by MrDlt in https://github.com/meta-llama/llama-recipes/pull/638
* Add preprocessor to patch PromptGuard scores for inserted characters by cynikolai in https://github.com/meta-llama/llama-recipes/pull/636
* Eval reproduce recipe using lm-evaluation-harness and our 3.1 evals datasets by wukaixingxp in https://github.com/meta-llama/llama-recipes/pull/627

Documentation update
* Update readme text to be version-agnostic by subramen in https://github.com/meta-llama/llama-recipes/pull/614
* Move supported features table to main README by subramen in https://github.com/meta-llama/llama-recipes/pull/616
* document less obvious training config parameters by kjslag in https://github.com/meta-llama/llama-recipes/pull/522

Misc fixes
* Enable users to trust remote code in samsum dataset by mreso in https://github.com/meta-llama/llama-recipes/pull/628
* Use new get_model_state_dict api for save_pretrained peft model by mreso in https://github.com/meta-llama/llama-recipes/pull/629
* Fix version number in Python example by wstnmssr in https://github.com/meta-llama/llama-recipes/pull/643
* Fix checkpoint saving by mreso in https://github.com/meta-llama/llama-recipes/pull/650
* Adding custom dataset file by goswamig in https://github.com/meta-llama/llama-recipes/pull/659
* Make gradio and langchain optional dependencies by mreso in https://github.com/meta-llama/llama-recipes/pull/676
* Update get_default_finetune_args.py by edamamez in https://github.com/meta-llama/llama-recipes/pull/662
* Fix/custom dataset chat template by mreso in https://github.com/meta-llama/llama-recipes/pull/665
* Create v0.0.4 release by mreso in https://github.com/meta-llama/llama-recipes/pull/678

New Contributors
* cynikolai made their first contribution in https://github.com/meta-llama/llama-recipes/pull/609
* BakungaBronson made their first contribution in https://github.com/meta-llama/llama-recipes/pull/602
* init27 made their first contribution in https://github.com/meta-llama/llama-recipes/pull/632
* cselip made their first contribution in https://github.com/meta-llama/llama-recipes/pull/635
* MrDlt made their first contribution in https://github.com/meta-llama/llama-recipes/pull/584
* wstnmssr made their first contribution in https://github.com/meta-llama/llama-recipes/pull/643
* goswamig made their first contribution in https://github.com/meta-llama/llama-recipes/pull/659
* edamamez made their first contribution in https://github.com/meta-llama/llama-recipes/pull/662

**Full Changelog**: https://github.com/meta-llama/llama-recipes/compare/v0.0.3...v0.0.4

We would like to thank all who contributed to this release and are looking forward to future contributions!

0.0.3

Llama 3.1 Integration
This release accompanies the release of [Llama 3.1](https://llama.meta.com/), which introduced new versions of the Llama 8B and 70B models as well as the new 405B model. To get started with the new models, you can find information in the [official documentation](https://llama.meta.com/docs/overview) or on the [Hugging Face hub](https://huggingface.co/collections/meta-llama/llama-31-669fc079a0c406a149a5738f). Further details can also be found in the [model card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md) and the [Llama 3.1 paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/). For this release we updated the documentation and made sure all components work with the new models.
* Release update by albertodepaola, cynikolai, mreso, subramen, tryrobbo and varunfb in [#603](https://github.com/meta-llama/llama-recipes/pull/603)
New Features
We also added new features such as FSDP + QLoRA fine-tuning and the H2O algorithm for long-context inference; a hedged fine-tuning sketch follows the list below.
* Implement H2O for long context inference on summarization tasks by [Kyriection](https://github.com/Kyriection) in [#411](https://github.com/meta-llama/llama-recipes/pull/411)
* Resume the fine-tuning process from the previous PEFT checkpoint folder by [wukaixingxp](https://github.com/wukaixingxp) in [#531](https://github.com/meta-llama/llama-recipes/pull/531)
* Update hf weight conversion script to llama 3 by [dongwang218](https://github.com/dongwang218) in [#551](https://github.com/meta-llama/llama-recipes/pull/551)
* Adding support for FSDP+Qlora. by [HamidShojanazeri](https://github.com/HamidShojanazeri) in [#572](https://github.com/meta-llama/llama-recipes/pull/572)
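
To make the new fine-tuning options more concrete, here is a minimal, hedged sketch of a single-GPU LoRA run through the package's Python entry point. The model id, dataset, and hyperparameters are illustrative placeholders rather than values from this release, and FSDP + QLoRA jobs are normally launched with torchrun against the repository's finetuning script and the corresponding config flags instead.

```python
# Hedged sketch, not an official recipe: single-GPU LoRA fine-tuning via the
# llama_recipes Python entry point. Keyword arguments map onto fields of the
# training config; the values here are illustrative placeholders.
from llama_recipes.finetuning import main

main(
    model_name="meta-llama/Meta-Llama-3.1-8B-Instruct",  # any local path or HF model id you have access to
    use_peft=True,             # train LoRA adapters instead of full weights
    peft_method="lora",
    dataset="samsum_dataset",  # one of the bundled example datasets
    num_epochs=1,
    batch_size_training=1,
    output_dir="peft-output",  # where the adapter weights are saved
)
```
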
Additional Examples
In addition, we added new examples to get you up and running quickly with the Llama models:
* Add Groq/Llama3 recipes (cookbook and command line examples) by [dloman118](https://github.com/dloman118) in [#553](https://github.com/meta-llama/llama-recipes/pull/553)
* [WIP] Peft Finetuning Quickstart Notebook by [mreso](https://github.com/mreso) in [#558](https://github.com/meta-llama/llama-recipes/pull/558)
* 4 notebooks ported from 4 DLAI agent short courses using Llama 3 by [jeffxtang](https://github.com/jeffxtang) in [#560](https://github.com/meta-llama/llama-recipes/pull/560)
* [lamini] Add lamini text2sql memory tuning tutorial by [powerjohnnyli](https://github.com/powerjohnnyli) in [#573](https://github.com/meta-llama/llama-recipes/pull/573)
* colab links fixed for dlai agents notebooks by [jeffxtang](https://github.com/jeffxtang) in [#593](https://github.com/meta-llama/llama-recipes/pull/593)
* Port of DLAI LlamaIndex Agent short course lessons 2-4 to use Llama 3 by [jeffxtang](https://github.com/jeffxtang) in [#594](https://github.com/meta-llama/llama-recipes/pull/594)
Codebase Refactor
We also refactored our codebase to improve discoverability of our documentation and recipes
* New structure and rename for tools, docs and quickstart folder by [pia-papanna](https://github.com/pia-papanna) in [#575](https://github.com/meta-llama/llama-recipes/pull/575)
* Add Langchain agent notebooks to 3P_Integrations by [subramen](https://github.com/subramen) in [#576](https://github.com/meta-llama/llama-recipes/pull/576)
* Updates to benchmarks code by [subramen](https://github.com/subramen) in [#577](https://github.com/meta-llama/llama-recipes/pull/577)
* Add README for quickstart + update to codellama url by [subramen](https://github.com/subramen) in [#578](https://github.com/meta-llama/llama-recipes/pull/578)
* Updating the folder name 3p_integrations by [pia-papanna](https://github.com/pia-papanna) in [#581](https://github.com/meta-llama/llama-recipes/pull/581)
* Deleting Agents folder and adding llamaindex by [pia-papanna](https://github.com/pia-papanna) in [#582](https://github.com/meta-llama/llama-recipes/pull/582)
* Update 3p_integration README.md by [subramen](https://github.com/subramen) in [#586](https://github.com/meta-llama/llama-recipes/pull/586)
* Add experimental folder to README by [subramen](https://github.com/subramen) in [#585](https://github.com/meta-llama/llama-recipes/pull/585)
* fix typo by [subramen](https://github.com/subramen) in [#588](https://github.com/meta-llama/llama-recipes/pull/588)
* Updating chatbot folder names by [pia-papanna](https://github.com/pia-papanna) in [#590](https://github.com/meta-llama/llama-recipes/pull/590)
* Move MediaGen notebook to octoai folder by [subramen](https://github.com/subramen) in [#601](https://github.com/meta-llama/llama-recipes/pull/601)
Misc improvements and bugfixes
* Fix hsdp_device_mesh=None when enable HSDP and HYBRID_SHARD by [haozhx23](https://github.com/haozhx23) in [#402](https://github.com/meta-llama/llama-recipes/pull/402)
* bump up version by [mreso](https://github.com/mreso) in [#529](https://github.com/meta-llama/llama-recipes/pull/529)
* Fix config file links for FMBench, update business summary chart. by [aarora79](https://github.com/aarora79) in [#532](https://github.com/meta-llama/llama-recipes/pull/532)
* fixed alpaca dataset evalset length and make sure len(eval_loader)>0 by [wukaixingxp](https://github.com/wukaixingxp) in [#540](https://github.com/meta-llama/llama-recipes/pull/540)
* Fix typo in Getting_to_know_Llama.ipynb by [jenyckee](https://github.com/jenyckee) in [#545](https://github.com/meta-llama/llama-recipes/pull/545)
* replace groq llama 2 with replicate by [jeffxtang](https://github.com/jeffxtang) in [#546](https://github.com/meta-llama/llama-recipes/pull/546)
* Remove pkg_resources.packaging by [mreso](https://github.com/mreso) in [#547](https://github.com/meta-llama/llama-recipes/pull/547)
* Update langgraph tool calling agent, simplify examples and README by [rlancemartin](https://github.com/rlancemartin) in [#549](https://github.com/meta-llama/llama-recipes/pull/549)
* Minor update to README by [rlancemartin](https://github.com/rlancemartin) in [#555](https://github.com/meta-llama/llama-recipes/pull/555)
* Add ToolMessage import by [rlancemartin](https://github.com/rlancemartin) in [#559](https://github.com/meta-llama/llama-recipes/pull/559)
* Make quickstart finetuning notebook ready for T4 by [mreso](https://github.com/mreso) in [#562](https://github.com/meta-llama/llama-recipes/pull/562)
* bug fix by [jarvisDang](https://github.com/jarvisDang) in [#570](https://github.com/meta-llama/llama-recipes/pull/570)
* changed --pure_bf16 to --fsdp_config.pure_bf16 and corrected "examples/" path by [wukaixingxp](https://github.com/wukaixingxp) in [#587](https://github.com/meta-llama/llama-recipes/pull/587)
* Update links in README.md by [subramen](https://github.com/subramen) in [#589](https://github.com/meta-llama/llama-recipes/pull/589)
* Fix broken image link by [subramen](https://github.com/subramen) in [#597](https://github.com/meta-llama/llama-recipes/pull/597)
* Fix relative links to images by [subramen](https://github.com/subramen) in [#596](https://github.com/meta-llama/llama-recipes/pull/596)
* Remove max_length from tokenization by [mreso](https://github.com/mreso) in [#604](https://github.com/meta-llama/llama-recipes/pull/604)
* Update transformers requirements by [mreso](https://github.com/mreso) in [#605](https://github.com/meta-llama/llama-recipes/pull/605)
* Address feedback not possible before launch in LG3 recipe and dataset file by [tryrobbo](https://github.com/tryrobbo) in [#606](https://github.com/meta-llama/llama-recipes/pull/606)
New Contributors
* jenyckee made their first contribution in https://github.com/meta-llama/llama-recipes/pull/545
* dloman118 made their first contribution in https://github.com/meta-llama/llama-recipes/pull/553
* Kyriection made their first contribution in https://github.com/meta-llama/llama-recipes/pull/411
* haozhx23 made their first contribution in https://github.com/meta-llama/llama-recipes/pull/402
* powerjohnnyli made their first contribution in https://github.com/meta-llama/llama-recipes/pull/573
* jarvisDang made their first contribution in https://github.com/meta-llama/llama-recipes/pull/570
* pia-papanna made their first contribution in https://github.com/meta-llama/llama-recipes/pull/575

**Full Changelog**: https://github.com/meta-llama/llama-recipes/compare/v0.0.2...v0.0.3

We would like to thank all who contributed to this release and are looking forward to future contributions!
