torchtune

v0.1.0

Overview

We are excited to announce the release of torchtune v0.1.0! torchtune is a PyTorch library for easily authoring, fine-tuning, and experimenting with LLMs. The library emphasizes four key aspects:

* Simplicity and Extensibility. Native PyTorch, a componentized design, and easy-to-reuse abstractions
* Correctness. A high bar on proving the correctness of components and recipes
* Stability. PyTorch just works. So should torchtune
* Democratizing LLM fine-tuning. Works out of the box on both consumer and professional hardware setups

torchtune is tested with the latest stable PyTorch release (2.2.2) as well as the preview nightly version.

New Features
Here are a few highlights from this release.

Recipes
* Added support for running a LoRA finetune using a single GPU (#454)
* Added support for running a QLoRA finetune using a single GPU (#478)
* Added support for running a LoRA finetune using multiple GPUs with FSDP (#454, #266)
* Added support for running a full finetune using a single GPU (#482)
* Added support for running a full finetune using multiple GPUs with FSDP (#251, #482)
* Added WIP support for DPO (#645)
* Integrated with the EleutherAI Eval Harness for an evaluation recipe (#549)
* Added support for quantization through integration with torchao (#632)
* Added support for single-GPU inference (#619)
* Created a config parsing system to interact with recipes through YAML and the command line (#406, #456, #468); a sketch of the pattern follows this list
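
Recipes are driven by YAML config files whose fields can be overridden from the command line. As a rough, illustrative sketch of that pattern only (torchtune's actual parser lives in its own config module, and the field names below are assumptions, not the exact schema), here is how a YAML config composes with dotted key=value overrides using OmegaConf:

```python
# Illustrative only: shows the YAML-plus-command-line-override pattern with
# plain OmegaConf; torchtune's real config system may differ in details.
from omegaconf import OmegaConf

# A toy recipe config in the spirit of torchtune's YAML files (assumed fields).
yaml_cfg = OmegaConf.create(
    """
model:
  _component_: torchtune.models.llama2.lora_llama2_7b
  lora_rank: 8
  lora_alpha: 16
optimizer:
  lr: 3.0e-4
epochs: 1
"""
)

# Overrides given on the command line (e.g. `epochs=3 optimizer.lr=1.0e-4`)
# can be represented as a dotlist and merged on top of the YAML values.
overrides = OmegaConf.from_dotlist(["epochs=3", "optimizer.lr=1.0e-4"])
cfg = OmegaConf.merge(yaml_cfg, overrides)

print(cfg.epochs)        # 3
print(cfg.optimizer.lr)  # 0.0001
```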

Models
* Added support for Llama2 7B (#70, #137) and 13B (#571); see the model-builder sketch after this list
* Added support for Mistral 7B (#571)
* Added WIP support for Gemma (#630, #668)
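
The model builders are plain Python functions that return standard PyTorch modules. A minimal sketch, assuming the builder names from the docs (the 7B models are large, so run on a machine with enough RAM):

```python
# Builder names are assumptions based on the torchtune docs.
from torchtune.models.llama2 import llama2_7b, lora_llama2_7b

# Plain Llama2 7B as a standard nn.Module.
base_model = llama2_7b()

# LoRA variant: adapters are attached to the listed attention projections, and
# the LoRA recipes then mark only those adapter parameters as trainable.
lora_model = lora_llama2_7b(
    lora_attn_modules=["q_proj", "v_proj"],
    lora_rank=8,
    lora_alpha=16,
)

print(sum(p.numel() for p in base_model.parameters()))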

Datasets
* Added support for instruction and chat-style datasets (#752, #624); see the sketch after this list
* Included example dataset implementations (#303, #116, #407, #541, #576, #645)
* Integrated with Hugging Face Datasets (#70)
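
As a minimal sketch of the dataset builders (the builder and tokenizer names are assumptions based on the docs, and the tokenizer path is a placeholder for a local Llama2 SentencePiece file):

```python
# Names assumed from the torchtune docs; path below is a placeholder.
from torchtune.datasets import alpaca_dataset
from torchtune.models.llama2 import llama2_tokenizer

tokenizer = llama2_tokenizer("/path/to/tokenizer.model")  # placeholder path

# Pulls the Alpaca data via Hugging Face Datasets and returns tokenized
# examples ready for the fine-tuning recipes.
ds = alpaca_dataset(tokenizer=tokenizer)
print(len(ds))
```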

Utils
* Integrated with Weights & Biases for metric logging (#162, #660)
* Created a checkpointer to handle model files in both Hugging Face and Meta formats (#442); a usage sketch follows this list
* Added the tune CLI tool (#396)
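
A minimal sketch of the checkpointer and the Weights & Biases metric logger. Class names and arguments are assumptions based on the docs; all paths and file names are placeholders:

```python
# Names and arguments assumed from the torchtune docs; paths are placeholders.
from torchtune.utils import FullModelHFCheckpointer
from torchtune.utils.metric_logging import WandBLogger

# Reads a Hugging Face-format Llama2 checkpoint and converts it into the state
# dict layout the recipes expect; output_dir is where fine-tuned weights land.
checkpointer = FullModelHFCheckpointer(
    checkpoint_dir="/path/to/llama2-7b-hf",  # placeholder directory
    checkpoint_files=[
        "pytorch_model-00001-of-00002.bin",  # placeholder file names
        "pytorch_model-00002-of-00002.bin",
    ],
    model_type="LLAMA2",
    output_dir="/tmp/finetune-output",
)
state = checkpointer.load_checkpoint()

# Log scalar metrics to Weights & Biases during training.
logger = WandBLogger(project="torchtune-example")
logger.log("loss", 1.234, step=0)
logger.close()
```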

Documentation

In addition to documenting torchtune’s public-facing APIs, this release includes several new tutorials and “deep-dives” in the documentation.

* Added LoRA tutorial (#368)
* Added “End-to-End Workflow with torchtune” tutorial (#690)
* Added datasets tutorial (#735)
* Added QLoRA tutorial (#693)
* Added deep-dive on the checkpointer (#674)
* Added deep-dive on configs (#311)
* Added deep-dive on recipes (#316)
* Added deep-dive on Weights & Biases integration (#660)

Community Contributions

This release of torchtune features some amazing work from the community:

* Gemma 2B model from solitude-alive (#630)
* DPO finetuning recipe from yechenzhi (#645)
* Weights & Biases updates from tcapelle (#660)
