fms-hf-tuning

Latest version: v0.4.0

0.4.0

Summary of Changes
* Support for LoRA tuning of Llama 3 and Granite (GPTBigCode) architectures (see the sketch after this list)
* Adjusted dependency version ranges
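
As a point of reference, the following is a minimal sketch of what a LoRA setup for a GPTBigCode-based Granite model looks like using the underlying `peft` and `transformers` libraries directly. fms-hf-tuning wires this up through its `sft_trainer` entry point; the checkpoint name and `target_modules` below are illustrative assumptions, not project defaults.

```python
# Minimal LoRA sketch with peft/transformers. The checkpoint and
# target_modules are assumptions for illustration only.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Hypothetical GPTBigCode-based Granite checkpoint.
model = AutoModelForCausalLM.from_pretrained("ibm-granite/granite-3b-code-base")

lora_config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=32,              # scaling applied to the adapter output
    target_modules=["c_attn"],  # fused attention projection in GPTBigCode blocks
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only adapter weights remain trainable
```

Only the adapter weights train, which is what keeps LoRA runs light compared with full fine-tuning.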

What's Changed
* remove merge model for lora tuned adapters by anhuong in https://github.com/foundation-model-stack/fms-hf-tuning/pull/197
* Add test coverage by tedhtchang in https://github.com/foundation-model-stack/fms-hf-tuning/pull/171
* Install Acceleration Framework into Training Script by fabianlim in https://github.com/foundation-model-stack/fms-hf-tuning/pull/157
* deps: limit dependency ranges by anhuong in https://github.com/foundation-model-stack/fms-hf-tuning/pull/54
* Delete dependabot.yml by tedhtchang in https://github.com/foundation-model-stack/fms-hf-tuning/pull/207
* add dependabot.yml by tedhtchang in https://github.com/foundation-model-stack/fms-hf-tuning/pull/208
* Fix additional callbacks by VassilisVassiliadis in https://github.com/foundation-model-stack/fms-hf-tuning/pull/199
* Update trl by alex-jw-brooks in https://github.com/foundation-model-stack/fms-hf-tuning/pull/213
* deps: cap transformers at 4.40.2 by anhuong in https://github.com/foundation-model-stack/fms-hf-tuning/pull/218
* Formatting consolidation main by Ssukriti in https://github.com/foundation-model-stack/fms-hf-tuning/pull/216
* Fix PyPi publish error caused by direct url reference by tedhtchang in https://github.com/foundation-model-stack/fms-hf-tuning/pull/219


**Full Changelog**: https://github.com/foundation-model-stack/fms-hf-tuning/compare/v0.3.0...v0.4.0

0.4.0-rc.3

What's Changed
* remove merge model for lora tuned adapters by anhuong in https://github.com/foundation-model-stack/fms-hf-tuning/pull/197
* Add test coverage by tedhtchang in https://github.com/foundation-model-stack/fms-hf-tuning/pull/171
* Install Acceleration Framework into Training Script by fabianlim in https://github.com/foundation-model-stack/fms-hf-tuning/pull/157
* deps: limit dependency ranges by anhuong in https://github.com/foundation-model-stack/fms-hf-tuning/pull/54
* Delete dependabot.yml by tedhtchang in https://github.com/foundation-model-stack/fms-hf-tuning/pull/207
* add dependabot.yml by tedhtchang in https://github.com/foundation-model-stack/fms-hf-tuning/pull/208
* Fix additional callbacks by VassilisVassiliadis in https://github.com/foundation-model-stack/fms-hf-tuning/pull/199
* Update trl by alex-jw-brooks in https://github.com/foundation-model-stack/fms-hf-tuning/pull/213
* deps: cap transformers at 4.40.2 by anhuong in https://github.com/foundation-model-stack/fms-hf-tuning/pull/218
* Formatting consolidation main by Ssukriti in https://github.com/foundation-model-stack/fms-hf-tuning/pull/216
* Fix PyPi publish error caused by direct url reference by tedhtchang in https://github.com/foundation-model-stack/fms-hf-tuning/pull/219


**Full Changelog**: https://github.com/foundation-model-stack/fms-hf-tuning/compare/v0.3.0...v0.4.0-rc.3

0.4.0-rc.2

Summary of Changes
* Support for LoRA tuning of Llama 3 and Granite (GPTBigCode) architectures
* Various dependency version adjustments

What's Changed
* remove merge model for lora tuned adapters by anhuong in https://github.com/foundation-model-stack/fms-hf-tuning/pull/197
* Add test coverage by tedhtchang in https://github.com/foundation-model-stack/fms-hf-tuning/pull/171
* Install Acceleration Framework into Training Script by fabianlim in https://github.com/foundation-model-stack/fms-hf-tuning/pull/157
* deps: limit dependency ranges by anhuong in https://github.com/foundation-model-stack/fms-hf-tuning/pull/54
* Delete dependabot.yml by tedhtchang in https://github.com/foundation-model-stack/fms-hf-tuning/pull/207
* add dependabot.yml by tedhtchang in https://github.com/foundation-model-stack/fms-hf-tuning/pull/208
* Fix additional callbacks by VassilisVassiliadis in https://github.com/foundation-model-stack/fms-hf-tuning/pull/199
* Update trl by alex-jw-brooks in https://github.com/foundation-model-stack/fms-hf-tuning/pull/213
* deps: cap transformers at 4.40.2 by anhuong in https://github.com/foundation-model-stack/fms-hf-tuning/pull/218


**Full Changelog**: https://github.com/foundation-model-stack/fms-hf-tuning/compare/v0.3.0...v0.4.0-rc.2

0.4.0-rc.1

What's Changed
* remove merge model for lora tuned adapters by anhuong in https://github.com/foundation-model-stack/fms-hf-tuning/pull/197
* Add test coverage by tedhtchang in https://github.com/foundation-model-stack/fms-hf-tuning/pull/171
* Install Acceleration Framework into Training Script by fabianlim in https://github.com/foundation-model-stack/fms-hf-tuning/pull/157
* deps: limit dependency ranges by anhuong in https://github.com/foundation-model-stack/fms-hf-tuning/pull/54
* Delete dependabot.yml by tedhtchang in https://github.com/foundation-model-stack/fms-hf-tuning/pull/207
* add dependabot.yml by tedhtchang in https://github.com/foundation-model-stack/fms-hf-tuning/pull/208
* Fix additional callbacks by VassilisVassiliadis in https://github.com/foundation-model-stack/fms-hf-tuning/pull/199
* Update trl by alex-jw-brooks in https://github.com/foundation-model-stack/fms-hf-tuning/pull/213


**Full Changelog**: https://github.com/foundation-model-stack/fms-hf-tuning/compare/v0.3.0...v0.4.0-rc.1

0.3.0

Summary of Changes
- Switched to a multistage Dockerfile, which greatly reduces the size of the image
- Refactored the image scripts to remove `launch_training` and call `sft_trainer` directly.
- Note that this changes the error codes returned from `sft_trainer`: user errors now exit with code _1_ and internal errors with code _203_ (see the sketch after this list).
- In addition, parameter parsing logging moves into `sft_trainer`, where it is harder to view.
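
A hedged sketch of how a caller might consume those exit codes, assuming the tuner is invoked as the `tuning.sft_trainer` module; the module path and flag are assumptions based on the repository layout, not a documented API.

```python
# Sketch: run the tuner as a subprocess and map the documented exit codes.
# Module path and CLI flags are assumptions, not a stable interface.
import subprocess
import sys

result = subprocess.run(
    [sys.executable, "-m", "tuning.sft_trainer", "--model_name_or_path", "my-model"]
)

if result.returncode == 0:
    print("tuning completed")
elif result.returncode == 1:
    print("user error: check arguments and data paths")  # code 1 per the notes above
elif result.returncode == 203:
    print("internal error inside sft_trainer")  # code 203 per the notes above
else:
    print(f"unexpected exit code {result.returncode}")
```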

What's Changed
* Switch to multistage dockerfile by tharapalanivel in https://github.com/foundation-model-stack/fms-hf-tuning/pull/154
* refactor: remove launch_training and call sft_trainer directly by anhuong in https://github.com/foundation-model-stack/fms-hf-tuning/pull/164
* docs: consolidate configs, add kfto config by anhuong in https://github.com/foundation-model-stack/fms-hf-tuning/pull/170
* fix: bloom model can't run with flash-attn by anhuong in https://github.com/foundation-model-stack/fms-hf-tuning/pull/173
* Update README.md for Lora modules by Ssukriti in https://github.com/foundation-model-stack/fms-hf-tuning/pull/174


**Full Changelog**: https://github.com/foundation-model-stack/fms-hf-tuning/compare/v0.2.0...v0.3.0

0.2.0

Summary of Changes
* Adds a new `data_formatter_template` field to format JSON training data with custom fields on the fly, eliminating the need to preprocess data into Alpaca style (see the sketch after this list). Find details in the [README](https://github.com/foundation-model-stack/fms-hf-tuning?tab=readme-ov-file#format-jsonjsonl-on-the-fly)
* Renames the `evaluation_strategy` flag to `eval_strategy`, matching the rename in `transformers`
* Adds evaluation data format scripts to use as reference
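
To make the first point concrete, here is a simplified re-implementation of what `data_formatter_template` does: render a custom-schema JSON record into a single training prompt. The `{{field}}` placeholder syntax follows the README; the field names and the `render` helper below are hypothetical, and the project's actual formatter is more involved.

```python
# Sketch of template-based formatting for a JSON record with custom fields.
# The {{field}} syntax mirrors the README; render() is a hypothetical helper.
import json
import re

template = "### Input:\n{{question}}\n\n### Response:\n{{answer}}"
record = json.loads('{"question": "What is LoRA?", "answer": "A low-rank adapter method."}')

def render(template: str, record: dict) -> str:
    # Replace each {{name}} placeholder with the matching JSON field.
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(record[m.group(1)]), template)

print(render(template, record))
```

Run on the record above, this prints an Alpaca-style prompt without any offline preprocessing step.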

What's Changed
* fix: check if output dir exists by anhuong in https://github.com/foundation-model-stack/fms-hf-tuning/pull/160
* tests for fixing full fine tuning by Ssukriti in https://github.com/foundation-model-stack/fms-hf-tuning/pull/162
* Evaluation Data Format Scripts by alex-jw-brooks in https://github.com/foundation-model-stack/fms-hf-tuning/pull/115
* Refactor tests explicit params by Ssukriti in https://github.com/foundation-model-stack/fms-hf-tuning/pull/163
* update eval_strategy flag used in transformers by anhuong in https://github.com/foundation-model-stack/fms-hf-tuning/pull/168
* remove unused python39 from dockerfile by jbusche in https://github.com/foundation-model-stack/fms-hf-tuning/pull/167
* Add formatting function alpaca by Ssukriti in https://github.com/foundation-model-stack/fms-hf-tuning/pull/161

[Pip package](https://pypi.org/project/fms-hf-tuning/0.2.0/): `pip install fms-hf-tuning==0.2.0`

**Full Changelog**: https://github.com/foundation-model-stack/fms-hf-tuning/compare/v0.1.0...v0.2.0
