fms-hf-tuning

Latest version: v2.1.2


0.4.0-rc.1

What's Changed
* remove merge model for lora tuned adapters by anhuong in https://github.com/foundation-model-stack/fms-hf-tuning/pull/197
* Add test coverage by tedhtchang in https://github.com/foundation-model-stack/fms-hf-tuning/pull/171
* Install Acceleration Framework into Training Script by fabianlim in https://github.com/foundation-model-stack/fms-hf-tuning/pull/157
* deps: limit dependency ranges by anhuong in https://github.com/foundation-model-stack/fms-hf-tuning/pull/54
* Delete dependabot.yml by tedhtchang in https://github.com/foundation-model-stack/fms-hf-tuning/pull/207
* add dependabot.yml by tedhtchang in https://github.com/foundation-model-stack/fms-hf-tuning/pull/208
* Fix additional callbacks by VassilisVassiliadis in https://github.com/foundation-model-stack/fms-hf-tuning/pull/199
* Update trl by alex-jw-brooks in https://github.com/foundation-model-stack/fms-hf-tuning/pull/213


**Full Changelog**: https://github.com/foundation-model-stack/fms-hf-tuning/compare/v0.3.0...v0.4.0-rc.1

0.3.0

Summary of Changes
- Switched to a multistage Dockerfile, which greatly reduces the size of the image
- Refactored image scripts to remove `launch_training` and call `sft_trainer` directly.
  - Note that this changes the error codes returned from `sft_trainer`: user errors now exit with code _1_ and internal errors with code _203_.
  - In addition, this affects logging: parameter-parsing logging moves into `sft_trainer`, where it is harder to view.

What's Changed
* Switch to multistage dockerfile by tharapalanivel in https://github.com/foundation-model-stack/fms-hf-tuning/pull/154
* refactor: remove launch_training and call sft_trainer directly by anhuong in https://github.com/foundation-model-stack/fms-hf-tuning/pull/164
* docs: consolidate configs, add kfto config by anhuong in https://github.com/foundation-model-stack/fms-hf-tuning/pull/170
* fix: bloom model can't run with flash-attn by anhuong in https://github.com/foundation-model-stack/fms-hf-tuning/pull/173
* Update README.md for Lora modules by Ssukriti in https://github.com/foundation-model-stack/fms-hf-tuning/pull/174


**Full Changelog**: https://github.com/foundation-model-stack/fms-hf-tuning/compare/v0.2.0...v0.3.0

0.2.0

Summary of Changes
* Adds a new `data_formatter_template` field to format JSON data with custom fields on the fly during training, eliminating the need to preprocess and reformat data into Alpaca style. Find details in the [README](https://github.com/foundation-model-stack/fms-hf-tuning?tab=readme-ov-file#format-jsonjsonl-on-the-fly)
* Renames the `evaluation_strategy` flag to `eval_strategy`
* Adds evaluation data format scripts to use as reference
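To illustrate the idea behind templated data formatting, here is a minimal, self-contained sketch. It shows only the general concept of substituting record fields into a template string; it is not the library's implementation, and the `{{field}}` placeholder syntax and the field names used below (`question`, `answer`) are assumptions for illustration — see the README linked above for the actual `data_formatter_template` usage.

```python
# Illustrative sketch of template-based data formatting. This is NOT the
# fms-hf-tuning implementation; placeholder syntax and field names are
# assumed for demonstration purposes.
import re

def apply_template(template: str, record: dict) -> str:
    """Replace each {{field}} placeholder with the matching record value."""
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(record[m.group(1)]),
        template,
    )

template = "### Question:\n{{question}}\n\n### Answer:\n{{answer}}"
record = {"question": "What is 2 + 2?", "answer": "4"}
print(apply_template(template, record))
```

A formatter like this lets each raw JSON/JSONL record keep its own field names while the training pipeline renders a consistent prompt string per example.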

What's Changed
* fix: check if output dir exists by anhuong in https://github.com/foundation-model-stack/fms-hf-tuning/pull/160
* tests for fixing full fine tuning by Ssukriti in https://github.com/foundation-model-stack/fms-hf-tuning/pull/162
* Evaluation Data Format Scripts by alex-jw-brooks in https://github.com/foundation-model-stack/fms-hf-tuning/pull/115
* Refactor tests explicit params by Ssukriti in https://github.com/foundation-model-stack/fms-hf-tuning/pull/163
* update eval_strategy flag used in transformers by anhuong in https://github.com/foundation-model-stack/fms-hf-tuning/pull/168
* remove unused python39 from dockerfile by jbusche in https://github.com/foundation-model-stack/fms-hf-tuning/pull/167
* Add formatting function alpaca by Ssukriti in https://github.com/foundation-model-stack/fms-hf-tuning/pull/161

[Pip package](https://pypi.org/project/fms-hf-tuning/0.2.0/): `pip install fms-hf-tuning==0.2.0`

**Full Changelog**: https://github.com/foundation-model-stack/fms-hf-tuning/compare/v0.1.0...v0.2.0

0.2.0-rc.1

0.1.0

Summary of Changes
* Supported and validated tuning technique: full fine-tuning on single-GPU and multi-GPU setups
* Multi-GPU training via the Hugging Face `accelerate` library, focused on FSDP
* Experimental tuning techniques:
  * Single-GPU prompt tuning
  * Single-GPU LoRA tuning
* Scripts for local inference and evaluation of tuned models
* Build scripts for containerization of the library
* Initial trainer controller framework for controlling the training loop using user-defined rules and metrics
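Multi-GPU FSDP runs with `accelerate` are typically driven by a config file passed to `accelerate launch --config_file <file>`. The sketch below is a hypothetical minimal example of such a config, not one shipped with this repository; exact keys and accepted values depend on your `accelerate` version and hardware.

```yaml
# Hypothetical accelerate config sketch for a 2-GPU FSDP run.
# Keys follow the `accelerate config` YAML format; adjust values
# for your accelerate version and hardware.
compute_environment: LOCAL_MACHINE
distributed_type: FSDP
num_machines: 1
num_processes: 2          # one process per GPU
mixed_precision: bf16
fsdp_config:
  fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
  fsdp_sharding_strategy: FULL_SHARD
  fsdp_state_dict_type: FULL_STATE_DICT
```

With such a file in place, training is launched as `accelerate launch --config_file fsdp_config.yaml <training script> ...` instead of invoking the script with `python` directly.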

[Pip package](https://pypi.org/project/fms-hf-tuning/0.1.0/): `pip install fms-hf-tuning==0.1.0`

What's Changed
* Init by raghukiran1224 in https://github.com/foundation-model-stack/fms-hf-tuning/pull/1
* allows disable flash attn and torch dtype param by Ssukriti in https://github.com/foundation-model-stack/fms-hf-tuning/pull/2
* First refactor train by Ssukriti in https://github.com/foundation-model-stack/fms-hf-tuning/pull/3
* fix : the way args are passed by Ssukriti in https://github.com/foundation-model-stack/fms-hf-tuning/pull/10
* fix full param tuning by lchu-ibm in https://github.com/foundation-model-stack/fms-hf-tuning/pull/14
* fix import of aim_loader by anhuong in https://github.com/foundation-model-stack/fms-hf-tuning/pull/13
* fix: set model max length to either passed in or tokenizer value by anhuong in https://github.com/foundation-model-stack/fms-hf-tuning/pull/17
* fix: do not set model max length when loading model by anhuong in https://github.com/foundation-model-stack/fms-hf-tuning/pull/21
* add EOS token to dataset by Ssukriti in https://github.com/foundation-model-stack/fms-hf-tuning/pull/15
* Local inference by alex-jw-brooks in https://github.com/foundation-model-stack/fms-hf-tuning/pull/27
* feat: add validation dataset to train by anhuong in https://github.com/foundation-model-stack/fms-hf-tuning/pull/26
* feat: support str in target_modules for LoraConfig by VassilisVassiliadis in https://github.com/foundation-model-stack/fms-hf-tuning/pull/39
* Add formatting tools by hickeyma in https://github.com/foundation-model-stack/fms-hf-tuning/pull/31
* Enable code formatting by hickeyma in https://github.com/foundation-model-stack/fms-hf-tuning/pull/40
* Enable daily dependabot updates by hickeyma in https://github.com/foundation-model-stack/fms-hf-tuning/pull/41
* Add file logger callback & export train loss json file by alex-jw-brooks in https://github.com/foundation-model-stack/fms-hf-tuning/pull/22
* Merge models by alex-jw-brooks in https://github.com/foundation-model-stack/fms-hf-tuning/pull/32
* Local inference merged models by alex-jw-brooks in https://github.com/foundation-model-stack/fms-hf-tuning/pull/43
* feat: track validation loss in logs file by anhuong in https://github.com/foundation-model-stack/fms-hf-tuning/pull/51
* Add linting capability by hickeyma in https://github.com/foundation-model-stack/fms-hf-tuning/pull/52
* Add PR/Issue templates by tedhtchang in https://github.com/foundation-model-stack/fms-hf-tuning/pull/65
* Add sample unit tests by tedhtchang in https://github.com/foundation-model-stack/fms-hf-tuning/pull/61
* Initial commit for trainer image by tharapalanivel in https://github.com/foundation-model-stack/fms-hf-tuning/pull/69
* Adding copyright notices by tharapalanivel in https://github.com/foundation-model-stack/fms-hf-tuning/pull/77
* Enable pylint in the github workflow by tedhtchang in https://github.com/foundation-model-stack/fms-hf-tuning/pull/63
* Bump aim from 3.17.5 to 3.18.1 by dependabot in https://github.com/foundation-model-stack/fms-hf-tuning/pull/42
* Add Contributing file by jbusche in https://github.com/foundation-model-stack/fms-hf-tuning/pull/58
* docs: lora and getting modules list by anhuong in https://github.com/foundation-model-stack/fms-hf-tuning/pull/46
* Allow SFT_TRAINER_CONFIG_JSON_ENV_VAR to be encoded json string by kellyaa in https://github.com/foundation-model-stack/fms-hf-tuning/pull/82
* Document lint by tedhtchang in https://github.com/foundation-model-stack/fms-hf-tuning/pull/84
* Let Huggingface Properly Initialize Arguments, and Fix FSDP-LORA Checkpoint-Saves and Resumption by fabianlim in https://github.com/foundation-model-stack/fms-hf-tuning/pull/53
* Unit tests by tharapalanivel in https://github.com/foundation-model-stack/fms-hf-tuning/pull/83
* Update CONTRIBUTING.md by Ssukriti in https://github.com/foundation-model-stack/fms-hf-tuning/pull/86
* Update input args to max_seq_length and training_data_path by anhuong in https://github.com/foundation-model-stack/fms-hf-tuning/pull/94
* feat: move to accelerate launch for distributed training by kmehant in https://github.com/foundation-model-stack/fms-hf-tuning/pull/92
* Update README.md by Ssukriti in https://github.com/foundation-model-stack/fms-hf-tuning/pull/95
* Modify copyright notice by tharapalanivel in https://github.com/foundation-model-stack/fms-hf-tuning/pull/96
* Switches dependencies from txt file to toml file by jbusche in https://github.com/foundation-model-stack/fms-hf-tuning/pull/68
* fix: use attn_implementation="flash_attention_2" by kmehant in https://github.com/foundation-model-stack/fms-hf-tuning/pull/101
* fix: not passing PEFT argument should default to full parameter finetuning by kmehant in https://github.com/foundation-model-stack/fms-hf-tuning/pull/100
* feat: update launch training with accelerate for multi-gpu by anhuong in https://github.com/foundation-model-stack/fms-hf-tuning/pull/98
* Setting default values in training job config by tharapalanivel in https://github.com/foundation-model-stack/fms-hf-tuning/pull/104
* add refactored build utils into docker image by anhuong in https://github.com/foundation-model-stack/fms-hf-tuning/pull/108
* feat: combine train and eval loss into one file by anhuong in https://github.com/foundation-model-stack/fms-hf-tuning/pull/109
* docs: add note on ephemeral storage by anhuong in https://github.com/foundation-model-stack/fms-hf-tuning/pull/106
* Move accelerate launch args parsing by tharapalanivel in https://github.com/foundation-model-stack/fms-hf-tuning/pull/107
* Docs improvements by Ssukriti in https://github.com/foundation-model-stack/fms-hf-tuning/pull/111
* feat: add env var SET_NUM_PROCESSES_TO_NUM_GPUS by anhuong in https://github.com/foundation-model-stack/fms-hf-tuning/pull/110
* feat: Trainer controller framework by seshapad in https://github.com/foundation-model-stack/fms-hf-tuning/pull/45
* Copying logs file by tharapalanivel in https://github.com/foundation-model-stack/fms-hf-tuning/pull/113
* Fix copying over logs by tharapalanivel in https://github.com/foundation-model-stack/fms-hf-tuning/pull/114
* Add eval script by alex-jw-brooks in https://github.com/foundation-model-stack/fms-hf-tuning/pull/102
* Lint tests by tharapalanivel in https://github.com/foundation-model-stack/fms-hf-tuning/pull/112
* Move sklearn to optional, install optionals for linting by alex-jw-brooks in https://github.com/foundation-model-stack/fms-hf-tuning/pull/117
* Build Wheel Action by jbusche in https://github.com/foundation-model-stack/fms-hf-tuning/pull/105
* rstrip eos in evaluation by alex-jw-brooks in https://github.com/foundation-model-stack/fms-hf-tuning/pull/121
* Fix eos token suffix removal by alex-jw-brooks in https://github.com/foundation-model-stack/fms-hf-tuning/pull/125
* Make use of instruction field optional by alex-jw-brooks in https://github.com/foundation-model-stack/fms-hf-tuning/pull/123
* Deprecating the requirements.txt for dependencies management by tedhtchang in https://github.com/foundation-model-stack/fms-hf-tuning/pull/116
* Add unit tests for various edge cases by alex-jw-brooks in https://github.com/foundation-model-stack/fms-hf-tuning/pull/97
* fix typo in build gha by jbusche in https://github.com/foundation-model-stack/fms-hf-tuning/pull/138
* Install whl in Dockerfile by tedhtchang in https://github.com/foundation-model-stack/fms-hf-tuning/pull/126
* feat: add flash attn to inference and eval scripts by anhuong in https://github.com/foundation-model-stack/fms-hf-tuning/pull/132
* OS update in dockerfile by jbusche in https://github.com/foundation-model-stack/fms-hf-tuning/pull/127
* fix: ignore the build output and auto-generated files by HarikrishnanBalagopal in https://github.com/foundation-model-stack/fms-hf-tuning/pull/140
* Propose ADR for Training Acceleration by fabianlim in https://github.com/foundation-model-stack/fms-hf-tuning/pull/119
* feat: new format for the controller metrics and operations by HarikrishnanBalagopal in https://github.com/foundation-model-stack/fms-hf-tuning/pull/130
* adr: Format change to the trainer controller configuration by seshapad in https://github.com/foundation-model-stack/fms-hf-tuning/pull/128
* Generic tracker API and implementation of Aimstack tracker by dushyantbehl in https://github.com/foundation-model-stack/fms-hf-tuning/pull/89
* fix: Allow makefile to run test independent of fmt/lint by dushyantbehl in https://github.com/foundation-model-stack/fms-hf-tuning/pull/145
* feat: Trainer state as a trainer controller metric by seshapad in https://github.com/foundation-model-stack/fms-hf-tuning/pull/150
* Bump aim from 3.18.1 to 3.19.0 by dependabot in https://github.com/foundation-model-stack/fms-hf-tuning/pull/93
* fix: launch_training.py arguments with new tracker api by dushyantbehl in https://github.com/foundation-model-stack/fms-hf-tuning/pull/153
* feat: Exposed the evaluation metrics for rules within trainer controller by seshapad in https://github.com/foundation-model-stack/fms-hf-tuning/pull/146
* Comment out aim in dockerfile by jbusche in https://github.com/foundation-model-stack/fms-hf-tuning/pull/155
* fix: replace eval with a safer alternative by HarikrishnanBalagopal in https://github.com/foundation-model-stack/fms-hf-tuning/pull/147
* docs: ADR for moving from `eval` to `simpleeval` for evaluating trainer controller rules by HarikrishnanBalagopal in https://github.com/foundation-model-stack/fms-hf-tuning/pull/151
* Add exception catching / writing to termination log by kellyaa in https://github.com/foundation-model-stack/fms-hf-tuning/pull/149
* fix: merging of model for multi-gpu by anhuong in https://github.com/foundation-model-stack/fms-hf-tuning/pull/158
* add .complete file to output dir when done by kellyaa in https://github.com/foundation-model-stack/fms-hf-tuning/pull/159

**Full Changelog**: https://github.com/foundation-model-stack/fms-hf-tuning/commits/v0.1.0

0.1.0-rc.1

What's Changed
* fix: replace eval with a safer alternative by HarikrishnanBalagopal in https://github.com/foundation-model-stack/fms-hf-tuning/pull/147
* docs: ADR for moving from `eval` to `simpleeval` for evaluating trainer controller rules by HarikrishnanBalagopal in https://github.com/foundation-model-stack/fms-hf-tuning/pull/151
* Add exception catching / writing to termination log by kellyaa in https://github.com/foundation-model-stack/fms-hf-tuning/pull/149
* fix: merging of model for multi-gpu by anhuong in https://github.com/foundation-model-stack/fms-hf-tuning/pull/158
* add .complete file to output dir when done by kellyaa in https://github.com/foundation-model-stack/fms-hf-tuning/pull/159


**Full Changelog**: https://github.com/foundation-model-stack/fms-hf-tuning/compare/v0.0.2rc2...v0.1.0-rc.1
