Ooba

Latest version: v0.0.22


1.3

Changes

* **Llama-v2**: add instruction template, autodetect the truncation length, add conversion documentation
* [GGML] Support for customizable RoPE by randoentity in https://github.com/oobabooga/text-generation-webui/pull/3083
* Optimize llamacpp_hf (a bit)
* Add Airoboros-v1.2 template
* Disable "Autoload the model" by default
* Disable auto-loading at startup when only one model is available by jllllll in https://github.com/oobabooga/text-generation-webui/pull/3187
* Don't unset the LoRA menu when loading a model
* Bump accelerate to 0.21.0
* Bump bitsandbytes to 0.40.2 (Windows wheels provided by jllllll in 3186)
* Bump AutoGPTQ to 0.3.0 (loading LoRAs is now supported out of the box)
* Update LLaMA-v1 documentation
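The customizable-RoPE item above exposes the rotary base and position scale as knobs. A minimal sketch of why those knobs stretch the context window, assuming the standard RoPE frequency formula (the function name and shape are illustrative, not the webui's or GGML's actual code):

```python
import math

def rope_frequencies(head_dim, base=10000.0, scale=1.0):
    """Per-dimension-pair rotary inverse frequencies. Raising `base`
    lowers the frequencies, and `scale` divides positions outright;
    both stretch the effective context (illustrative sketch only)."""
    inv_freq = [base ** (-2 * i / head_dim) for i in range(head_dim // 2)]
    return [f / scale for f in inv_freq]

default = rope_frequencies(128)
stretched = rope_frequencies(128, base=20000.0, scale=2.0)
```

Every stretched frequency is lower than its default counterpart, so rotations complete more slowly and longer sequences stay within the rotation range the model saw in training.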

Bug fixes

* Use 'torch.backends.mps.is_available' to check if mps is supported by appe233 in https://github.com/oobabooga/text-generation-webui/pull/3164
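The fix above switches Apple-silicon detection to the documented `torch.backends.mps.is_available()` check. A hedged sketch of a device-selection helper built around that call (the helper itself is illustrative, not the webui's code; it falls back to CPU when torch is absent):

```python
def pick_device():
    """Prefer CUDA, then Apple's MPS backend, then CPU.

    Uses torch.backends.mps.is_available(), the documented way to
    check MPS support (illustrative helper, not the webui's code).
    """
    try:
        import torch
    except ImportError:
        return "cpu"  # no torch installed at all
    if torch.cuda.is_available():
        return "cuda"
    if torch.backends.mps.is_available():
        return "mps"
    return "cpu"
```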

1.2

Changes

* Create llamacpp_HF loader by oobabooga in https://github.com/oobabooga/text-generation-webui/pull/3062
* Make it possible to evaluate exllama perplexity by oobabooga in https://github.com/oobabooga/text-generation-webui/pull/3138
* Add support for logits processors in extensions by cyberfox in https://github.com/oobabooga/text-generation-webui/pull/3029
* Bump bitsandbytes to 0.40.1.post1 by jllllll in https://github.com/oobabooga/text-generation-webui/pull/3156
* Bump llama.cpp version by ofirkris in https://github.com/oobabooga/text-generation-webui/pull/3160
* Increase alpha value limit for NTK RoPE scaling for exllama/exllama_HF by Panchovix in https://github.com/oobabooga/text-generation-webui/pull/3149
* Decrease download timeout
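The perplexity-evaluation item above boils down to a one-line formula: perplexity is the exponential of the mean negative log-likelihood of the evaluated tokens. A minimal sketch (the function is illustrative; the webui's evaluator works on model logits, not precomputed log-probabilities):

```python
import math

def perplexity(token_logprobs):
    """exp of the mean negative log-likelihood (natural log).
    Lower is better; a uniform guess over V tokens scores V."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# a model assigning probability 0.25 to every token has perplexity 4
assert abs(perplexity([math.log(0.25)] * 10) - 4.0) < 1e-9
```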

Bug fixes

* Fix reload screen background color in dark mode

Extensions

* Color tokens by probability and/or perplexity by SeanScripts in https://github.com/oobabooga/text-generation-webui/pull/3078
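Coloring tokens by probability, as in the extension above, just needs a mapping from a probability in [0, 1] to a color. A hedged sketch using a simple red-to-green gradient (the palette and function name are illustrative; the extension's actual scheme may differ):

```python
def prob_to_rgb(p):
    """Map a token probability in [0, 1] to a red-to-green hex color:
    unlikely tokens render red, likely ones green (illustrative
    gradient, not necessarily the extension's palette)."""
    p = min(max(p, 0.0), 1.0)  # clamp defensively
    r = round(255 * (1.0 - p))
    g = round(255 * p)
    return f"#{r:02x}{g:02x}00"
```

Each token's HTML span can then be styled with the returned color, e.g. `f'<span style="color: {prob_to_rgb(p)}">{token}</span>'`.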

1.1.1

Bug fixes

* Fix output path when downloading models through the UI

1.1

Changes

* Bump bitsandbytes Windows wheel by jllllll in https://github.com/oobabooga/text-generation-webui/pull/3097 -- `--load-in-4bit` is now a lot faster
* Add support for low VRAM mode in the llama.cpp module by gabriel-pena in https://github.com/oobabooga/text-generation-webui/pull/3076
* Add links/reference to new multimodal instructblip-pipeline in multimodal readme by kjerk in https://github.com/oobabooga/text-generation-webui/pull/2947
* Add token authorization for downloading model by fahadh4ilyas in https://github.com/oobabooga/text-generation-webui/pull/3067
* Add default environment variable values to docker compose file by Josh-XT in https://github.com/oobabooga/text-generation-webui/pull/3102
* models/config.yaml: +platypus/gplatty, +longchat, +vicuna-33b, +Redmond-Hermes-Coder, +wizardcoder, +more by matatonic in https://github.com/oobabooga/text-generation-webui/pull/2928
* Add context_instruct to API. Load default model instruction template … by atriantafy in https://github.com/oobabooga/text-generation-webui/pull/2688
* Chat history download creates more detailed file names by UnskilledWolf in https://github.com/oobabooga/text-generation-webui/pull/3051
* Disable wandb remote HTTP requests
* Add Feature to Log Sample of Training Dataset for Inspection by practicaldreamer in https://github.com/oobabooga/text-generation-webui/pull/1711
* Add ability to load all text files from a subdirectory for training by kizinfo in https://github.com/oobabooga/text-generation-webui/pull/1997
* Add Tensorboard/Weights and biases integration for training by kabachuha in https://github.com/oobabooga/text-generation-webui/pull/2624
* Fix: Fixed the tokenization process of a raw dataset and improved its efficiency by Nan-Do in https://github.com/oobabooga/text-generation-webui/pull/3035
* More robust and less error-prone training by FartyPants in https://github.com/oobabooga/text-generation-webui/pull/3058
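The training item about loading all text files from a subdirectory amounts to a recursive directory walk that concatenates the files into one corpus. A minimal sketch, assuming UTF-8 `.txt` files and alphabetical ordering (the webui's loader may handle encodings and ordering differently):

```python
from pathlib import Path

def load_text_files(directory):
    """Concatenate every *.txt file under `directory` (recursively)
    into one training-corpus string, in sorted path order
    (illustrative sketch, not the webui's actual loader)."""
    parts = []
    for path in sorted(Path(directory).rglob("*.txt")):
        parts.append(path.read_text(encoding="utf-8"))
    return "\n\n".join(parts)
```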

Bug fixes

* [Fixed] wbits and groupsize values from model not shown by set-soft in https://github.com/oobabooga/text-generation-webui/pull/2977
* Fix API example for loading models by vadi2 in https://github.com/oobabooga/text-generation-webui/pull/3101
* Google FLAN-T5 tokenizer download fix by FartyPants in https://github.com/oobabooga/text-generation-webui/pull/3080
* Changed FormComponent to IOComponent by ricardopinto in https://github.com/oobabooga/text-generation-webui/pull/3017
* Respect model dir for downloads by micsthepick in https://github.com/oobabooga/text-generation-webui/pull/3079

Extensions

* Fix send_pictures extension
* Elevenlabs tts fixes by set-soft in https://github.com/oobabooga/text-generation-webui/pull/2959
* [extensions/openai]: Major openai extension updates & fixes by matatonic in https://github.com/oobabooga/text-generation-webui/pull/3049
* Substitute superbooga's Beautiful Soup parser by juhenriquez in https://github.com/oobabooga/text-generation-webui/pull/2996

1.0

Let's call it "version 1.0".

Installers
1-click installers for Windows, Linux, macOS, and WSL. Just download the zip, extract it, and double-click "start". The web UI and all of its dependencies will be installed in the same folder.

The source code and more information can be found at: https://github.com/oobabooga/one-click-installers

**This is now obsolete! The one-click installers have been merged into the repository.**

0.1.22

What's Changed
* fix start cmd for WSL by merlinfrombelgium in https://github.com/KillianLucas/ooba/pull/3

New Contributors
* merlinfrombelgium made their first contribution in https://github.com/KillianLucas/ooba/pull/3

**Full Changelog**: https://github.com/KillianLucas/ooba/commits/v0.1.22
