text-generation-webui Changelog


1.7

What's Changed
* Check '--model-dir' for no models warning in one-click-installer by jllllll in https://github.com/oobabooga/text-generation-webui/pull/4067
* Supercharging superbooga by HideLord in https://github.com/oobabooga/text-generation-webui/pull/3272
* Fix old install migration for WSL installer by jllllll in https://github.com/oobabooga/text-generation-webui/pull/4093
* Expand MacOS llama.cpp support in requirements.txt by jllllll in https://github.com/oobabooga/text-generation-webui/pull/4094
* Bump exllamav2 to 0.0.4 and use pre-built wheels by jllllll in https://github.com/oobabooga/text-generation-webui/pull/4095
* Enable NUMA feature for llama_cpp_python by StoyanStAtanasov in https://github.com/oobabooga/text-generation-webui/pull/4040
* fix: add missing superboogav2 dep by sammcj in https://github.com/oobabooga/text-generation-webui/pull/4099
* Delete extensions/Training_PRO/readme.md by missionfloyd in https://github.com/oobabooga/text-generation-webui/pull/4112
* Bump llama-cpp-python to 0.2.7 by jllllll in https://github.com/oobabooga/text-generation-webui/pull/4110
* fix: update superboogav2 `requirements.txt` by wangcx18 in https://github.com/oobabooga/text-generation-webui/pull/4100
* Update one_click.py to initialize site_packages_path variable by Psynbiotik in https://github.com/oobabooga/text-generation-webui/pull/4118
* Let model downloader download *.tiktoken as well by happyme531 in https://github.com/oobabooga/text-generation-webui/pull/4121
* Bump llama-cpp-python to 0.2.11 by jllllll in https://github.com/oobabooga/text-generation-webui/pull/4142
* Add grammar to transformers and _HF loaders by oobabooga in https://github.com/oobabooga/text-generation-webui/pull/4091
* Ignoring custom changes to CMD_FLAGS.txt on update. by berkut1 in https://github.com/oobabooga/text-generation-webui/pull/4181
* Fix off-by-one error in exllama_hf caching logic by tdrussell in https://github.com/oobabooga/text-generation-webui/pull/4145
* AutoAWQ: initial support by cal066 in https://github.com/oobabooga/text-generation-webui/pull/3999
* Bump ExLlamaV2 to 0.0.5 by turboderp in https://github.com/oobabooga/text-generation-webui/pull/4186
* Bump AutoAWQ to v0.1.4 by casper-hansen in https://github.com/oobabooga/text-generation-webui/pull/4203
* Fix python wheels for avx requirements by AG-w in https://github.com/oobabooga/text-generation-webui/pull/4189
* Bump to pytorch 11.8 by oobabooga in https://github.com/oobabooga/text-generation-webui/pull/4209
* Use GPTQ wheels compatible with Pytorch 2.1 by jllllll in https://github.com/oobabooga/text-generation-webui/pull/4210
* Fix CFG init with Llamacpp_HF by bdashore3 in https://github.com/oobabooga/text-generation-webui/pull/4219
* Text Generation: Abort if EOS token is reached by bdashore3 in https://github.com/oobabooga/text-generation-webui/pull/4213
* README for superboogav2 by jamesbraza in https://github.com/oobabooga/text-generation-webui/pull/4212
* Move import in llama_attn_hijack.py by Ph0rk0z in https://github.com/oobabooga/text-generation-webui/pull/4231

New Contributors
* StoyanStAtanasov made their first contribution in https://github.com/oobabooga/text-generation-webui/pull/4040
* Psynbiotik made their first contribution in https://github.com/oobabooga/text-generation-webui/pull/4118
* turboderp made their first contribution in https://github.com/oobabooga/text-generation-webui/pull/4186
* casper-hansen made their first contribution in https://github.com/oobabooga/text-generation-webui/pull/4203
* AG-w made their first contribution in https://github.com/oobabooga/text-generation-webui/pull/4189
* bdashore3 made their first contribution in https://github.com/oobabooga/text-generation-webui/pull/4219

**Full Changelog**: https://github.com/oobabooga/text-generation-webui/compare/1.6.1...v1.7

1.6.1

What's Changed
* Use call for conda deactivate in Windows installer by jllllll in https://github.com/oobabooga/text-generation-webui/pull/4042
* [extensions/openai] Fix error when preparing cache for embedding models by wangcx18 in https://github.com/oobabooga/text-generation-webui/pull/3995
* Create alternative requirements.txt with AMD and Metal wheels by oobabooga in https://github.com/oobabooga/text-generation-webui/pull/4052
* Add a grammar editor to the UI by oobabooga in https://github.com/oobabooga/text-generation-webui/pull/4061
* Avoid importing torch in one-click-installer by jllllll in https://github.com/oobabooga/text-generation-webui/pull/4064


**Full Changelog**: https://github.com/oobabooga/text-generation-webui/compare/v1.6...1.6.1

1.6

The [one-click-installers](https://github.com/oobabooga/one-click-installers) have been merged into the repository. Migration instructions can be found [here](https://github.com/oobabooga/text-generation-webui/wiki/Migrating-an-old-one%E2%80%90click-install).

The updated one-click install reduces the installation size by several GB and makes the update procedure more reliable.

What's Changed

* sd_api_pictures: Widen sliders for image size minimum and maximum by GuizzyQC in https://github.com/oobabooga/text-generation-webui/pull/3326
* Bump exllama module to 0.0.9 by jllllll in https://github.com/oobabooga/text-generation-webui/pull/3338
* Add an extension that makes chat replies longer by oobabooga in https://github.com/oobabooga/text-generation-webui/pull/3363
* add chat instruction config for BaiChuan-chat model by CrazyShipOne in https://github.com/oobabooga/text-generation-webui/pull/3332
* [extensions/openai] +Array input (batched) , +Fixes by matatonic in https://github.com/oobabooga/text-generation-webui/pull/3309
* Add a scrollbar to notebook/default textboxes, improve chat scrollbar style by jparmstr in https://github.com/oobabooga/text-generation-webui/pull/3403
* Add auto_max_new_tokens parameter by oobabooga in https://github.com/oobabooga/text-generation-webui/pull/3419
* Add the --cpu option for llama.cpp to prevent CUDA from being used by oobabooga in https://github.com/oobabooga/text-generation-webui/pull/3432
* Use character settings from API properties if present by rafa-9 in https://github.com/oobabooga/text-generation-webui/pull/3428
* Add standalone Dockerfile for NVIDIA Jetson by toolboc in https://github.com/oobabooga/text-generation-webui/pull/3336
* More models: +StableBeluga2 by matatonic in https://github.com/oobabooga/text-generation-webui/pull/3415
* [extensions/openai] include content-length for json replies by matatonic in https://github.com/oobabooga/text-generation-webui/pull/3416
* Fix llama.cpp truncation by jparmstr in https://github.com/oobabooga/text-generation-webui/pull/3400
* Remove unnecessary chat.js by missionfloyd in https://github.com/oobabooga/text-generation-webui/pull/3445
* Add back silero preview (by missionfloyd) by oobabooga in https://github.com/oobabooga/text-generation-webui/pull/3446
* Add SSL certificate support by oobabooga in https://github.com/oobabooga/text-generation-webui/pull/3453
* Bump bitsandbytes to 0.41.1 by jllllll in https://github.com/oobabooga/text-generation-webui/pull/3457
* [Bug fix] Remove HTML tags from the prompt sent to Stable Diffusion by SodaPrettyCold in https://github.com/oobabooga/text-generation-webui/pull/3151
* Fix: Mirostat fails on models split across multiple GPUs. by Ph0rk0z in https://github.com/oobabooga/text-generation-webui/pull/3465
* Bump exllama wheels to 0.0.10 by jllllll in https://github.com/oobabooga/text-generation-webui/pull/3467
* Create logs dir if missing when saving history by jllllll in https://github.com/oobabooga/text-generation-webui/pull/3462
* Fix chat message order by missionfloyd in https://github.com/oobabooga/text-generation-webui/pull/3461
* Add Classifier Free Guidance (CFG) for Transformers/ExLlama by oobabooga in https://github.com/oobabooga/text-generation-webui/pull/3325
* Refactor everything by oobabooga in https://github.com/oobabooga/text-generation-webui/pull/3481
* Use chat_instruct_command in API by jllllll in https://github.com/oobabooga/text-generation-webui/pull/3482
* Make dockerfile respect specified cuda version by sammcj in https://github.com/oobabooga/text-generation-webui/pull/3474
* Fix a typo that prevented the llama.cpp loader from correctly displaying "rms_norm_eps" by berkut1 in https://github.com/oobabooga/text-generation-webui/pull/3494
* Add option for named cloudflare tunnels by Fredddi43 in https://github.com/oobabooga/text-generation-webui/pull/3364
* Fix superbooga when using regenerate by oderwat in https://github.com/oobabooga/text-generation-webui/pull/3362
* Added the logic for starchat model series by giprime in https://github.com/oobabooga/text-generation-webui/pull/3185
* Streamline GPTQ-for-LLaMa support by jllllll in https://github.com/oobabooga/text-generation-webui/pull/3526
* Add Vicuna-v1.5 detection by berkut1 in https://github.com/oobabooga/text-generation-webui/pull/3524
* ctransformers: another attempt by cal066 in https://github.com/oobabooga/text-generation-webui/pull/3313
* Bump ctransformers wheel version by jllllll in https://github.com/oobabooga/text-generation-webui/pull/3558
* ctransformers: move thread and seed parameters by cal066 in https://github.com/oobabooga/text-generation-webui/pull/3543
* Unify the 3 interface modes by oobabooga in https://github.com/oobabooga/text-generation-webui/pull/3554
* Various ctransformers fixes by netrunnereve in https://github.com/oobabooga/text-generation-webui/pull/3556
* Add "save defaults to settings.yaml" button by oobabooga in https://github.com/oobabooga/text-generation-webui/pull/3574
* Add the --disable_exllama option for AutoGPTQ by clefever in https://github.com/oobabooga/text-generation-webui/pull/3545
* ctransformers: Fix up model_type name consistency by cal066 in https://github.com/oobabooga/text-generation-webui/pull/3567
* Add a "Show controls" button to chat UI by oobabooga in https://github.com/oobabooga/text-generation-webui/pull/3590
* Improved chat scrolling by oobabooga in https://github.com/oobabooga/text-generation-webui/pull/3601
* fixes error when not specifying tunnel id by ausboss in https://github.com/oobabooga/text-generation-webui/pull/3606
* Fix print CSS by missionfloyd in https://github.com/oobabooga/text-generation-webui/pull/3608
* Bump llama-cpp-python by oobabooga in https://github.com/oobabooga/text-generation-webui/pull/3610
* Bump llama_cpp_python_cuda to 0.1.78 by jllllll in https://github.com/oobabooga/text-generation-webui/pull/3614
* Refactor the training tab by oobabooga in https://github.com/oobabooga/text-generation-webui/pull/3619
* llama.cpp: make Stop button work with streaming disabled by Cebtenzzre in https://github.com/oobabooga/text-generation-webui/pull/3620
* Unescape last message by missionfloyd in https://github.com/oobabooga/text-generation-webui/pull/3623
* Improve readability of download-model.py by Thutmose3 in https://github.com/oobabooga/text-generation-webui/pull/3497
* Add probability dropdown to perplexity_colors extension by SeanScripts in https://github.com/oobabooga/text-generation-webui/pull/3148
* Add a simple logit viewer by oobabooga in https://github.com/oobabooga/text-generation-webui/pull/3636
* Fix whitespace formatting in perplexity_colors extension. by tdrussell in https://github.com/oobabooga/text-generation-webui/pull/3643
* ctransformers: add mlock and no-mmap options by cal066 in https://github.com/oobabooga/text-generation-webui/pull/3649
* Update requirements.txt by tkbit in https://github.com/oobabooga/text-generation-webui/pull/3651
* Add missing extensions to Dockerfile by sammcj in https://github.com/oobabooga/text-generation-webui/pull/3544
* Implement CFG for ExLlama_HF by oobabooga in https://github.com/oobabooga/text-generation-webui/pull/3666
* Add CFG to llamacpp_HF (second attempt) by oobabooga in https://github.com/oobabooga/text-generation-webui/pull/3678
* ctransformers: gguf support by cal066 in https://github.com/oobabooga/text-generation-webui/pull/3685
* Fix ctransformers threads auto-detection by jllllll in https://github.com/oobabooga/text-generation-webui/pull/3688
* Use separate llama-cpp-python packages for GGML support by jllllll in https://github.com/oobabooga/text-generation-webui/pull/3697
* GGUF by oobabooga in https://github.com/oobabooga/text-generation-webui/pull/3695
* Fix ctransformers model unload by marella in https://github.com/oobabooga/text-generation-webui/pull/3711
* Add ffmpeg to the Docker image by kelvie in https://github.com/oobabooga/text-generation-webui/pull/3664
* accept floating-point alpha value on the command line by Cebtenzzre in https://github.com/oobabooga/text-generation-webui/pull/3712
* Bump llama-cpp-python to 0.1.81 by jllllll in https://github.com/oobabooga/text-generation-webui/pull/3716
* Make it possible to scroll during streaming by oobabooga in https://github.com/oobabooga/text-generation-webui/pull/3721
* Bump llama-cpp-python to 0.1.82 by jllllll in https://github.com/oobabooga/text-generation-webui/pull/3730
* Bump ctransformers to 0.2.25 by jllllll in https://github.com/oobabooga/text-generation-webui/pull/3740
* Add max_tokens_second param by oobabooga in https://github.com/oobabooga/text-generation-webui/pull/3533
* Update requirements.txt by VishwasKukreti in https://github.com/oobabooga/text-generation-webui/pull/3725
* Update llama.cpp.md by q5sys in https://github.com/oobabooga/text-generation-webui/pull/3702
* Bump llama-cpp-python to 0.1.83 by jllllll in https://github.com/oobabooga/text-generation-webui/pull/3745
* Update download-model.py (Allow single file download) by bet0x in https://github.com/oobabooga/text-generation-webui/pull/3732
* Allow downloading single file from UI by missionfloyd in https://github.com/oobabooga/text-generation-webui/pull/3737
* Bump exllama to 0.0.14 by jllllll in https://github.com/oobabooga/text-generation-webui/pull/3758
* Bump llama-cpp-python to 0.1.84 by jllllll in https://github.com/oobabooga/text-generation-webui/pull/3854
* Update transformers requirement from ==4.32.* to ==4.33.* by dependabot in https://github.com/oobabooga/text-generation-webui/pull/3865
* Bump exllama to 0.1.17 by jllllll in https://github.com/oobabooga/text-generation-webui/pull/3847
* Exllama new rope settings by Ph0rk0z in https://github.com/oobabooga/text-generation-webui/pull/3852
* fix lora training with alpaca_lora_4bit by johnsmith0031 in https://github.com/oobabooga/text-generation-webui/pull/3853
* Improve instructions for CPUs without AVX2 by netrunnereve in https://github.com/oobabooga/text-generation-webui/pull/3786
* improve docker builds by sammcj in https://github.com/oobabooga/text-generation-webui/pull/3715
* Read GGUF metadata by oobabooga in https://github.com/oobabooga/text-generation-webui/pull/3873
* Add ExLlamaV2 and ExLlamav2_HF loaders by oobabooga in https://github.com/oobabooga/text-generation-webui/pull/3881
* silero_tts: Add language option by missionfloyd in https://github.com/oobabooga/text-generation-webui/pull/3878
* Bump optimum from 1.12.0 to 1.13.1 by dependabot in https://github.com/oobabooga/text-generation-webui/pull/3872
* Handle Chunked Transfer Encoding in `openai` Extension for Streaming Requests by mcc311 in https://github.com/oobabooga/text-generation-webui/pull/3870
* add pygmalion-2 and mythalion support by netrunnereve in https://github.com/oobabooga/text-generation-webui/pull/3821
* Read more GGUF metadata (scale_linear and freq_base) by berkut1 in https://github.com/oobabooga/text-generation-webui/pull/3877
* Bump llama-cpp-python to 0.1.85 by jllllll in https://github.com/oobabooga/text-generation-webui/pull/3887
* Bump ctransformers to 0.2.27 by cal066 in https://github.com/oobabooga/text-generation-webui/pull/3893
* Bump exllamav2 from 0.0.0 to 0.0.1 by dependabot in https://github.com/oobabooga/text-generation-webui/pull/3896
* Fix NTK (alpha) and RoPE scaling for exllamav2 and exllamav2_HF by Panchovix in https://github.com/oobabooga/text-generation-webui/pull/3897
* Reorganize chat buttons by oobabooga in https://github.com/oobabooga/text-generation-webui/pull/3892
* Make the chat input expand upwards by oobabooga in https://github.com/oobabooga/text-generation-webui/pull/3920
* Fix TheEncrypted777 theme in light mode by missionfloyd in https://github.com/oobabooga/text-generation-webui/pull/3917
* Fix pydantic version conflict in elevenlabs extension by jllllll in https://github.com/oobabooga/text-generation-webui/pull/3927
* Allow custom tokenizer for llamacpp_HF loader by JohanAR in https://github.com/oobabooga/text-generation-webui/pull/3941
* Add customizable ban tokens by sALTaccount in https://github.com/oobabooga/text-generation-webui/pull/3899
* Better solution to chat UI by missionfloyd in https://github.com/oobabooga/text-generation-webui/pull/3947
* Fix exllama tokenizers by sALTaccount in https://github.com/oobabooga/text-generation-webui/pull/3954
* Adjust model variable if it includes a hf URL already by kalomaze in https://github.com/oobabooga/text-generation-webui/pull/3919
* Fix issue 3822 and 3839 by Touch-Night in https://github.com/oobabooga/text-generation-webui/pull/3827
* Add whisper api support for OpenAI extension by wizd in https://github.com/oobabooga/text-generation-webui/pull/3958
* Add speechrecognition dependency for OpenAI extension by fablerq in https://github.com/oobabooga/text-generation-webui/pull/3959
* token probs for non HF loaders by sALTaccount in https://github.com/oobabooga/text-generation-webui/pull/3957
* Training PRO extension by FartyPants in https://github.com/oobabooga/text-generation-webui/pull/3961
* Training extension - added target selector by FartyPants in https://github.com/oobabooga/text-generation-webui/pull/3969
* Fix unexpected extensions load after gradio restart by Touch-Night in https://github.com/oobabooga/text-generation-webui/pull/3965
* Update requirements.txt - Bump ExLlamav2 to v0.0.2 by Thireus in https://github.com/oobabooga/text-generation-webui/pull/3970
* Simplified ExLlama cloning instructions and failure message by jamesbraza in https://github.com/oobabooga/text-generation-webui/pull/3972
* Move hover menu shortcuts to right side by missionfloyd in https://github.com/oobabooga/text-generation-webui/pull/3951
* [extensions/openai] load extension settings via `settings.yaml` by wangcx18 in https://github.com/oobabooga/text-generation-webui/pull/3953
* Update accelerate requirement from ==0.22.* to ==0.23.* by dependabot in https://github.com/oobabooga/text-generation-webui/pull/3981
* llama.cpp: fix ban_eos_token by Cebtenzzre in https://github.com/oobabooga/text-generation-webui/pull/3987
* Bump llama-cpp-python to 0.2.6 by jllllll in https://github.com/oobabooga/text-generation-webui/pull/3982
* Stops the generation immediately when using the "Maximum number of tokens/second" setting by BadisG in https://github.com/oobabooga/text-generation-webui/pull/3952
* Multiple histories for each character by oobabooga in https://github.com/oobabooga/text-generation-webui/pull/4022
* Various one-click-installer updates and fixes by jllllll in https://github.com/oobabooga/text-generation-webui/pull/4029
* Move one-click-installers into the repository by oobabooga in https://github.com/oobabooga/text-generation-webui/pull/4028
* Training PRO extension update by FartyPants in https://github.com/oobabooga/text-generation-webui/pull/4036

New Contributors
* CrazyShipOne made their first contribution in https://github.com/oobabooga/text-generation-webui/pull/3332
* jparmstr made their first contribution in https://github.com/oobabooga/text-generation-webui/pull/3403
* rafa-9 made their first contribution in https://github.com/oobabooga/text-generation-webui/pull/3428
* toolboc made their first contribution in https://github.com/oobabooga/text-generation-webui/pull/3336
* SodaPrettyCold made their first contribution in https://github.com/oobabooga/text-generation-webui/pull/3151
* sammcj made their first contribution in https://github.com/oobabooga/text-generation-webui/pull/3474
* berkut1 made their first contribution in https://github.com/oobabooga/text-generation-webui/pull/3494
* Fredddi43 made their first contribution in https://github.com/oobabooga/text-generation-webui/pull/3364
* oderwat made their first contribution in https://github.com/oobabooga/text-generation-webui/pull/3362
* giprime made their first contribution in https://github.com/oobabooga/text-generation-webui/pull/3185
* cal066 made their first contribution in https://github.com/oobabooga/text-generation-webui/pull/3313
* clefever made their first contribution in https://github.com/oobabooga/text-generation-webui/pull/3545
* ausboss made their first contribution in https://github.com/oobabooga/text-generation-webui/pull/3606
* Thutmose3 made their first contribution in https://github.com/oobabooga/text-generation-webui/pull/3497
* tdrussell made their first contribution in https://github.com/oobabooga/text-generation-webui/pull/3643
* tkbit made their first contribution in https://github.com/oobabooga/text-generation-webui/pull/3651
* marella made their first contribution in https://github.com/oobabooga/text-generation-webui/pull/3711
* kelvie made their first contribution in https://github.com/oobabooga/text-generation-webui/pull/3664
* VishwasKukreti made their first contribution in https://github.com/oobabooga/text-generation-webui/pull/3725
* q5sys made their first contribution in https://github.com/oobabooga/text-generation-webui/pull/3702
* bet0x made their first contribution in https://github.com/oobabooga/text-generation-webui/pull/3732
* johnsmith0031 made their first contribution in https://github.com/oobabooga/text-generation-webui/pull/3853
* mcc311 made their first contribution in https://github.com/oobabooga/text-generation-webui/pull/3870
* JohanAR made their first contribution in https://github.com/oobabooga/text-generation-webui/pull/3941
* sALTaccount made their first contribution in https://github.com/oobabooga/text-generation-webui/pull/3899
* kalomaze made their first contribution in https://github.com/oobabooga/text-generation-webui/pull/3919
* Touch-Night made their first contribution in https://github.com/oobabooga/text-generation-webui/pull/3827
* wizd made their first contribution in https://github.com/oobabooga/text-generation-webui/pull/3958
* fablerq made their first contribution in https://github.com/oobabooga/text-generation-webui/pull/3959
* jamesbraza made their first contribution in https://github.com/oobabooga/text-generation-webui/pull/3972
* wangcx18 made their first contribution in https://github.com/oobabooga/text-generation-webui/pull/3953
* BadisG made their first contribution in https://github.com/oobabooga/text-generation-webui/pull/3952

**Full Changelog**: https://github.com/oobabooga/text-generation-webui/compare/v1.5...v1.6

1.5

What's Changed

* Add a detailed extension example and update the [extension docs](https://github.com/oobabooga/text-generation-webui/blob/main/docs/Extensions.md). The example can be found here: [example/script.py](https://github.com/oobabooga/text-generation-webui/blob/main/extensions/example/script.py).
* Introduce a new `chat_input_modifier` extension function and deprecate the old `input_hijack`.
* Change rms_norm_eps to 5e-6 for ~~llama-2-70b ggml~~ all llama-2 models -- this value reduces the perplexities of the models.
* Remove FlexGen support. It has been made obsolete by the lack of Llama support and the emergence of llama.cpp and 4-bit quantization. I can add it back if it ever gets updated.
* Use the dark theme by default.
* Set the correct instruction template for the model when switching from default/notebook modes to chat mode.
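The new `chat_input_modifier` hook replaces the deprecated `input_hijack` for rewriting chat input. Below is a minimal sketch of what an extension's `script.py` might look like; the signature (`text`, `visible_text`, `state`) is based on the extension docs and should be verified against docs/Extensions.md, and the behavior here is purely illustrative:

```python
# extensions/my_extension/script.py (hypothetical extension name)
# Minimal sketch of the chat_input_modifier hook introduced in 1.5.

def chat_input_modifier(text, visible_text, state):
    """
    Called in chat mode before the user input reaches the model.
    - text: the string actually sent to the model
    - visible_text: the string displayed in the chat UI
    - state: the current UI/generation state dict
    Returns both strings, possibly modified.
    """
    # Example transformation: strip stray whitespace from both copies.
    return text.strip(), visible_text.strip()
```

Unlike `input_hijack`, which mutated a shared dict, this function simply receives and returns the two input strings, which makes extensions easier to compose.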

Bug fixes

* [extensions/openai] Fixes for: embeddings, tokens, better errors. +Docs update, +Images, +logit_bias/logprobs, +more. by matatonic in https://github.com/oobabooga/text-generation-webui/pull/3122
* Fix typo in README.md by eltociear in https://github.com/oobabooga/text-generation-webui/pull/3286
* README updates and improvements by netrunnereve in https://github.com/oobabooga/text-generation-webui/pull/3198
* Ignore values in training.py which are not string by Foxtr0t1337 in https://github.com/oobabooga/text-generation-webui/pull/3287

1.4

What's Changed

* Add llama-2-70b GGML support by oobabooga in https://github.com/oobabooga/text-generation-webui/pull/3285
* Bump bitsandbytes to 0.41.0 by jllllll in https://github.com/oobabooga/text-generation-webui/pull/3258 -- faster speeds
* Bump exllama module to 0.0.8 by jllllll in https://github.com/oobabooga/text-generation-webui/pull/3256 -- expanded LoRA support

Bug fixes
* Add checks for ROCm and unsupported architectures to llama_cpp_cuda loading by jllllll in https://github.com/oobabooga/text-generation-webui/pull/3225

Extensions
* [extensions/openai] Fixes for: embeddings, tokens, better errors. +Docs update, +Images, +logit_bias/logprobs, +more. by matatonic in https://github.com/oobabooga/text-generation-webui/pull/3122

1.3.1

Changes

* Add missing EOS and BOS tokens to Llama-2 template
* Bump transformers for better Llama-2 support
* Bump llama-cpp-python for better unicode support (untested)
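The template fix above concerns the Llama-2 chat format, where BOS (`<s>`) opens each turn and EOS (`</s>`) closes each completed assistant reply. The sketch below illustrates that layout under those assumptions; it is not the webui's exact template code:

```python
# Sketch of the Llama-2 chat prompt layout with BOS/EOS tokens.

def llama2_prompt(system, turns):
    """Build a Llama-2 chat prompt.

    turns: list of (user, assistant) pairs; assistant is None for the
    final, not-yet-answered turn.
    """
    out = ""
    for i, (user, assistant) in enumerate(turns):
        if i == 0 and system:
            # System prompt is folded into the first user turn.
            user = f"<<SYS>>\n{system}\n<</SYS>>\n\n{user}"
        out += f"<s>[INST] {user} [/INST]"  # BOS opens every turn
        if assistant is not None:
            out += f" {assistant} </s>"      # EOS closes each reply
    return out
```

Omitting either token changes what the model was trained to see, which is why the missing BOS/EOS caused degraded Llama-2 outputs.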
