llm-llama-cpp

Latest version: v0.3

0.3

New mechanism for running GGUF files directly, using `llm -m gguf`. Example:
```bash
llm -m gguf \
  -o path una-cybertron-7b-v2-bf16.Q8_0.gguf \
  'Instruction: Five reasons to get a pet walrus
Response:'
```

This makes it much easier to try out new GGUF files, for example those released by [TheBloke](https://huggingface.co/TheBloke) on Hugging Face. [#26](https://github.com/simonw/llm-llama-cpp/issues/26)

0.2b0

- Support for new GGUF format model files. Thanks, [Andrew Mshar](https://github.com/programmylife). [#16](https://github.com/simonw/llm-llama-cpp/issues/16)
- Output from this model now streams. Thanks, [Michael Hamann](https://github.com/michitux). [#11](https://github.com/simonw/llm-llama-cpp/issues/11)
- Support for compiling with METAL GPU acceleration on Apple Silicon. Thanks, [vividfog](https://github.com/vividfog). [#14](https://github.com/simonw/llm-llama-cpp/issues/14)
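To take advantage of the Metal acceleration above, the underlying `llama-cpp-python` dependency needs to be compiled with Metal support. A minimal sketch of that install step, based on `llama-cpp-python`'s own build flags (the exact flag names can vary between releases, so treat this as illustrative rather than the plugin's official instructions):

```shell
# Reinstall llama-cpp-python with Metal GPU support on Apple Silicon.
# CMAKE_ARGS and FORCE_CMAKE are read by llama-cpp-python's build step;
# -DLLAMA_METAL=on is the flag documented for older releases.
CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 \
  pip install llama-cpp-python --force-reinstall --no-cache-dir
```

`--force-reinstall --no-cache-dir` ensures pip rebuilds the wheel from source with the new flags instead of reusing a previously built CPU-only wheel.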

0.1a0

- Initial alpha release. Can download, register, and run GGML models, including Llama 2 Chat models. [#1](https://github.com/simonw/llm-llama-cpp/issues/1), [#2](https://github.com/simonw/llm-llama-cpp/issues/2), [#3](https://github.com/simonw/llm-llama-cpp/issues/3), [#4](https://github.com/simonw/llm-llama-cpp/issues/4)
