Llama-cpp-python

Latest version: v0.3.8

Page 14 of 22

0.2.21

- Update llama.cpp to ggerganov/llama.cpp@64e64aa2557d97490b2fe1262b313e2f4a1607e3
- Make building llava optional by setting `CMAKE_ARGS="-DLLAVA_BUILD=OFF"` and using `LLAVA_CPP_LIB` to specify an alternative path to the shared library by abetlen in e3941d9c674dbd9891dc3ceda390daeb21f05fd1
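
As a hedged illustration of the flags above (exact pip options are a matter of preference, and the library path shown is a placeholder, not a real location), rebuilding without llava support might look like:

```shell
# Rebuild llama-cpp-python with the llava multimodal support disabled.
# CMAKE_ARGS is forwarded to CMake by the package's build backend.
CMAKE_ARGS="-DLLAVA_BUILD=OFF" pip install --no-cache-dir --force-reinstall llama-cpp-python

# Alternatively, point the bindings at a prebuilt shared library at runtime
# (illustrative path only):
# export LLAVA_CPP_LIB=/opt/llava/libllava.so
```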

0.2.20

Not secure
- Update llama.cpp to ggerganov/llama.cpp@b38a16dfcff88d547f78f52d1bea31b84a05aff7
- Add `zephyr` chat format by fakerybakery in #938
- Add `baichuan` chat format by caiyesd in #938
- Add `baichuan-2` chat format by caiyesd in #936
- Improve documentation for server chat formats by jooray in #934
- Fix typo in README by antonvice in #940
- Fix typo in the Open Orca chat format by gardner in #947
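
A chat format's job is to render OpenAI-style `messages` into the prompt string a particular model was trained on. The following is a hedged sketch in the zephyr style; the exact template the library uses may differ in detail:

```python
def format_zephyr(messages):
    # Sketch only, not the library's implementation: join role-tagged turns
    # in the zephyr layout and leave the prompt open for the assistant's reply.
    prompt = ""
    for msg in messages:
        prompt += f"<|{msg['role']}|>\n{msg['content']}</s>\n"
    return prompt + "<|assistant|>\n"
```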

0.2.19

Not secure
- Update llama.cpp to ggerganov/llama.cpp@0b871f1a04ef60e114bbe43004fd9c21114e802d
- Fix #569: `stop` parameter in chat completion API should accept str by abetlen in 128dc4731fa846ead7e684a137ca57d8931b8899
- Document server host and port parameters by jamesbraza in #768
- Do not set grammar to None when initializing LlamaGrammar by mthuurne in #834
- Add mistrallite, intel, and openchat chat formats by fakerybakery in #927
- Add support for `min_p` parameter by tk-master in #921
- Fix #929: tokenizer adding leading space when generating from empty prompt by abetlen in a34d48014192771d2e308a76c22f33bc0318d983
- Fix low-level API example by zocainViken in #925
- Fix missing package in openblas Docker image by ZisisTsatsas in #920
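
For context on the `min_p` parameter added above: min-p sampling keeps only tokens whose probability is at least `min_p` times the probability of the most likely token. A minimal sketch of that filter (the function name is illustrative, not the library's API):

```python
import math

def min_p_indices(logits, min_p=0.05):
    # Sketch of min-p filtering: softmax the logits, then keep token indices
    # whose probability is >= min_p * (probability of the top token).
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    cutoff = min_p * max(probs)
    return [i for i, p in enumerate(probs) if p >= cutoff]
```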

0.2.18

Not secure
- Update llama.cpp to ggerganov/llama.cpp@6bb4908a17150b49373b5f977685b2e180a04f6f

0.2.17

Not secure
- Update llama.cpp to ggerganov/llama.cpp@df9d1293defe783f42bc83af732d3c670552c541
- Hotfix: Set `CUDA_ARCHITECTURES=OFF` for `llava_shared` target on Windows by abetlen in 4388f3341413110217b98c4f097ac5c590bdf40b

0.2.16

Not secure
- Update llama.cpp to ggerganov/llama.cpp@a75fa576abba9d37f463580c379e4bbf1e1ad03c
- Add `set_seed` to `Llama` class by abetlen in fd41ed3a908761d286102a019a34c2938a15118d
- Fix server doc arguments by kjunggithub in #892
- Fix response_format handler in llava chat handler by abetlen in b62c44983921197ed10a7d29dc4ba920e9979380
- Fix default `max_tokens`: chat completion is now unlimited (up to context length) and completion defaults to 16 tokens, matching OpenAI defaults, by abetlen in e7962d2c733cbbeec5a37392c81f64185a9a39e8
- Fix `json_schema_to_gbnf` helper so that it takes a JSON schema string as input instead by abetlen in faeae181b1e868643c0dc28fcf039f077baf0829
- Add support for `$ref` and `$def` in `json_schema_to_gbnf` to handle more complex function schemas by abetlen in 770df344369c0630df1be14be9f9e301e7c56d24
- Update functionary chat handler for new OpenAI API by abetlen in 1b376c62b775b401653facf25a519d116aafe99a
- Fix: add default stop sequence to chatml chat format by abetlen in b84d76a844149216d511cfd8cdb9827148a1853c
- Fix sampling bug when `logits_all=False` by abetlen in 6f0b0b1b840af846938ed74d0e8170a91c40e617
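
The `max_tokens` defaults described above can be sketched as a small helper. This is a hedged illustration of the stated behavior, not the library's actual code; the function name and signature are hypothetical:

```python
def resolve_max_tokens(requested, n_ctx, n_prompt, chat=False):
    # Sketch of the defaults from the changelog entry: chat completions
    # default to "as many tokens as the remaining context allows", plain
    # completions default to 16 (matching OpenAI's defaults). An explicit
    # request is capped at the remaining context.
    remaining = n_ctx - n_prompt
    if requested is not None:
        return min(requested, remaining)
    return remaining if chat else 16
```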

© 2025 Safety CLI Cybersecurity Inc. All Rights Reserved.