llama-cpp-python

Latest version: v0.3.5


0.2.18

Not secure
- Update llama.cpp to ggerganov/llama.cpp@6bb4908a17150b49373b5f977685b2e180a04f6f

0.2.17

Not secure
- Update llama.cpp to ggerganov/llama.cpp@df9d1293defe783f42bc83af732d3c670552c541
- Hotfix: Set `CUDA_ARCHITECTURES=OFF` for `llava_shared` target on Windows by abetlen in 4388f3341413110217b98c4f097ac5c590bdf40b

0.2.16

Not secure
- Update llama.cpp to ggerganov/llama.cpp@a75fa576abba9d37f463580c379e4bbf1e1ad03c
- Add `set_seed` to `Llama` class by abetlen in fd41ed3a908761d286102a019a34c2938a15118d
- Fix server doc arguments by kjunggithub in 892
- Fix response_format handler in llava chat handler by abetlen in b62c44983921197ed10a7d29dc4ba920e9979380
- Fix default `max_tokens`: chat completion is now unlimited (up to the context length) and completion defaults to 16 tokens, matching OpenAI defaults, by abetlen in e7962d2c733cbbeec5a37392c81f64185a9a39e8
- Fix `json_schema_to_gbnf` helper so that it takes a JSON schema string as input by abetlen in faeae181b1e868643c0dc28fcf039f077baf0829
- Add support for `$ref` and `$def` in `json_schema_to_gbnf` to handle more complex function schemas by abetlen in 770df344369c0630df1be14be9f9e301e7c56d24
- Update functionary chat handler for new OpenAI api by abetlen in 1b376c62b775b401653facf25a519d116aafe99a
- Fix: add a default stop sequence to the chatml chat format by abetlen in b84d76a844149216d511cfd8cdb9827148a1853c
- Fix sampling bug when `logits_all=False` by abetlen in 6f0b0b1b840af846938ed74d0e8170a91c40e617
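The `set_seed` addition and the new 16-token completion default combine into a simple reproducibility pattern. A minimal sketch, assuming `llm` is a `llama_cpp.Llama` instance from this release; the helper name and its defaults are illustrative, not part of the library's API:

```python
def complete_reproducibly(llm, prompt, seed=42, max_tokens=16):
    """Hedged sketch: reseed before sampling so repeated calls with the
    same prompt and seed produce the same completion.

    `llm` is assumed to be a llama_cpp.Llama instance; the helper name
    and default values here are illustrative."""
    llm.set_seed(seed)  # reseed the sampler (new in 0.2.16)
    # Completions now default to 16 tokens, matching OpenAI's default.
    out = llm(prompt, max_tokens=max_tokens)
    return out["choices"][0]["text"]
```

With a real model this would be used as `llm = Llama(model_path="model.gguf")` (a placeholder path) followed by `complete_reproducibly(llm, "Q: ...")`.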

0.2.15

Not secure
- Update llama.cpp to ggerganov/llama.cpp@0a7c980b6f94a049cb804573df2d8092a34df8e4
- Add support for LLaVA 1.5 multimodal models by damian0815 and abetlen in 821
- Update OpenAI API compatibility to match the Dev Day update by abetlen in 821
- Add `seed` parameter to the completion and chat completion functions of the `Llama` class by abetlen in 86aeb9f3a14808575d2bb0076e6acb4a30907e6a
- Add JSON mode support to constrain chat completion to JSON objects by abetlen in b30b9c338bf9af316d497ea501d39f5c246900db
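The new JSON mode and `seed` parameter compose naturally. A minimal sketch, assuming `llm` is a `llama_cpp.Llama` instance and that `create_chat_completion` accepts the OpenAI-style `response_format` and `seed` arguments added in this release; the helper name is hypothetical:

```python
import json

def chat_json(llm, prompt, seed=1234):
    """Hedged sketch of JSON mode (0.2.15): constrain the reply to a
    JSON object and parse it. The helper name is illustrative."""
    result = llm.create_chat_completion(
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},  # JSON mode (new in 0.2.15)
        seed=seed,                                # seed parameter (new in 0.2.15)
    )
    # JSON mode constrains generation, so the content should parse cleanly.
    return json.loads(result["choices"][0]["message"]["content"])
```

Because the output is constrained to a JSON object, the `json.loads` call should not need a fallback for malformed text.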

0.2.14

Not secure
- Update llama.cpp to ggerganov/llama.cpp@f0b30ef7dc1360922ccbea0a8cd3918ecf15eaa7
- Add support for Hugging Face AutoTokenizer chat formats by bioshazard and abetlen in 790 and bbffdaebaa7bb04b543dbf683a07276087251f86
- Fix llama-2 chat format by earonesty in 869
- Add support for functionary chat format by abetlen in 784
- Migrate inference from the deprecated `llama_eval` API to `llama_batch` and `llama_decode` by abetlen in 795

0.2.13

Not secure
- Update llama.cpp to ggerganov/llama.cpp@51b2fc11f7f605fff49725a4540e9a6ef7b51b70
- Fix `name 'open' is not defined` exception when deleting a model by abetlen in 011b95d7f34cbfc528af75a892757bd9a20838ab
- Fix tokenization of special characters by antoine-lizee in 850


© 2024 Safety CLI Cybersecurity Inc. All Rights Reserved.