llama-cpp-python

Latest version: v0.3.5

0.2.60

Not secure
- feat: Update llama.cpp to ggerganov/llama.cpp@75cd4c77292034ecec587ecb401366f57338f7c0
- fix: Always embed metal library by abetlen in b3bfea6dbfb6ed9ce18f9a2723e0a9e4bd1da7ad
- fix: missing logprobs in response, incorrect response type for functionary by abetlen in 1ae3abbcc3af7f4a25a3ffc40b246f18039565e8
- fix(docs): incorrect tool_choice example by CISC in #1330

0.2.59

Not secure
- feat: Update llama.cpp to ggerganov/llama.cpp@ba0c7c70ab5b15f1f2be7fb0dfbe0366dda30d6c
- feat: Binary wheels for CPU, CUDA (12.1 - 12.3), Metal by abetlen, jllllll, and oobabooga in #1247
- fix: segfault when logits_all=False by abetlen in 8649d7671bd1a7c0d9cc6a5ad91c6ca286512ab3
- fix: last tokens passing to sample_repetition_penalties function by ymikhailov in #1295

0.2.58

Not secure
- feat: Update llama.cpp to ggerganov/llama.cpp@ba0c7c70ab5b15f1f2be7fb0dfbe0366dda30d6c
- feat: add support for KV cache quantization options by Limour-dev in #1307
- feat: Add logprobs support to chat completions by windspirit95 in #1311
- fix: set LLAMA_METAL_EMBED_LIBRARY=on on MacOS arm64 by bretello in #1289
- feat: Add tools/functions variables to Jinja2ChatFormatter, add function response formatting for all simple chat formats by CISC in #1273
- fix: Changed local API doc references to hosted by lawfordp2017 in #1317
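The logprobs support added in 0.2.58 follows the OpenAI-compatible chat-completions schema that llama-cpp-python targets. A minimal sketch of the request shape (field names follow the OpenAI convention and are an assumption here, not verified signatures):

```python
# Hypothetical request body for a chat completion using the log-probability
# options added in 0.2.58. Field names follow the OpenAI-compatible schema;
# they are an assumption, not confirmed llama-cpp-python parameters.
request = {
    "messages": [{"role": "user", "content": "Name a color."}],
    "logprobs": True,    # include log-probabilities for each sampled token
    "top_logprobs": 3,   # also return the 3 most likely alternative tokens
}
```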

0.2.57

Not secure
- feat: Update llama.cpp to ggerganov/llama.cpp@ac9ee6a4ad740bc1ee484ede43e9f92b5af244c1
- fix: set default embedding pooling type to unspecified by abetlen in 4084aabe867b8ec2aba1b22659e59c9318b0d1f3
- fix: Fix and optimize functionary chat handler by jeffrey-fong in #1282
- fix: json mode for basic chat formats by abetlen in 20e6815252d0efd9f015f7adbf108faaf36e3f3c
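The JSON-mode fix applies to requests that constrain output via `response_format`, which in llama-cpp-python mirrors the OpenAI schema. A sketch of such a request (the exact set of chat formats that honor this is an assumption):

```python
# Hypothetical request body exercising JSON mode, fixed for the basic chat
# formats in 0.2.57. The response_format shape follows the OpenAI-compatible
# schema; per-format support is an assumption based on the changelog entry.
request = {
    "messages": [{"role": "user", "content": "List two fruits as JSON."}],
    "response_format": {"type": "json_object"},  # constrain output to valid JSON
}
```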

0.2.56

Not secure
- feat: Update llama.cpp to ggerganov/llama.cpp@c2101a2e909ac7c08976d414e64e96c90ee5fa9e
- feat(server): Add endpoints for tokenize, detokenize and count tokens by felipelo in #1136
- feat: Switch embed to llama_get_embeddings_seq by iamlemec in #1263
- fix: Fixed json strings grammar by blacklisting character control set by ExtReMLapin in d02a9cf16ff88ad011e2eb1ce29f4d9400f13cd1
- fix: Check for existence of clip model path by kejcao in #1264
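The new server endpoints can be called with plain HTTP. The sketch below assumes the `/extras/tokenize/count` path and an `{"input": ...}` payload based on the PR description, plus a server on localhost; treat all three as assumptions:

```python
import json
import urllib.request

# Hypothetical client for the token-count endpoint added to the server in
# 0.2.56. Endpoint path and payload shape are assumptions based on the PR,
# not verified against a running server.
def count_tokens(text, base_url="http://localhost:8000"):
    payload = json.dumps({"input": text}).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/extras/tokenize/count",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["count"]
```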

0.2.55

Not secure
- feat: Update llama.cpp to ggerganov/llama.cpp@9731134296af3a6839cd682e51d9c2109a871de5
- docs: fix small typo in README: 'model know how' -> 'model knows how' by boegel in #1244
