llama-cpp-python

Latest version: v0.3.5

0.2.72

- fix(security): Remote Code Execution by Server-Side Template Injection in Model Metadata by retr0reg in b454f40a9a1787b2b5659cd2cb00819d983185df
- fix(security): Update remaining jinja chat templates to use immutable sandbox by CISC in 1441

0.2.71

Not secure
- feat: Update llama.cpp to ggerganov/llama.cpp@911b3900dded9a1cfe0f0e41b82c7a29baf3a217
- fix: Make leading bos_token optional for image chat formats, fix nanollava system message by abetlen in 77122638b4153e31d9f277b3d905c2900b536632
- fix: free last image embed in llava chat handler by abetlen in 3757328b703b2cd32dcbd5853271e3a8c8599fe7

0.2.70

Not secure
- feat: Update llama.cpp to ggerganov/llama.cpp@c0e6fbf8c380718102bd25fcb8d2e55f8f9480d1
- feat: fill-in-middle support by CISC in 1386
- fix: adding missing args in create_completion for functionary chat handler by skalade in 1430
- docs: Update README.md by eltociear in 1432
- fix: chat_format log where auto-detected format prints None by balvisio in 1434
- feat(server): Add support for setting root_path by abetlen in 0318702cdc860999ee70f277425edbbfe0e60419
- feat(ci): Add docker checks and check deps more frequently by Smartappli in 1426
- fix: detokenization case where first token does not start with a leading space by noamgat in 1375
- feat: Implement streaming for Functionary v2 + Bug fixes by jeffrey-fong in 1419
- fix: Use memmove to copy str_value kv_override by abetlen in 9f7a85571ae80d3b6ddbd3e1bae407b9f1e3448a
- feat(server): Remove temperature bounds checks for server by abetlen in 0a454bebe67d12a446981eb16028c168ca5faa81
- fix(server): Propagate flash_attn to model load by dthuerck in 1424
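The `memmove` fix above concerns copying a Python string into a fixed-size C char field safely. A minimal sketch of that general pattern with `ctypes` (not the library's actual code; the 128-byte buffer size is an assumption for illustration):

```python
import ctypes

# A fixed-size char buffer standing in for a C struct field like
# llama.cpp's kv_override str_value (size chosen for illustration).
buf = (ctypes.c_char * 128)()

value = b"tokenizer.ggml.model"

# Copy at most len(buf) - 1 bytes so the buffer stays NUL-terminated,
# using memmove rather than direct field assignment.
n = min(len(value), len(buf) - 1)
ctypes.memmove(buf, value, n)
buf[n] = b"\x00"

print(bytes(buf[:n]))  # b'tokenizer.ggml.model'
```

Bounding the copy and terminating explicitly avoids writing past the end of the fixed-size field.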

0.2.69

Not secure
- feat: Update llama.cpp to ggerganov/llama.cpp@6ecf3189e00a1e8e737a78b6d10e1d7006e050a2
- feat: Add llama-3-vision-alpha chat format by abetlen in 31b1d95a6c19f5b615a3286069f181a415f872e8
- fix: Change default value of verbose in image chat format handlers to True to match Llama by abetlen in 4f01c452b6c738dc56eacac3758119b12c57ea94
- fix: Suppress all logs when verbose=False, use hardcoded filenos to work in colab notebooks by abetlen in f116175a5a7c84569c88cad231855c1e6e59ff6e
- fix: UTF-8 handling with grammars by jsoma in 1415
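The UTF-8 fixes above relate to a common detokenization pitfall: a multi-byte UTF-8 character can be split across token boundaries. A general sketch of the buffering pattern with the standard library's incremental decoder (an illustration of the issue, not llama-cpp-python's exact implementation):

```python
import codecs

# An incremental decoder buffers incomplete UTF-8 byte sequences
# instead of raising, which is what streamed detokenization needs.
decoder = codecs.getincrementaldecoder("utf-8")()

# "é" is the two bytes 0xC3 0xA9; here it is split across chunks
# the way token byte sequences can be.
chunks = [b"caf", b"\xc3", b"\xa9"]
out = "".join(decoder.decode(chunk) for chunk in chunks)
print(out)  # café
```

Decoding each chunk independently with `bytes.decode` would raise `UnicodeDecodeError` on the lone `0xC3` byte; the incremental decoder holds it until the sequence completes.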

0.2.68

Not secure
- feat: Update llama.cpp to ggerganov/llama.cpp@77e15bec6217a39be59b9cc83d6b9afb6b0d8167
- feat: Add option to enable flash_attn to Llama params and ModelSettings by abetlen in 22d77eefd2edaf0148f53374d0cac74d0e25d06e
- fix(ci): Fix build-and-release.yaml by Smartappli in 1413

0.2.67

Not secure
- fix: Ensure image renders before text in chat formats regardless of message content order by abetlen in 3489ef09d3775f4a87fb7114f619e8ba9cb6b656
- fix(ci): Fix bug in use of upload-artifact failing to merge multiple artifacts into a single release by abetlen in d03f15bb73a1d520970357b702a9e7d4cc2a7a62

© 2024 Safety CLI Cybersecurity Inc. All Rights Reserved.