Mlipy

Latest version: v0.1.57

0.1.9

Changed:
- Client: Allow accessing endpoints without SSL certificate verification via `verify_ssl=False`.
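
A minimal sketch of the new option from the caller's side; the `SyncMLIClient` class name and the endpoint URL are assumptions, only the `verify_ssl` keyword is confirmed by this entry:

```python
# Hypothetical usage sketch; only `verify_ssl` is documented by this entry.
from mli import SyncMLIClient  # assumption: concrete client class name

# Skip SSL certificate verification, e.g. for a self-signed certificate
# on a local development endpoint. Avoid this against production hosts.
client = SyncMLIClient('https://127.0.0.1:5000/api/1.0', verify_ssl=False)
```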

0.1.8

Changed:
- Server: ENDPOINT must end with an API-version suffix such as `/api/1.0`.
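
For illustration, the difference this requirement makes (host and port are placeholders):

```python
# The endpoint must now carry an API-version suffix such as /api/1.0.
ENDPOINT = 'http://127.0.0.1:5000/api/1.0'  # accepted: ends with /api/1.0
# ENDPOINT = 'http://127.0.0.1:5000'        # rejected: suffix is missing
```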

Fixed:
- Server: Disabled `traceback` module usage.

0.1.7

Added:
- Server: `LlamaCppParams` support for the `chatml` parameter.
- Server: `CandleParams` support for the `cpu` parameter (both sketched below).
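
A rough sketch of passing both new parameters; every dict key other than `chatml` and `cpu` (notably the engine selector) is an assumption:

```python
# Hypothetical parameter dicts; only `chatml` (llama.cpp) and `cpu`
# (candle) are confirmed by this release.
llama_cpp_params = {
    'engine': 'llama.cpp',  # assumption: engine selector key
    'chatml': True,         # format the prompt with the ChatML template
}

candle_params = {
    'engine': 'candle',     # assumption: engine selector key
    'cpu': True,            # force CPU execution instead of GPU
}
```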

Changed:
- Example: `sync_demo.py` now uses keyword arguments instead of unpacking a dict (see the sketch after this list).
- Server: Raise an error if the requested model does not exist.
- mli: Moved the parameter types into a separate `params.py` module.
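
The `sync_demo.py` change in miniature; `demo_call` is a stand-in for the real client method, which this entry does not name:

```python
# Stand-in for the actual client call in sync_demo.py.
def demo_call(model: str, prompt: str) -> None:
    print(f'{model=} {prompt=}')

# Before: collect arguments in a dict and unpack it into the call.
params = {'model': 'mistral', 'prompt': 'Hello'}
demo_call(**params)

# After: pass keyword arguments directly, which reads better and lets
# static checkers validate the argument names.
demo_call(model='mistral', prompt='Hello')
```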

Fixed:
- Server: Fixed memory leak on WebSocket routes.
- Server: Wrong first characters in the prompt output.
- Client: An IPv4 address with a port now gets the `http://` prefix.

Security:
- Removed automatic model downloads via `hf_hub_download` because of security risks.
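
If you relied on the removed auto-download, you can fetch models explicitly with `hf_hub_download` from the `huggingface_hub` package; the repository and file name below are placeholders:

```python
# Explicit, auditable model download replacing the removed automatic one.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id='org/model-repo',       # placeholder repository
    filename='model.q4_k_m.gguf',   # placeholder model file
)
```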

0.1.6

Added:
- Print the ML engines' `stderr` to the debug output.
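
A generic pattern for this behavior, not mlipy's actual code: forward an engine subprocess's `stderr` to debug-level logging.

```python
# Sketch: run a stand-in "engine" command and log its stderr at DEBUG level.
import logging
import subprocess

logging.basicConfig(level=logging.DEBUG)

proc = subprocess.Popen(
    ['python', '-c', 'import sys; print("engine ready", file=sys.stderr)'],
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    text=True,
)
_, stderr = proc.communicate()

for line in stderr.splitlines():
    logging.debug('engine stderr: %s', line)
```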

Changed:
- Examples: `sync_demo.py` now tries the `quantized` and `use_flash_attn` parameters.

0.1.5

Added:
- Server: `CandleParams` now supports `quantized` and `use_flash_attn` (see the sketch after this list).
- Examples: Default ENDPOINT to `http://127.0.0.1:5000`.
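
The exact definition of `CandleParams` is not shown in this changelog; a guess at its shape, for illustration only:

```python
# Assumed shape only; the real CandleParams may carry more fields.
from typing import TypedDict

class CandleParams(TypedDict, total=False):
    quantized: bool       # run a quantized model build
    use_flash_attn: bool  # enable flash-attention kernels where supported
```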

0.1.4

Changed:
- Client: Auto-prefix (`http://` / `https://`) for `BaseMLIClient`'s `endpoint` argument.
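
A sketch of what the auto-prefixing might look like, not the library's actual implementation:

```python
# Bare endpoints get a scheme prepended; existing schemes are kept.
def normalize_endpoint(endpoint: str) -> str:
    if not endpoint.startswith(('http://', 'https://')):
        # Assumption: plain hosts default to http://.
        endpoint = 'http://' + endpoint
    return endpoint

assert normalize_endpoint('127.0.0.1:5000') == 'http://127.0.0.1:5000'
assert normalize_endpoint('https://example.com') == 'https://example.com'
```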

Fixed:
- Removed the echoed prompt from stdout for both `candle` and `llama.cpp`.
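
A sketch of the fix, not the library's actual code: both engines echo the prompt at the start of their output, so it is stripped before the completion is returned.

```python
# Strip the echoed prompt prefix from raw engine output.
def strip_echoed_prompt(output: str, prompt: str) -> str:
    if output.startswith(prompt):
        return output[len(prompt):]
    return output

assert strip_echoed_prompt('Hello, world!', 'Hello, ') == 'world!'
```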
