TurnkeyML

Latest version: v6.1.4

6.1.4

What's Changed

- Add chat templates to the llm-prompt tool (amd-pworfolk); an illustration of the idea follows this list
- Replace Conda with embeddable Python in Lemonade Server installer (jeremyfowers)
- Add `--trust-remote-load` option to oga-load tool (amd-pworfolk)
- Fix a Ryzen AI specific bug related to context length (jeremyfowers)
- Uplift the `Continue` instructions to match the Continue v1.0 changes (jeremyfowers)
- Ensure Lemonade Server shortcuts do not exit immediately when the server is already running (danielholanda)
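
Chat templates wrap a plain prompt in the conversation format a model was trained to expect. As a rough illustration of the concept only (using Hugging Face `transformers` directly, not the `llm-prompt` tool itself, with a checkpoint chosen purely as an example):

```python
# Illustration of what a chat template does, using Hugging Face transformers
# directly; this is not the lemonade llm-prompt implementation, and the
# checkpoint below is only an example.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")

messages = [{"role": "user", "content": "What is the capital of France?"}]

# Render the conversation into the model's expected prompt format.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,             # return a string instead of token IDs
    add_generation_prompt=True, # append the assistant-turn marker
)
print(prompt)
```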

6.1.3

What's Changed

- Fix default model selection on devices that do not support Hybrid (danielholanda)
- Add a new `lemonade-server` CLI for starting server and checking status (danielholanda)

**Full Changelog**: https://github.com/onnx/turnkeyml/compare/v6.1.1...v6.1.3

6.1.1

What's Changed

- Upgrade Ryzen AI SW to version 1.4.0 (amd-pworfolk, jeremyfowers)
- Add DeepSeek Hybrid models to Lemonade Server (danielholanda)
- Refactor the oga-load tool and oga.py (ramkrishna2910)
- Documentation overhaul (vgodsoe)
- New Lemonade Server demos:
  - CodeGPT (vgodsoe)
  - Microsoft AI Toolkit (danielholanda)
- Fixes:
  - Make sure that OGA models use their chat template in Lemonade Server (danielholanda)
  - Lemonade API can load checkpoints from folders on disk using `lemonade.api.from_pretrained()` (amd-pworfolk); a sketch follows this list
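
A minimal sketch of loading a checkpoint from a local folder through the Lemonade API. The folder path is hypothetical, and the `recipe` value and the returned `(model, tokenizer)` pair are assumptions about the API's shape; check the lemonade API documentation for the recipes supported on your hardware:

```python
# Hedged sketch: load a checkpoint from a folder on disk via the Lemonade API.
# The path is hypothetical, and the recipe value and (model, tokenizer) return
# pair are assumptions; consult the lemonade API docs for supported recipes.
from lemonade.api import from_pretrained

model, tokenizer = from_pretrained(
    "C:/models/my-local-checkpoint",  # local folder instead of a Hugging Face ID
    recipe="hf-cpu",                  # assumed recipe name
)

input_ids = tokenizer("Hello from a local checkpoint!", return_tensors="pt").input_ids
print(tokenizer.decode(model.generate(input_ids, max_new_tokens=30)[0]))
```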


**Full Changelog**: https://github.com/onnx/turnkeyml/compare/v6.0.3...v6.1.1

6.0.3

Breaking Changes

OpenAI-Compatible Server Model Selection

Lemonade's server now requires models to be downloaded at install time. Apps that use our installer in silent mode now have to specify which models to download. See [docs/lemonade/server_integration.md](https://github.com/onnx/turnkeyml/blob/release_603/docs/lemonade/server_integration.md) for details.

Summary of Contributions

- Add guide on how to use Continue app with Lemonade Server (jeremyfowers)
- Overhaul the lemonade help menu (jeremyfowers)
- Stop importing tkml CLI in lemonade CLI (jeremyfowers)
- Only show hybrid models when Hybrid is available (danielholanda)
- Fix oga seed and prevent default params from being overwritten (jeremyfowers)
- Improve Server Integration Documentation (danielholanda)
- Add exception handler for server's generate thread (jeremyfowers)
- Improve server logger in debug mode (jeremyfowers)
- Add MMLU accuracy test command format (vgodsoe)

6.0.2

What's Changed

- Add the "echo" parameter to OpenAI completions (danielholanda)
- New dedicated report tool for LLM CSVs, as well as ASCII tables (amd-pworfolk)
- Properly raise and transmit server model load failures (jeremyfowers)
- Add documentation for Lemonade_Server_Installer.exe (jeremyfowers)
- Add telemetry to server: performance, input tokens, output tokens, and prompt tracing (danielholanda)
- Ensure that Ryzen AI Hybrid support is not installed on incompatible devices (danielholanda)
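
With `echo=True`, the OpenAI-style completions endpoint returns the prompt text along with the generated continuation. A hedged sketch using the `openai` Python client pointed at a local server; the base URL, port, and model name are assumptions, so check the Lemonade Server documentation for the real values on your install:

```python
# Hedged sketch of the "echo" completions parameter against an OpenAI-compatible
# server. The base_url, port, and model name are assumptions, not documented values.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/api/v0", api_key="none")

response = client.completions.create(
    model="some-local-model",      # hypothetical model ID; query /models for real ones
    prompt="The quick brown fox",
    max_tokens=16,
    echo=True,                     # include the prompt in the returned text
)
print(response.choices[0].text)    # prompt + completion when echo=True
```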


**Full Changelog**: https://github.com/onnx/turnkeyml/compare/v6.0.1...v6.0.2

6.0.1

Summary

This update extends OpenAI-compatible endpoints and enhances server reliability.

Summary of Contributions

- Significantly improve server reliability by avoiding race conditions (danielholanda)
- Curate the list of Hybrid models shared via the /models server endpoint (danielholanda)
- Increase the server's max new tokens default value to 1500 (jeremyfowers)
- Avoid sudden closure on server startup (danielholanda)
- Extend OpenAI-compatible endpoints: `stop` parameter and `/completions` endpoint (danielholanda); a sketch follows this list
- Fix server test name collision, add no-op test, and update black (jeremyfowers)
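
The `stop` parameter ends generation as soon as any of the listed strings appears in the output. Same assumptions as the 6.0.2 sketch above (base URL, port, and model name are placeholders, not documented values):

```python
# Hedged sketch of the "stop" parameter on the OpenAI-compatible /completions
# endpoint. Base URL, port, and model name are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/api/v0", api_key="none")

response = client.completions.create(
    model="some-local-model",          # hypothetical model ID
    prompt="List three primary colors:\n1.",
    max_tokens=64,
    stop=["\n4.", "\n\n"],             # cut generation at either sequence
)
print(response.choices[0].text)
```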

**Full Changelog**: https://github.com/onnx/turnkeyml/compare/v6.0.0...v6.0.1
