llama-stack-client

Latest version: v0.1.9


0.1.9

Published along with [Llama Stack v0.1.9](https://github.com/meta-llama/llama-stack/releases/tag/v0.1.9)

**Full Changelog**: https://github.com/meta-llama/llama-stack-client-python/compare/v0.1.8...v0.1.9

0.1.8

Published along with [Llama Stack v0.1.8](https://github.com/meta-llama/llama-stack/releases/tag/v0.1.8)


What's Changed
* feat: datasets api updates by yanxi0830 in https://github.com/meta-llama/llama-stack-client-python/pull/203
* Sync updates from stainless branch: hardikjshah/dev by hardikjshah in https://github.com/meta-llama/llama-stack-client-python/pull/204
* fix(agent): better error handling by ehhuang in https://github.com/meta-llama/llama-stack-client-python/pull/207
* feat: add weighted_average aggregation function support by SLR722 in https://github.com/meta-llama/llama-stack-client-python/pull/208
* fix: fix duplicate model get help text by reidliu41 in https://github.com/meta-llama/llama-stack-client-python/pull/188
* simplify import paths; Sync updates from stainless branch: ehhuang/dev by ehhuang in https://github.com/meta-llama/llama-stack-client-python/pull/205
* Sync updates from stainless branch: yanxi0830/dev by yanxi0830 in https://github.com/meta-llama/llama-stack-client-python/pull/209
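Among the changes above, PR #208 adds a `weighted_average` aggregation function for scoring results. The library's actual signature isn't shown in these notes; as a rough, self-contained illustration of what such an aggregation computes (all names here are hypothetical, not the library's API):

```python
def weighted_average(scores, weights):
    """Aggregate per-row scores into a single number, weighting each row.

    scores, weights: equal-length sequences of floats.
    """
    if len(scores) != len(weights):
        raise ValueError("scores and weights must have the same length")
    total_weight = sum(weights)
    if total_weight == 0:
        raise ValueError("weights must not sum to zero")
    return sum(s * w for s, w in zip(scores, weights)) / total_weight

# Two rows scored 0.5 and 1.0, the second counted twice as heavily:
# (0.5 * 1.0 + 1.0 * 2.0) / 3.0
print(weighted_average([0.5, 1.0], [1.0, 2.0]))
```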

New Contributors
* reidliu41 made their first contribution in https://github.com/meta-llama/llama-stack-client-python/pull/188

**Full Changelog**: https://github.com/meta-llama/llama-stack-client-python/compare/v0.1.7...v0.1.8

0.1.7

Published along with [Llama Stack v0.1.7](https://github.com/meta-llama/llama-stack/releases/tag/v0.1.7)

What's Changed
* Add `-h` help flag support to all CLI commands by alinaryan in https://github.com/meta-llama/llama-stack-client-python/pull/185
* feat: update react with new agent api by yanxi0830 in https://github.com/meta-llama/llama-stack-client-python/pull/189
* feat: autogen llama-stack-client CLI reference doc by yanxi0830 in https://github.com/meta-llama/llama-stack-client-python/pull/190
* chore: remove litellm type conversion by ehhuang in https://github.com/meta-llama/llama-stack-client-python/pull/193
* chore: AsyncAgent should use ToolResponse instead of ToolResponseMessage by ehhuang in https://github.com/meta-llama/llama-stack-client-python/pull/197
* fix: validate endpoint url by cdoern in https://github.com/meta-llama/llama-stack-client-python/pull/196
* fix: react agent by ehhuang in https://github.com/meta-llama/llama-stack-client-python/pull/200
* Sync updates from stainless branch: main by yanxi0830 in https://github.com/meta-llama/llama-stack-client-python/pull/198
* chore: Sync updates from stainless branch: ehhuang/dev by ehhuang in https://github.com/meta-llama/llama-stack-client-python/pull/199
* feat(agent): support multiple tool calls by ehhuang in https://github.com/meta-llama/llama-stack-client-python/pull/192
* feat: `llama-stack-client providers inspect PROVIDER_ID` by cdoern in https://github.com/meta-llama/llama-stack-client-python/pull/181
* chore: Sync updates from stainless branch: main by yanxi0830 in https://github.com/meta-llama/llama-stack-client-python/pull/201
* fix: `llama-stack-client provider inspect` should use retrieve by cdoern in https://github.com/meta-llama/llama-stack-client-python/pull/202
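Two of the CLI changes above (PRs #185, #181, and #202) are visible directly from the command line. Assuming the client is installed and configured against a running Llama Stack server, usage looks roughly like this (the provider ID below is hypothetical):

```shell
# Every subcommand now accepts -h as a shorthand for --help (PR #185)
llama-stack-client providers -h

# Inspect a single provider by its ID (PRs #181 and #202)
llama-stack-client providers inspect ollama
```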

New Contributors
* alinaryan made their first contribution in https://github.com/meta-llama/llama-stack-client-python/pull/185

**Full Changelog**: https://github.com/meta-llama/llama-stack-client-python/compare/v0.1.6...v0.1.7

0.1.6

Published along with [Llama Stack v0.1.6](https://github.com/meta-llama/llama-stack/releases/tag/v0.1.6)

What's Changed
* feat: unify max infer iters with server/client tools by yanxi0830 in https://github.com/meta-llama/llama-stack-client-python/pull/173
* chore: api sync, deprecate allow_resume_turn + rename task_config->benchmark_config (Sync updates from stainless branch: yanxi0830/dev) by yanxi0830 in https://github.com/meta-llama/llama-stack-client-python/pull/176
* fix: remove the alpha suffix in run_benchmark.py by SLR722 in https://github.com/meta-llama/llama-stack-client-python/pull/179
* chore: Sync updates from stainless branch: ehhuang/dev by ehhuang in https://github.com/meta-llama/llama-stack-client-python/pull/182
* feat: support client tool output metadata by ehhuang in https://github.com/meta-llama/llama-stack-client-python/pull/180
* chore: use rich to format logs by ehhuang in https://github.com/meta-llama/llama-stack-client-python/pull/177
* feat: new Agent API by ehhuang in https://github.com/meta-llama/llama-stack-client-python/pull/178
* fix: react agent with custom tool parser n_iters by yanxi0830 in https://github.com/meta-llama/llama-stack-client-python/pull/184
* fix(agent): initialize toolgroups/client_tools by ehhuang in https://github.com/meta-llama/llama-stack-client-python/pull/186
* feat: async agent wrapper by yanxi0830 in https://github.com/meta-llama/llama-stack-client-python/pull/169
* feat(agent): support plain function as client_tool by ehhuang in https://github.com/meta-llama/llama-stack-client-python/pull/187
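The last item above (PR #187) lets a plain Python function act as a client tool, without a wrapper class. Conceptually, the client can derive a tool definition from the function's signature and docstring; the sketch below shows that idea using only the standard library, and is not the library's actual implementation (the dict layout is invented for illustration):

```python
import inspect

def get_weather(city: str, unit: str = "celsius") -> str:
    """Return a short weather description for a city."""
    return f"Sunny in {city} (reported in {unit})."

def tool_definition_from_function(fn):
    """Derive a minimal tool definition from a plain function:
    name from __name__, description from the docstring, and one
    parameter entry per argument, marked required when it has
    no default value."""
    params = {}
    for name, p in inspect.signature(fn).parameters.items():
        params[name] = {
            "type": getattr(p.annotation, "__name__", "str"),
            "required": p.default is inspect.Parameter.empty,
        }
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn) or "",
        "parameters": params,
    }

tool = tool_definition_from_function(get_weather)
print(tool["name"], sorted(tool["parameters"]))
```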


**Full Changelog**: https://github.com/meta-llama/llama-stack-client-python/compare/v0.1.5...v0.1.6

0.1.5

Published along with [Llama Stack v0.1.5](https://github.com/meta-llama/llama-stack/releases/tag/v0.1.5)

What's Changed
* feat: add support for chat sessions by cdoern in https://github.com/meta-llama/llama-stack-client-python/pull/167
* fix react agent by hardikjshah in https://github.com/meta-llama/llama-stack-client-python/pull/172
* fix: React Agent for non-llama models by hardikjshah in https://github.com/meta-llama/llama-stack-client-python/pull/174


**Full Changelog**: https://github.com/meta-llama/llama-stack-client-python/compare/v0.1.4...v0.1.5

0.1.4

What's Changed
* feat: Sync updates from stainless branch: ehhuang/dev by ehhuang in https://github.com/meta-llama/llama-stack-client-python/pull/149
* Update CODEOWNERS by SLR722 in https://github.com/meta-llama/llama-stack-client-python/pull/151
* chore: deprecate eval task (Sync updates from stainless branch: main) by yanxi0830 in https://github.com/meta-llama/llama-stack-client-python/pull/150
* refine the benchmark eval UX by SLR722 in https://github.com/meta-llama/llama-stack-client-python/pull/156
* fix: React agent should be able to work with provided config by hardikjshah in https://github.com/meta-llama/llama-stack-client-python/pull/146
* feat (1/n): agents resume turn (Sync updates from stainless branch: yanxi0830/dev) by yanxi0830 in https://github.com/meta-llama/llama-stack-client-python/pull/157
* v0.1.4 - Sync updates from stainless branch: yanxi0830/dev by yanxi0830 in https://github.com/meta-llama/llama-stack-client-python/pull/164

New Contributors
* hardikjshah made their first contribution in https://github.com/meta-llama/llama-stack-client-python/pull/146

**Full Changelog**: https://github.com/meta-llama/llama-stack-client-python/compare/v0.1.3...v0.1.4
