Published along with [Llama Stack v0.1.6](https://github.com/meta-llama/llama-stack/releases/tag/v0.1.6)
## What's Changed
* feat: unify max infer iters with server/client tools by yanxi0830 in https://github.com/meta-llama/llama-stack-client-python/pull/173
* chore: api sync, deprecate allow_resume_turn + rename task_config->benchmark_config (Sync updates from stainless branch: yanxi0830/dev) by yanxi0830 in https://github.com/meta-llama/llama-stack-client-python/pull/176
* fix: remove the alpha suffix in run_benchmark.py by SLR722 in https://github.com/meta-llama/llama-stack-client-python/pull/179
* chore: Sync updates from stainless branch: ehhuang/dev by ehhuang in https://github.com/meta-llama/llama-stack-client-python/pull/182
* feat: support client tool output metadata by ehhuang in https://github.com/meta-llama/llama-stack-client-python/pull/180
* chore: use rich to format logs by ehhuang in https://github.com/meta-llama/llama-stack-client-python/pull/177
* feat: new Agent API by ehhuang in https://github.com/meta-llama/llama-stack-client-python/pull/178
* fix: react agent with custom tool parser n_iters by yanxi0830 in https://github.com/meta-llama/llama-stack-client-python/pull/184
* fix(agent): initialize toolgroups/client_tools by ehhuang in https://github.com/meta-llama/llama-stack-client-python/pull/186
* feat: async agent wrapper by yanxi0830 in https://github.com/meta-llama/llama-stack-client-python/pull/169
* feat(agent): support plain function as client_tool by ehhuang in https://github.com/meta-llama/llama-stack-client-python/pull/187
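Several of the agent-related changes above (the new Agent API in #178 and plain functions as client tools in #187) can be sketched together. The example below is hypothetical and not taken from the release: the tool is an ordinary Python function whose name, type hints, and docstring the library is assumed to inspect when registering it as a client tool, and the `Agent` wiring is shown only in comments because it requires a running Llama Stack server (the model name and `base_url` are placeholders).

```python
# A plain Python function used as a client tool: the library is assumed to
# derive the tool's name, parameters, and description from the function
# signature, type hints, and docstring (hypothetical example).
def get_weather(city: str) -> str:
    """Return a short weather summary for the given city."""
    # A real tool would call a weather service; hardcoded for illustration.
    return f"The weather in {city} is sunny."


# Wiring into the new Agent API (commented out: it needs a running server;
# names below are placeholders, not confirmed from the release notes):
#
# from llama_stack_client import LlamaStackClient
# from llama_stack_client.lib.agents.agent import Agent
#
# client = LlamaStackClient(base_url="http://localhost:8321")
# agent = Agent(
#     client,
#     model="meta-llama/Llama-3.3-70B-Instruct",
#     instructions="You are a helpful assistant.",
#     tools=[get_weather],  # plain function passed directly (per #187)
# )
```

The function itself stays an ordinary callable, so it can be unit-tested without any agent or server in the loop.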
**Full Changelog**: https://github.com/meta-llama/llama-stack-client-python/compare/v0.1.5...v0.1.6