What's Changed
* [FIX] Proxy - Set different locations per Vertex AI deployment on the LiteLLM proxy by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2234
* fix(proxy_server.py): introduce a beta endpoint for admins to view global spend by krrishdholakia in https://github.com/BerriAI/litellm/pull/2236
* [FEAT] Track which models support function calling by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2241
* [FIX] Race condition in custom callbacks where async streaming callbacks were triggered twice by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2240
* [WIP] Allow proxy admin to add others to view global spend by krrishdholakia in https://github.com/BerriAI/litellm/pull/2231
* 👉 Support for Mistral AI tool calling is live now: https://docs.litellm.ai/docs/providers/mistral (a usage sketch follows this list)
* Check whether a model supports function calling or parallel function calling: https://docs.litellm.ai/docs/completion/function_call
![Code snippet: checking which models support function calling](https://github.com/BerriAI/litellm/assets/29436595/8e591e0c-6ee9-49d7-92e9-7a69e69fca18)
**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.27.14...v1.27.15