Magentic

Latest version: v0.26.0


0.26.0

What's Changed
* Return usage stats on AssistantMessage by jackmpcollins in https://github.com/jackmpcollins/magentic/pull/214

Example of a non-streamed response, where usage is immediately available:

```python
from magentic import OpenaiChatModel, UserMessage

chat_model = OpenaiChatModel("gpt-3.5-turbo", seed=42)
message = chat_model.complete(messages=[UserMessage("Say hello!")])

print(message.usage)
# > Usage(input_tokens=10, output_tokens=9)
```


Example of a streamed response, where usage only becomes available after the stream has been processed:

```python
from magentic import OpenaiChatModel, UserMessage
from magentic.streaming import StreamedStr

chat_model = OpenaiChatModel("gpt-3.5-turbo", seed=42)
message = chat_model.complete(messages=[UserMessage("Say hello!")], output_types=[StreamedStr])

print(message.usage)
# > None, because the stream has not been processed yet

# Process the stream (convert StreamedStr to str)
str(message.content)

print(message.usage)
# > Usage(input_tokens=10, output_tokens=9)
```
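
Iterating over the `StreamedStr` consumes the stream the same way, so usage should likewise be populated once the stream is exhausted. A minimal sketch (the chunk-by-chunk loop is illustrative; only the `str()` conversion is shown above):

```python
from magentic import OpenaiChatModel, UserMessage
from magentic.streaming import StreamedStr

chat_model = OpenaiChatModel("gpt-3.5-turbo", seed=42)
message = chat_model.complete(messages=[UserMessage("Say hello!")], output_types=[StreamedStr])

# Consume the stream chunk by chunk instead of converting with str()
for chunk in message.content:
    print(chunk, end="", flush=True)

print(message.usage)  # populated now that the stream has been fully processed
```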


**Full Changelog**: https://github.com/jackmpcollins/magentic/compare/v0.25.0...v0.26.0

0.25.0

What's Changed
* Switch AnthropicChatModel to use streaming by jackmpcollins in https://github.com/jackmpcollins/magentic/pull/215. `StreamedStr` now streams correctly, but object streaming is waiting on Anthropic support for streaming array responses.
```python
from magentic import prompt, StreamedStr
from magentic.chat_model.anthropic_chat_model import AnthropicChatModel


@prompt(
    "Tell me about {topic}.",
    model=AnthropicChatModel("claude-3-opus-20240229"),
)
def tell_me_about(topic: str) -> StreamedStr: ...


for chunk in tell_me_about("chocolate"):
    print(chunk, end="", flush=True)
```
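
Async streaming should work the same way; a minimal sketch, assuming `AsyncStreamedStr` (exported from `magentic`) is also supported by `AnthropicChatModel`:

```python
import asyncio

from magentic import AsyncStreamedStr, prompt
from magentic.chat_model.anthropic_chat_model import AnthropicChatModel


@prompt(
    "Tell me about {topic}.",
    model=AnthropicChatModel("claude-3-opus-20240229"),
)
async def tell_me_about_async(topic: str) -> AsyncStreamedStr: ...


async def main() -> None:
    # Async prompt-functions are awaited; the result is then iterated asynchronously
    async for chunk in await tell_me_about_async("chocolate"):
        print(chunk, end="", flush=True)


asyncio.run(main())
```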

* Add optional custom_llm_provider param for litellm by entropi in https://github.com/jackmpcollins/magentic/pull/221 (see the sketch after this list)
* Add tests for LiteLLM async callbacks by jackmpcollins in https://github.com/jackmpcollins/magentic/pull/223
* Tidy up: Combine openai streamed_tool_call functions by jackmpcollins in https://github.com/jackmpcollins/magentic/pull/225
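
A minimal sketch of the new parameter; the model name, API base, and provider value here are illustrative, assuming `custom_llm_provider` is passed through to litellm's completion call:

```python
from magentic import prompt
from magentic.chat_model.litellm_chat_model import LitellmChatModel


@prompt(
    "Say hello to {name}.",
    model=LitellmChatModel(
        "my-custom-model",  # hypothetical model name for a self-hosted deployment
        api_base="http://localhost:8000",
        custom_llm_provider="openai",  # tell litellm which provider API shape to use
    ),
)
def say_hello(name: str) -> str: ...
```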

New Contributors
* entropi made their first contribution in https://github.com/jackmpcollins/magentic/pull/221

**Full Changelog**: https://github.com/jackmpcollins/magentic/compare/v0.24.0...v0.25.0

0.25.0a0

Prerelease for testing PR 214

0.24.0

> [!WARNING]
> The default model for magentic is now gpt-4o instead of gpt-4-turbo. See [Configuration](https://magentic.dev/configuration/) for how to change this.
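
To keep the previous default, either set the `MAGENTIC_OPENAI_MODEL` environment variable or use the chat model as a context manager; a minimal sketch based on the configuration docs:

```python
from magentic import OpenaiChatModel, prompt


@prompt("Say hello to {name}.")
def say_hello(name: str) -> str: ...


# Calls made inside this block use gpt-4-turbo instead of the new gpt-4o default
with OpenaiChatModel("gpt-4-turbo"):
    say_hello("World")
```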

What's Changed
* docs: update README.md by eltociear in https://github.com/jackmpcollins/magentic/pull/206
* Make GPT-4o the default OpenAI model by jackmpcollins in https://github.com/jackmpcollins/magentic/pull/212
* Skip validation for message serialization by jackmpcollins in https://github.com/jackmpcollins/magentic/pull/213

New Contributors
* eltociear made their first contribution in https://github.com/jackmpcollins/magentic/pull/206

**Full Changelog**: https://github.com/jackmpcollins/magentic/compare/v0.23.0...v0.24.0

0.23.0

What's Changed

- 🦙 Ollama can now return structured outputs / function calls (it takes a little prompting to make it reliable).

```python
from magentic import prompt
from magentic.chat_model.litellm_chat_model import LitellmChatModel


@prompt(
    "Count to {n}. Use the tool to return in the format [1, 2, 3, ...]",
    model=LitellmChatModel("ollama_chat/llama2", api_base="http://localhost:11434"),
)
def count_to(n: int) -> list[int]: ...


count_to(5)
# > [1, 2, 3, 4, 5]
```


PRs
* poetry update by jackmpcollins in https://github.com/jackmpcollins/magentic/pull/202
* Support ollama structured outputs / function calling by jackmpcollins in https://github.com/jackmpcollins/magentic/pull/204


**Full Changelog**: https://github.com/jackmpcollins/magentic/compare/v0.22.0...v0.23.0

0.22.0

What's Changed

- 🚀 Forced function calling using the new `tool_choice: "required"` argument from OpenAI. This means no more `StructuredOutputError` caused by the model returning a string when `str` was not in the return annotation. This applies to prompt-functions with a union return type; single return types were already forced. See the sketch below.
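
For illustration, a hedged sketch of a prompt-function with a union return type; the model class and prompt here are hypothetical:

```python
from pydantic import BaseModel

from magentic import prompt


class Quote(BaseModel):
    text: str
    author: str


# Because str is not in the union, the model previously could still reply with
# plain text and trigger StructuredOutputError; tool_choice "required" now
# forces it to call one of the tools (Quote or list[Quote]).
@prompt("Give me one or more famous quotes about {topic}.")
def get_quotes(topic: str) -> Quote | list[Quote]: ...
```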

PRs

* Use tool_choice required for OpenaiChatModel by jackmpcollins in https://github.com/jackmpcollins/magentic/pull/201
* Bump tqdm from 4.66.2 to 4.66.3 by dependabot in https://github.com/jackmpcollins/magentic/pull/200
* Bump mkdocs from 1.5.3 to 1.6.0 by dependabot in https://github.com/jackmpcollins/magentic/pull/198
* Bump pytest from 8.1.1 to 8.2.0 by dependabot in https://github.com/jackmpcollins/magentic/pull/197
* Bump mypy from 1.9.0 to 1.10.0 by dependabot in https://github.com/jackmpcollins/magentic/pull/196

**Full Changelog**: https://github.com/jackmpcollins/magentic/compare/v0.21.1...v0.22.0
