# Magentic

Latest version: v0.32.0

## 0.27.0

### What's Changed
* Add peek, apeek, adropwhile functions by jackmpcollins in https://github.com/jackmpcollins/magentic/pull/229 (see the sketch after this list)
* Update anthropic_chat_model.py to conform with latest anthropic package by myousefi in https://github.com/jackmpcollins/magentic/pull/239
* Bump requests from 2.31.0 to 2.32.0 by dependabot in https://github.com/jackmpcollins/magentic/pull/218
* Bump jinja2 from 3.1.3 to 3.1.4 by dependabot in https://github.com/jackmpcollins/magentic/pull/203
* Bump urllib3 from 2.2.1 to 2.2.2 by dependabot in https://github.com/jackmpcollins/magentic/pull/238
* Bump tornado from 6.4 to 6.4.1 by dependabot in https://github.com/jackmpcollins/magentic/pull/233
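
The release notes don't show these helpers in use, so here is a rough local sketch of the semantics the names suggest (reimplemented for illustration; magentic's actual signatures may differ):

```python
from itertools import chain
from typing import AsyncIterator, Callable, Iterator, TypeVar

T = TypeVar("T")


def peek(iterator: Iterator[T]) -> tuple[T, Iterator[T]]:
    """Return the first item plus an iterator that still yields it."""
    first = next(iterator)
    return first, chain([first], iterator)


async def adropwhile(
    predicate: Callable[[T], object], aiterable: AsyncIterator[T]
) -> AsyncIterator[T]:
    """Async analogue of itertools.dropwhile: skip items while predicate holds."""
    async for item in aiterable:
        if not predicate(item):
            yield item
            break
    async for item in aiterable:
        yield item


first, stream = peek(iter("abc"))
# first == "a"; stream still yields "a", "b", "c"
```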

### New Contributors
* myousefi made their first contribution in https://github.com/jackmpcollins/magentic/pull/239

**Full Changelog**: https://github.com/jackmpcollins/magentic/compare/v0.26.0...v0.27.0

## 0.26.0

### What's Changed
* Return usage stats on AssistantMessage by jackmpcollins in https://github.com/jackmpcollins/magentic/pull/214

Example of a non-streamed response, where usage is immediately available:

```python
from magentic import OpenaiChatModel, UserMessage

chat_model = OpenaiChatModel("gpt-3.5-turbo", seed=42)
message = chat_model.complete(messages=[UserMessage("Say hello!")])

print(message.usage)
# > Usage(input_tokens=10, output_tokens=9)
```

Example of a streamed response, where usage only becomes available after the stream has been processed:

```python
from magentic import OpenaiChatModel, UserMessage
from magentic.streaming import StreamedStr

chat_model = OpenaiChatModel("gpt-3.5-turbo", seed=42)
message = chat_model.complete(messages=[UserMessage("Say hello!")], output_types=[StreamedStr])

print(message.usage)
# > None  (the stream has not been processed yet)

# Process the stream (convert StreamedStr to str)
str(message.content)

print(message.usage)
# > Usage(input_tokens=10, output_tokens=9)
```


**Full Changelog**: https://github.com/jackmpcollins/magentic/compare/v0.25.0...v0.26.0

## 0.25.0

### What's Changed
* Switch AnthropicChatModel to use streaming by jackmpcollins in https://github.com/jackmpcollins/magentic/pull/215. `StreamedStr` now streams correctly, but object streaming is waiting on Anthropic support for streaming array responses.
```python
from magentic import prompt, StreamedStr
from magentic.chat_model.anthropic_chat_model import AnthropicChatModel


@prompt(
    "Tell me about {topic}.",
    model=AnthropicChatModel("claude-3-opus-20240229"),
)
def tell_me_about(topic: str) -> StreamedStr: ...


for chunk in tell_me_about("chocolate"):
    print(chunk, end="", flush=True)
```
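
An async variant should work the same way; a minimal sketch, assuming `AsyncStreamedStr` (which magentic exports) mirrors `StreamedStr` for async prompt-functions:

```python
import asyncio

from magentic import AsyncStreamedStr, prompt
from magentic.chat_model.anthropic_chat_model import AnthropicChatModel


@prompt(
    "Tell me about {topic}.",
    model=AnthropicChatModel("claude-3-opus-20240229"),
)
async def tell_me_about_async(topic: str) -> AsyncStreamedStr: ...


async def main() -> None:
    output = await tell_me_about_async("chocolate")
    async for chunk in output:
        print(chunk, end="", flush=True)


asyncio.run(main())
```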

* Add optional custom_llm_provider param for litellm by entropi in https://github.com/jackmpcollins/magentic/pull/221 (see the sketch after this list)
* Add tests for LiteLLM async callbacks by jackmpcollins in https://github.com/jackmpcollins/magentic/pull/223
* Tidy up: Combine openai streamed_tool_call functions by jackmpcollins in https://github.com/jackmpcollins/magentic/pull/225
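
A hedged sketch of how the new param might be used; this assumes `custom_llm_provider` is a keyword argument on `LitellmChatModel` that gets forwarded to litellm's completion call, and the model name and endpoint below are placeholders:

```python
from magentic import prompt
from magentic.chat_model.litellm_chat_model import LitellmChatModel


@prompt(
    "Say hello to {name}.",
    model=LitellmChatModel(
        "my-deployed-model",  # placeholder model name
        api_base="http://localhost:8000",  # placeholder endpoint
        custom_llm_provider="openai",  # tells litellm which provider protocol to use
    ),
)
def say_hello(name: str) -> str: ...
```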

### New Contributors
* entropi made their first contribution in https://github.com/jackmpcollins/magentic/pull/221

**Full Changelog**: https://github.com/jackmpcollins/magentic/compare/v0.24.0...v0.25.0

## 0.25.0a0

Prerelease for testing https://github.com/jackmpcollins/magentic/pull/214

## 0.24.0

> [!WARNING]
> The default model for magentic is now gpt-4o instead of gpt-4-turbo. See [Configuration](https://magentic.dev/configuration/) for how to change this.
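
To keep the old behavior, pin the model explicitly; a minimal sketch (passing the model per prompt-function follows the existing examples, while the `MAGENTIC_OPENAI_MODEL` environment variable is described in the linked Configuration docs):

```python
from magentic import OpenaiChatModel, prompt


# Pin the previous default model for a single prompt-function...
@prompt("Say hello!", model=OpenaiChatModel("gpt-4-turbo"))
def say_hello() -> str: ...


# ...or pin it globally before running, e.g.:
#   export MAGENTIC_OPENAI_MODEL=gpt-4-turbo
```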

### What's Changed
* docs: update README.md by eltociear in https://github.com/jackmpcollins/magentic/pull/206
* Make GPT-4o the default OpenAI model by jackmpcollins in https://github.com/jackmpcollins/magentic/pull/212
* Skip validation for message serialization by jackmpcollins in https://github.com/jackmpcollins/magentic/pull/213

### New Contributors
* eltociear made their first contribution in https://github.com/jackmpcollins/magentic/pull/206

**Full Changelog**: https://github.com/jackmpcollins/magentic/compare/v0.23.0...v0.24.0

## 0.23.0

### What's Changed

- 🦙 Ollama can now return structured outputs / function calls (it takes a little prompting to make it reliable).

```python
from magentic import prompt
from magentic.chat_model.litellm_chat_model import LitellmChatModel


@prompt(
    "Count to {n}. Use the tool to return in the format [1, 2, 3, ...]",
    model=LitellmChatModel("ollama_chat/llama2", api_base="http://localhost:11434"),
)
def count_to(n: int) -> list[int]: ...


count_to(5)
# > [1, 2, 3, 4, 5]
```
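
The same mechanism should extend to Pydantic models as return types; a hedged sketch (the `Superhero` model and prompt wording are illustrative, not from the release notes):

```python
from magentic import prompt
from magentic.chat_model.litellm_chat_model import LitellmChatModel
from pydantic import BaseModel


class Superhero(BaseModel):
    name: str
    age: int


# Explicitly asking the model to "use the tool" helps reliability, as noted above.
@prompt(
    "Create a Superhero named {name}. Use the tool to return it.",
    model=LitellmChatModel("ollama_chat/llama2", api_base="http://localhost:11434"),
)
def create_superhero(name: str) -> Superhero: ...


create_superhero("Garden Man")
# Returns a Superhero instance parsed from the model's tool call
```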


### PRs
* poetry update by jackmpcollins in https://github.com/jackmpcollins/magentic/pull/202
* Support ollama structured outputs / function calling by jackmpcollins in https://github.com/jackmpcollins/magentic/pull/204


**Full Changelog**: https://github.com/jackmpcollins/magentic/compare/v0.22.0...v0.23.0
