l2m2

Latest version: v0.0.50

0.0.50

Added

- Support for OpenAI's [updated `gpt-4o` model](https://help.openai.com/en/articles/6825453-chatgpt-release-notes).
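
For context, a call to the newly supported model through l2m2 might look roughly like the sketch below. The client interface (`LLMClient`, `add_provider`, `call`) is assumed from the library's README and may differ between versions.

```python
# Rough sketch of calling the updated gpt-4o through l2m2.
# The LLMClient / add_provider / call names are assumptions taken from the
# project's README, not verified against this specific release.
from l2m2.client import LLMClient

client = LLMClient()
client.add_provider("openai", "sk-...")  # your OpenAI API key

response = client.call(
    model="gpt-4o",
    prompt="Summarize the key changes in this release in one sentence.",
)
print(response)
```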

0.0.49

Added

- Support for Google's [Gemini 2.5 Pro](https://blog.google/technology/google-deepmind/gemini-model-thinking-updates-march-2025/) model released yesterday.
- Support for [`mistral-saba`](https://mistral.ai/news/mistral-saba) via both Mistral Cloud and Groq.

Removed

- `mixtral-8x22b` has been removed as it was deprecated by Groq in March of 2025.

Changed

- Updated various models' max `temperature` values via Groq to be consistent with Groq's actual limits.
  - `mistral-large`: 1.0 → 1.5
  - `mistral-small`: 1.0 → 1.5
  - `ministral-3b`: 1.0 → 1.5
  - `ministral-8b`: 1.0 → 1.5
  - `gemma-2-9b`: 1.0 → 1.5
- Updated various models' max `max_tokens` values to be consistent with actual provider limits.
  - `gemini-2.0-pro` (via Google): 8192 → 2<sup>31</sup>-1
  - `gemma-2-9b` (via Groq): 2<sup>16</sup>-1 → 2<sup>13</sup>-1

0.0.48

Added

- Support for OpenAI's [o1-pro](https://platform.openai.com/docs/models/o1-pro) model released this week.

Changed

- Migrated OpenAI calls from the legacy Chat Completion API to the new [Responses API](https://community.openai.com/t/introducing-the-responses-api/1140929).
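
For reference, the shape of the two endpoints differs roughly as follows. This is a minimal sketch using the official `openai` Python SDK; the model name and prompt are placeholders, and it does not show how l2m2 wraps these calls internally.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Legacy Chat Completions call (the style used before this release).
chat = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(chat.choices[0].message.content)

# Roughly equivalent call against the newer Responses API.
resp = client.responses.create(
    model="gpt-4o",
    input="Hello!",
)
print(resp.output_text)
```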

0.0.47

Added

- Support for [Command-A](https://cohere.com/blog/command-a), Cohere's latest model released today.

0.0.46

Added

- Support for [GPT-4.5](https://openai.com/index/introducing-gpt-4-5/) released on February 27, 2025.

Fixed

- `o1` and `o3-mini` now correctly support native JSON mode.
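
As a point of reference, native JSON mode on OpenAI's side is requested roughly as in the sketch below, using the `openai` SDK against the Chat Completions endpoint; how l2m2 surfaces this option is not shown here, and the model name is just an example.

```python
from openai import OpenAI

client = OpenAI()

# Native JSON mode constrains the model to emit a valid JSON object.
# OpenAI requires the prompt itself to mention JSON when this mode is used.
completion = client.chat.completions.create(
    model="o3-mini",
    messages=[
        {"role": "user", "content": "Return the capital of France as a JSON object."},
    ],
    response_format={"type": "json_object"},
)
print(completion.choices[0].message.content)  # e.g. {"capital": "Paris"}
```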

0.0.45

Fixed

- Patched an error where calls to Anthropic's `claude-3.7-sonnet` with [extended thinking](https://docs.anthropic.com/en/docs/build-with-claude/extended-thinking) enabled would fail (see the sketch after this list for what such a call looks like).
- Updated the max tokens for `claude-3.7-sonnet` to 128000, and for `claude-3.5-sonnet` and `claude-3.5-haiku` to 8192, per the [Anthropic docs](https://docs.anthropic.com/en/docs/about-claude/models/all-models).
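
For context, an extended-thinking request against Anthropic's API looks roughly like this minimal sketch with the `anthropic` Python SDK; the token budget is an arbitrary example value, and how l2m2 wraps the call is not shown.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Extended thinking is enabled via the `thinking` parameter; the budget must
# be at least 1024 tokens and smaller than max_tokens.
message = client.messages.create(
    model="claude-3-7-sonnet-20250219",
    max_tokens=4096,
    thinking={"type": "enabled", "budget_tokens": 2048},
    messages=[{"role": "user", "content": "Work out 27 * 43 step by step."}],
)

# The response interleaves "thinking" blocks with the final "text" blocks.
for block in message.content:
    if block.type == "text":
        print(block.text)
```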

Changed

- Updated the default max tokens for the Claude 3.5 and 3.7 models to 4096, and for the Claude 3 models to 2048, so the defaults stay reasonable relative to the maximum allowed values.
