# L2M2


## 0.0.44

### Added

- Support for Anthropic's [Claude 3.7 Sonnet](https://www.anthropic.com/news/claude-3-7-sonnet) released today.

## 0.0.43

### Fixed

- `o3-mini` now correctly uses the `developer` role for its system message rather than `system`, per the OpenAI API spec.
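
For context, the payload shapes below illustrate the difference. The field names follow OpenAI's chat completions spec; this is not l2m2's internal code:

```python
# Most chat models take the instruction message under the "system" role:
standard_payload = {
    "model": "gpt-4o",
    "messages": [
        {"role": "system", "content": "Respond tersely."},
        {"role": "user", "content": "Hi!"},
    ],
}

# o3-mini expects it under the "developer" role instead, which is what
# l2m2 now sends:
o3_mini_payload = {
    "model": "o3-mini",
    "messages": [
        {"role": "developer", "content": "Respond tersely."},
        {"role": "user", "content": "Hi!"},
    ],
}
```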

## 0.0.42

### Added

- Support for 7 new models:
  - `o3-mini` via OpenAI
  - `gemini-2.0-pro` and `gemini-2.0-flash-lite` via Google
  - `qwen-2.5-32b`, `deepseek-r1-distill-qwen-32b`, and `deepseek-r1-distill-llama-70b` via Groq
  - `command-r7b` via Cohere

### Changed

- Updated the Cohere API from V1 to V2 ([their docs](https://cohere.com/blog/new-api-v2)).
- Replaced all instances of `ValueError` being raised with a new `L2M2UsageError` exception (see the sketch after this list).
- Moved the `warnings` module to the top level (`l2m2.warnings` instead of `l2m2.client.warnings`).
- Increased the default timeout for LLM calls from 10 seconds to 25 seconds.
- Where possible, pinned L2M2 models to specific versions rather than aliases pointing to the latest version. This is for stability in production; however, I do plan to keep the versions up to date on a regular basis. The following model versions have been updated:
  - `o1` → `o1-2024-12-17`
  - `o1-preview` → `o1-preview-2024-09-12`
  - `o1-mini` → `o1-mini-2024-09-12`
  - `claude-3-5-sonnet-latest` → `claude-3-5-sonnet-20241022`
  - `claude-3-5-haiku-latest` → `claude-3-5-haiku-20241022`
  - `command-r` → `command-r-08-2024`
  - `command-r-plus` → `command-r-plus-08-2024`
  - `mistral-large-latest` → `mistral-large-2411`
  - `ministral-3b-latest` → `ministral-3b-2410`
  - `gemini-2.0-flash-exp` → `gemini-2.0-flash-001`
  - `gemini-1.5-flash-exp` → `gemini-1.5-flash-001`

  Note that this is _not_ a breaking change – the model IDs are purely internal. This doesn't change any behavior; it just adds stability.
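
A minimal sketch of what the exception change means for calling code. The `l2m2.exceptions` import path and the per-call `timeout` keyword are assumptions, so check the usage guide for the real names:

```python
from l2m2.client import LLMClient
from l2m2.exceptions import L2M2UsageError  # import path is an assumption

client = LLMClient()  # assumes API keys are already configured

try:
    response = client.call(
        model="gpt-4o",
        prompt="Summarize this changelog in one line.",
        timeout=60,  # assumed per-call override of the new 25s default
    )
except L2M2UsageError as e:
    # Code that previously caught ValueError should catch this instead.
    print(f"L2M2 usage error: {e}")
```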

### Fixed

- Previously, the synchronous `call` method in `LLMClient` would throw an opaque unhandled exception when used in an async context, such as within FastAPI. This has been fixed – it is now handled by an `L2M2UsageError` with a helpful message recommending the use of `AsyncLLMClient` instead (see the sketch after this list). This error is also thrown when instantiating `LLMClient` in an async context.
- OpenAI supports neither the `system` nor the `developer` keyword in `o1-mini` and `o1-preview`, effectively making system prompts unusable with those models. While I'm not sure why this is, these cases are now handled with an `L2M2UsageError` instead of an unhandled exception.
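
A sketch of the recommended pattern inside an async context (e.g., a FastAPI handler). The exact `AsyncLLMClient` construction shown here, as an async context manager, is an assumption based on typical usage, so verify against the docs:

```python
import asyncio

from l2m2.client import AsyncLLMClient

async def main() -> None:
    # Inside a running event loop, use AsyncLLMClient rather than the
    # synchronous LLMClient.
    async with AsyncLLMClient() as client:  # assumes keys are configured
        response = await client.call(
            model="gpt-4o",
            prompt="Hello from inside an event loop!",
        )
        print(response)

asyncio.run(main())
```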

## 0.0.41

### Added

- **Big update!** Added support for running local LLMs via [Ollama](https://ollama.ai/). 🎉
  See the docs for running local models [here](docs/usage_guide.md#local-models).
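
As a rough sketch only – the model identifier and setup below are assumptions, and the linked usage guide is the authority on how local models are registered:

```python
from l2m2.client import LLMClient

# Assumes an Ollama server is running locally on its default port 11434
# and the model has already been pulled (e.g., `ollama pull llama3`).
client = LLMClient()

# The model string here is hypothetical; see the usage guide for how
# l2m2 actually addresses Ollama-served models.
response = client.call(
    model="llama3",
    prompt="What's 2 + 2?",
)
print(response)
```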

### Changed

- The `providers` parameter to `LLMClient` and `AsyncLLMClient` has been renamed to `api_keys` (see the sketch after this list).
- L2M2 no longer depends on `typing_extensions` and now officially has only a single external dependency: `httpx`.
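
The rename only affects the constructor keyword. A sketch, where the dict shape of provider-name-to-key is an assumption inferred from the parameter's name:

```python
from l2m2.client import LLMClient

# Pre-0.0.41 this keyword was `providers`; it is now `api_keys`.
client = LLMClient(
    api_keys={
        "openai": "sk-...",        # placeholder keys
        "anthropic": "sk-ant-...",
    }
)
```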

### Removed

- The static method `get_available_models` in `LLMClient` and `AsyncLLMClient` has been deprecated as it is no longer meaningful with the addition of local models (which can be arbitrary). It will be removed in a future release.

## 0.0.40

> [!CAUTION]
> This release has breaking changes! Please read the changelog carefully.

### Removed

- The `call_custom` method has been removed from `LLMClient` and `AsyncLLMClient` due to lack of use and unnecessary complexity. **This is a breaking change!!!** If you need to call a model that is not officially supported by L2M2, please open an issue on the [GitHub repo](https://github.com/pkelaita/l2m2/issues).

## 0.0.39

> [!CAUTION]
> This release has breaking changes! Please read the changelog carefully.

### Added

- Support for [Llama 3.3 70b](https://www.llama.com/docs/model-cards-and-prompt-formats/llama3_3/) via [Groq](https://console.groq.com/docs/models) and [Cerebras](https://inference-docs.cerebras.ai/introduction).
- Support for OpenAI's [o1 series](https://openai.com/o1/): `o1`, `o1-preview`, and `o1-mini`.
- The `extra_params` parameter to `call` and `call_custom` (see the sketch after the note below).

> [!NOTE]
> At the time of this release, you must be on OpenAI's [usage tier](https://platform.openai.com/docs/guides/rate-limits) 5 to use `o1` and tier 1+ to use `o1-preview` and `o1-mini`.
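
A sketch of `extra_params` in use. The specific key passed through here, `max_completion_tokens` (an OpenAI chat completions parameter), is an illustrative assumption:

```python
from l2m2.client import LLMClient

client = LLMClient()  # assumes API keys are configured

# extra_params forwards provider-specific options alongside the request.
response = client.call(
    model="o1-mini",
    prompt="Explain recursion in one sentence.",
    extra_params={"max_completion_tokens": 256},
)
print(response)
```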

### Removed

- `gemma-7b` has been removed as it has been [deprecated](https://console.groq.com/docs/models) by Groq.
- `llama-3.1-70b` has been removed as it has been deprecated by both [Groq](https://console.groq.com/docs/models) and [Cerebras](https://inference-docs.cerebras.ai/introduction).
