llm-claude

Latest version: v0.4.0

0.4.0

Unvendor `anthropic`, as it now supports Pydantic v2: https://github.com/anthropics/anthropic-sdk-python/issues/115

0.3.1

Bump the version of `anthropic` to 0.3.8.

**Full Changelog**: https://github.com/tomviner/llm-claude/compare/0.3...0.3.1

0.3

Add the ability to set the `max_tokens_to_sample` param, described as _The maximum number of tokens to generate before stopping_. If not set, it defaults to a generous 10_000 tokens. https://github.com/tomviner/llm-claude/issues/4
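A minimal invocation sketch for this option (assumes the plugin is installed alongside `llm` and an Anthropic API key is configured; the prompt text is illustrative):

```shell
# Cap the response at 2,000 tokens rather than the 10_000-token default,
# using llm's -o flag to pass a model option.
llm -m claude -o max_tokens_to_sample 2000 "Summarise the plot of Hamlet"
```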

0.2

Initial release of this `llm` plugin. It allows querying Anthropic's most capable model, _Claude 2_, as `llm -m claude`, as well as their faster, cheaper _Claude Instant_ with `llm -m claude-instant`. Model selection is explained at https://docs.anthropic.com/claude/reference/selecting-a-model

Works with `llm`'s streaming and continued-conversation modes.
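A sketch of the two model aliases and `llm`'s conversation continuation (assumes the plugin is installed and an Anthropic API key is configured; prompts are illustrative):

```shell
# Query each of the two models this plugin registers
llm -m claude "Explain CRDTs in one paragraph"
llm -m claude-instant "Explain CRDTs in one paragraph"

# Continue the most recent conversation with a follow-up prompt
llm -c "Now give a concrete example"
```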
