llm-replicate

Latest version: v0.3.1


0.3.1

- Fix for an error when listing models. Thanks, [thiswillbeyourgithub](https://github.com/thiswillbeyourgithub). [#22](https://github.com/simonw/llm-replicate/issues/22)

0.3

- New command: `llm replicate fetch-predictions`, which fetches all predictions that have been run through Replicate (including predictions against models other than the language models queried using this tool) and stores them in a `replicate_predictions` table in the `logs.db` SQLite database. [Documentation here](https://github.com/simonw/llm-replicate/blob/0.3/README.md#fetching-all-replicate-predictions). [#11](https://github.com/simonw/llm-replicate/issues/11)
- The `replicate-python` library is no longer bundled with this package; it is now installed as a dependency instead. [#10](https://github.com/simonw/llm-replicate/issues/10)
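Since the `replicate_predictions` table lives in `llm`'s standard `logs.db` SQLite database, it can be inspected with Python's built-in `sqlite3` module. A minimal sketch (the helper names are illustrative, and the table's exact schema is not documented here, so the columns are discovered at runtime rather than assumed):

```python
import sqlite3


def table_columns(db_path: str, table: str) -> list[str]:
    """Return the column names of a table in a SQLite database."""
    with sqlite3.connect(db_path) as db:
        return [row[1] for row in db.execute(f"PRAGMA table_info({table})")]


def row_count(db_path: str, table: str) -> int:
    """Count the rows stored in a table."""
    with sqlite3.connect(db_path) as db:
        return db.execute(f"SELECT count(*) FROM {table}").fetchone()[0]
```

Locate the database with `llm logs path`, then pass that path to these helpers with `table="replicate_predictions"`.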

0.2

Support for adding chat models using `llm replicate add ... --chat`. These models will then use the `User: ...\nAssistant:` prompt format and can be used for continued conversations.

This means the new [Llama 2](https://ai.meta.com/llama/) model from Meta can be added like this:

```bash
llm replicate add a16z-infra/llama13b-v2-chat \
  --chat --alias llama2
```

Then:
```bash
llm -m llama2 "Ten great names for a pet pelican"
```
Output here, then to continue the conversation:
```bash
llm -c "Five more and make them more nautical"
```
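The chat template behind `--chat` can be sketched in Python. This illustrates the `User: ...\nAssistant:` format described above and how prior turns would be replayed for a continued conversation; it is an illustration of the format, not the plugin's actual implementation:

```python
def build_chat_prompt(history, prompt):
    """Render a conversation into the User:/Assistant: template
    used for models added with --chat.

    history is a list of (user_message, assistant_reply) pairs
    from earlier turns; prompt is the new user message.
    """
    parts = []
    for user, assistant in history:
        parts.append(f"User: {user}")
        parts.append(f"Assistant: {assistant}")
    parts.append(f"User: {prompt}")
    parts.append("Assistant:")  # the model continues from here
    return "\n".join(parts)
```

Ending the prompt with a bare `Assistant:` is what cues the model to produce the next reply rather than another user turn.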

0.1

- Ability to fetch a [collection of models](https://replicate.com/collections/language-models) hosted on Replicate using `llm replicate fetch-models`, then run prompts against them. [#1](https://github.com/simonw/llm-replicate/issues/1)
- Use `llm replicate add joehoover/falcon-40b-instruct --alias falcon` to add support for additional models, optionally with aliases. [#2](https://github.com/simonw/llm-replicate/issues/2)
