# LLM


(v0_11)=

## 0.11

LLM now supports the new OpenAI `gpt-3.5-turbo-instruct` model, and OpenAI completion (as opposed to chat completion) models in general. [#284](https://github.com/simonw/llm/issues/284)

```bash
llm -m gpt-3.5-turbo-instruct 'Reasons to tame a wild beaver:'
```

OpenAI completion models like this support a `-o logprobs 3` option, which accepts a number between 1 and 5 and will include the log probabilities (for each produced token, what were the top 3 options considered by the model) in the logged response.

```bash
llm -m gpt-3.5-turbo-instruct 'Say hello succinctly' -o logprobs 3
```

You can then view the `logprobs` that were recorded in the SQLite logs database like this:

```bash
sqlite-utils "$(llm logs path)" \
  'select * from responses order by id desc limit 1' | \
  jq '.[0].response_json' -r | jq
```

Truncated output looks like this:

```json
[
  {
    "text": "Hi",
    "top_logprobs": [
      {
        "Hi": -0.13706253,
        "Hello": -2.3714375,
        "Hey": -3.3714373
      }
    ]
  },
  {
    "text": " there",
    "top_logprobs": [
      {
        " there": -0.96057636,
        "!\"": -0.5855763,
        ".\"": -3.2574513
      }
    ]
  }
]
```
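
These are natural log probabilities, so you can recover the model's actual confidence in each candidate token with `math.exp()`. A quick sketch using the values above:

```python
import math

# Log probabilities are natural logs - exp() converts them back to probabilities
for token, logprob in {"Hi": -0.13706253, "Hello": -2.3714375, "Hey": -3.3714373}.items():
    print(f"{token!r}: {math.exp(logprob):.3f}")
# 'Hi': 0.872, 'Hello': 0.093, 'Hey': 0.034
```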

Also in this release:

- The `llm.user_dir()` function, used by plugins, now ensures the directory exists before returning it. [#275](https://github.com/simonw/llm/issues/275)
- New `LLM_OPENAI_SHOW_RESPONSES=1` environment variable for displaying the full HTTP response returned by OpenAI compatible APIs. [#286](https://github.com/simonw/llm/issues/286)
- The `llm embed-multi` command now has a `--batch-size X` option for setting the batch size to use when processing embeddings - useful if you have limited memory available. [#273](https://github.com/simonw/llm/issues/273)
- The `collection.embed_multi()` method also now accepts an optional `batch_size=int` argument - see the sketch after this list.
- Fixed two bugs with `llm embed-multi --files` relating to handling of directories. Thanks, [ealvar3z](https://github.com/ealvar3z). [#274](https://github.com/simonw/llm/issues/274), [#280](https://github.com/simonw/llm/issues/280)
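
Here's a minimal sketch of that `batch_size=` argument in use - the collection name, database location and entries are hypothetical:

```python
import llm
import sqlite_utils

# llm.user_dir() now guarantees the directory exists (see above)
db = sqlite_utils.Database(llm.user_dir() / "embeddings.db")
collection = llm.Collection("articles", db, model_id="ada-002")

# Entries are (id, text) pairs; batch_size caps how many are embedded at once
collection.embed_multi(
    [("1", "First article"), ("2", "Second article")],
    batch_size=50,
)
```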

(v0_10)=

## 0.10

The two major features in this release are the **{ref}`llm chat command <usage-chat>`** and support for **embedding binary data**.

See [Build an image search engine with llm-clip, chat with models with llm chat](https://simonwillison.net/2023/Sep/12/llm-clip-and-chat/) for more background on these features.

### llm chat

The new `llm chat` command starts an ongoing chat conversation with a model in your terminal. It works with all models supported by LLM and its {ref}`plugins <plugins>`, including locally installed models such as Llama 2. [#231](https://github.com/simonw/llm/issues/231)

This offers a big performance boost for local models, since they don't need to be freshly loaded into memory for each prompt.

Here's an example chat with Llama 2 13B, provided by the [llm-mlc](https://github.com/simonw/llm-mlc) plugin.

```bash
llm chat -m mlc-chat-Llama-2-13b-chat-hf-q4f16_1
```

This starts a chat session:

```
Type 'exit' or 'quit' to exit
Type '!multi' to enter multiple lines, then '!end' to finish
> Who are you?
Hello! I'm just an AI, here to assist you with any questions you may have.
My name is LLaMA, and I'm a large language model trained to provide helpful
and informative responses to a wide range of questions and topics. I'm here
to help you find the information you need, while ensuring a safe and
respectful conversation. Please feel free to ask me anything, and I'll do my
best to provide a helpful and accurate response.
> Tell me a joke about otters
Sure, here's a joke about otters:

Why did the otter go to the party?

Because he heard it was a "whale" of a time!

(Get it? Whale, like a big sea mammal, but also a "wild" or "fun" time.
Otters are known for their playful and social nature, so it's a lighthearted
and silly joke.)

I hope that brought a smile to your face! Do you have any other questions or
topics you'd like to discuss?
> exit
```

Chat sessions are {ref}`logged to SQLite <logging>` - use `llm logs` to view them. They can accept system prompts, templates and model options - consult {ref}`the chat documentation <usage-chat>` for details.
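
The closest equivalent in LLM's Python API is a conversation object, which carries context between prompts (and, for local models, avoids reloading the model each time). A minimal sketch - the model ID and prompts here are just examples:

```python
import llm

model = llm.get_model("gpt-3.5-turbo")
conversation = model.conversation()

# Each prompt() continues the same conversation, preserving context
print(conversation.prompt("Who are you?").text())
print(conversation.prompt("Tell me a joke about otters").text())
```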

### Binary embedding support

LLM's {ref}`embeddings feature <embeddings>` has been expanded to provide support for embedding binary data, in addition to text. [#254](https://github.com/simonw/llm/pull/254)

This enables models like [CLIP](https://openai.com/research/clip), supported by the new **[llm-clip](https://github.com/simonw/llm-clip)** plugin.

CLIP is a multi-modal embedding model which can embed images and text into the same vector space. This means you can use it to create an embedding index of photos, and then search for the embedding vector for "a happy dog" and get back images that are semantically closest to that string.

To create embeddings for every JPEG in a directory stored in a `photos` collection, run:

```bash
llm install llm-clip
llm embed-multi photos --files photos/ '*.jpg' --binary -m clip
```

Now you can search for photos of raccoons using:

```bash
llm similar photos -c 'raccoon'
```

This spits out a list of images, ranked by how similar they are to the string "raccoon":

{"id": "IMG_4801.jpeg", "score": 0.28125139257127457, "content": null, "metadata": null}
{"id": "IMG_4656.jpeg", "score": 0.26626441704164294, "content": null, "metadata": null}
{"id": "IMG_2944.jpeg", "score": 0.2647445926996852, "content": null, "metadata": null}
...
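
The same search can also be run from Python - a sketch assuming the CLI stored the collection in the default embeddings database under `llm.user_dir()`:

```python
import llm
import sqlite_utils

# The llm CLI stores collections in embeddings.db by default
db = sqlite_utils.Database(llm.user_dir() / "embeddings.db")
photos = llm.Collection("photos", db)

# CLIP embeds the text "raccoon" into the same space as the images
for entry in photos.similar("raccoon", number=3):
    print(entry.id, entry.score)
```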


### Also in this release

- The {ref}`LLM_LOAD_PLUGINS environment variable <llm-load-plugins>` can be used to control which plugins are loaded when `llm` starts running. [#256](https://github.com/simonw/llm/issues/256)
- The `llm plugins --all` option includes builtin plugins in the list of plugins. [#259](https://github.com/simonw/llm/issues/259)
- The `llm embed-db` family of commands has been renamed to `llm collections`. [#229](https://github.com/simonw/llm/issues/229)
- `llm embed-multi --files` now has an `--encoding` option and defaults to falling back to `latin-1` if a file cannot be processed as `utf-8`. [#225](https://github.com/simonw/llm/issues/225)

(v0_10_a1)=

## 0.10a1

- Support for embedding binary data. [#254](https://github.com/simonw/llm/pull/254)
- `llm chat` now works for models with API keys. [#247](https://github.com/simonw/llm/issues/247)
- `llm chat -o` for passing options to a model. [#244](https://github.com/simonw/llm/issues/244)
- `llm chat --no-stream` option. [#248](https://github.com/simonw/llm/issues/248)
- `LLM_LOAD_PLUGINS` environment variable. [#256](https://github.com/simonw/llm/issues/256)
- `llm plugins --all` option for including builtin plugins. [#259](https://github.com/simonw/llm/issues/259)
- `llm embed-db` has been renamed to `llm collections`. [#229](https://github.com/simonw/llm/issues/229)
- Fixed bug where `llm embed -c` option was treated as a filepath, not a string. Thanks, [mhalle](https://github.com/mhalle). [#263](https://github.com/simonw/llm/pull/263)

(v0_10_a0)=

## 0.10a0

- New {ref}`llm chat <usage-chat>` command for starting an interactive terminal chat with a model. [#231](https://github.com/simonw/llm/issues/231)
- `llm embed-multi --files` now has an `--encoding` option and defaults to falling back to `latin-1` if a file cannot be processed as `utf-8`. [#225](https://github.com/simonw/llm/issues/225)

(v0_9)=

## 0.9

The big new feature in this release is support for **embeddings**. See [LLM now provides tools for working with embeddings](https://simonwillison.net/2023/Sep/4/llm-embeddings/) for additional details.

{ref}`Embedding models <embeddings>` take a piece of text - a word, sentence, paragraph or even a whole article - and convert that into an array of floating point numbers. [#185](https://github.com/simonw/llm/issues/185)

This embedding vector can be thought of as a position in many-dimensional space, where the distance between two vectors reflects how semantically similar the corresponding pieces of content are, as judged by the embedding model.

Embeddings can be used to find **related documents**, and also to implement **semantic search** - where a user can search for a phrase and get back results that are semantically similar to that phrase even if they do not share any exact keywords.
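
To make "distance" concrete: similarity between two embedding vectors is commonly measured with cosine similarity, where values near 1 mean very similar. A toy illustration with made-up three-dimensional vectors (real embeddings have hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    mag = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / mag

print(cosine_similarity([0.9, 0.1, 0.2], [0.8, 0.2, 0.3]))  # ~0.98 - very similar
print(cosine_similarity([0.9, 0.1, 0.2], [0.1, 0.9, 0.8]))  # ~0.30 - unrelated
```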

LLM now provides both CLI and Python APIs for working with embeddings. Embedding models are defined by plugins, so you can install additional models using the {ref}`plugins mechanism <installing-plugins>`.

The first two embedding models supported by LLM are:

- OpenAI's [ada-002](https://platform.openai.com/docs/guides/embeddings) embedding model, available via an inexpensive API if you set an OpenAI key using `llm keys set openai`.
- The [sentence-transformers](https://www.sbert.net/) family of models, available via the new [llm-sentence-transformers](https://github.com/simonw/llm-sentence-transformers) plugin.

See {ref}`embeddings-cli` for detailed instructions on working with embeddings using LLM.

The new commands for working with embeddings are:

- **{ref}`llm embed <embeddings-cli-embed>`** - calculate embeddings for content and return them to the console or store them in a SQLite database.
- **{ref}`llm embed-multi <embeddings-cli-embed-multi>`** - run bulk embeddings for multiple strings, using input from a CSV, TSV or JSON file, data from a SQLite database or data found by scanning the filesystem. [#215](https://github.com/simonw/llm/issues/215)
- **{ref}`llm similar <embeddings-cli-similar>`** - run similarity searches against your stored embeddings - starting with a search phrase or finding content related to a previously stored vector. [#190](https://github.com/simonw/llm/issues/190)
- **{ref}`llm embed-models <embeddings-cli-embed-models>`** - list available embedding models.
- `llm embed-db` - commands for inspecting and working with the default embeddings SQLite database.

There's also a new {ref}`llm.Collection <embeddings-python-collections>` class for creating and searching collections of embeddings from Python code, and a {ref}`llm.get_embedding_model() <embeddings-python-api>` interface for embedding strings directly. [#191](https://github.com/simonw/llm/issues/191)
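
A minimal sketch of both interfaces - the collection name, database file and strings are hypothetical:

```python
import llm
import sqlite_utils

# Embed a single string directly
model = llm.get_embedding_model("ada-002")
vector = model.embed("hello world")  # a list of floating point numbers

# Create a collection, store embeddings in SQLite, then search it
db = sqlite_utils.Database("embeddings.db")
collection = llm.Collection("phrases", db, model_id="ada-002")
collection.embed("greeting", "hello world", store=True)
for entry in collection.similar("hi there", number=3):
    print(entry.id, entry.score)
```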

(v0_8_1)=

## 0.8.1

- Fixed bug where first prompt would show an error if the `io.datasette.llm` directory had not yet been created. [#193](https://github.com/simonw/llm/issues/193)
- Updated documentation to recommend a different `llm-gpt4all` model since the one we were using is no longer available. [#195](https://github.com/simonw/llm/issues/195)

(v0_8)=
