BlendSQL

Latest version: v0.0.31



0.0.31

Bugfixes from previous release.

Note that `--prerelease=allow` must be set to use the `guidance` `AzurePhi` server-side control.

**Full Changelog**: https://github.com/parkervg/blendsql/compare/v0.0.30...v0.0.31

0.0.30

🧠 Smarter `LLMQA` with `modifier` Arg
As described in [blendsql-by-example.ipynb](https://github.com/parkervg/blendsql/blob/main/examples/blendsql-by-example.ipynb), `LLMQA` can now generate constrained lists. This means the following query is valid:

```sql
SELECT * FROM People
WHERE People.Name IN {{LLMQA('First 3 presidents of the U.S?')}}
```

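Under the hood, the generated list ultimately has to land in the `IN` clause as a SQL tuple. A toy sketch of that substitution step (purely illustrative; BlendSQL performs this through its own parser, and `splice_in_clause` is a hypothetical helper, not part of the library):

```python
# Hypothetical sketch: splice a model-generated list of answers into a
# query placeholder as a SQL tuple. Not BlendSQL's actual implementation.
def splice_in_clause(query, placeholder, answers):
    sql_tuple = "(" + ", ".join(f"'{a}'" for a in answers) + ")"
    return query.replace(placeholder, sql_tuple)

q = "SELECT * FROM People WHERE People.Name IN {{LLMQA(...)}}"
presidents = ["George Washington", "John Adams", "Thomas Jefferson"]
print(splice_in_clause(q, "{{LLMQA(...)}}", presidents))
# -> SELECT * FROM People WHERE People.Name IN
#    ('George Washington', 'John Adams', 'Thomas Jefferson')
```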
Or, even pseudo-agent-based processing like this:

```sql
WITH letter_agent_output AS (
    SELECT * FROM (VALUES {{LLMQA('List some greek letters', modifier='{3}')}})
) SELECT {{
    LLMQA(
        'What is the first letter of the alphabet?',
        options=(SELECT * FROM letter_agent_output)
    )
}}
```

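The `modifier` argument uses regex-style quantifier syntax: `'{3}'` asks for exactly three items, `'*'` for zero or more, `'+'` for one or more. A minimal sketch of that semantics in plain Python (illustrative only; `satisfies_quantifier` is a made-up helper, not BlendSQL's implementation):

```python
import re

def satisfies_quantifier(n_items, quantifier):
    """True if a count of items satisfies a regex-style quantifier.

    We test the quantifier by applying it to a dummy character: a list of
    n items satisfies '{3}' iff 'x' * n matches the pattern 'x{3}'.
    """
    return re.fullmatch(f"x{quantifier}", "x" * n_items) is not None

print(satisfies_quantifier(3, "{3}"))  # True: exactly three items
print(satisfies_quantifier(1, "{3}"))  # False: too few
print(satisfies_quantifier(0, "*"))    # True: zero or more
```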
Additionally, the `AzurePhi` model allows for easy constrained decoding with a larger model, powered by guidance's server-side Azure AI integration: https://github.com/guidance-ai/guidance?tab=readme-ov-file#azure-ai

What's Changed
* `_dialect.py` Re-Work, `modifier` Argument for `LLMQA`, Documentation updates by parkervg in https://github.com/parkervg/blendsql/pull/35


**Full Changelog**: https://github.com/parkervg/blendsql/compare/v0.0.29...v0.0.30

0.0.29

Added the ability to configure maximum concurrent async OpenAI/Anthropic calls via:

```python
import blendsql

# Optionally set how many async calls to allow concurrently.
# This depends on your OpenAI/Anthropic/etc. rate limits.
blendsql.config.set_async_limit(10)
```

The default is 10.
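Conceptually, this limit caps how many requests are in flight at once, much like guarding each call with an `asyncio.Semaphore`. A rough standalone sketch of that behavior (not BlendSQL's actual internals; `run_with_limit` and `fake_llm_call` are stand-ins):

```python
import asyncio

async def run_with_limit(limit=10, n_calls=25):
    """Run fake 'LLM calls' under a concurrency cap; return peak concurrency."""
    sem = asyncio.Semaphore(limit)
    in_flight = 0
    peak = 0

    async def fake_llm_call(i):
        nonlocal in_flight, peak
        async with sem:  # at most `limit` coroutines pass this point at once
            in_flight += 1
            peak = max(peak, in_flight)
            await asyncio.sleep(0.01)  # stand-in for a network round trip
            in_flight -= 1

    await asyncio.gather(*(fake_llm_call(i) for i in range(n_calls)))
    return peak

print(asyncio.run(run_with_limit(limit=10)))  # peak never exceeds the limit of 10
```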

**Full Changelog**: https://github.com/parkervg/blendsql/compare/v0.0.28...v0.0.29

0.0.28

⚡ Async Batch Calls for `LLMMap`
This release adds async batch processing by default for the `LLMMap` ingredient. Currently, this means that usage of the `OpenaiLLM` and `AnthropicLLM` classes in an `LLMMap` call will be much quicker, especially when the database context is large or the `batch_size` is small.

For example, taking this query from the README:
```sql
SELECT "Name",
  {{ImageCaption('parks::Image')}} as "Image Description",
  {{
    LLMMap(
        question='Size in km2?',
        context='parks::Area'
    )
  }} as "Size in km" FROM parks
WHERE "Location" = 'Alaska'
ORDER BY "Size in km" DESC LIMIT 1
```


Assuming we've initialized our `LLMMap` ingredient via `LLMMap.from_args(batch_size=1, k=0)`, meaning we retrieve 0 few-shot examples per prompt (i.e. zero-shot), we have 2 total values to map, since 2 parks meet our criterion `"Location" = 'Alaska'`. With `batch_size=1`, each value gets its own prompt.
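The batching arithmetic above can be sketched directly (illustrative only; `make_batches` is a hypothetical helper, not BlendSQL's internals):

```python
# With batch_size=1, each value gets its own prompt; larger batch sizes
# pack several values into one prompt. Sketch of the chunking step:
def make_batches(values, batch_size):
    return [values[i:i + batch_size] for i in range(0, len(values), batch_size)]

areas = [7523897.45, 3674529.33]  # Area values of the two Alaska parks
print(make_batches(areas, 1))  # 2 batches -> 2 prompts
print(make_batches(areas, 2))  # 1 batch  -> 1 prompt
```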

With this update, we pass the two prompts into our OpenAI or Anthropic endpoint asynchronously:


```
Given a set of values from a database, answer the question row-by-row, in order.
Your outputs should be separated by ';'.

Question: Size in km2?
Source table: parks
Source column: Area

Values:

7523897.45
```

```
Given a set of values from a database, answer the question row-by-row, in order.
Your outputs should be separated by ';'.

Question: Size in km2?
Source table: parks
Source column: Area

Values:

3674529.33
```

Of course, the effects of this async processing will be felt more when we need to pass *many* values to the `LLMMap` function.

**Full Changelog**: https://github.com/parkervg/blendsql/compare/v0.0.27...v0.0.28
