⚡ Async Batch Calls for `LLMMap`
This release adds async batch processing by default for the `LLMMap` ingredient. Currently, this means that using the `OpenaiLLM` and `AnthropicLLM` classes in an `LLMMap` call will be much quicker, especially when the database context is large or our `batch_size` is small.
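Under the hood, the prompts for a batch are dispatched concurrently rather than one at a time. The snippet below is a minimal sketch of that pattern using `asyncio` and the `openai` Python client, not the actual BlendSQL internals; the names `complete` and `run_llmmap_batch` (and the model string) are purely illustrative.

```python
import asyncio
from openai import AsyncOpenAI

# Illustrative sketch of concurrent prompt dispatch (not BlendSQL source code).
# Assumes OPENAI_API_KEY is set in the environment.
client = AsyncOpenAI()

async def complete(prompt: str) -> str:
    """Send a single LLMMap prompt and return the raw completion text."""
    response = await client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

async def run_llmmap_batch(prompts: list[str]) -> list[str]:
    """Fire off all prompts at once and wait for every response."""
    return await asyncio.gather(*(complete(p) for p in prompts))

# With batch_size=1 and two matching rows, we end up with two prompts
# that are now sent concurrently instead of sequentially:
# results = asyncio.run(run_llmmap_batch(prompts))
```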
For example, take this query from the README:
```sql
SELECT "Name",
    {{ImageCaption('parks::Image')}} as "Image Description",
    {{
        LLMMap(
            question='Size in km2?',
            context='parks::Area'
        )
    }} as "Size in km" FROM parks
    WHERE "Location" = 'Alaska'
    ORDER BY "Size in km" DESC LIMIT 1
```
Assuming we've initialized our `LLMMap` ingredient via `LLMMap.from_args(batch_size=1, k=0)`, meaning we retrieve 0 few-shot examples per prompt (i.e. zero-shot), we have 2 total values to map, since 2 parks meet our criterion `"Location" = 'Alaska'`.
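To make the prompt count concrete, here is a rough sketch of how a `batch_size` setting splits the mapped column values into per-prompt batches; the `chunk_values` helper and the placeholder values are purely illustrative, not BlendSQL code.

```python
def chunk_values(values: list[str], batch_size: int) -> list[list[str]]:
    """Split the column values into per-prompt batches."""
    return [values[i:i + batch_size] for i in range(0, len(values), batch_size)]

# Two Alaskan parks match the WHERE clause, so with batch_size=1
# we get two batches, and therefore two prompts to send.
alaska_areas = ["<Area of park 1>", "<Area of park 2>"]  # placeholder values
print(len(chunk_values(alaska_areas, batch_size=1)))  # -> 2
```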
With this update, we send the two prompts to our OpenAI or Anthropic endpoint asynchronously:
```
Given a set of values from a database, answer the question row-by-row, in order.
Your outputs should be separated by ';'.
Question: Size in km2?
Source table: parks
Source column: Area
Values:
```
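Since each response comes back as a single ';'-separated string, the answers then need to be split and re-aligned with the source values. The sketch below shows one way that post-processing step could look; `parse_mapped_outputs` is a hypothetical name, not the actual BlendSQL function.

```python
def parse_mapped_outputs(response: str, values: list[str]) -> dict[str, str]:
    """Split a ';'-separated model response and align each answer
    with the source value it was asked about, in order."""
    answers = [a.strip() for a in response.split(";")]
    if len(answers) != len(values):
        raise ValueError(f"Expected {len(values)} answers, got {len(answers)}")
    return dict(zip(values, answers))

# e.g. a response "answer_1;answer_2" for values [v1, v2]
# yields {v1: "answer_1", v2: "answer_2"}, which can then be
# written back into the "Size in km" column row-by-row.
```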