We're proud to announce that ChainForge now has model settings. 🥳
You can now compare different versions of the same model side-by-side, nickname models, and select specific model variants.
To install, run `pip install chainforge --upgrade`. Full changelog below.
## More supported models 🤖
Along with model settings, we now support all OpenAI, Anthropic, Google PaLM (chat and text), and Dalai-hosted models. For instance, you can now compare `llama.65B` to PaLM text completions, if you were so inclined. For the full list, see [models.py](https://github.com/ianarawjo/ChainForge/blob/main/chainforge/promptengine/models.py).
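As a rough sketch of what a model registry like this looks like, here is an illustrative enum mapping friendly names to API model identifiers. The entries below are examples for illustration only, not the exact contents of `models.py`:

```python
from enum import Enum

class LLM(str, Enum):
    """Illustrative registry of base models; see models.py for the real list."""
    ChatGPT = "gpt-3.5-turbo"
    Claude = "claude-v1"
    PaLM2_Chat = "chat-bison-001"
    PaLM2_Text = "text-bison-001"
    Dalai_Llama65B = "llama.65B"

def lookup(model_id: str) -> LLM:
    """Resolve an API model identifier back to its registry entry."""
    return LLM(model_id)
```

Because the enum values are the provider-facing identifiers, the same name can be used both in settings forms and when dispatching API calls.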
Here is a comparison of Google PaLM's `text-bison` and `chat-bison` models on the same prompt:
<img width="808" alt="Screen Shot 2023-06-01 at 2 27 54 PM" src="https://github.com/ianarawjo/ChainForge/assets/5251713/eb581e22-1dfb-4c55-8a93-8e595ea5fbaf">
## Customizable model settings (and emojis! 😑)
Once you've added a model to a `PromptNode`, you can tap its 'settings' icon to bring up a form with all settings for that base model. There you can adjust the exact model queried (for instance, `text-bison-001` for PaLM, or the Dalai-hosted `llama.30B`):
<img width="1298" alt="Screen Shot 2023-06-01 at 2 09 49 PM" src="https://github.com/ianarawjo/ChainForge/assets/5251713/66215673-4615-4fe6-92cd-70daaa922f2e">
Temperature appears next to each model name by default. For ease of reference, it is displayed on a sliding color scale from cyan (`00ffff`, coldest) through violet (`ff00ff`, lukewarm) to red (`ff0000`, hottest). The scale respects the min and max temperature settings of each individual model.
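A scale like this comes down to simple linear interpolation. This sketch (not ChainForge's actual implementation) maps a temperature, normalized by a model's own min/max range, onto the cyan-violet-red gradient:

```python
def temp_to_hex(temp: float, t_min: float = 0.0, t_max: float = 2.0) -> str:
    """Map a temperature onto a cyan (00ffff) -> violet (ff00ff) -> red (ff0000) scale."""
    # Normalize to [0, 1], respecting the model's own temperature range:
    p = max(0.0, min(1.0, (temp - t_min) / (t_max - t_min)))
    if p <= 0.5:
        # Cyan -> violet: red ramps 00 -> ff, green fades ff -> 00, blue stays ff
        r, g, b = round(510 * p), round(255 * (1 - 2 * p)), 255
    else:
        # Violet -> red: blue fades ff -> 00
        r, g, b = 255, 0, round(255 * (2 - 2 * p))
    return f"{r:02x}{g:02x}{b:02x}"
```

Passing each model's own `t_min`/`t_max` is what makes the color comparable across models whose temperature ranges differ (e.g., 0-1 vs. 0-2).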
You can now also nickname models in `PromptNode`s. Nicknames must be unique, and each one appears elsewhere in ChainForge (e.g., in plots). You can also set the emoji used. For instance, here is a comparison between two ChatGPT models at different temperatures, which I've nicknamed `hotgpt` and `coldgpt` with the emojis 🔥 and 🥶:
<img width="333" alt="Screen Shot 2023-06-01 at 2 07 54 PM" src="https://github.com/ianarawjo/ChainForge/assets/5251713/55ff7e24-5530-4f95-bbbd-0274a0b321a5">
## Note about importing previous flows
Unfortunately, this code rewrite involved a **breaking change** to how flows are imported and exported (the `.cforge` file format). You may still be able to import old flows, but you will need to re-populate each model list and re-query LLMs. I had hoped to avoid this, but it was necessary in order to store model settings information and redo how the backend caches responses.
## Note about Dalai-hosted models
Currently, you cannot query multiple Dalai models/settings at once, since a locally run model can only take one request at a time. We're working on fixing this for the next minor release. For now, choose one model at a time; if you want more than one, add it to the list and re-query the prompt node (it will reuse the cached responses from the first Dalai model).
## Encounter any bugs?
There was a _lot_ to change for this release, and it's likely that at least one thing broke in the process that we haven't yet detected. If you encounter a bug or problem, open an Issue or reply to the Discussion for this release! 👍