ChainForge


0.2.6.5

We added a 🔗**Join Node**, our first Processor node, which lets you concatenate responses and/or input data, within or across LLMs.

<img width="673" alt="Screen Shot 2023-10-23 at 3 10 49 PM" src="https://github.com/ianarawjo/ChainForge/assets/5251713/ee91cb2f-2b86-4a6e-8506-c9fe17ae8d81">

For instance, consider:
<img width="1731" alt="Screen Shot 2023-10-23 at 3 29 26 PM" src="https://github.com/ianarawjo/ChainForge/assets/5251713/bbaa40c8-b0f0-4e93-b2c1-a3efcce38a6e">

We translate words one-by-one in the first Prompt Node:

<img width="329" alt="Screen Shot 2023-10-23 at 3 29 41 PM" src="https://github.com/ianarawjo/ChainForge/assets/5251713/a135a00b-2ad8-4055-ab97-78227109936e">

Then we can join the responses by category (fruit or dessert). Here I've opted for "double newline" formatting:

<img width="665" alt="Screen Shot 2023-10-23 at 3 29 46 PM" src="https://github.com/ianarawjo/ChainForge/assets/5251713/72ee5a48-d287-4b4b-bb12-65f75aea2033">

Finally, we chain these lists of items into another Prompt Node, asking an LLM to tell us which item in each list is the sweetest:

<img width="659" alt="Screen Shot 2023-10-23 at 3 30 06 PM" src="https://github.com/ianarawjo/ChainForge/assets/5251713/503f885e-51ad-4ad4-a63d-80864cf53416">

Questions? Comments?

The Join Node is a bit experimental. It does a few things, but please let us know if it doesn't fit your use case or is too limited. And, as always, you can implement the changes you want yourself and submit a Pull Request -- this will be faster if the change is minor (e.g., adding another formatting option to the Join Node).

0.2.6

For weeks, many of you have asked for the ability to query custom models or providers in ChainForge. Given how fast this space is evolving -- and how idiosyncratic some of these APIs are -- we decided it was best to make ChainForge extensible.

You can now [add custom providers](https://chainforge.ai/docs/custom_providers/) by writing simple completion functions in Python. Custom providers will be added to the list of providers in Prompt, Chat Turn, and LLM Scorer nodes. Added provider scripts are automatically cached, and persist across runs of ChainForge.

Here's [an example script to add the Cohere API](https://github.com/ianarawjo/ChainForge/blob/main/chainforge/examples/custom_provider_cohere.py), complete with a JSON schema defining custom settings options. You add this script by simply dropping it into the new "Custom Providers" tab in the ChainForge Settings window:

![custom-providers](https://github.com/ianarawjo/ChainForge/assets/5251713/70f363d0-1a59-47aa-bea9-650738c4e3e0)
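For a sense of the shape of such a script, here's a minimal sketch with a hypothetical "Echo" provider that just returns its prompt. A real script (like the Cohere example above) would call an external API inside the decorated function, and can also pass a JSON settings schema for custom settings options; treat the decorator parameters shown here as illustrative and follow the linked example and docs for the exact interface.

```python
# A minimal, hypothetical provider script. A real provider would call an
# external API inside the decorated function; the decorator parameters below
# follow the pattern of the linked Cohere example.
from chainforge.providers import provider

@provider(name="Echo",          # provider name shown in ChainForge
          emoji="🔁",
          models=["echo-v1"])   # model names listed under this provider
def echo_completion(prompt: str, model: str, temperature: float = 1.0, **kwargs) -> str:
    # Replace this with a call to your model or API of choice,
    # returning the completion text as a string.
    return f"[{model}] You said: {prompt}"
```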

You can then query the custom provider like normal:

![custom-provider-query](https://github.com/ianarawjo/ChainForge/assets/5251713/0fc6e042-75e5-43c8-b7ac-6fd33b538217)

Note that only the local version of ChainForge (via `pip install`) supports custom providers.

[For extensive information, see the new "Adding a custom provider" page in the docs.](https://chainforge.ai/docs/custom_providers/)
As always, let us know if you encounter any problems! :)


Docs
ChainForge now has documentation! Go here:

https://chainforge.ai/docs/

Let us know what you think!

<img width="1592" alt="Screen Shot 2023-08-05 at 11 30 59 AM" src="https://github.com/ianarawjo/ChainForge/assets/5251713/bcf82540-e2c9-4c76-b22b-4f2157653f7a">

0.2.5

We're excited to release two new nodes: **Chat Turns** and **LLM Scorers**. These nodes came from feedback during user sessions:
- Some users wanted to first tell chat models 'how to act', and then wanted to put their real prompt in the second turn.
- Some users wanted a quicker, cheaper way to 'evaluate' responses and visualize results.

We describe these new nodes below, as well as a few quality-of-life improvements.

🗣️ Chat Turn nodes
Chat models are all the rage (in fact, they are so important that [OpenAI announced it would no longer support plain-old text generation models going forward](https://openai.com/blog/gpt-4-api-general-availability)). Yet strikingly, very few prompt engineering tools let you evaluate LLM outputs beyond a single prompt.

Now with Chat Turn nodes, you can continue conversations beyond a single prompt. In fact, you can:

Continue multiple conversations simultaneously across multiple LLMs

Just connect the Chat Turn to your initial Prompt Node, and voilà:

<img width="1421" alt="Screen Shot 2023-07-25 at 6 39 45 PM" src="https://github.com/ianarawjo/ChainForge/assets/5251713/9039ce6b-a16d-4694-89fa-47a22636cd8a">

Here, I've first prompted four chat models -- GPT-3.5, GPT-4, Claude-2, and PaLM -- with the question "What was the first {game} game?". Then I ask a follow-up question, "What was the second?" By default, Chat Turns continue the conversation with all LLMs that were used before, allowing you to follow up on LLM responses in parallel. (You can also toggle that off if you want to query different models -- more details below.)

Template chat messages, just like prompts

You can do everything you can with Chat Turns that you could with Prompt Nodes, including prompt templating and adding input variables. For instance, here's a prompt template as a follow-up message:

<img width="1184" alt="Screen Shot 2023-07-25 at 1 22 15 PM" src="https://github.com/ianarawjo/ChainForge/assets/5251713/497b5c6d-a830-4af6-b7fe-f9c5b5b6a132">

> **Note**
> In fact, Chat Turns are merely modified Prompt Nodes, and use the underlying `PromptNode` class.

Start a conversation with one LLM, and continue it with a different LLM

Chat Turns include a toggle for whether you'd like to continue chatting with the same LLMs or query different ones, passing the chat context to the new models. With this, you can start a conversation with one LLM and continue it with another (or several):

<img width="1146" alt="Screen Shot 2023-07-25 at 12 46 52 PM" src="https://github.com/ianarawjo/ChainForge/assets/5251713/17e96f80-3344-49ff-b236-5a2cea017efd">

Supported chat models

Simple in concept, Chat Turns were the result of two weeks' work revising many parts of the ChainForge backend to store and carry chat context. Chat history is automatically translated into the appropriate format for a number of providers:
- OpenAI chat models
- Anthropic models (Claude)
- Google PaLM2 chat
- HuggingFace (you need to set 'Model Type' in Settings to 'chat' and choose a conversational model or custom endpoint. Currently there's only one chat model listed in the ChainForge dropdown: `microsoft/DialoGPT`. Go to the HuggingFace site to find more!)

> **Warning**
> If you use a non-chat, text completions model like GPT-2, chat turns will still function, but the chat context won't be passed into the text completions model.
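To give a sense of what "translated to the appropriate format" means, here's an illustration (not ChainForge's actual code) of turning a provider-neutral chat history into an OpenAI-style `messages` list versus the `Human:`/`Assistant:` text prompt that Claude's 2023-era completions API expects:

```python
# Illustration only: per-provider chat history formats. The plain dicts below
# are simplified stand-ins, not ChainForge's internal classes.
from typing import Dict, List

def to_openai_messages(history: List[Dict[str, str]], new_prompt: str) -> List[Dict[str, str]]:
    """OpenAI chat models take a list of {role, content} messages."""
    return history + [{"role": "user", "content": new_prompt}]

def to_anthropic_prompt(history: List[Dict[str, str]], new_prompt: str) -> str:
    """Claude's completions API (circa 2023) takes an alternating
    Human:/Assistant: text prompt."""
    parts = []
    for msg in history:
        speaker = "Human" if msg["role"] == "user" else "Assistant"
        parts.append(f"\n\n{speaker}: {msg['content']}")
    parts.append(f"\n\nHuman: {new_prompt}\n\nAssistant:")
    return "".join(parts)

history = [
    {"role": "user", "content": "What was the first platformer game?"},
    {"role": "assistant", "content": "Donkey Kong (1981) is often cited."},
]
print(to_openai_messages(history, "What was the second?"))
print(to_anthropic_prompt(history, "What was the second?"))
```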

Let us know what you think!

🤖 LLM Scorer nodes

More commonly called "LLM evaluators", LLM scorer nodes allow you to use an LLM to 'grade'/score outputs of other LLMs:

<img width="342" alt="Screen Shot 2023-07-25 at 6 44 01 PM" src="https://github.com/ianarawjo/ChainForge/assets/5251713/a48d458c-9383-4040-888d-24a7c37a8f47">

Although ChainForge supported this functionality before via prompt chaining, it was not straightforward and required an additional chain to a code evaluator node for postprocessing. You can now connect the output of the scorer directly to a Vis Node to plot outputs. For instance, here's GPT-4 scoring whether different LLM responses apologized for a mistake:

<img width="1640" alt="Screen Shot 2023-07-25 at 12 31 52 PM" src="https://github.com/ianarawjo/ChainForge/assets/5251713/da7acda9-1d26-4fbf-ad73-a3b422455876">

Note that LLM scores are finicky -- if even one score isn't in the right format (true/false), visualization nodes won't work properly, because they'll treat the outputs as categorical rather than boolean. We'll work on improving this, but, for now, enjoy LLM Scorers!
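If you do hit formatting issues, one stopgap, in the spirit of the old chaining approach, is to pass the scorer's outputs through a Python Evaluator node that coerces them to strict booleans. A sketch, assuming the usual `evaluate(response)` interface where `response.text` holds the text to score:

```python
# Sketch of a normalizing evaluator: coerce an LLM scorer's free-text output
# to a strict boolean so visualization nodes see a consistent type.
def evaluate(response):
    text = response.text.strip().lower()
    # Treat "true"/"yes" (with any trailing punctuation) as True, else False.
    return text.startswith("true") or text.startswith("yes")
```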

❗ Why we're not calling LLM scorers 'LLM evaluators'
We thought long and hard about what to call LLMs that score the outputs of other LLMs. Ultimately, using LLMs to score outputs is helpful and can save time when it's hard to write code to achieve the same effect. However, LLMs are imperfect. Although the AI community currently uses the term 'LLM evaluator,' we ultimately decided against that term, for a few reasons:
1. LLM scores should not be blindly trusted. They are helpful if you already have a sense of what you're looking for, want to grade hundreds of responses, and don't need picture-perfect accuracy. We felt this especially after playing with LLM Scorer nodes for a while and finding that small tweaks to the scoring prompt can result in vast differences in results.
2. 'Evaluator,' like 'grader' or 'annotator,' is a term with human connotations (i.e., a human evaluator). We want to avoid anthropomorphizing LLMs, which contributes to people's over-trust in them. 'Scorer' still has human connotations, but arguably fewer, and less authoritative ones than 'evaluator.'
3. 'Evaluators' is a term in ChainForge that refers to programs that score responses. Calling LLM scorers 'evaluators' loosely equates them with programmatic evaluators, suggesting they carry the same authority. Although code can be wrong, the scoring process for code is inspectable and auditable -- not so with LLMs.

Fundamentally, then, we disagree with the positions taken by projects like LangChain, which tend to emphasize LLM scorers as the go-to solution for evaluation. We believe this is a massive mistake that misleads people and causes them to over-trust AI outputs, including [ML researchers at MIT](https://news.ycombinator.com/item?id=36370685). In choosing the term 'Scorers,' we aim to -- at the very least -- distance ourselves from such positions.

Other changes

* Inspecting true/false scored responses (in Evaluators or LLM Scorers) will now show `false` in red, to easily eyeball failure cases:
<img width="1575" alt="Screen Shot 2023-07-25 at 6 33 00 PM" src="https://github.com/ianarawjo/ChainForge/assets/5251713/29c6886b-7fab-4c8f-adf8-f5f76db4eede">

* In Response Inspectors, the term "Hierarchy" has been replaced with "Grouped List". Grouped Lists are again the default.
* In the table view of the response inspector, you can now choose which variable to use for columns. With this you can compare across prompt templates, or indeed anything of interest:
<img width="1583" alt="Screen Shot 2023-07-25 at 6 48 45 PM" src="https://github.com/ianarawjo/ChainForge/assets/5251713/023e2809-1b2b-4bdf-8848-cdd334e699f9">

Future Work

Chat Turns opened up a whole new can of worms, both for the UI, and for evaluation. Some open questions are:
* How can we display Chat History in response inspectors? Right now, you'll only see the latest response from the LLM. There's more design work to do such that you can view the chat context of specific responses.
* Should there be a Chat History node so you can predefine/preset chat histories to test on, without needing to query an LLM?

We hope to prioritize such features based on user feedback. If you use Chat Turns or LLM Scorers, let us know what you think -- open an Issue or start a Discussion! 👍

0.2.1.3

I've added `--host` and `--port` flags for running ChainForge locally. You can specify which hostname and port to run it on like so:


```
chainforge serve --host 0.0.0.0 --port 3400
```


The front-end app also knows you're running it from Flask (locally), regardless of the hostname and port.

0.2.1.2

There are two minor but important quality-of-life improvements in this release.

Table view

Now in response inspectors, you can elect to see a table, rather than a hierarchical grouping of prompt variables:

<img width="1460" alt="Screen Shot 2023-07-19 at 5 03 55 PM" src="https://github.com/ianarawjo/ChainForge/assets/5251713/6aca2bd7-7820-4256-9e8b-3a87795f3e50">

Columns are prompt variables, followed by LLMs. We might add the ability to change columns in the future, if there's interest.

Persistent state in response inspectors

Response inspectors' state will, to an extent, persist across runs. For instance, say you were inspecting a specific response grouping:

<img width="901" alt="Screen Shot 2023-07-19 at 5 04 21 PM" src="https://github.com/ianarawjo/ChainForge/assets/5251713/3e9d8bb6-a0ea-4f21-bc91-c498879208f4">

Imagine you now close the inspector window, delete one of the models, and then increase the number of generations per prompt to 2. You will now see:

<img width="903" alt="Screen Shot 2023-07-19 at 5 04 41 PM" src="https://github.com/ianarawjo/ChainForge/assets/5251713/776f7e30-7437-4c5d-9378-f7803ec8caec">

You're right where you left off, with the updated responses. The inspector also keeps track of whether you've selected Table view, and retains the view you last selected.

0.2.1

We've made several quality-of-life improvements from 0.2 to this release.

Prompt previews

You can now inspect what generated prompts will be sent off to LLMs. For a quick glance, simply hover over the 'list' icon on Prompt Nodes:

![hover-over-prompt-preview](https://github.com/ianarawjo/ChainForge/assets/5251713/32e47b32-38f0-4354-9c20-2f6f31c99806)

For full inspection, just click the button to bring up a popup inspector.

Thanks to Issue https://github.com/ianarawjo/ChainForge/issues/90, raised by profplum700!

Ability To Enable/Disable Prompt Variables in Text Fields Without Deleting Them

You can now enable/disable prompt variables selectively:

https://github.com/ianarawjo/ChainForge/assets/5251713/92f9c869-8201-43d0-a4a5-8aee7524319e

Thanks to Issue https://github.com/ianarawjo/ChainForge/issues/93, raised by profplum700!

Anthropic model Claude-2

We've also added the newest Claude model, Claude-2. All prior models remain supported; however, strangely, Claude-1 and the 100k-context models have disappeared from the Anthropic API documentation. So, if you are using earlier Claude models, just know that they may stop working at some point in the future.

Bug fixes

There have also been numerous bug fixes, including:
- braces `{` and `}` inside Tabular Data tables are now escaped by default when data is pulled from the nodes, so that they are never treated as prompt templates
- escaped template braces `\{` and `\}` now have the escape slash removed when generating prompts for models
- outputs of Prompt Nodes, when chained into other Prompt Nodes, now escape the braces in LLM responses by default. Note that whenever prompts are generated, the escaped braces are cleaned up to just `{` and `}`. In response inspectors, input variables will appear with escaped braces, since input variables in ChainForge may themselves be templates.

Future Goals

We've been running pilot studies internally at Harvard HCI and getting some informal feedback.
- One point that keeps coming up echoes Issue https://github.com/ianarawjo/ChainForge/issues/56, raised by jjordanbaird: the ability to keep chat context and evaluate multiple chatbot turns. We are thinking of implementing this as a `Chat Turn Node`, where, optionally, one can provide "past conversation" context as input. The overall structure will be similar to Prompt Nodes, except that only chat models will be available. See https://github.com/ianarawjo/ChainForge/issues/56 for more details.
- Another issue we're aware of is the need for better documentation on what you can do with ChainForge, particularly on the rather unique feature of chaining prompt templates together.

As always, if you have any feedback or comments, open an Issue or start a Discussion.
