Prompttools

Latest version: v0.0.46

0.0.45

Launch of PromptTools Observability (Private Beta)

We're excited to announce the addition of observability features on our hosted platform. With just a one-line code change, they allow your team to monitor and evaluate your production usage of LLMs!

```python
import prompttools.logger
```

The new features are integrated with our open-source library as well as the [PromptTools playground](https://github.com/hegelai/prompttools/releases/tag/v0.0.41). Our goal is to help you deploy LLM applications reliably and observe any issues in real time.

If you are interested in trying out the platform, please [reach out to us](mailto:team@hegel-ai.com).

We remain committed to expanding this open-source library, and we look forward to building more development tools that help you iterate faster with AI models. Please have a look at our open issues to see what features are coming.

Major Feature Updates

OpenAI API Updates
- We have updated various experiments and examples to use OpenAI's latest features and Python API
- Make sure you are using `openai` version 1.0+ (a minimal sketch follows this list)
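
For reference, a minimal experiment in this style looks roughly like the sketch below. The parameter values are illustrative; each argument is a list of candidate values, and every combination is executed. See our example notebooks for full walkthroughs.

```python
# Minimal sketch of an experiment with openai>=1.0 installed.
# Parameter values here are illustrative only.
from prompttools.experiment import OpenAIChatExperiment

models = ["gpt-3.5-turbo", "gpt-4"]
messages = [
    [{"role": "user", "content": "Is 17077 a prime number?"}],
]
experiment = OpenAIChatExperiment(models, messages, temperature=[0.0, 1.0])
experiment.run()
experiment.visualize()
```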

Moderation API
- We have integrated with OpenAI's moderation API as an eval function
- This allows you to check whether your experiments' responses (from any LLM) violate content moderation policies (such as violence or harassment); a sketch follows this list.
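
As a rough sketch of the idea, given an already-run experiment, an eval function can call the moderation endpoint on each response and record whether it was flagged. The `"response"` column name below is an assumption based on our result tables; see the example notebooks for the integrated helper.

```python
# Sketch of a moderation-based eval function (assumes openai>=1.0 and an
# OPENAI_API_KEY in the environment). Each result row is assumed to expose
# the model output under a "response" column.
from openai import OpenAI

client = OpenAI()

def moderation_flagged(row, **kwargs) -> bool:
    # True if OpenAI's moderation endpoint flags the response.
    result = client.moderations.create(input=row["response"])
    return result.results[0].flagged

# Adds a "moderation_flag" column to the experiment's results.
experiment.evaluate("moderation_flag", moderation_flagged)
```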

Hosted APIs
- Production logging API
- [Contact us](mailto:team@hegel-ai.com) if you would like to get started with our hosted observability features!

Community

If you have suggestions on the API or use cases you'd like covered, please open a GitHub issue. We'd love to hear your thoughts and feedback. As always, we welcome new contributors to our repo, and we have a few good first issues to get you started.

**Full Changelog**: https://github.com/hegelai/prompttools/compare/v0.0.41...v0.0.45

0.0.41

Launch of PromptTools Playground (Private Beta)

We're excited to announce the private beta of PromptTools Playground! It is a hosted platform integrated with our open-source library. It persists your experiments with version control and provides collaboration features suited for teams.

If you are interested in trying out the platform, please [reach out to us](mailto:team@hegel-ai.com). We remain committed to expanding this open-source library, and we look forward to building more development tools that help you iterate faster with AI models.

Major Feature Updates

New Harnesses
- ChatPromptTemplateExperimentationHarness
- ModelComparisonHarness

Experimental APIs
- `run_one` and `run_partial` for `OpenAIChatExperiment`
- You no longer have to re-run the entire experiment! You can now execute just the parameter combinations you care about; see the sketch after this list.
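
The exact signatures may differ from this sketch; here we assume `run_partial` takes a single new value for one of the experiment's arguments and executes only the newly added combinations.

```python
# Hypothetical sketch -- exact signatures may differ; check the API docs.
# Given an experiment that has already been run once:
experiment.run()

# Assumed usage: try one additional model value without re-running the
# combinations that already have results.
experiment.run_partial(model="gpt-4-1106-preview")
```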

Hosted APIs
- Save, load, and share your experiments through our hosted playground (see the sketch after this list)
- `save_experiment`
- `load_experiment`
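
A sketch of the intended workflow follows; the environment-variable name, the experiment name argument, and the classmethod-style load are all assumptions, so follow the onboarding instructions for specifics.

```python
import os

# Assumed variable name for the hosted platform's API key.
os.environ["HEGELAI_API_KEY"] = "your-key"

experiment.run()
experiment.save_experiment("my-prompt-comparison")  # assumed: a shareable name

# Later, or on a teammate's machine (assumed: load by the same identifier):
from prompttools.experiment import OpenAIChatExperiment
restored = OpenAIChatExperiment.load_experiment("my-prompt-comparison")
```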

Community

If you have suggestions on the API or use cases you'd like to be covered, please open a GitHub issue. We'd love to hear thoughts and feedback. As always, we welcome new contributors to our repo and we have a few good first issues to get you started.

0.0.35

Major Feature Updates

New APIs
- Google Vertex AI
- Azure OpenAI Service
- Replicate
- Stable Diffusion
- Pinecone
- Qdrant
- Retrieval-Augmented Generation (RAG)

Utility Functions
- `chunk_text`
- `autoeval_with_documents`
- `structural_similarity`
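
For example, `chunk_text` can prepare documents for a RAG experiment. The signature below (text plus a maximum chunk length in characters) is an assumption; check `prompttools.utils` for the exact parameters.

```python
# Sketch: split a long document into retrieval-sized chunks.
# The max_chunk_length parameter name and unit (characters) are assumed.
from prompttools.utils import chunk_text

document = "PromptTools lets you test prompts across models and vector DBs. " * 40
chunks = chunk_text(document, max_chunk_length=200)
print(f"{len(chunks)} chunks, longest is {max(len(c) for c in chunks)} chars")
```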

Community
Shout out to HashemAlsaket, bweber-rebellion, imalsky, and kacperlukawski for actively participating and contributing new features!

If you have suggestions on the API or use cases you'd like covered, please open a GitHub issue. We'd love to hear your thoughts and feedback. As always, we welcome new contributors to our repo, and we have a few good first issues to get you started.

If you are interested in a hosted version of `prompttools` with more features for your team, please reach out.

0.0.33

Major Feature Updates

New APIs
- `OpenAIChatExperiment` can now call functions (see the sketch after this list).
- LangChain Sequential Chain
- LangChain Router Chain
- LanceDB
- Initial support for benchmarking (with HellaSwag)
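
For function calling, the setup might look like the following sketch; we assume function schemas are passed like any other swept keyword argument (a list of candidate values), with the schema itself in OpenAI's function-calling format.

```python
# Sketch of function calling inside an experiment (parameter passing assumed).
from prompttools.experiment import OpenAIChatExperiment

weather_function = {
    "name": "get_current_weather",
    "description": "Get the current weather in a given location",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {"type": "string", "description": "City, e.g. Boston, MA"},
        },
        "required": ["location"],
    },
}

experiment = OpenAIChatExperiment(
    ["gpt-3.5-turbo"],
    [[{"role": "user", "content": "What is the weather like in Boston?"}]],
    functions=[[weather_function]],  # assumed: one candidate list of schemas
)
experiment.run()
```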

Other Improvements
We have also made many fixes and improvements to various experiments. Notably, we refactored how `evaluate` works: in this version, the evaluation function passed into `experiment.evaluate()` should handle a row of data plus other optional keyword arguments. Please see our updated example notebooks as references.
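
As a minimal illustration of the new contract (the `"response"` column name below is an assumption based on our result tables):

```python
# New-style eval function: receives one row of the results table plus
# optional keyword arguments, and returns a score for that row.
def response_length(row, **kwargs) -> int:
    return len(row["response"])  # "response" column name is assumed

# Adds a "response_length" column to the experiment's results.
experiment.evaluate("response_length", response_length)
```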

Playground

The playground now supports shareable links. You can use the `Share` button to create a link and share your experiment setup with your teammates.


Community
Shout out to HashemAlsaket, AyushExel, pramitbhatia25, and mmmaia for actively participating and contributing new features!

If you have suggestions on the API or use cases you'd like covered, please open a GitHub issue. We'd love to hear your thoughts and feedback. As always, we welcome new contributors to our repo, and we have a few good first issues to get you started.

0.0.22

Major features added recently:

New APIs:
- Anthropic Claude
- Google PaLM
- Chroma
- Weaviate
- MindsDB

Playground

If you would like to execute your experiments in a Streamlit UI rather than in a notebook, you can do so with:

```bash
pip install prompttools
git clone https://github.com/hegelai/prompttools.git
cd prompttools && streamlit run prompttools/playground/playground.py
```

Community
Shout out to HashemAlsaket for actively participating and contributing new features!

If you have suggestions on the API or use cases you'd like covered, please open a GitHub issue. We'd love to hear your thoughts and feedback. As always, we welcome new contributors to our repo, and we have a few good first issues to get you started.
