dfcx-scrapi

Latest version: v1.12.5


1.12.5

What's Changed
* Fix/lang code nlu evals by kmaphoenix in https://github.com/GoogleCloudPlatform/dfcx-scrapi/pull/255


**Full Changelog**: https://github.com/GoogleCloudPlatform/dfcx-scrapi/compare/1.12.4...1.12.5

1.12.4

What's Changed
* Fix/tool call quality bug by kmaphoenix in https://github.com/GoogleCloudPlatform/dfcx-scrapi/pull/253


**Full Changelog**: https://github.com/GoogleCloudPlatform/dfcx-scrapi/compare/1.12.3...1.12.4

1.12.3

What's Changed
* Feature/conversation rebase by kmaphoenix in https://github.com/GoogleCloudPlatform/dfcx-scrapi/pull/240
* Fix/default creds inheritance by kmaphoenix in https://github.com/GoogleCloudPlatform/dfcx-scrapi/pull/246
* Prevent IndexError in collect_playbook_responses when not in playbook by SeanScripts in https://github.com/GoogleCloudPlatform/dfcx-scrapi/pull/244
* feat: add support for flow invoke; clean up creds passing in evals by kmaphoenix in https://github.com/GoogleCloudPlatform/dfcx-scrapi/pull/248
* Fix/optional tool call metrics evals by kmaphoenix in https://github.com/GoogleCloudPlatform/dfcx-scrapi/pull/250
* Fix/support lang code conversation by kmaphoenix in https://github.com/GoogleCloudPlatform/dfcx-scrapi/pull/251


**Full Changelog**: https://github.com/GoogleCloudPlatform/dfcx-scrapi/compare/1.12.2...1.12.3

1.12.2

Enhancements
- Added support for `language_code` on all applicable methods in Flows class
- Added support for `parameters` when using the Datastore Evaluations class and notebook
- Added support for Playbook Versions
- New notebook to check status of datastores and search for Datastore IDs, Doc IDs, and URLs
- Added helper methods for Search to make listing urls / doc ids / documents much easier for users

Bug Fix
- Fixed bug in CopyUtil class that was causing the `create_entity_type` method to fail
- Fixed a bug in Dataframe Functions which was causing scopes to not be inherited properly
- Fixed links in the new Vertex Agents Evals notebook for GitHub and GCP Workbench launching to point to the correct location

What's Changed
* fix: add support for language_code on applicable methods by kmaphoenix in https://github.com/GoogleCloudPlatform/dfcx-scrapi/pull/222
* fix: update copy_util to resolve bug issue 192 by my3sons in https://github.com/GoogleCloudPlatform/dfcx-scrapi/pull/205
* Feat/parameter support datastore evals by kmaphoenix in https://github.com/GoogleCloudPlatform/dfcx-scrapi/pull/225
* feat: add support for playbook versions by kmaphoenix in https://github.com/GoogleCloudPlatform/dfcx-scrapi/pull/226
* Fix/scopes dataframe functions by kmaphoenix in https://github.com/GoogleCloudPlatform/dfcx-scrapi/pull/228
* Update vertex_agents_evals.ipynb by YuncongZhou in https://github.com/GoogleCloudPlatform/dfcx-scrapi/pull/231
* Feature/datastoreindexurls by agutta in https://github.com/GoogleCloudPlatform/dfcx-scrapi/pull/235
* Feat/add vais search methods by kmaphoenix in https://github.com/GoogleCloudPlatform/dfcx-scrapi/pull/237
* chore: update notebook to use latest scrapi code by kmaphoenix in https://github.com/GoogleCloudPlatform/dfcx-scrapi/pull/238

New Contributors
* my3sons made their first contribution in https://github.com/GoogleCloudPlatform/dfcx-scrapi/pull/205
* YuncongZhou made their first contribution in https://github.com/GoogleCloudPlatform/dfcx-scrapi/pull/231
* agutta made their first contribution in https://github.com/GoogleCloudPlatform/dfcx-scrapi/pull/235

**Full Changelog**: https://github.com/GoogleCloudPlatform/dfcx-scrapi/compare/1.12.1...1.12.2

1.12.1

Bug Fix
- Patch to require `google-cloud-aiplatform` as part of the setuptools dependencies
- The missing `google-cloud-aiplatform` dependency was causing import errors in classes that rely on `vertexai` as an import

**Full Changelog**: https://github.com/GoogleCloudPlatform/dfcx-scrapi/compare/1.12.0...1.12.1

1.12.0

New Features
Evaluations are here! 🎉
---

What are Evaluations? 📝 📈
We know that building an Agent is only part of the journey.
Understanding how that Agent responds to real-world queries is a key indicator of how it will perform in Production.
Running evaluations, or "evals", allows Agent developers to quickly identify "losses", or areas of opportunity for improving Agent design.

Evals can provide answers to questions like:
* What is the current performance baseline for my Agent?
* How is my Agent performing after the most recent changes?
* If I switch to a new LLM, how does that change my Agent's performance?

Evaluation Toolsets in SCRAPI 🛠️🐍
For this latest release, we have included two specific Eval setups for developers to use with Agent Builder and Dialogflow CX Agents.
1. [DataStore Evaluations](https://github.com/GoogleCloudPlatform/dfcx-scrapi/blob/main/examples/vertex_ai_conversation/evaluation_tool__autoeval__colab.ipynb)
2. [Multi-turn, Multi-Agent w/ Tool Calling Evaluations](https://github.com/GoogleCloudPlatform/dfcx-scrapi/blob/main/examples/vertex_ai_conversation/vertex_agents_evals.ipynb)

These are offered as two distinct evaluation toolsets for a few reasons:
* They support different build architectures in DFCX vs. Agent Builder
* They support different metrics based on the task you are trying to evaluate
* They support different tool calling setups: Native DataStores vs. arbitrary custom tools

Metrics by Toolset 📏
The following metrics are currently supported for each toolset.
Additional metrics will be added over time to support various other evaluation needs.
- DataStore Evaluations
  - `Url Match`
  - `Context Recall`
  - `Faithfulness`
  - `Answer Correctness`
  - `RougeL`
- Multi-Turn, Multi-Agent w/ Tool Calling Evaluations
  - `Semantic Similarity`
  - `Exact Match Tool Quality`
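To make the idea behind `Exact Match Tool Quality` concrete, here is a minimal, self-contained sketch; this is not SCRAPI's actual implementation, and the function name and data shapes are hypothetical, chosen only to illustrate scoring a conversation by exact comparison of expected vs. observed tool calls (tool name plus input parameters):

```python
# Illustrative sketch of an exact-match tool-call quality score.
# NOT the dfcx-scrapi implementation; names and shapes are hypothetical.

def exact_match_tool_quality(expected_calls, actual_calls):
    """Fraction of expected tool calls reproduced exactly (name + params)."""
    if not expected_calls:
        return 1.0  # nothing to match, trivially perfect
    matches = sum(
        1 for exp, act in zip(expected_calls, actual_calls)
        if exp == act  # exact match on the whole call dict
    )
    return matches / len(expected_calls)

expected = [
    {"tool": "flight_search", "params": {"destination": "Paris"}},
    {"tool": "hotel_search", "params": {"city": "Paris", "stars": 4}},
]
actual = [
    {"tool": "flight_search", "params": {"destination": "Paris"}},
    {"tool": "hotel_search", "params": {"city": "Paris", "stars": 5}},  # param mismatch
]

print(exact_match_tool_quality(expected, actual))  # 0.5
```

The all-or-nothing comparison is what distinguishes this style of metric from the fuzzier `Semantic Similarity` score used for free-text responses.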

Getting Started with Evaluations 🏁
1. Start by choosing your Eval toolset based on the Agent architecture you are evaluating
   - [DataStore Evaluations](https://github.com/GoogleCloudPlatform/dfcx-scrapi/blob/main/examples/vertex_ai_conversation/evaluation_tool__autoeval__colab.ipynb)
   - [Multi-turn, Multi-Agent w/ Tool Calling Evaluations](https://github.com/GoogleCloudPlatform/dfcx-scrapi/blob/main/examples/vertex_ai_conversation/vertex_agents_evals.ipynb)
2. Build an Evaluation Dataset. You can find detailed information about the dataset formats in each of the toolset instructions
3. Run your evals!

Example Eval Setup for Multi-Turn, Multi-Agent w/ Tools
```py
import pandas as pd
from dfcx_scrapi.tools.evaluations import Evaluations
from dfcx_scrapi.tools.evaluations import DataLoader

data = DataLoader()

INPUT_SCHEMA_REQUIRED_COLUMNS = ['eval_id', 'action_id', 'action_type', 'action_input', 'action_input_parameters', 'tool_action', 'notes']

sample_df = pd.DataFrame(columns=INPUT_SCHEMA_REQUIRED_COLUMNS)

sample_df.loc[0] = ["travel-ai-001", 1, "User Utterance", "Paris", "", "", ""]
sample_df.loc[1] = ["travel-ai-001", 2, "Playbook Invocation", "Travel Inspiration", "", "", ""]
sample_df.loc[2] = ["travel-ai-001", 3, "Agent Response", "Paris is a beautiful city! Here are a few things you might enjoy doing there:\n\nVisit the Eiffel Tower\nTake a walk along the Champs-Élysées\nVisit the Louvre Museum\nSee the Arc de Triomphe\nTake a boat ride on the Seine River", "", "", ""]

sample_df = data.from_dataframe(sample_df)
agent_id = "projects/your-project/locations/us-central1/agents/11111-2222-33333-44444"  # Example Agent
evals = Evaluations(agent_id, metrics=["response_similarity", "tool_call_quality"])
eval_results = evals.run_query_and_eval(sample_df.head(10))
```

What's Changed
* Feat/evaluations by kmaphoenix in https://github.com/GoogleCloudPlatform/dfcx-scrapi/pull/217
* Feat/evals notebook by kmaphoenix in https://github.com/GoogleCloudPlatform/dfcx-scrapi/pull/218


**Full Changelog**: https://github.com/GoogleCloudPlatform/dfcx-scrapi/compare/1.11.2...1.12.0
