## 🔆 Release highlights
### Draft queue
We've added a new Draft queue to the Feedback Task UI, so you can save drafts of your responses and find them all together in a separate view before submitting them.
Note that responses are no longer autosaved. To save your changes, click "Save as draft" or use the shortcut `command ⌘` + `S` (macOS) or `Ctrl` + `S` (other).
### Improved shortcuts
We've been improving the keyboard shortcuts in the Feedback Task UI to make them more productive and user-friendly.
You can now select labels in Label and Multi-label questions using the number keys on your keyboard. To see which number corresponds to each label, show or hide the helpers by pressing `command ⌘` (macOS) or `Ctrl` (other) for 2 seconds; the numbers will then appear next to the corresponding labels.
We've also simplified the shortcuts for navigation and actions, so that they use as few keys as possible.
Check all available shortcuts [here](https://docs.argilla.io/en/latest/practical_guides/annotate_dataset.html#shortcuts).
### New `metrics` module
We've added a new module to analyze annotations, both in terms of agreement between annotators and for data and model drift monitoring.
#### Agreement metrics
Easily measure the inter-annotator agreement to explore the quality of the annotation guidelines and consistency between annotators:
```python
import argilla as rg
from argilla.client.feedback.metrics import AgreementMetric

feedback_dataset = rg.FeedbackDataset.from_argilla("...", workspace="...")
metric = AgreementMetric(dataset=feedback_dataset, question_name="question_name")
agreement_metrics = metric.compute("alpha")

>>> agreement_metrics
[AgreementMetricResult(metric_name='alpha', count=1000, result=0.467889)]
```
Read more [here](https://docs.argilla.io/en/latest/practical_guides/collect_responses.html#agreement-metrics).
#### Model metrics
You can use `ModelMetric` to monitor model performance and check for data and model drift:
```python
import argilla as rg
from argilla.client.feedback.metrics import ModelMetric

feedback_dataset = rg.FeedbackDataset.from_argilla("...", workspace="...")
metric = ModelMetric(dataset=feedback_dataset, question_name="question_name")
annotator_metrics = metric.compute("accuracy")

>>> annotator_metrics
{'00000000-0000-0000-0000-000000000001': [ModelMetricResult(metric_name='accuracy', count=3, result=0.5)], '00000000-0000-0000-0000-000000000002': [ModelMetricResult(metric_name='accuracy', count=3, result=0.25)], '00000000-0000-0000-0000-000000000003': [ModelMetricResult(metric_name='accuracy', count=3, result=0.5)]}
```
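Conceptually, a per-annotator `"accuracy"` compares each annotator's responses against the model's suggestions for the same records. A minimal self-contained sketch, with made-up annotator IDs and labels:

```python
def annotator_accuracy(responses: dict) -> dict:
    """For each annotator, the fraction of responses that match the
    model suggestion. `responses` maps an annotator ID to a list of
    (suggested_label, response_label) pairs."""
    return {
        annotator: sum(s == r for s, r in pairs) / len(pairs)
        for annotator, pairs in responses.items()
    }

responses = {
    "annotator-1": [("pos", "pos"), ("neg", "pos"), ("pos", "pos")],
    "annotator-2": [("pos", "neg"), ("neg", "pos"), ("neg", "neg")],
}
print(annotator_accuracy(responses))
# {'annotator-1': 0.6666666666666666, 'annotator-2': 0.3333333333333333}
```

A drop in this accuracy over time can signal drift: either the incoming data has changed or the model's suggestions have degraded relative to human judgment.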
Read more [here](https://docs.argilla.io/en/latest/practical_guides/collect_responses.html#model-metrics).
### List aggregation support for `TermsMetadataProperty`
You can now pass a list of terms within a record’s metadata that will be aggregated and filterable as part of a `TermsMetadataProperty`.
Here is an example:
```python
import argilla as rg

dataset = rg.FeedbackDataset(
    fields=...,
    questions=...,
    metadata_properties=[rg.TermsMetadataProperty(name="annotators")],
)

record = rg.FeedbackRecord(
    fields=...,
    metadata={"annotators": ["user_1", "user_2"]},
)
```
### Reindex from CLI
Reindex all entities in your Argilla instance (datasets, records, responses, etc.) with a simple CLI command.
```bash
argilla server reindex
```
This is useful when you are working with an existing feedback dataset and want to refresh the information stored in the search engine.
## [Changelog 1.21.0](https://github.com/argilla-io/argilla/compare/v1.20.0...v1.21.0)
### Added
- Added new draft queue for annotation view ([4334](https://github.com/argilla-io/argilla/pull/4334))
- Added annotation metrics module for the `FeedbackDataset` (`argilla.client.feedback.metrics`). ([4175](https://github.com/argilla-io/argilla/pull/4175)).
- Added strategy to handle and translate errors from the server for the `401` HTTP status code. ([4362](https://github.com/argilla-io/argilla/pull/4362))
- Added integration for `textdescriptives` using `TextDescriptivesExtractor` to configure `metadata_properties` in `FeedbackDataset` and `FeedbackRecord`. ([4400](https://github.com/argilla-io/argilla/pull/4400)). Contributed by m-newhauser
- Added `POST /api/v1/me/responses/bulk` endpoint to create responses in bulk for current user. ([4380](https://github.com/argilla-io/argilla/pull/4380))
- Added list support for term metadata properties. (Closes [4359](https://github.com/argilla-io/argilla/issues/4359))
- Added new CLI task to reindex datasets and records into the search engine. ([4404](https://github.com/argilla-io/argilla/pull/4404))
- Added `httpx_extra_kwargs` argument to `rg.init` and `Argilla` to allow passing extra arguments to the `httpx.Client` used by `Argilla`. ([4441](https://github.com/argilla-io/argilla/pull/4441))
### Changed
- More productive and simpler shortcuts system ([4215](https://github.com/argilla-io/argilla/pull/4215))
- Move `ArgillaSingleton`, `init` and `active_client` to a new module `singleton`. ([4347](https://github.com/argilla-io/argilla/pull/4347))
- Updated `argilla.load` functions to also work with `FeedbackDataset`s. ([4347](https://github.com/argilla-io/argilla/pull/4347))
- [breaking] Updated `argilla.delete` functions to also work with `FeedbackDataset`s. It now raises an error if the dataset does not exist. ([4347](https://github.com/argilla-io/argilla/pull/4347))
- Updated `argilla.list_datasets` functions to also work with `FeedbackDataset`s. ([4347](https://github.com/argilla-io/argilla/pull/4347))
### Fixed
- Fixed error in `TextClassificationSettings.from_dict` method in which the `label_schema` created was a list of `dict` instead of a list of `str`. ([4347](https://github.com/argilla-io/argilla/pull/4347))
- Fixed total records on pagination component ([4424](https://github.com/argilla-io/argilla/pull/4424))
### Removed
- Removed `draft` auto save for annotation view ([4334](https://github.com/argilla-io/argilla/pull/4334))