## 🔆 Highlights

### New `RankingQuestion` in Feedback Task datasets

You can now include `RankingQuestion`s in your Feedback datasets. These are specially designed to gather feedback on labelers' preferences by providing a set of options that labelers can order.
Here's how you can add a `RankingQuestion` to a `FeedbackDataset`:
```python
import argilla as rg

dataset = rg.FeedbackDataset(
    fields=[
        rg.TextField(name="prompt"),
        rg.TextField(name="reply-1", title="Reply 1"),
        rg.TextField(name="reply-2", title="Reply 2"),
        rg.TextField(name="reply-3", title="Reply 3"),
    ],
    questions=[
        rg.RankingQuestion(
            name="ranking",
            title="Order replies based on your preference",
            description="1 = best, 3 = worst. Ties are allowed.",
            required=True,
            # `values` accepts a dict mapping values to display labels,
            # or a plain list such as ["reply-1", "reply-2", "reply-3"]
            values={"reply-1": "Reply 1", "reply-2": "Reply 2", "reply-3": "Reply 3"},
        )
    ]
)
```
More info in [our docs](https://docs.argilla.io/en/latest/guides/llms/practical_guides/create_dataset.html#define-questions).
### Extended training support
You can now format responses from `RatingQuestion`, `LabelQuestion` and `MultiLabelQuestion` for your preferred training framework using the `prepare_for_training` method.
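As a minimal sketch of what that can look like (assuming the same Stack Overflow demo dataset used below, and that `prepare_for_training` mirrors the trainer's `framework`/`task_mapping` arguments):
```python
import argilla.feedback as rg

# Load a public demo dataset from the Hugging Face Hub.
dataset = rg.FeedbackDataset.from_huggingface(
    repo_id="argilla/stackoverflow_feedback_demo"
)

# Map a text field and a label-style question to a text classification task.
task_mapping = rg.TrainingTaskMapping.for_text_classification(
    text=dataset.field_by_name("question"),
    label=dataset.question_by_name("tags")
)

# Assumed call: returns the collected responses formatted for the chosen framework.
train_dataset = dataset.prepare_for_training(
    framework="transformers",
    task_mapping=task_mapping
)
```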
Also, we've added support for `spacy-transformers` in our Argilla Trainer.
Here's an example code snippet:
```python
import argilla.feedback as rg

dataset = rg.FeedbackDataset.from_huggingface(
    repo_id="argilla/stackoverflow_feedback_demo"
)
task_mapping = rg.TrainingTaskMapping.for_text_classification(
    text=dataset.field_by_name("question"),
    label=dataset.question_by_name("tags")
)
trainer = rg.ArgillaTrainer(
    dataset=dataset,
    task_mapping=task_mapping,
    framework="spacy-transformers",
    fetch_records=False
)
trainer.update_config(num_train_epochs=2)
trainer.train(output_dir="my_awesome_model")
```
To learn more about how to use the Argilla Trainer, check [our docs](https://docs.argilla.io/en/latest/guides/llms/practical_guides/fine_tune_others.html).
## [Changelog 1.12.0](https://github.com/argilla-io/argilla/compare/v1.11.0...v1.12.0)

### Added
- Added `RankingQuestionSettings` class, allowing ranking questions to be created in the API via the `POST /api/v1/datasets/{dataset_id}/questions` endpoint (see the request sketch after this list) ([#3232](https://github.com/argilla-io/argilla/pull/3232)).
- Added `RankingQuestion` in the Python client to create ranking questions ([#3275](https://github.com/argilla-io/argilla/issues/3275)).
- Added `Ranking` component in the feedback task question form ([#3177](https://github.com/argilla-io/argilla/pull/3177) & [#3246](https://github.com/argilla-io/argilla/pull/3246)).
- Added `FeedbackDataset.prepare_for_training` method for generating a framework-specific dataset with the responses provided for `RatingQuestion`, `LabelQuestion` and `MultiLabelQuestion` ([#3151](https://github.com/argilla-io/argilla/pull/3151)).
- Added `ArgillaSpaCyTransformersTrainer` class to support training with `spacy-transformers` ([#3256](https://github.com/argilla-io/argilla/pull/3256)).
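For illustration only, here is a hedged sketch of what a request to the new questions endpoint might look like; the payload field names (`settings`, `options`, etc.), the API key, and the dataset ID are assumptions, not taken from the API reference:
```python
import requests

API_URL = "http://localhost:6900"  # assumed local Argilla server
DATASET_ID = "00000000-0000-0000-0000-000000000000"  # placeholder dataset UUID

# NOTE: the body below is an assumed shape for a ranking question, not the documented schema.
response = requests.post(
    f"{API_URL}/api/v1/datasets/{DATASET_ID}/questions",
    headers={"X-Argilla-Api-Key": "owner.apikey"},  # assumed default API key
    json={
        "name": "ranking",
        "title": "Order replies based on your preference",
        "required": True,
        "settings": {
            "type": "ranking",
            "options": [
                {"value": "reply-1", "text": "Reply 1"},
                {"value": "reply-2", "text": "Reply 2"},
                {"value": "reply-3", "text": "Reply 3"},
            ],
        },
    },
)
response.raise_for_status()
print(response.json())
```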
### Changed
- All Docker-related files have been moved into the `docker` folder ([#3053](https://github.com/argilla-io/argilla/pull/3053)).
- `release.Dockerfile` has been renamed to `Dockerfile` ([#3133](https://github.com/argilla-io/argilla/pull/3133)).
- Updated the `rg.load` function to raise a `ValueError` with an explanatory message when a user tries to use it to load a `FeedbackDataset` ([#3289](https://github.com/argilla-io/argilla/pull/3289)).
- Updated `ArgillaSpaCyTrainer` to allow re-using `tok2vec` ([#3256](https://github.com/argilla-io/argilla/pull/3256)).
### Fixed
- Check available workspaces on Argilla in `rg.set_workspace` (closes [#3262](https://github.com/argilla-io/argilla/issues/3262)).
## New Contributors
* garimau made their first contribution in https://github.com/argilla-io/argilla/pull/3255
* adurante92 made their first contribution in https://github.com/argilla-io/argilla/pull/3242
**Full Changelog**: https://github.com/argilla-io/argilla/compare/v1.11.0...v1.12.0