We are happy to announce release `0.9.0` of the Averbis Python API.
## Evaluation API Support
We now provide experimental support for evaluating text analysis results via the Averbis Python API. If you have two processes, say a `gold_process` containing an annotated gold-standard dataset and a second process, e.g. `discharge_process`, obtained by running a pipeline over the documents of the same dataset, you can compare the annotations they contain. In a simple scenario, the following code snippet compares the spans of all `Diagnosis` annotations.
```python
from averbis import EvaluationConfiguration, Process  # import location assumed

# Compare Diagnosis annotations by their begin/end offsets
eval_config = EvaluationConfiguration("de.averbis.types.health.Diagnosis", ["begin", "end"])
eval_process: Process = discharge_process.evaluate_against(
    gold_process,
    process_name=f"{collection_name}-discharge-eval",
    evaluation_configurations=[eval_config],
)
```
One can then download the results of the `eval_process` and proceed, e.g. by computing metrics over the whole dataset or by inspecting differences between the annotations. The `EvaluationConfiguration` can also be set up to compare specific features or to use a different matching criterion, e.g. `OVERLAP_MATCH` via `eval_config.use_overlap_partial_match()`; a small sketch follows below. The Evaluation API is available in Health Discovery version `>= 6.11.0`.
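As a brief illustration of these options, the sketch below configures an evaluation that also compares an additional feature and counts overlapping spans as matches instead of requiring exact offsets. The extra feature name `kind` and the process name suffix are illustrative placeholders, not part of the release.

```python
# Sketch: compare an additional feature besides the span offsets;
# "kind" is a placeholder, not necessarily a real Diagnosis feature.
eval_config = EvaluationConfiguration(
    "de.averbis.types.health.Diagnosis",
    ["begin", "end", "kind"],
)

# Count overlapping spans as matches (OVERLAP_MATCH) instead of
# requiring identical begin/end offsets.
eval_config.use_overlap_partial_match()

eval_process = discharge_process.evaluate_against(
    gold_process,
    process_name=f"{collection_name}-discharge-overlap-eval",
    evaluation_configurations=[eval_config],
)
```

Overlap matching is more lenient than exact span matching, which can be useful when small boundary differences (e.g. trailing punctuation) should not count as errors.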
## What's Changed
* Issue 121: Update URL for list_projects by UWinch in https://github.com/averbis/averbis-python-api/pull/124
* Issue 120: Compare results of textanalysis processes by UWinch in https://github.com/averbis/averbis-python-api/pull/122
* Issue 123: Add flag `exist_ok` when creating a project by DavidHuebner in https://github.com/averbis/averbis-python-api/pull/125
**Full Changelog**: https://github.com/averbis/averbis-python-api/compare/0.8.0...0.9.0