Chart-review

Latest version: v2.0.1

2.0.1

Fixed running on Python 3.9.

2.0.0

This release re-organizes the CLI commands a bit and adds some new ones!

Breaking changes

- `chart-review info` has been removed, as it was in danger of becoming a kitchen sink with no clear purpose. Instead:
  - The default `chart-review info` output of annotators and their chart ranges has been moved to the default `chart-review` output when you don't provide a sub-command.
  - `chart-review info --ids` has been moved to `chart-review ids` (and now prints a human-readable table by default; pass `--csv` to get the previous CSV-formatted output).
  - `chart-review info --labels` has been moved to `chart-review labels`.

New features

- You can now see a list of all text mentions and how they were labeled with `chart-review mentions`:

```
$ chart-review mentions
╭───────────┬──────────┬─────────┬───────╮
│ Annotator │ Chart ID │ Mention │ Label │
├───────────┼──────────┼─────────┼───────┤
│ mike      │ 1        │ sigh    │ sad   │
│ mike      │ 1        │ woo     │ happy │
╰───────────┴──────────┴─────────┴───────╯
```


- You can now see a count of how often each text mention was labeled with `chart-review frequency`:

```
$ chart-review frequency
╭───────────┬───────┬─────────┬───────╮
│ Annotator │ Label │ Mention │ Count │
├───────────┼───────┼─────────┼───────┤
│ All       │ happy │ woo     │ 1     │
│ All       │ sad   │ sigh    │ 4     │
├───────────┼───────┼─────────┼───────┤
│ andy      │ happy │ woo     │ 1     │
│ andy      │ sad   │ sigh    │ 1     │
├───────────┼───────┼─────────┼───────┤
│ april     │ happy │ woo     │ 1     │
│ april     │ sad   │ sigh    │ 3     │
╰───────────┴───────┴─────────┴───────╯
```


- You can now pass `chart-review accuracy --verbose` to see a per-label breakdown of the True Positives / False Negatives / etc:

```
$ chart-review accuracy jill jane --verbose
╭──────────┬──────────┬────────────────╮
│ Chart ID │ Label    │ Classification │
├──────────┼──────────┼────────────────┤
│ 1        │ Cough    │ TP             │
│ 1        │ Fatigue  │ TP             │
├──────────┼──────────┼────────────────┤
│ 4        │ Cough    │ FP             │
│ 4        │ Fatigue  │ TP             │
╰──────────┴──────────┴────────────────╯
```


- All commands have gained a `--csv` argument to print a CSV-formatted version of the command's output
- You can now see the current version with `chart-review --version`
- You can now use the shortened `-p` flag instead of the long-form `--project-dir`

Other changes

- When you run `chart-review` in a project that isn't properly set up yet (no config.yaml or Label Studio export), a much clearer error is printed, pointing to the web documentation
- The `chart-review accuracy --save` option has been deprecated and hidden from the `--help` output. Use the now-standardized `accuracy --csv` option and redirect the output to a file instead

1.2.0

This release adds a new `info` command.

By default, it shows some basic info about annotators and note ranges:

```
$ chart-review info
╭───────────┬─────────────┬───────────╮
│ Annotator │ Chart Count │ Chart IDs │
├───────────┼─────────────┼───────────┤
│ jane      │ 3           │ 1, 3–4    │
│ jill      │ 4           │ 1–4       │
│ john      │ 3           │ 1–2, 4    │
╰───────────┴─────────────┴───────────╯
```


But it also supports two other modes:

`info --ids`
Prints a CSV of all ID mappings: Label Studio chart -> original FHIR -> anonymized FHIR.

Example:

```
$ chart-review info --ids
chart_id,original_fhir_id,anonymized_fhir_id
1,Encounter/ABC-Enc,Encounter/Anon-ABC-Enc
1,DocumentReference/ABC,DocumentReference/Anon-ABC
...
```
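If you want to post-process these mappings, the standard library's `csv` module is enough. A minimal sketch, using the two sample rows above (the `chart_for` name is just for illustration):

```python
import csv
import io

# The two concrete rows from the example output above (the real output continues).
data = """chart_id,original_fhir_id,anonymized_fhir_id
1,Encounter/ABC-Enc,Encounter/Anon-ABC-Enc
1,DocumentReference/ABC,DocumentReference/Anon-ABC
"""

# Build a mapping from original FHIR ID back to the Label Studio chart ID.
chart_for = {
    row["original_fhir_id"]: row["chart_id"]
    for row in csv.DictReader(io.StringIO(data))
}

print(chart_for["Encounter/ABC-Enc"])  # prints "1"
```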


`info --labels`
Prints a table of how often each label was used by each annotator.

Example:

```
$ chart-review info --labels
╭───────────┬─────────────┬──────────╮
│ Annotator │ Chart Count │ Label    │
├───────────┼─────────────┼──────────┤
│ Any       │ 2           │ Cough    │
│ Any       │ 3           │ Fatigue  │
│ Any       │ 3           │ Headache │
├───────────┼─────────────┼──────────┤
│ jane      │ 1           │ Cough    │
│ jane      │ 2           │ Fatigue  │
│ jane      │ 2           │ Headache │
├───────────┼─────────────┼──────────┤
│ jill      │ 2           │ Cough    │
│ jill      │ 3           │ Fatigue  │
│ jill      │ 0           │ Headache │
├───────────┼─────────────┼──────────┤
│ john      │ 1           │ Cough    │
│ john      │ 2           │ Fatigue  │
│ john      │ 2           │ Headache │
╰───────────┴─────────────┴──────────╯
```

1.1.0

This release adds a `Kappa` field to the `accuracy` command's generated scores.

This is [Cohen's Kappa](https://en.wikipedia.org/wiki/Cohen's_kappa), which might be useful in addition to the F1 score.
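As a refresher, kappa discounts the agreement two annotators would reach by chance. This is the standard textbook definition, not chart-review's own code:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators' label choices on the same charts."""
    n = len(labels_a)
    # Observed agreement: fraction of charts where the two annotators agree.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement, from each annotator's overall label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```

Perfect agreement yields 1.0, and agreement no better than chance yields 0.0, which is why it can be a useful complement to F1.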

1.0.0

Initial release!

The only current command is `chart-review accuracy` which will calculate F1 scores, a confusion matrix, and some other stats between two annotators.
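The metric itself can be sketched in a few lines (an illustration of micro-averaged F1 over label sets, not chart-review's actual implementation; the sample data mirrors the `--verbose` example above):

```python
def f1_score(truth, annotations):
    """Micro-averaged F1 between two annotators.

    Each argument maps chart ID -> set of labels applied to that chart,
    with the first annotator treated as ground truth.
    """
    tp = fp = fn = 0
    for chart in truth.keys() | annotations.keys():
        t = truth.get(chart, set())
        a = annotations.get(chart, set())
        tp += len(t & a)  # labels both annotators applied: true positives
        fp += len(a - t)  # labels only the second annotator applied
        fn += len(t - a)  # labels only the truth annotator applied
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

truth = {1: {"Cough", "Fatigue"}, 4: {"Fatigue"}}
annotations = {1: {"Cough", "Fatigue"}, 4: {"Cough", "Fatigue"}}
```

Here chart 4's extra "Cough" is the lone false positive (3 TP, 1 FP, 0 FN), giving precision 0.75, recall 1.0, and F1 of 6/7.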
