--------------------
* Added JSON report saving (the ``--benchmark-json`` command line argument). Based on initial work from Dave Collins in
  `#8 <https://github.com/ionelmc/pytest-benchmark/pull/8>`_.
* Added benchmark data storage (the ``--benchmark-save`` and ``--benchmark-autosave`` command line arguments).
* Added comparison to previous runs (the ``--benchmark-compare`` command line argument).
* Added performance regression checks (the ``--benchmark-compare-fail`` command line argument).
* Added the ability to group results by various parts of the test name (the ``--benchmark-compare-group-by`` command line argument).
* Added historical plotting (the ``--benchmark-histogram`` command line argument).
* Added an option to fine-tune the calibration (the ``--benchmark-calibration-precision`` command line argument and the
  ``calibration_precision`` marker option).
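
Assuming the options above combine like ordinary pytest flags, a save/compare workflow might look like the following sketch (the saved-run id ``0001`` and the ``min:5%`` threshold are illustrative values, not defaults):

```shell
# Save a named baseline run (results are stored on disk for later comparison)
pytest tests/ --benchmark-save=baseline

# Compare against a previously saved run and fail the suite on regressions
pytest tests/ --benchmark-compare=0001 --benchmark-compare-fail=min:5%

# Export the raw results as JSON and render histograms of the runs
pytest tests/ --benchmark-json=results.json --benchmark-histogram
```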
* Changed ``benchmark_weave`` to no longer be a context manager; cleanup is performed automatically.
  **BACKWARDS INCOMPATIBLE**
* Added ``benchmark.weave`` method (alternative to ``benchmark_weave`` fixture).
* Added new hooks to allow customization:

  * ``pytest_benchmark_generate_machine_info(config)``
  * ``pytest_benchmark_update_machine_info(config, info)``
  * ``pytest_benchmark_generate_commit_info(config)``
  * ``pytest_benchmark_update_commit_info(config, info)``
  * ``pytest_benchmark_group_stats(config, benchmarks, group_by)``
  * ``pytest_benchmark_generate_json(config, benchmarks, include_data)``
  * ``pytest_benchmark_update_json(config, benchmarks, output_json)``
  * ``pytest_benchmark_compare_machine_info(config, benchmarksession, machine_info, compared_benchmark)``

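
As a sketch of how the ``update`` hooks can be used (exact semantics should be checked against the docs), a ``conftest.py`` could enrich the recorded machine info; the ``python_build`` key here is our own illustrative addition, not a built-in field:

```python
# conftest.py -- hedged sketch: add a custom field to the machine info
# that ends up in saved/JSON reports
import platform

def pytest_benchmark_update_machine_info(config, info):
    # info is a plain dict of machine metadata; extra keys are carried
    # along into the report ("python_build" is a hypothetical key name)
    info["python_build"] = platform.python_build()
```

pytest discovers hooks defined in ``conftest.py`` automatically, so no registration call is needed.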
* Changed the timing code:

  * Tracers are now automatically disabled when running the test function (like coverage tracers).
  * Fixed an issue with the calibration code getting stuck.

* Added *pedantic mode* via ``benchmark.pedantic()``. This mode disables calibration and allows a setup function.