Pytest-benchmark
================

Latest version: v5.1.0


3.0.0b1

--------------------

* Tests are sorted alphabetically in the results table.
* Failing to import ``statistics`` no longer causes hard failures. Benchmarks are automatically skipped if the import
  fails. This can happen on Python 3.2 (or earlier Python 3 releases).

3.0.0a4

--------------------

* Changed how failures to get commit info are handled: they are now soft failures. Previously the whole
  test suite would fail just because ``git``/``hg`` wasn't installed.

3.0.0a3

--------------------

* Added progress indication when computing stats.

3.0.0a2

--------------------

* Fixed accidental output capturing caused by misuse of pytest's capture manager.

3.0.0a1

--------------------

* Added JSON report saving (the ``--benchmark-json`` command line argument). Based on initial work from Dave Collins in
  `8 <https://github.com/ionelmc/pytest-benchmark/pull/8>`_.
* Added benchmark data storage (the ``--benchmark-save`` and ``--benchmark-autosave`` command line arguments).
* Added comparison to previous runs (the ``--benchmark-compare`` command line argument).
* Added performance regression checks (the ``--benchmark-compare-fail`` command line argument).
* Added the ability to group results by various parts of the test name (the ``--benchmark-compare-group-by`` command line argument).
* Added historical plotting (the ``--benchmark-histogram`` command line argument).
* Added option to fine tune the calibration (the ``--benchmark-calibration-precision`` command line argument and
``calibration_precision`` marker option).

* Changed ``benchmark_weave`` to no longer be a context manager. Cleanup is performed automatically.
**BACKWARDS INCOMPATIBLE**
* Added ``benchmark.weave`` method (alternative to ``benchmark_weave`` fixture).

* Added new hooks to allow customization:

* ``pytest_benchmark_generate_machine_info(config)``
* ``pytest_benchmark_update_machine_info(config, info)``
* ``pytest_benchmark_generate_commit_info(config)``
* ``pytest_benchmark_update_commit_info(config, info)``
* ``pytest_benchmark_group_stats(config, benchmarks, group_by)``
* ``pytest_benchmark_generate_json(config, benchmarks, include_data)``
* ``pytest_benchmark_update_json(config, benchmarks, output_json)``
* ``pytest_benchmark_compare_machine_info(config, benchmarksession, machine_info, compared_benchmark)``
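
As a sketch, one of the hooks above can be implemented in ``conftest.py``. The helper name here is invented for illustration, and the second parameter name may vary between versions:

```python
# conftest.py -- sketch of customizing recorded machine info via the
# pytest_benchmark_update_machine_info hook.
import platform

def extra_machine_fields():
    # Hypothetical helper: extra fields to record alongside machine info.
    return {"python_implementation": platform.python_implementation()}

def pytest_benchmark_update_machine_info(config, machine_info):
    # pytest-benchmark passes a dict; mutate it in place before it is saved.
    machine_info.update(extra_machine_fields())
```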

* Changed the timing code:

* Tracers are automatically disabled when running the test function (like coverage tracers).
* Fixed an issue with calibration code getting stuck.

* Added ``pedantic mode`` via ``benchmark.pedantic()``. This mode disables calibration and allows a setup function.

2.5.0

------------------

* Improved test suite a bit (not using ``cram`` anymore).
* Improved help text on the ``--benchmark-warmup`` option.
* Made ``warmup_iterations`` available as a marker argument (e.g.: ``pytest.mark.benchmark(warmup_iterations=1234)``).
* Fixed ``--benchmark-verbose``'s printouts to work properly with output capturing.
* Changed how warmup iterations are computed (now number of total iterations is used, instead of just the rounds).
* Fixed a bug where calibration would run forever.
* Disabled red/green coloring (it was somewhat arbitrary) when there's a single test in the results table.
