Hypothesis


1.5.0
------------------


Codename: Strategic withdrawal.

The purpose of this release is a radical simplification of the API for building
strategies. Instead of the old approach of strategy.extend and things that
get converted to strategies, you just build strategies directly.

Because removing it would be a major breaking change, the old method of
defining strategies will continue to work until Hypothesis 2.0, but it now
emits deprecation warnings.

The new API is also a lot more powerful as the functions for defining strategies
give you a lot of dials to turn. See :doc:`the updated data section <data>` for
details.
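To give a feel for the idea, here is a toy sketch of strategies as values you build and compose directly. This is NOT the real Hypothesis API (the class and function names here are invented for illustration); see the data section for the actual one.

```python
import random

# A toy sketch of the "build strategies directly" idea (NOT the real
# Hypothesis API): a strategy is a value you construct and compose
# directly, rather than something converted from another object.
class Strategy:
    def __init__(self, draw):
        self.draw = draw  # function: random.Random -> value

    def map(self, f):
        # derive a new strategy by post-processing drawn values
        return Strategy(lambda rnd: f(self.draw(rnd)))

def integers(min_value, max_value):
    return Strategy(lambda rnd: rnd.randint(min_value, max_value))

def lists(elements, max_size=10):
    def draw(rnd):
        size = rnd.randint(0, max_size)
        return [elements.draw(rnd) for _ in range(size)]
    return Strategy(draw)

# Strategies compose directly, with "dials" (arguments) to turn.
sorted_ints = lists(integers(0, 100), max_size=5).map(sorted)
example = sorted_ints.draw(random.Random(0))
```

The point is the shape of the API: strategies are first-class objects with parameters, and derived strategies come from combinators like ``map`` rather than from converting other objects.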

Other changes:

* Mixing keyword and positional arguments in a call to given is deprecated as well.
* There is a new setting called 'strict'. When set to True, Hypothesis will raise
warnings instead of merely printing them. Turning it on by default is inadvisable because
it means that Hypothesis minor releases can break your code, but it may be useful for
making sure you catch all uses of deprecated APIs.
* max_examples in settings is now interpreted as meaning the maximum number
of unique (ish) examples satisfying assumptions. A new setting max_iterations
which defaults to a larger value has the old interpretation.
* Example generation should be significantly faster due to a new faster parameter
selection algorithm. This will mostly show up for simple data types - for complex
ones the cost of parameter selection is almost certainly dominated by other work.
* Simplification has some new heuristics that will tend to cut down on cases
where it could previously take a very long time.
* timeout would previously not have been respected in cases where there were a lot
of duplicate examples. You probably wouldn't have previously noticed this because
max_examples counted duplicates, so this was very hard to hit in a way that mattered.
* A number of internal simplifications to the SearchStrategy API.
* You can now access the current Hypothesis version as hypothesis.__version__.
* A top level function is provided for running the stateful tests without the
TestCase infrastructure.
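The new ``max_examples`` / ``max_iterations`` split can be sketched as follows. This is a hypothetical illustration of the semantics, not Hypothesis internals: ``max_examples`` bounds unique examples satisfying assumptions, while ``max_iterations`` bounds total generation attempts, duplicates included.

```python
import random

# Hypothetical sketch (not Hypothesis internals): duplicates and failed
# assumptions consume iterations but do not count towards max_examples.
def run_examples(test, generate, assume, max_examples=10, max_iterations=100):
    seen = set()
    iterations = 0
    while len(seen) < max_examples and iterations < max_iterations:
        iterations += 1
        example = generate()
        if not assume(example) or example in seen:
            continue
        seen.add(example)
        test(example)
    return len(seen), iterations

rnd = random.Random(1)
unique, attempts = run_examples(
    test=lambda x: None,
    generate=lambda: rnd.randint(0, 3),  # only 4 possible values
    assume=lambda x: True,
    max_examples=4,
)
```

With only four possible values, the run stops once all four unique examples have been seen, however many duplicate draws that took.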

1.4.0
------------------

Codename: What a state.

The *big* feature of this release is the new and slightly experimental
stateful testing API. You can read more about that in :doc:`the
appropriate section <stateful>`.

Two minor features that were driven out in the course of developing this:

* You can now set settings.max_shrinks to limit the number of times
Hypothesis will try to shrink arguments to your test. If this is set to
<= 0 then Hypothesis will not rerun your test and will just raise the
failure directly. Note that due to technical limitations if max_shrinks
is <= 0 then Hypothesis will print *every* example it calls your test
with rather than just the failing one. Note also that I don't consider
setting max_shrinks to zero a sensible way to run your tests and it
should really be considered a debug feature.
* There is a new debug level of verbosity which is even *more* verbose than
verbose. You probably don't want this.

Breakage of semi-public SearchStrategy API:

* It is now a required invariant of SearchStrategy that if u simplifies to
v then it is not the case that strictly_simpler(u, v). i.e. simplifying
should not *increase* the complexity even though it is not required to
decrease it. Enforcing this invariant led to finding some bugs where
simplification of integers, floats and sets was suboptimal.
* Integers in basic data are now required to fit into 64 bits. As a result
python integer types are now serialized as strings, and some types have
stopped using quite so needlessly large random seeds.
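The invariant can be illustrated with a toy model. The names below mirror the changelog text but are not real Hypothesis internals: if ``u`` simplifies to ``v``, then ``strictly_simpler(u, v)`` must be False, i.e. simplification never *increases* complexity, even though it need not decrease it.

```python
# Toy complexity order for integers: closer to zero is simpler.
def strictly_simpler(u, v):
    return abs(u) < abs(v)

# Candidate simplifications of a positive integer, moving towards zero.
def simplify(u):
    return [0, u // 2, u - 1] if u > 0 else []

# Check the invariant: no simplification may be strictly more complex
# than the value it came from.
for u in range(1, 100):
    for v in simplify(u):
        assert not strictly_simpler(u, v), (u, v)
```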

Hypothesis's stateful testing was then turned upon Hypothesis itself, which
led to an amazing number of minor bugs being found.

Bugs fixed (most but not all from the result of stateful testing) include:

* Serialization of streaming examples was flaky in a way that you would
probably never notice: If you generate a template, simplify it, serialize
it, deserialize it, serialize it again and then deserialize it you would
get the original stream instead of the simplified one.
* If you reduced max_examples below the number of examples already saved in
the database, you would have got a ValueError. Additionally, if you had
more than max_examples in the database all of them would have been
considered.
* given will no longer count duplicate examples (which it never called
your function with) towards max_examples. This may result in your tests
running slower, but that's probably just because they're trying more
examples.
* General improvements to example search which should result in better
performance and higher quality examples. In particular parameters which
have a history of producing useless results will be more aggressively
culled. This is useful both because it decreases the chance of useless
examples and also because it's much faster to not check parameters which
we were unlikely to ever pick!
* integers_from and lists of types with only one value (e.g. [None]) would
previously have had a very high duplication rate so you were probably
only getting a handful of examples. They now have a much lower
duplication rate, as well as the improvements to search making this
less of a problem in the first place.
* You would sometimes see simplification taking significantly longer than
your defined timeout. This would happen because timeout was only being
checked after each *successful* simplification, so if Hypothesis was
spending a lot of time unsuccessfully simplifying things it wouldn't
stop in time. The timeout is now applied for unsuccessful simplifications
too.
* In Python 2.7, integers_from strategies would have failed during
simplification with an OverflowError if their starting point was at or
near the maximum size of a 64-bit integer.
* flatmap and map would have failed if called with a function without a
__name__ attribute.
* If max_examples was less than min_satisfying_examples this would always
error. Now min_satisfying_examples is capped to max_examples. Note that
if you have assumptions to satisfy here this will still cause an error.
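The timeout fix above can be sketched structurally. This is a hypothetical shape, not Hypothesis internals: the deadline is now checked after *every* simplification attempt, not only after successful ones.

```python
import time

def simplify_with_timeout(value, candidates, is_failing, timeout):
    deadline = time.monotonic() + timeout
    changed = True
    while changed:
        changed = False
        for candidate in candidates(value):
            if time.monotonic() > deadline:
                return value  # give up even during unsuccessful attempts
            if is_failing(candidate):
                value = candidate
                changed = True
                break
    return value

# Shrink 50 towards 0 while the failure keeps reproducing.
result = simplify_with_timeout(
    value=50,
    candidates=lambda v: [v - 1] if v > 0 else [],
    is_failing=lambda v: True,
    timeout=5.0,
)
```

Placing the deadline check inside the candidate loop is what distinguishes this from the old behaviour, where a long run of unsuccessful candidates could blow well past the timeout.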

Some minor quality improvements:

* Lists of streams, flatmapped strategies and basic strategies should now
have slightly better simplification.

1.3.0
------------------

New features:

* New verbosity level API for printing intermediate results and exceptions.
* New specifier for strings generated from a specified alphabet.
* Better error messages for tests that fail because not enough examples
could be generated.

Bug fixes:

* Fix error where use of ForkingTestCase would sometimes result in too many
open files.
* Fix error where saving a failing example that used flatmap could error.
* Implement simplification for sampled_from, which apparently never supported
it previously. Oops.
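What simplification for a sampled_from-style strategy can look like is easy to sketch. This is a toy illustration, not Hypothesis internals: a drawn element shrinks towards earlier positions in the original sequence.

```python
# Hypothetical helper (not the Hypothesis API): candidate
# simplifications of a drawn index are all strictly earlier indices,
# simplest (index 0) first.
def simplify_sampled_index(index):
    return list(range(index))

choices = ["a", "b", "c", "d"]
assert simplify_sampled_index(3) == [0, 1, 2]  # "d" can shrink to a/b/c
assert simplify_sampled_index(0) == []         # "a" is already simplest
```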


General improvements:

* Better range of examples when using one_of or sampled_from.
* Fix some pathological performance issues when simplifying lists of complex
values.
* Fix some pathological performance issues when simplifying examples that
require unicode strings with high codepoints.
* Random will now simplify to more readable examples.

1.2.1
------------------

A small patch release for a bug in the new executors feature. Tests which
needed to do something with their result in order to fail would instead have
been reported as flaky.

1.2.0
------------------

Codename: Finders keepers.

A bunch of new features and improvements.

* Provide a mechanism for customizing how your tests are executed.
* Provide a test runner that forks before running each example. This allows
better support for testing native code which might trigger a segfault or a C
level assertion failure.
* Support for using Hypothesis to find examples directly rather than just as
a test runner.
* New streaming type which lets you generate infinite lazily loaded streams of
data - perfect for if you need a number of examples but don't know how many.
* Better support for large integer ranges. You can now use integers_in_range
with ranges of basically any size. Previously large ranges would have eaten
up all your memory and taken forever.
* Integers produce a wider range of data than before - previously they would
only rarely produce integers which didn't fit into a machine word. Now it's
much more common. This percolates to other numeric types which build on
integers.
* Better validation of arguments to given. Some situations that would
previously have caused silently wrong behaviour will now raise an error.
* Include +/- sys.float_info.max in the set of floating point edge cases that
Hypothesis specifically tries.
* Fix some bugs in floating point ranges which occurred when given
+/- sys.float_info.max as one of the endpoints (really, any two floats x and y
that are each finite but where y - x is infinite). These would have
resulted in generating infinite values instead of values inside
the range.
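The failure mode in that last bullet is easy to demonstrate: two floats that are each finite, but whose difference overflows to infinity, which breaks any naive width-based range computation. The ``midpoint`` helper below is an illustrative sketch, not Hypothesis code.

```python
import sys

x, y = -sys.float_info.max, sys.float_info.max
assert x > float("-inf") and y < float("inf")  # both endpoints are finite
width = y - x
# width is inf, so anything computed as x + t * width is useless

# A sketch of a midpoint that stays finite under these conditions:
def midpoint(a, b):
    return a / 2.0 + b / 2.0  # avoids forming the infinite b - a
```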

1.1.1
------------------

Codename: Nothing to see here

This is just a patch release put out because it fixed some internal bugs that
would have blocked the Django integration release but did not actually affect
anything anyone could previously have been using. It also contains a minor
quality fix for floats that I happened to have finished in time.

* Fix some internal bugs with object lifecycle management that were impossible to
hit with the previously released versions but broke hypothesis-django.
* Bias floating point numbers somewhat less aggressively towards very small
numbers.
