Icepool

Latest version: v2.0.0

2.0.0

Major rewrite of multiset handling. Things will be more unstable than usual for a while.

* `MultisetEvaluator.next_state()` now has an explicit parameter with the order in which outcomes are seen.
* Optional `MultisetEvaluator.initial_state()` method.
* `MultisetEvaluator.initial_state()` and `final_outcome()` now receive the following parameters:
  * The order in which outcomes are / were seen.
  * All outcomes that will be / were seen.
  * The sizes of the input multisets, if they can be inferred (i.e. if counts are non-negative).
  * Non-multiset keyword arguments that were passed to `evaluate()`.
* `ascending` and `descending` variants of `next_state` no longer exist.
  * Instead, `raise UnsupportedOrder()` if you don't like the current order; the other order will automatically be tried. (A sketch of this pattern follows this list.)
  * This can be done in `initial_state()` (recommended), `next_state()`, or `final_outcome()`.
* Multiset operator order is now always attached to the evaluator side rather than the generator side.
  * The exception is operators that modify the generator in-place; in that case both orders will certainly be supported.
* `MultisetEvaluator` can optionally provide a key for persistent caching.
* Some existing expressions and evaluators now take advantage of inferred multiset sizes.
  * In particular, `keep()` and `sort_match()`.
* `MultisetExpression.count()` renamed to `size()`.
* `multiset_function` now implements late binding like a standard Python method. (Though I still recommend using only pure functions.)
* `multiset_function` now accepts variadic arguments.
* `multiset_function` now accepts non-multiset keyword arguments.
* `multiset_function` now works with joint evaluations where some sub-evaluations don't contain parameters. (A joint-evaluation sketch follows this list.)
* Hopefully better `multiset_function` performance.
* `Alignment` class is retired.
* Deprecated `depth=None` is removed from `Die.reroll()`.
* Multiset generators now always produce a single count value; `MultiDeal` now produces tuple-valued counts rather than taking up multiple argument slots.
* Multiset computations now try to infer multiset sizes if the counts are non-negative. This improves the applicability of `keep` and `sort_match` expressions.
* Cartesian products (e.g. `tupleize`) now return `Reroll` if any argument is `Reroll`.
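
Taken together, the evaluator changes above look roughly like the sketch below. This is a hedged illustration rather than code from the Icepool documentation: the exact positions of the order, outcome-list, and size parameters in these hooks are assumptions, so the sketch simply absorbs the extras with `*`/`**`.

```python
import icepool
from icepool import MultisetEvaluator, Order, UnsupportedOrder

class LongestRun(MultisetEvaluator):
    """Length of the longest run of consecutive outcomes in a pool."""

    def initial_state(self, order, outcomes, *sizes, **kwargs):
        # Reject descending order; per the notes above, raising
        # UnsupportedOrder makes the engine retry with the other order.
        if order != Order.Ascending:
            raise UnsupportedOrder()
        return 0, 0  # (current run length, best run length so far)

    def next_state(self, state, order, outcome, count):
        # The order is now an explicit parameter of next_state().
        current, best = state
        current = current + 1 if count > 0 else 0
        return current, max(best, current)

    def final_outcome(self, final_state, *extra, **kwargs):
        current, best = final_state
        return best

# Longest run among five d6s, as a die over run lengths.
print(LongestRun().evaluate(icepool.d6.pool(5)))
```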

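The `multiset_function` items above can be illustrated with a small joint evaluation. This sketch only uses operations that appear elsewhere in this changelog (`-`, `unique()`, `sum()`, `size()`); it shows the shape of the decorator, not official example code.

```python
from icepool import d6, multiset_function

@multiset_function
def overkill(attackers, defenders):
    # Returning a tuple makes this a joint evaluation: the multiset
    # difference, with duplicates removed, is both summed and counted
    # together rather than in separate passes.
    return ((attackers - defenders).unique().sum(),
            (attackers - defenders).unique().size())

# Called with ordinary pools; the result is a die over (sum, size) pairs.
print(overkill(d6.pool(4), d6.pool(3)))
```
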
1.7.2

* Add `Population.append()` and `.remove()` methods.
* Improve `Vector` performance.
* Adjust typing of mixture expressions.
* Add experimental `Wallenius` noncentral hypergeometric distribution.

1.7.1

* Fix joint evaluations and `multiset_function` interaction with `next_state_ascending` and `next_state_descending`.

1.7.0

* Overhauled multiset expressions. This allows expressions that are given to an evaluator to have the evaluation persistently cached. This makes the caching behavior more consistent: a single expression will be cached in the final evaluator (e.g. `(a - b).unique().sum()` would be cached in the `sum` evaluator), and `multiset_function` creates an evaluator like any other. Unfortunately, this did come at some performance cost for `multiset_function`. I have some ideas on how to claw back some of the performance but I haven't decided whether it's worth the complexity.
* Instead of specifying `order()` for an evaluator, you can now implement `next_state_ascending()` and/or `next_state_descending()`. (A sketch follows this list.)
* `Alignment` now has a denominator of 1.
* `keep()`, `isdisjoint()`, `sort_match()`, and `maximum_match()` operations now treat negative incoming counts as zero rather than raising an error.
* Add `Population.group_by()` method to split a population into a "covering" set of conditional probabilities.
  * `Population.group_by[]` can also be used to group by index or slice.
* Move `split()` from `Die` to the base `Population` class.
* Straight-related multiset operations can now choose between prioritizing low and high outcomes.
* Store original names of `multiset_function` parameters.
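
For comparison with the 2.0.0 sketch above, the same computation written against the order-specific hooks introduced here might look as follows. The hook signature is assumed to mirror the ordinary `next_state()` of this era (initial state `None`, then one count per input); treat it as a sketch, not documented API.

```python
import icepool
from icepool import MultisetEvaluator

class LongestRun(MultisetEvaluator):
    """Longest run of consecutive outcomes, using the 1.7-era hooks."""

    def next_state_ascending(self, state, outcome, count):
        # Implementing only the ascending variant tells the engine that
        # outcomes must be visited from lowest to highest.
        current, best = state or (0, 0)
        current = current + 1 if count > 0 else 0
        return current, max(best, current)

    def final_outcome(self, final_state):
        return final_state[1]

print(LongestRun().evaluate(icepool.d6.pool(5)))
```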

1.6.2

* Deprecate `depth=None` in favor of `depth='inf'`.
* Add experimental `format_inverse()` option that formats probabilities as "1 in N".

1.6.1

* Add `pointwise_max`, `pointwise_min` arguments to take pointwise maximum or minimum of CDFs.
* Add `Die.time_to_sum()` method.
* Fix identification of absorbing states in the presence of `extra_args` in `map_and_time()`.
* Add `time_limit` parameter to `map()`.
* `repeat` parameter now uses `'inf'` to request the absorbing distribution rather than `None`.
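
As a small illustration of the new `'inf'` spelling, here is a hedged sketch of a `map()` call that requests the absorbing distribution; the specific transition function is made up for the example, and only the call shape is the point.

```python
import icepool

def step(total):
    # 10 is absorbing: once reached we stay there; otherwise add a d6,
    # capping at 10 so that every state eventually reaches the cap.
    if total >= 10:
        return total
    return icepool.lowest(total + icepool.d6, 10)

# repeat='inf' requests the absorbing distribution (here, certainly 10),
# replacing the old repeat=None spelling.
print(icepool.map(step, 0, repeat='inf'))
```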
