Plateau

Latest version: v4.4.0


3.17.1
===========================

Bugfixes
^^^^^^^^

* Fix GitHub issue #375 by loosening the checks of the supplied ``store`` argument

3.17.0
===========================

Improvements
^^^^^^^^^^^^
* Improve performance of ``in`` predicates whose literal value is a long list of objects
* ``kartothek.io.eager.commit_dataset`` now allows modifying the user
  metadata without adding new data.
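The ``in`` predicate speedup above concerns membership tests against long literal lists. A minimal pure-Python sketch of the underlying idea (building a set from the literals once, so each membership test is O(1) instead of a scan of the list per row); the function name here is illustrative, not kartothek's internal API:

```python
# Sketch: evaluate an "in" predicate over a column of values.
# Converting the literal list to a set up front makes each
# membership test O(1) instead of O(len(literals)).
def filter_in(values, literals):
    literal_set = set(literals)  # built once per predicate
    return [v for v in values if v in literal_set]

print(filter_in([1, 5, 9, 12], range(0, 1_000_000, 3)))  # [9, 12]
```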

Bugfixes
^^^^^^^^
* Fix an issue where ``kartothek.io.dask.dataframe.collect_dataset_metadata`` would return
improper rowgroup statistics
* Fix an issue where ``kartothek.io.dask.dataframe.collect_dataset_metadata`` would execute
``get_parquet_metadata`` at graph construction time
* Fix a bug in ``kartothek.io.eager_cube.remove_partitions`` where all partitions were removed
  instead of none at all.
* Fix a bug in ``kartothek.core.dataset.DatasetMetadataBase.get_indices_as_dataframe`` which would
raise an ``IndexError`` if indices were empty or had not been loaded

3.16.0
===========================

New functionality
^^^^^^^^^^^^^^^^^
* Allow filtering of nans using "==", "!=" and "in" operators
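NaN filtering with these operators needs comparison semantics that differ from plain float comparison, where ``NaN == NaN`` is always false. A hedged sketch of such semantics in pure Python; this is an illustration of the concept, not kartothek's actual implementation:

```python
import math

# Sketch: NaN-aware predicate matching. Plain `==` is always False
# for NaN, so filtering for NaN needs an explicit isnan check.
def matches(value, op, literal):
    if op == "==":
        if isinstance(literal, float) and math.isnan(literal):
            return isinstance(value, float) and math.isnan(value)
        return value == literal
    if op == "!=":
        return not matches(value, "==", literal)
    if op == "in":
        return any(matches(value, "==", lit) for lit in literal)
    raise ValueError(f"unsupported operator: {op}")

nan = float("nan")
print([v for v in [1.0, nan, 2.0] if matches(v, "==", nan)])  # [nan]
```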

Bugfixes
^^^^^^^^
* Fix a regression which would not allow the usage of non-serializable stores even when using factories

3.15.1
===========================

* Fix a packaging issue where ``typing_extensions`` was not properly specified as
  a requirement for Python versions below 3.8

3.15.0
===========================

New functionality
^^^^^^^^^^^^^^^^^
* Add ``kartothek.io.dask.dataframe.store_dataset_from_ddf`` to offer write
  support for a dask dataframe without update support. Overwrites are either
  forbidden or explicitly allowed; existing datasets are never updated.
* The ``sort_partitions_by`` feature now supports multiple columns. While this
has only marginal effect for predicate pushdown, it may be used to improve the
parquet compression.
* ``build_cube_from_dataframe`` now supports the ``shuffle`` methods offered by
``kartothek.io.dask.dataframe.store_dataset_from_ddf`` and
``kartothek.io.dask.dataframe.update_dataset_from_ddf`` but writes the
output in the cube format
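The store-without-update semantics described above can be sketched in a few lines of plain Python. The function and its signature are hypothetical, chosen only to illustrate the "forbid or explicitly allow overwrites, never merge" behavior:

```python
# Sketch of store-without-update semantics: either fail when a
# dataset already exists, or replace it wholesale when overwrite
# is explicitly allowed. Existing contents are never merged.
def store_dataset(existing, new_data, overwrite=False):
    if existing is not None and not overwrite:
        raise RuntimeError("dataset exists and overwrite=False")
    return new_data  # previous contents are discarded, not updated

print(store_dataset(None, {"part-1": [1, 2]}))  # {'part-1': [1, 2]}
```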

Improvements
^^^^^^^^^^^^
* Reduce memory consumption during index write.
* Allow ``simplekv`` stores and ``storefact`` URLs to be passed explicitly as input for the ``store`` arguments

3.14.0
===========================

New functionality
^^^^^^^^^^^^^^^^^
* Add ``hash_dataset`` functionality
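Content hashing of tabular data generally works by digesting a canonical form of each row and combining the results. A hedged standard-library sketch of the concept; the real ``hash_dataset`` operates on kartothek datasets, and the function below is purely illustrative:

```python
import hashlib

# Sketch: derive a stable digest for tabular data by hashing a
# canonical text form of each row into one running SHA-256 digest.
def hash_rows(rows):
    digest = hashlib.sha256()
    for row in rows:
        digest.update(repr(tuple(row)).encode("utf-8"))
    return digest.hexdigest()

a = hash_rows([(1, "x"), (2, "y")])
b = hash_rows([(1, "x"), (2, "y")])
print(a == b)  # True: identical content yields identical digest
```

Such a digest lets equal datasets be detected without comparing their full contents byte by byte.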

Improvements
^^^^^^^^^^^^

* Expand ``pandas`` version pin to include 1.1.X
* Expand ``pyarrow`` version pin to include 1.x
* Large addition to documentation for multi-dataset handling (Kartothek Cubes)
