==========================
New functionality
^^^^^^^^^^^^^^^^^
- The ``partition_on`` shuffle algorithm in ``kartothek.io.dask.dataframe.update_dataset_from_ddf`` now supports
  producing deterministic buckets based on hashed input data (see the sketch below).
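
A minimal usage sketch of the deterministic bucketing described above. The parameter names ``shuffle``, ``num_buckets`` and ``bucket_by``, the store URL, and the sample data are illustrative assumptions, not a verbatim API reference; consult the API documentation for the exact signature.

.. code-block:: python

    from functools import partial

    import dask.dataframe as dd
    import pandas as pd
    from storefact import get_store_from_url

    from kartothek.io.dask.dataframe import update_dataset_from_ddf

    # Assumed store location, for illustration only.
    store_factory = partial(get_store_from_url, "hfs:///tmp/kartothek_buckets")

    df = pd.DataFrame({"user_id": [1, 2, 3, 4], "value": [10.0, 20.0, 30.0, 40.0]})
    ddf = dd.from_pandas(df, npartitions=2)

    # Bucketing on the hash of ``user_id`` (assumed ``bucket_by`` parameter) should
    # assign the same rows to the same buckets on every run over the same data.
    delayed = update_dataset_from_ddf(
        ddf,
        store=store_factory,
        dataset_uuid="example_dataset",
        table="table",
        shuffle=True,
        num_buckets=4,
        bucket_by=["user_id"],
    )
    delayed.compute()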

Bug fixes
^^^^^^^^^
- Fix the addition of bogus index columns to Parquet files when using ``sort_partitions_by``.
- Fix a bug where ``partition_on`` in the write path drops empty DataFrames, which can lead to datasets without tables.