===========================
* Fix an issue where updates on cubes or updates on datasets using
  ``dask.dataframe`` might not update all secondary indices, resulting in a
  corrupt state after the update.
* Expose compression type and row group chunk size in the Cube interface via an
  optional parameter of type ``kartothek.serialization.ParquetSerializer``
  (see the sketch below).
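  A minimal sketch of how this can be used; the ``df_serializer`` keyword and
  the ``build_cube`` call site are assumptions for illustration, while
  ``compression`` and ``chunk_size`` are the serializer's existing parameters:

  .. code-block:: python

      from kartothek.serialization import ParquetSerializer

      # Choose the Parquet compression codec and the number of rows
      # per row group explicitly.
      serializer = ParquetSerializer(compression="ZSTD", chunk_size=100_000)

      # Hypothetical call site: pass the serializer through the cube build.
      # build_cube(data=df, cube=cube, store=store_factory, df_serializer=serializer)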
* Add retries to ``kartothek.serialization._parquet.ParquetSerializer.restore_dataframe``.
  ``IOError``\ s have been observed on long-running kartothek + dask tasks; until the
  root cause is fixed, the deserialization is retried to gain more stability.
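  The retry logic is internal; the following is only an illustrative sketch of
  the pattern (function and parameter names are made up for this example):

  .. code-block:: python

      import time
      from functools import wraps

      def retry_on_ioerror(max_retries=3, backoff_seconds=1.0):
          """Retry the wrapped callable on IOError with linear backoff."""

          def decorator(func):
              @wraps(func)
              def wrapper(*args, **kwargs):
                  for attempt in range(1, max_retries + 1):
                      try:
                          return func(*args, **kwargs)
                      except IOError:
                          if attempt == max_retries:
                              raise  # give up after the final attempt
                          time.sleep(backoff_seconds * attempt)

              return wrapper

          return decorator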