==========================
* Fork and rename the project as `plateau` (flat files, flat land).
* Fixed a bug where data partitions in a `plateau` dataset would sometimes be read in a non-deterministic order across Python executions (#23).
Kartothek 4.0.3 (2021-06-10)
============================
* Pin dask to exclude versions 2021.5.1 and 2021.6.0 (#475)
Kartothek 4.0.2 (2021-06-07)
============================
* Fix a bug in ``MetaPartition._reconstruct_index_columns`` that would raise an ``IndexError`` when loading only a few columns of a dataset with many primary indices.
Kartothek 4.0.1 (2021-04-13)
============================
* Fixed dataset corruption after updates when table names other than "table" are used (#445).
Kartothek 4.0.0 (2021-03-17)
============================
This is a major release of kartothek with breaking API changes.
* Removal of complex user input (see #427)
* Removal of the multi-table feature
* Removal of `kartothek.io.merge` module
* class ``kartothek.core.dataset.DatasetMetadata`` now has an attribute called `schema` which replaces the previous attribute `table_meta` and returns only a single schema
* All outputs which previously returned a sequence of dictionaries, where each key-value pair would correspond to a table-data pair, now return only one :class:`pandas.DataFrame`
* All read pipelines will now automatically infer the table to read, so it is no longer necessary to provide `table` or `table_name` as an input argument (see the migration sketch after this list)
* All writing pipelines which previously supported a complex user input type now expose an argument `table_name` which can be used to keep working with legacy datasets (i.e. datasets with an intrinsic, non-trivial table name). This usage is discouraged and we recommend that users migrate to the default table name (i.e. leave it `None` / `table`)
* All pipelines which previously accepted an argument `tables` to select a subset of tables to load no longer accept this keyword. Instead, the table to load will be inferred
* Trying to read a multi-table dataset will now raise an exception telling users that this is no longer supported with kartothek 4.0
* The dict schema for ``kartothek.core.dataset.DatasetMetadataBase.to_dict`` and ``kartothek.core.dataset.DatasetMetadata.from_dict`` changed: the dictionary under `table_meta` is replaced by the single `schema`
* All pipeline arguments which previously accepted a dictionary of sequences to describe a table specific subset of columns now accept plain sequences (e.g. `columns`, `categoricals`)
* Remove the following deprecated arguments from the IO pipelines

  * `label_filter`
  * `central_partition_metadata`
  * `load_dynamic_metadata`
  * `load_dataset_metadata`
  * `concat_partitions_on_primary_index`

* Remove `output_dataset_uuid` and `df_serializer` from ``kartothek.io.eager.commit_dataset`` since these arguments didn't have any effect
* Remove `metadata`, `df_serializer`, `overwrite`, `metadata_merger` from ``kartothek.io.eager.write_single_partition``
* ``kartothek.io.eager.store_dataframes_as_dataset`` now requires a list as input
* Default value for argument `date_as_object` is now universally set to ``True``. The behaviour for `False` will be deprecated and removed in the next major release
* No longer allow passing `delete_scope` as a delayed object to ``kartothek.io.dask.dataframe.update_dataset_from_ddf``
* ``kartothek.io.dask.dataframe.update_dataset_from_ddf`` and ``kartothek.io.dask.dataframe.store_dataset_from_ddf`` now return a `dd.core.Scalar` object. This enables all `dask.DataFrame` graph optimizations by default (see the dask sketch after this list)
* Remove argument `table_name` from ``kartothek.io.dask.dataframe.collect_dataset_metadata``
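
For users migrating from kartothek 3.x, the following minimal sketch illustrates the simplified single-table API. It is illustrative only: the ``storefact`` in-memory store and the dataset UUID ``"example_uuid"`` are assumptions made for this example, not part of the changes listed above.

.. code-block:: python

    from functools import partial

    import pandas as pd
    from storefact import get_store_from_url

    from kartothek.io.eager import read_table, store_dataframes_as_dataset

    # In-memory store factory, used here only for illustration.
    store_factory = partial(get_store_from_url, "hmemory://")

    df = pd.DataFrame({"A": [1, 2], "B": ["x", "y"]})

    # 4.0: `dfs` is a plain list of DataFrames; the nested
    # {"table-name": dataframe} input is gone. `table_name` only matters
    # for legacy datasets with a non-default table name.
    store_dataframes_as_dataset(
        store=store_factory,
        dataset_uuid="example_uuid",
        dfs=[df],
    )

    # 4.0: no `table`/`tables` argument; the single table is inferred,
    # and `columns` is a plain sequence instead of a per-table dict.
    result = read_table(
        dataset_uuid="example_uuid",
        store=store_factory,
        columns=["A"],
    )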
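
The dask write pipelines can be used in the same spirit; below is a minimal sketch (again assuming an in-memory store and a made-up dataset UUID) of consuming the `dd.core.Scalar` now returned by ``store_dataset_from_ddf`` and ``update_dataset_from_ddf``:

.. code-block:: python

    from functools import partial

    import dask.dataframe as dd
    import pandas as pd
    from storefact import get_store_from_url

    from kartothek.io.dask.dataframe import store_dataset_from_ddf

    store_factory = partial(get_store_from_url, "hmemory://")
    ddf = dd.from_pandas(pd.DataFrame({"A": [1, 2, 3]}), npartitions=2)

    # 4.0: the write pipeline returns a dd.core.Scalar; computing it
    # executes the write.
    scalar = store_dataset_from_ddf(
        ddf,
        store=store_factory,
        dataset_uuid="example_dask_uuid",
    )

    # Nothing is written until the scalar is computed.
    scalar.compute()

Returning a single scalar node keeps the whole write inside one `dask.DataFrame` graph, which is what enables the graph optimizations mentioned above.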