## Major features and improvements
* Implemented autodiscovery of project pipelines. A pipeline created with `kedro pipeline create <pipeline_name>` can now be accessed immediately without needing to explicitly register it in `src/<package_name>/pipeline_registry.py`, either individually by name (e.g. `kedro run --pipeline=<pipeline_name>`) or as part of the combined default pipeline (e.g. `kedro run`). By default, the simplified `register_pipelines()` function in `pipeline_registry.py` looks like:
  ```python
  def register_pipelines() -> Dict[str, Pipeline]:
      """Register the project's pipelines.

      Returns:
          A mapping from pipeline names to ``Pipeline`` objects.
      """
      pipelines = find_pipelines()
      pipelines["__default__"] = sum(pipelines.values())
      return pipelines
  ```
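  Because `find_pipelines()` returns an ordinary dictionary, the result can still be adjusted before it is returned. The sketch below is a hypothetical variation, not part of the project template: the pipeline name `data_science` and the decision to exclude it from `__default__` are illustrative only.

  ```python
  from typing import Dict

  from kedro.framework.project import find_pipelines
  from kedro.pipeline import Pipeline


  def register_pipelines() -> Dict[str, Pipeline]:
      """Register autodiscovered pipelines, excluding one from the default (illustrative)."""
      pipelines = find_pipelines()
      # Hypothetical tweak: keep "data_science" runnable by name with
      # `kedro run --pipeline=data_science`, but leave it out of the
      # combined default pipeline executed by a bare `kedro run`.
      pipelines["__default__"] = sum(
          pipeline for name, pipeline in pipelines.items() if name != "data_science"
      )
      return pipelines
  ```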
* The Kedro IPython extension should now be loaded with `%load_ext kedro.ipython`.
* The line magic `%reload_kedro` now accepts keyword arguments, e.g. `%reload_kedro --env=prod`.
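  A minimal sketch of the updated workflow in an IPython or Jupyter session; the `prod` environment name is illustrative, and `catalog` is one of the variables the extension injects.

  ```python
  # In an IPython or Jupyter session started inside the Kedro project:
  %load_ext kedro.ipython

  # The line magic now accepts keyword arguments, e.g. a configuration environment:
  %reload_kedro --env=prod

  # The extension injects `catalog`, `context`, `session` and `pipelines`:
  catalog.list()
  ```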
* Improved the resume-pipeline suggestion for `SequentialRunner`: it now backtracks to the closest persisted inputs when suggesting where to resume from.
## Bug fixes and other changes
* Changed the default value of `show_locals` to `False` for rich logging, so that credentials and other sensitive data aren't shown in logs.
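  For context, `show_locals` is the `rich` traceback option that renders each frame's local variables. The snippet below shows the plain `rich` API only, as a hedged illustration of what the setting controls rather than Kedro's internal wiring.

  ```python
  from rich.traceback import install

  # With show_locals=True, every traceback frame prints its local variables,
  # which can leak credentials held in locals; the default is now False.
  install(show_locals=False)
  ```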
* Rich traceback handling is disabled on Databricks so that exceptions now halt execution as expected. This is a workaround for a [bug in `rich`](https://github.com/Textualize/rich/issues/2455).
* When using `kedro run -n [some_node]`, if `some_node` is missing its namespace, the resulting error message now suggests the correct node name.
* Updated documentation for `rich` logging.
* Updated Prefect deployment documentation to allow for reruns with saved versioned datasets.
* The Kedro IPython extension now surfaces errors when it cannot load a Kedro project.
* Relaxed `delta-spark` upper bound to allow compatibility with Spark 3.1.x and 3.2.x.
* Added `gdrive` to the list of cloud protocols, enabling Google Drive paths for datasets.
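  A hypothetical illustration of the kind of path this enables: the dataset, file name and folder ID below are made up, and an fsspec-compatible Google Drive filesystem implementation (such as `gdrivefs`) must be installed separately.

  ```python
  from kedro.extras.datasets.pandas import CSVDataSet

  # Hypothetical dataset pointing at a Google Drive location; "gdrive" is now
  # treated as a cloud protocol, so paths like this are handled as remote storage.
  weather = CSVDataSet(filepath="gdrive://<folder-id>/weather.csv")
  df = weather.load()
  ```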
* Added an SVG logo resource for the IPython kernel.