New
- `build_schedule_from_partitioned_job` now supports creating a schedule from a static-partitioned job; see the sketch after this list. (Thanks `craustin`!)
- [dagster-pipes] `PipesK8sClient` will now autodetect the namespace when using in-cluster config; see the sketch after this list. (Thanks `aignas`!)
- [dagster-pipes] `PipesK8sClient` can now inject the context into multiple containers. (Thanks `aignas`!)
- [dagster-snowflake] The Snowflake Pandas I/O manager now uses the `write_pandas` method to load Pandas DataFrames into Snowflake. To support this change, the database connector was switched from `SqlDbConnection` to `SnowflakeConnection`.
- [ui] On the overview sensors page you can now filter sensors by type.
- [dagster-deltalake-polars] Added LazyFrame support (Thanks `ion-elgreco`!)
- [dagster-dbt] When using `dbt_assets` and multiple dbt resources produce the same `AssetKey`, we now display an exception message that highlights the file paths of the misconfigured dbt resources in your dbt project.
- [dagster-k8s] The debug info reported upon failure has been improved to include additional information from the Job. (Thanks `jblawatt`!)
- [dagster-k8s] Changed the Dagster Helm chart to apply `automountServiceAccountToken: false` to the default service account used by the Helm chart, in order to better comply with security policies. (Thanks `MattyKuzyk`!)
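A minimal sketch of the new static-partitioned schedule support, assuming the cadence is supplied through the `cron_schedule` argument; the partition set, asset, and job names below are illustrative, not from the release notes:

```python
from dagster import (
    Definitions,
    StaticPartitionsDefinition,
    asset,
    build_schedule_from_partitioned_job,
    define_asset_job,
)

# Hypothetical static partitions and asset, purely for illustration.
regions = StaticPartitionsDefinition(["us", "eu", "apac"])

@asset(partitions_def=regions)
def regional_report() -> None:
    ...

regional_job = define_asset_job(
    "regional_job", selection=[regional_report], partitions_def=regions
)

# A static-partitioned job has no implied cadence, so a cron expression is
# passed explicitly (assumption: the `cron_schedule` argument covers this case).
regional_schedule = build_schedule_from_partitioned_job(
    regional_job, cron_schedule="0 6 * * *"
)

defs = Definitions(
    assets=[regional_report],
    jobs=[regional_job],
    schedules=[regional_schedule],
)
```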
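A sketch of the `PipesK8sClient` namespace autodetection, assuming the usual `dagster-k8s` Pipes setup; the image and command are placeholders:

```python
from dagster import AssetExecutionContext, Definitions, asset
from dagster_k8s import PipesK8sClient

@asset
def k8s_pipes_asset(
    context: AssetExecutionContext, k8s_pipes_client: PipesK8sClient
):
    # With in-cluster config, the namespace argument can now be omitted;
    # the client autodetects it from the pod's service account.
    return k8s_pipes_client.run(
        context=context,
        image="my-registry/my-pipes-image:latest",  # placeholder image
        command=["python", "-m", "my_pipes_script"],  # placeholder entry point
    ).get_materialize_result()

defs = Definitions(
    assets=[k8s_pipes_asset],
    resources={"k8s_pipes_client": PipesK8sClient()},
)
```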
Bugfixes
- An unnecessary thread lock has been removed from the sensor daemon. This should improve sensor throughput for users with many sensors who have enabled threading.
- Retry-from-failure behavior has been improved for cases where dynamic steps were interrupted.
- Previously, when backfilling a set of assets which shared a `BackfillPolicy` and `PartitionsDefinition`, but had a non-default partition mapping between them, a run for the downstream asset could be launched at the same time as a separate run for the upstream asset, resulting in inconsistent partition ordering. Now, the downstream asset will only execute after the parents complete. (Thanks `ruizh22`!)
- Previously, asset backfills would raise an exception if the code server became unreachable mid-iteration. Now, the backfill will pause until the next evaluation.
- Fixed a bug that was causing ranged backfills over dynamically partitioned assets to fail.
- [dagster-pipes] `PipesK8sClient` has improved handling for init containers and additional containers. (Thanks `aignas`!)
- Fixed the `last_sensor_start_time` property of the `SensorEvaluationContext`, which was previously cleared on every tick after the first tick following a sensor start.
- [dagster-mysql] Fixed the optional `dagster instance migrate --bigint-migration`, which caused some operational errors on MySQL storages.
- [dagster-dbt] Fixed a bug introduced in 1.6.3 that caused errors when ingesting asset checks with multiple dependencies.
Deprecations
- The following methods on `AssetExecutionContext` have been marked deprecated, with their suggested replacements in parentheses (a migration sketch follows this list):
  - `context.op_config` (`context.op_execution_context.op_config`)
  - `context.node_handle` (`context.op_execution_context.node_handle`)
  - `context.op_handle` (`context.op_execution_context.op_handle`)
  - `context.op` (`context.op_execution_context.op`)
  - `context.get_mapping_key` (`context.op_execution_context.get_mapping_key`)
  - `context.selected_output_names` (`context.op_execution_context.selected_output_names`)
  - `context.dagster_run` (`context.run`)
  - `context.run_id` (`context.run.run_id`)
  - `context.run_config` (`context.run.run_config`)
  - `context.run_tags` (`context.run.tags`)
  - `context.has_tag` (`key in context.run.tags`)
  - `context.get_tag` (`context.run.tags.get(key)`)
  - `context.get_op_execution_context` (`context.op_execution_context`)
  - `context.asset_partition_key_for_output` (`context.partition_key`)
  - `context.asset_partition_keys_for_output` (`context.partition_keys`)
  - `context.asset_partitions_time_window_for_output` (`context.partition_time_window`)
  - `context.asset_partition_key_range_for_output` (`context.partition_key_range`)
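A minimal migration sketch for a few of the deprecated accessors above; the asset and partition names are illustrative:

```python
from dagster import AssetExecutionContext, StaticPartitionsDefinition, asset

@asset(partitions_def=StaticPartitionsDefinition(["a", "b"]))
def my_partitioned_asset(context: AssetExecutionContext) -> None:
    # Deprecated: context.asset_partition_key_for_output()
    partition = context.partition_key

    # Deprecated: context.run_id, context.run_tags, context.get_tag("owner")
    run_id = context.run.run_id
    owner = context.run.tags.get("owner")

    # Deprecated: context.op_config
    op_config = context.op_execution_context.op_config

    context.log.info(f"{partition} {run_id} {owner} {op_config}")
```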
Experimental
- [asset checks] `asset_check` now has a `blocking` parameter. When this is enabled, if the check fails with severity `ERROR`, any downstream assets in the same run won’t execute; see the sketch below.
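A minimal sketch of the experimental `blocking` flag; the asset and check below are illustrative:

```python
from dagster import (
    AssetCheckResult,
    AssetCheckSeverity,
    Definitions,
    asset,
    asset_check,
)

@asset
def orders() -> list:
    # Toy stand-in for a real table load.
    return [1, 2, 3]

# blocking=True: if this check fails with ERROR severity, downstream assets
# in the same run will not execute.
@asset_check(asset=orders, blocking=True)
def orders_not_empty(orders: list) -> AssetCheckResult:
    return AssetCheckResult(
        passed=len(orders) > 0,
        severity=AssetCheckSeverity.ERROR,
    )

defs = Definitions(assets=[orders], asset_checks=[orders_not_empty])
```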
Documentation
- The Branch Deployment docs have been updated to reflect support for backfills
- Added Dagster’s maximum supported Python version (3.11) to Dagster University and relevant docs
- Added documentation for recommended partition limits (a maximum of 25K per asset).
- References to the Enterprise plan have been renamed to Pro, to reflect recent plan name changes
- Added syntax example for setting environment variables in PowerShell to our dbt with Dagster tutorial
- [Dagster University] Updated Dagster Essentials to Dagster v1.6 and introduced the usage of `MaterializeResult`
- [Dagster University] Fixed a typo in the Dagster University section on adding partitions to an asset (Thanks Brandon Peebles!)
- [Dagster University] Corrected lesson where sensors are covered (Thanks onefloid!)
Dagster Cloud
- Agent tokens can now be locked down to particular deployments. Agents will not be able to run any jobs scheduled for deployments that they are not permitted to access. By default, agent tokens have access to all deployments in an organization. Use the `Edit` button next to an agent token on the `Tokens` tab in `Org Settings` to configure permissions for a particular token. You must be an Organization Admin to edit agent token permissions.