Dagster

Latest version: v1.10.8

1.1.7

New

- `Definitions` is no longer marked as experimental and is the preferred API over `repository` for new users of Dagster. Examples, tutorials, and documentation have largely been ported to this new API. No migration is needed. Please see the GitHub discussion for more details.
- The “Workspace” section of Dagit has been removed. All definitions for your code locations can be accessed via the “Deployment” section of the app. Just as in the old Workspace summary page, each code location will show counts of its available jobs, assets, schedules, and sensors. Additionally, the code locations page is now available at `/locations`.
- Lagged / rolling window partition mappings: `TimeWindowPartitionMapping` now accepts `start_offset` and `end_offset` arguments that allow specifying that time partitions depend on earlier or later time partitions of upstream assets (see the sketch after this list).
- Asset partitions can now depend on earlier time partitions of the same asset. The asset reconciliation sensor will respect these dependencies when requesting runs.
- `dagit` can now accept multiple arguments for the `-m` and `-f` flags. For each argument a new code location is loaded.
- Schedules created by `build_schedule_from_partitioned_job` now execute in constant time, rather than time linear in the number of partitions.
- The `QueuedRunCoordinator` now supports `dequeue_use_threads` and `dequeue_num_workers` options to enable concurrent run dequeue operations for greater throughput.
- [dagster-dbt] `load_assets_from_dbt_project`, `load_assets_from_dbt_manifest`, and `load_assets_from_dbt_cloud_job` now support applying freshness policies to loaded nodes. To do so, you can apply `dagster_freshness_policy` config directly in your dbt project, i.e. `config(dagster_freshness_policy={"maximum_lag_minutes": 60})` would result in the corresponding asset being assigned a `FreshnessPolicy(maximum_lag_minutes=60)`.
- The `DAGSTER_RUN_JOB_NAME` environment variable is now set in containerized environments spun up by our run launchers and executor.
- [dagster-airflow] `make_dagster_repo_from_airflow_dags_path`, `make_dagster_job_from_airflow_dag`, and `make_dagster_repo_from_airflow_dag_bag` have a new `connections` parameter which allows for configuring the Airflow connections used by migrated DAGs.
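
Below is a minimal sketch of the new offset arguments, assuming a daily partitions definition; the asset names and the seven-day trailing window are illustrative only.

```python
from dagster import (
    AssetIn,
    DailyPartitionsDefinition,
    TimeWindowPartitionMapping,
    asset,
)

daily = DailyPartitionsDefinition(start_date="2022-01-01")


@asset(partitions_def=daily)
def events():
    ...


# Each partition of `rolling_summary` reads the trailing seven daily
# partitions of `events` (the current day plus the six before it).
@asset(
    partitions_def=daily,
    ins={
        "events": AssetIn(
            partition_mapping=TimeWindowPartitionMapping(start_offset=-6, end_offset=0)
        )
    },
)
def rolling_summary(events):
    ...
```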

Bugfixes

- Fixed a bug where the `log` property used for sensor logging was not available on the `RunStatusSensorContext` object provided to run status sensors.
- Fixed a bug where the re-execute button on runs of asset jobs would incorrectly show a warning icon, indicating that the pipeline code may have changed since the last run.
- Fixed an issue which would cause metadata supplied to graph-backed assets to not be viewable in the UI.
- Fixed an issue where schedules often took up to 5 seconds to start after their tick time.
- Fixed an issue where Dagster failed to load a `dagster.yaml` file that used an environment variable to specify the folder for SQLite storage.
- Fixed an issue which would cause the k8s/docker executors to unnecessarily reload CacheableAssetsDefinitions (such as those created when using `load_assets_from_dbt_cloud_job`) on each step execution.
- [dagster-airbyte] Fixed an issue where Python-defined Airbyte sources and destinations were occasionally recreated unnecessarily.
- Fixed an issue with `build_asset_reconciliation_sensor` that would cause it to ignore in-progress runs in some cases.
- Fixed a bug where GQL errors would be thrown in the asset explorer when a previously materialized asset had its dependencies changed.
- [dagster-airbyte] Fixed an error when generating assets for normalization tables for connections with non-object streams.
- [dagster-dbt] Fixed an error where dbt Cloud jobs with `dbt run` and `dbt run-operation` were incorrectly validated.
- [dagster-airflow] `use_ephemeral_airflow_db` now works when running within a PEX deployment artifact.

Documentation

- New documentation for [Code locations](https://docs.dagster.io/concepts/code-locations) and how to define one using `Definitions`
- Lots of updates throughout the docs to reflect the recommended usage of `Definitions`. Any content not ported to `Definitions` in this release is in the process of being updated.
- New documentation for dagster-airflow on how to start writing Dagster code when coming from an Airflow background.

1.1.6

New

- [dagit] Throughout Dagit, when the default repository name `__repository__` is used for a repo, only the code location name will be shown. This change also applies to URL paths.
- [dagster-dbt] When attempting to generate software-defined assets from a dbt Cloud job, an error is now raised if none are created.
- [dagster-dbt] Software-defined assets can now be generated for dbt Cloud jobs that execute multiple commands.

Bugfixes

- Fixed a bug that caused `load_asset_value` to error with the default IO manager when a `partition_key` argument was provided.
- Previously, trying to access `context.partition_key` or `context.asset_partition_key_for_output` when invoking an asset directly (e.g. in a unit test) would result in an error. This has been fixed.
- Failure hooks now receive the original exception instead of `RetryRequested` when using a retry policy.
- The `LocationStateChange` GraphQL subscription has been fixed (thanks [roeij](https://github.com/roeij)!)
- Fixed a bug where a `sqlite3.ProgrammingError` error was raised when creating an ephemeral `DagsterInstance`, most commonly when `build_resources` was called without passing in an instance parameter.
- [dagstermill] Jupyter notebooks now correctly render in Dagit on Windows machines.
- [dagster-duckdb-pyspark] New `duckdb_pyspark_io_manager` helper to automatically create a DuckDB I/O manager that can store and load PySpark DataFrames (see the sketch after this list).
- [dagster-mysql] Fixed a bug where versions of MySQL < `8.0.31` would raise an error on some run queries.
- [dagster-postgres] The connection URL parameter `options` is no longer overwritten in Dagit.
- [dagit] Dagit now allows backfills to be launched for asset jobs that have partitions and required config.
- [dagit] Dagit no longer renders the "Job in repolocation" label incorrectly in Chrome v109.
- [dagit] Dagit's run list now shows improved labels on asset group runs of more than three assets.
- [dagit] Dagit's run gantt chart now renders per-step resource initialization markers correctly.
- [dagit] In op and asset descriptions in Dagit, rendered markdown no longer includes extraneous escape slashes.
- Assorted typos and omissions fixed in the docs — thanks [C0DK](https://github.com/C0DK) and [akan72](https://github.com/akan72)!
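
A minimal sketch of wiring up the new helper, assuming the `dagster_duckdb_pyspark` package is installed and that the I/O manager takes a `database` config field pointing at the DuckDB file; the asset and file names are illustrative only.

```python
from dagster import asset, repository, with_resources
from dagster_duckdb_pyspark import duckdb_pyspark_io_manager


@asset
def people():
    # Return a PySpark DataFrame here; the I/O manager stores it as a DuckDB table.
    ...


@repository
def my_repo():
    return with_resources(
        [people],
        # "database" is the assumed config field for the path to the DuckDB file.
        {"io_manager": duckdb_pyspark_io_manager.configured({"database": "analytics.duckdb"})},
    )
```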

Experimental

- As an optional replacement for the workspace/repository concepts, a new `Definitions` entrypoint for tools and the UI has been added. A single `Definitions` object may be instantiated per code location, and it accepts typed, named arguments, rather than the heterogeneous list of definitions returned from a `@repository`-decorated function (see the sketch after this list). To learn more about this feature, and provide feedback, please refer to the [Github Discussion](https://github.com/dagster-io/dagster/discussions/10772).
- [dagster-slack] A new `make_slack_on_freshness_policy_status_change_sensor` allows you to create a sensor to alert you when an asset is out of date with respect to its freshness policy (and when it’s back on time!)
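
A minimal sketch of a code location defined with the new entrypoint; the asset, job, and schedule names are illustrative only.

```python
from dagster import Definitions, ScheduleDefinition, asset, define_asset_job


@asset
def orders():
    ...


daily_refresh = ScheduleDefinition(
    job=define_asset_job("refresh_orders", selection="orders"),
    cron_schedule="0 6 * * *",
)

# One Definitions object per code location, built from typed, named arguments
# instead of a heterogeneous list returned from a @repository function.
defs = Definitions(
    assets=[orders],
    schedules=[daily_refresh],
)
```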

Documentation

- Refreshed `dagstermill` guide and reference page [https://docs.dagster.io/integrations/dagstermill](https://docs.dagster.io/integrations/dagstermill)
- New declarative scheduling guide: [https://docs.dagster.io/guides/dagster/scheduling-assets](https://docs.dagster.io/guides/dagster/scheduling-assets)
- New `dagster-snowflake` guide: [https://docs.dagster.io/integrations/snowflake](https://docs.dagster.io/integrations/snowflake)
- Added docs for asset code versioning: [https://docs.dagster.io/concepts/assets/software-defined-assets#asset-code-versions](https://docs.dagster.io/concepts/assets/software-defined-assets#asset-code-versions)
- Added docs for observable source assets: [https://docs.dagster.io/concepts/assets/asset-observations#observable-source-assets](https://docs.dagster.io/concepts/assets/asset-observations#observable-source-assets)

1.1.5

Bugfixes

- [dagit] Fixed an issue where the Partitions tab sometimes failed to load for asset jobs.

1.1.4

Community Contributions

- Fixed a typo in GCSComputeLogManager docstring (thanks [reidab](https://github.com/reidab))!
- [dagster-airbyte] Job cancellation on run termination is now optional (thanks [adam-bloom](https://github.com/adam-bloom))!
- [dagster-snowflake] The Snowflake role can now be specified in the Snowflake I/O manager config (thanks [binhnefits](https://github.com/binhnefits))!
- [dagster-aws] A new AWS Systems Manager resource (thanks [zyd14](https://github.com/zyd14))!
- [dagstermill] Retry policy can now be set on dagstermill assets (thanks [nickvazz](https://github.com/nickvazz))!
- Corrected a typo in the docs on metadata (thanks [C0DK](https://github.com/C0dk))!

New

- Added a `job_name` parameter to `InputContext`
- Fixed inconsistent io manager behavior when using `execute_in_process` on a `GraphDefinition` (it would use the `fs_io_manager` instead of the in-memory io manager)
- Compute logs will now load in Dagit even when websocket connections are not supported.
- [dagit] A handful of changes have been made to our URLs:
  - The `/instance` URL path prefix has been removed. E.g. `/instance/runs` can now be found at `/runs`.
  - The `/workspace` URL path prefix has been changed to `/locations`. E.g. the URL for job `my_job` in repository `foobar` can now be found at `/locations/foobar/jobs/my_job`.
- [dagit] The “Workspace” navigation item in the top nav has been moved to be a tab under the “Deployment” section of the app, and is renamed to “Definitions”.
- [dagstermill] Dagster events can now be yielded from asset notebooks using `dagstermill.yield_event` (see the sketch after this list).
- [dagstermill] Failed notebooks can be saved for inspection and debugging using the new `save_on_notebook_failure` parameter.
- [dagster-airflow] Added a new option `use_ephemeral_airflow_db` which will create a job-run-scoped Airflow DB for Airflow DAGs running in Dagster.
- [dagster-dbt] Materializing software-defined assets using dbt Cloud jobs now supports partitions.
- [dagster-dbt] Materializing software-defined assets using dbt Cloud jobs now supports subsetting. Individual dbt Cloud models can be materialized, and the proper filters will be passed down to the dbt Cloud job.
- [dagster-dbt] Software-defined assets from dbt Cloud jobs now support configurable group names.
- [dagster-dbt] Software-defined assets from dbt Cloud jobs now support configurable `AssetKey`s.
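
A minimal sketch of yielding events from inside a notebook cell of a dagstermill asset; the event labels and keys are illustrative, and the exact set of supported event types is an assumption here.

```python
# Inside a notebook cell of a dagstermill-backed asset:
import dagstermill

from dagster import AssetMaterialization, ExpectationResult

# Surface a data-quality check result on the run in Dagit.
dagstermill.yield_event(ExpectationResult(success=True, label="row_count_nonzero"))

# Record that a side-effect table was (re)built by this notebook.
dagstermill.yield_event(
    AssetMaterialization(asset_key="report_table", description="Nightly report refreshed")
)
```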

Bugfixes

- Fixed a regression introduced in `1.0.16` where, for some compute log managers, an exception during compute log manager setup/teardown would cause runs to fail.
- The S3 / GCS / Azure compute log managers now sanitize the optional `prefix` argument to prevent badly constructed paths.
- [dagit] The run filter typeahead no longer surfaces key-value pairs when searching for `tag:`. This resolves an issue where retrieving the available tags could cause significant performance problems. Tags can still be searched with freeform text, and by adding them via click on individual run rows.
- [dagit] Fixed an issue in the Runs tab for job snapshots, where the query would fail and no runs were shown.
- [dagit] Schedules defined with cron unions displayed “Invalid cron string” in Dagit. This has been resolved, and human-readable versions of all members of the union will now be shown.

Breaking Changes

- You can no longer set an output’s asset key by overriding `get_output_asset_key` on the `IOManager` handling the output. Previously, this was experimental and undocumented.

Experimental

- Sensor and schedule evaluation contexts now have an experimental `log` property, which can be used to log events that can later be viewed in Dagit (as sketched below). To enable these log views in Dagit, navigate to the user settings and enable the `Experimental schedule/sensor logging view` option. Log links will then be available for sensor/schedule ticks where logs were emitted. Note: this feature is not available for users using the `NoOpComputeLogManager`.
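
A minimal sketch of the new sensor logging, assuming a job named `nightly_job` and a placeholder condition for requesting runs.

```python
from dagster import RunRequest, sensor


@sensor(job_name="nightly_job")  # "nightly_job" is a placeholder job name
def logging_sensor(context):
    # With the experimental logging view enabled, these messages are viewable
    # from the tick's log link in Dagit.
    context.log.info("Evaluating sensor tick")
    new_files = []  # hypothetical check for new work
    if new_files:
        yield RunRequest(run_key=None)
```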

1.1.3

Bugfixes

- Fixed a bug with the asset reconciliation sensor that caused duplicate runs to be submitted in situations where an asset has a different partitioning than its parents.
- Fixed a bug with the asset reconciliation sensor that caused it to error on time-partitioned assets.
- [dagster-snowflake] Fixed a bug when materializing partitions with the Snowflake I/O manager where SQL `BETWEEN` was used to determine the section of the table to replace; `BETWEEN` included values from the next partition, causing the I/O manager to erroneously delete those entries.
- [dagster-duckdb] Fixed a bug when materializing partitions with the DuckDB I/O manager where SQL `BETWEEN` was used to determine the section of the table to replace; `BETWEEN` included values from the next partition, causing the I/O manager to erroneously delete those entries.

1.1.2

Bugfixes

- In Dagit, assets that had been materialized prior to upgrading to 1.1.1 were showing as "Stale". This is now fixed.
- Schedules that were constructed with a list of cron strings previously rendered with an error in Dagit. This is now fixed.
- For users running dagit version >= 1.0.17 (or dagster-cloud) with dagster version < 1.0.17, errors could occur when hitting "Materialize All" and some other asset-related interactions. This has been fixed.
