dagster-cloud

Latest version: v1.9.9

1.1.14

New

- Large asset graphs can now be materialized in Dagit without needing to first enter an asset subset. Previously, if you wanted to materialize every asset in such a graph, you needed to first enter `*` as the asset selection before materializing the assets.

1.1.11

New

- Requests that hit a `ReadTimeout` error will now retry in more cases.
- The “User Settings” section now opens in a dialog.
- [beta] Alert policies can now be set to notify on schedule / sensor tick failure. To learn more, check out the docs on alerting: https://docs.dagster.io/dagster-cloud/account/setting-up-alerts

Bugfixes

- [dagit] Fixed search behavior on the Environment Variables page, which was incorrectly case-sensitive. Variables are also now sorted by “last update” time, with the most recently updated variables listed at the top.

1.1.10

New

- Added support in Dagster Cloud Serverless for code locations using `requirements.txt` files with local package dependencies (for example, `../some/other/package`).

Bugfixes

- Fixed an issue where setting `workspace.securityContext` in the agent Helm chart to override the security context of pods launched by the agent caused an error when starting up the agent.
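
For reference, a minimal sketch of the Helm values this fix concerns; the specific security context fields shown here are illustrative assumptions, not a prescribed configuration.

```yaml
# Agent Helm values sketch (values.yaml); fields are illustrative assumptions.
workspace:
  securityContext:            # applied to pods launched by the agent
    runAsNonRoot: true
    runAsUser: 1000
```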

1.1.8

New

- [kubernetes] `securityContext` can now be set in the `dagsterCloudAgent` section of the Helm chart.
- [kubernetes] The agent Helm chart now includes `serverK8sConfig` and `runK8sConfig` keys that allow you to specify additional Kubernetes config applied to each pod spun up by the agent to run Dagster code. Code locations can also be configured with a `server_k8s_config` or `run_k8s_config` dictionary of additional Kubernetes config for the pods the agent spins up for that code location (see the Kubernetes sketch after this list). See the [Kubernetes agent configuration reference](https://docs.dagster.io/dagster-cloud/deployment/agents/kubernetes/configuration-reference#per-location-configuration) for more information.
- [ecs] The ECS agent can now be configured with `server_resources` and/or `run_resources` dictionaries that specify CPU and memory values for each task spun up by the agent to run Dagster code. Code locations can also be configured with a `server_resources` and/or `run_resources` dictionary that applies to each task spun up by the agent for that code location, as sketched in the ECS example after this list. See the [ECS agent configuration reference](https://docs.dagster.io/dagster-cloud/deployment/agents/amazon-ecs/configuration-reference) for more information.
- The Dagster Cloud agent will now re-upload information about each code location to Dagster Cloud every time it starts up. Previously, the agent only uploaded changes when a code location was updated, so the agent could fall out of sync with what was shown in Dagit after a restart.
- The `dagster-cloud serverless deploy-python-executable` command now supports a `--build-in-linux-docker` flag that builds the dependencies within a local Linux Docker container. This enables deploying source-only dependencies (sdists) from non-Linux environments.
- When the Dagster Cloud agent stops heartbeating (for example, when it is being upgraded), dequeueing runs will pause until the agent is available again.
- Restored some metadata to the Code Locations tab in Dagster Cloud, including image, Python file, module name, and commit hash.
- Added an `--asset-key` argument to the `dagster-cloud job launch` CLI command, allowing the launched job to materialize only one or more specific assets from the job.
- A `max_concurrent_dequeue` config option has been added to the `run_queue` section of deployment config, allowing you to slow the rate at which queued runs are launched (see the sketch after this list).
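
To illustrate the Kubernetes options above, here is a minimal sketch of what the Helm values and a per-location `dagster_cloud.yaml` entry might look like. The placement of the keys (under `workspace` in the Helm values and under `container_context.k8s` per location), the sub-key names, and the resource values are assumptions for illustration only; the linked configuration reference is authoritative.

```yaml
# Sketch of agent Helm values (values.yaml).
# Key placement and values are illustrative assumptions.
workspace:
  serverK8sConfig:            # extra Kubernetes config for code server pods
    containerConfig:
      resources:
        limits:
          cpu: 500m
          memory: 1024Mi
  runK8sConfig:               # extra Kubernetes config for run pods
    podSpecConfig:
      nodeSelector:
        disktype: ssd
---
# Sketch of a per-code-location entry (dagster_cloud.yaml).
locations:
  - location_name: my_location
    image: my-registry/my-image:latest
    code_source:
      package_name: my_package
    container_context:
      k8s:
        server_k8s_config:
          container_config:
            resources:
              limits:
                cpu: 250m
                memory: 512Mi
        run_k8s_config:
          pod_template_spec_metadata:
            labels:
              team: data
```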
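
Similarly, an ECS example of the per-location resources; the `container_context.ecs` nesting and the CPU/memory numbers are assumptions, so check the ECS configuration reference for the exact schema.

```yaml
# Sketch of a per-code-location entry (dagster_cloud.yaml) for the ECS agent.
# Key placement and values are illustrative assumptions.
locations:
  - location_name: my_location
    image: my-registry/my-image:latest
    container_context:
      ecs:
        server_resources:     # CPU/memory for the long-running code server task
          cpu: "256"
          memory: "512"
        run_resources:        # CPU/memory for each run task
          cpu: "1024"
          memory: "2048"
```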
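
And a sketch of the new `run_queue` setting in deployment settings; the value shown is an arbitrary example.

```yaml
# Deployment settings sketch; 2 is an arbitrary example value.
run_queue:
  max_concurrent_dequeue: 2
```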

Bugfixes

- Fixed an issue where the Kubernetes agent was sometimes unable to move runs into a failed state when a run worker crashed or was interrupted by the Kubernetes cluster.
- A regression in retry handling for HTTP `429` responses released in `1.1.7` has been resolved.
- Fixed cases where network errors incorrectly reported that retries had been attempted and exhausted.
- Fixed an issue where when adding or updating an environment variable, the change sometimes wasn’t reflected in branch deployments until they were redeployed.

1.1.7

New

- [Non-isolated runs](https://docs.dagster.io/dagster-cloud/deployment/serverless#run-isolation) in Dagster Cloud Serverless now default to running at most 2 ops in parallel, reducing the default memory usage of these runs. This limit can be increased from the Launchpad by configuring the `execution` key, for example:

```yaml
execution:
  config:
    multiprocess:
      max_concurrent: 4
```


- Run dequeue operations can now happen concurrently, improving the throughput of starting new runs.

Bugfixes

- Fixed an issue where specifying a dictionary of proxies in the `dagster_cloud_api.proxies` key in an agent’s `dagster.yaml` file raised an error when proxies were also being set using environment variables.
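
For context, a minimal `dagster.yaml` sketch of the `dagster_cloud_api.proxies` key referenced above; the proxy URLs are placeholders and the exact sub-keys are assumptions, so confirm them against the agent configuration docs.

```yaml
# Agent dagster.yaml sketch; proxy URLs and sub-keys are illustrative assumptions.
dagster_cloud_api:
  agent_token:
    env: DAGSTER_CLOUD_AGENT_TOKEN
  proxies:
    http: "http://proxy.internal:3128"
    https: "http://proxy.internal:3128"
```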

1.1.6

New

- Dagster Cloud Serverless can now deploy changes to your code using PEX files instead of building a new Docker image on each change, resulting in much faster code updates.
- To update your existing GitHub workflows to use PEX-based fast deploys:
1. Replace the YAML files in your `.github/workflows` directory with updated YAML files found in our [quickstart repository](https://github.com/dagster-io/quickstart-etl/tree/e07e944c7504a52b3d252553d51ad2085b4d5914/.github/workflows).
2. Update the new YAML files, setting `DAGSTER_CLOUD_URL` to the value from your original YAML files.
- The `dagster-cloud serverless` command now supports two new subcommands for fast deploys using PEX files:

1. `dagster-cloud serverless deploy-python-executable` can be used instead of `dagster-cloud serverless deploy` to use the fast deploys mechanism. The existing `deploy` command is unchanged.
2. `dagster-cloud serverless upload-base-image` can be used to upload a custom base image used to run code deployed using the above `deploy-python-executable` command. Using custom base images is optional.

More details can be found in [our docs](https://docs.dagster.io/dagster-cloud/deployment/serverless).

- Runs that are launched from the Dagit UI in Dagster Cloud Serverless can now be configured as either non-isolated or isolated. Non-isolated runs are for iterating quickly and trade off isolation for speed. Isolated runs are for production and compute-heavy assets and jobs. For more information, see [the docs](https://docs.dagster.io/dagster-cloud/deployment/serverless#run-isolation).
- Email alerts from Dagster Cloud now include the name of the deployment in the email subject.
