databricks-labs-blueprint

Latest version: v0.10.1

0.6.2

* Applied type casting and removed empty kwargs for `Command` ([108](https://github.com/databrickslabs/blueprint/issues/108)). A new method, `get_argument_type`, has been added to the `Command` class in `cli.py` to determine the type of a given argument name from the function's signature. The `_route` method now removes empty keyword arguments from the `kwargs` dictionary and applies type casting based on the argument type returned by `get_argument_type`. This ensures that the `kwargs` passed into `App.command` are correctly typed and that empty keyword arguments, which were previously passed through as empty strings, are dropped. In the CLI test file, the `foo` command's keyword arguments now cover `age` (int), `salary` (float), `is_customer` (bool), and `address` (str, with a default value) alongside the existing `name` argument, and the `test_commands` and `test_injects_prompts` functions have been updated accordingly. These changes improve the input validation and type safety of `App.command`; a sketch of the casting logic follows.
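
For illustration, here is a minimal, self-contained sketch of signature-based casting for CLI kwargs, assuming a dispatcher shaped like blueprint's `App`/`Command`; `get_argument_type` matches the name in the changelog, while `route` and `foo` are hypothetical stand-ins, not the library's code:

```python
import inspect


def get_argument_type(fn, argument_name: str):
    """Return the annotated type of an argument, or None if unannotated."""
    parameters = inspect.signature(fn).parameters
    if argument_name not in parameters:
        return None
    annotation = parameters[argument_name].annotation
    return None if annotation is inspect.Parameter.empty else annotation


def route(fn, **kwargs):
    # Drop empty kwargs: the CLI passes missing flags through as empty strings.
    kwargs = {k: v for k, v in kwargs.items() if v != ""}
    # Cast the remaining string values based on the function signature.
    for name, value in list(kwargs.items()):
        argument_type = get_argument_type(fn, name)
        if argument_type is int:
            kwargs[name] = int(value)
        elif argument_type is float:
            kwargs[name] = float(value)
        elif argument_type is bool:
            kwargs[name] = value.lower() == "true"
    return fn(**kwargs)


def foo(name: str, age: int, salary: float, is_customer: bool, address: str = "n/a"):
    return name, age, salary, is_customer, address


print(route(foo, name="serge", age="42", salary="1.5", is_customer="true", address=""))
# ('serge', 42, 1.5, True, 'n/a')
```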

0.6.1

* Made `ProductInfo.version` a `cached_property` to avoid failure when comparing wheel uploads in development ([105](https://github.com/databrickslabs/blueprint/issues/105)). In this release, the `apply` method now sorts upgrade scripts in semantic-versioning order before applying them, avoiding the incorrect ordering a plain lexical sort would produce. The project version is now computed by a `cached_property` named `_version`, which calculates and caches the value once, fixing a failure when comparing wheel uploads during development. The `Wheels` class constructor now takes explicit keyword-only arguments, and a deprecation warning has been added. These changes make the upgrade process, and the library as a whole, more reliable and predictable; a sketch of the version-ordering fix follows.
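
As a hedged illustration of why semantic-version ordering matters here, consider this sketch; the `vX.Y.Z.py` script naming and the simplified SemVer parsing (no pre-release tags) are assumptions, not blueprint's exact code:

```python
from pathlib import Path


def semver_key(script: Path) -> tuple[int, int, int]:
    # "v0.6.1.py" -> (0, 6, 1); a lexical sort would put v0.10.0 before v0.9.0
    major, minor, patch = script.name.removesuffix(".py").lstrip("v").split(".")
    return int(major), int(minor), int(patch)


def apply_upgrades(scripts: list[Path]) -> None:
    for script in sorted(scripts, key=semver_key):
        print(f"applying {script.name}")  # the real method executes the script


apply_upgrades([Path("v0.10.0.py"), Path("v0.9.1.py"), Path("v0.9.0.py")])
# applies v0.9.0.py, v0.9.1.py, v0.10.0.py - not in lexical order
```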

Dependency updates:

* Bump actions/checkout from 4.1.5 to 4.1.6 ([106](https://github.com/databrickslabs/blueprint/pull/106)).

0.6.0

* Added upstream wheel uploads for Databricks Workspaces without public internet access ([99](https://github.com/databrickslabs/blueprint/issues/99)). This commit introduces a new feature for uploading upstream wheel dependencies to Databricks Workspaces that have no public internet access. A new flag on the upload functions lets users include or exclude dependencies in the download list, and the `WheelsV2` class gains a new method, `upload_wheel_dependencies(prefixes)`, which uploads a wheel to the Workspace File System (WSFS) only if its name starts with one of the provided prefixes (see the sketch after this list). Two new tests verify uploading of the main wheel package and of dependent wheel packages, optimizing downloads for specific use cases. This makes the package easier to use in offline environments with restricted internet access, particularly Databricks Workspaces with extra layers of network security.
* Fixed bug for double-uploading of unreleased wheels in air-gapped setups ([103](https://github.com/databrickslabs/blueprint/issues/103)). This release addresses a bug in the `upload_wheel_dependencies` method of the `WheelsV2` class that caused unreleased wheels to be uploaded twice in air-gapped setups. The issue occurred because the condition `if wheel.name == self._local_wheel.name` was never met for unreleased versions, resulting in undefined behavior. A cached property, `_current_version`, has been introduced to handle unreleased versions uploaded to air-gapped workspaces, and a new method, `upload_to_wsfs()`, uploads files to the workspace file system (WSFS) in the integration test. New tests ensure that only the Databricks SDK is uploaded and that the number of installation files is correct. With these changes, the installation files, the Databricks SDK, Blueprint, and the `version.json` metadata are each uploaded to WSFS exactly once.
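
A minimal sketch of the prefix filter described in the first item above; the local wheel-directory layout is an assumption, and the actual WSFS upload is stubbed out:

```python
from pathlib import Path


def upload_wheel_dependencies(wheel_dir: Path, prefixes: list[str]) -> list[str]:
    """Pretend-upload every wheel whose filename starts with one of `prefixes`."""
    uploaded = []
    for wheel in sorted(wheel_dir.glob("*.whl")):
        # e.g. prefixes=["databricks"] ships databricks_sdk-*.whl
        # but skips requests-*.whl and other third-party wheels.
        if not any(wheel.name.startswith(prefix) for prefix in prefixes):
            continue
        uploaded.append(wheel.name)  # the real method writes the file to WSFS
    return uploaded
```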

0.5.0

* Added content assertion for `assert_file_uploaded` and `assert_file_dbfs_uploaded` in `MockInstallation` ([101](https://github.com/databrickslabs/blueprint/issues/101)). This commit adds a content-assertion feature to the `MockInstallation` class, enhancing its testing capabilities. An optional `expected` parameter of type `bytes` on the `assert_file_uploaded` and `assert_file_dbfs_uploaded` methods lets users verify that the uploaded content is correct, and the internal `_assert_upload` method accepts the same parameter so the actual uploaded content is compared against the expected content. The new and updated methods carry informative docstrings explaining their functionality and usage, and new test cases `test_assert_file_uploaded` and `test_load_empty_data_class` in `tests/unit/test_installation.py` exercise the behavior (a usage sketch follows this list).
* Added handling for partial functions in `parallel.Threads` ([93](https://github.com/databrickslabs/blueprint/issues/93)). In this release, the `parallel.Threads` module can handle partial functions, addressing issue [#93](https://github.com/databrickslabs/blueprint/issues/93). A new static method, `_get_result_function_signature`, obtains the signature of a function, or a string representation of its arguments and keywords if it is a `functools.partial` (a stand-alone sketch of this idea also follows this list). The `_wrap_result` class method now logs an error message that includes the function's signature when an exception occurs, and a new unit test, `test_odd_partial_failed`, ensures that `gather` correctly handles partial functions that raise errors. The required Python version remains 3.10, and `pyproject.toml` now lists `isort`, `mypy`, `types-PyYAML`, and `types-requests` among the dependencies. These adjustments improve the functionality and type checking in the `parallel.Threads` module.
* Align configurations with UCX project ([96](https://github.com/databrickslabs/blueprint/issues/96)). This commit brings project configurations in line with the UCX project through various fixes and updates, enhancing compatibility and streamlining collaboration. It addresses pylint configuration warnings, adjusts GitHub Actions workflows, and refines the `pyproject.toml` file. Additionally, the `NiceFormatter` class in `logger.py` has been improved for better code readability, and the versioning scheme has been updated to ensure SemVer and PEP440 compliance, making it easier to manage and understand the project's versioning. Developers adopting the project will benefit from these alignments, as they promote adherence to the project's standards and up-to-date best practices.
* Check backwards compatibility with UCX, Remorph, and LSQL ([84](https://github.com/databrickslabs/blueprint/issues/84)). This release includes an update to the dependabot configuration to check for daily updates in both the pip and github-actions package ecosystems, with a new directory parameter added for the pip ecosystem for more precise update management. Additionally, a new GitHub Actions workflow, "downstreams", has been added to ensure backwards compatibility with UCX, Remorph, and LSQL by running automated downstream checks on pull requests, merge groups, and pushes to the main branch. The workflow has appropriate permissions for writing id-tokens, reading contents, and writing pull-requests, and runs the downstreams action from the databrickslabs/sandbox repository using GITHUB_TOKEN for authentication. These changes improve the security and maintainability of the project by ensuring compatibility with downstream projects and staying up-to-date with the latest package versions, reducing the risk of potential security vulnerabilities and bugs.
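
A hedged usage sketch of the content assertion from the first item above, assuming `MockInstallation` lives in `databricks.labs.blueprint.installation` and that `upload()` takes a filename and raw bytes; check the library for the exact signatures:

```python
from databricks.labs.blueprint.installation import MockInstallation

installation = MockInstallation()
installation.upload("checkout.txt", b"abc")

installation.assert_file_uploaded("checkout.txt")          # existence only
installation.assert_file_uploaded("checkout.txt", b"abc")  # content must also match
```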
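
And a stand-alone sketch of the idea behind `_get_result_function_signature`: `functools.partial` objects have no `__name__`, so the helper has to render the wrapped function plus its bound arguments (this code is illustrative, not the library's):

```python
import functools
import inspect


def get_result_function_signature(func) -> str:
    """Render a readable signature, even for functools.partial objects."""
    if isinstance(func, functools.partial):
        # partials have no __name__, so show the wrapped function + bound args
        args = ", ".join(repr(a) for a in func.args)
        keywords = ", ".join(f"{k}={v!r}" for k, v in func.keywords.items())
        bound = ", ".join(part for part in (args, keywords) if part)
        return f"{func.func.__name__}({bound})"
    return f"{func.__name__}{inspect.signature(func)}"


def fails(a, b=None):
    raise ValueError("boom")


print(get_result_function_signature(functools.partial(fails, 1, b=2)))  # fails(1, b=2)
print(get_result_function_signature(fails))                             # fails(a, b=None)
```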

Dependency updates:

* Bump actions/setup-python from 4 to 5 ([89](https://github.com/databrickslabs/blueprint/pull/89)).
* Bump softprops/action-gh-release from 1 to 2 ([87](https://github.com/databrickslabs/blueprint/pull/87)).
* Bump actions/checkout from 2.5.0 to 4.1.2 ([88](https://github.com/databrickslabs/blueprint/pull/88)).
* Bump codecov/codecov-action from 1 to 4 ([85](https://github.com/databrickslabs/blueprint/pull/85)).
* Bump actions/checkout from 4.1.2 to 4.1.3 ([95](https://github.com/databrickslabs/blueprint/pull/95)).
* Bump actions/checkout from 4.1.3 to 4.1.5 ([100](https://github.com/databrickslabs/blueprint/pull/100)).

0.4.4

* If `Threads.strict()` raises just one error, don't wrap it with `ManyError` ([79](https://github.com/databrickslabs/blueprint/issues/79)). The `strict` method, a companion to `gather` in the `parallel.py` module of the `databricks/labs/blueprint` package, has changed how it handles errors. Previously, if any task in the `tasks` sequence failed, `strict` raised a `ManyError` exception containing all the errors; now, if only one error occurs, that error is raised directly without being wrapped in `ManyError`. This simplifies error handling and avoids unnecessary nesting of exceptions. Additionally, the `__tracebackhide__` dunder variable has been set in the method so that test frameworks such as pytest hide its frame from tracebacks, making failures easier to read. A sketch of the unwrapping logic follows.
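
A minimal, self-contained sketch of the unwrapping behavior, with a sequential stand-in for the threaded `gather`; only the `ManyError`/`strict` shape follows the changelog, the rest is illustrative:

```python
from collections.abc import Callable, Sequence
from typing import TypeVar

T = TypeVar("T")


class ManyError(Exception):
    """Aggregate exception raised only when more than one task fails."""

    def __init__(self, errs: Sequence[BaseException]):
        self.errs = list(errs)
        super().__init__(f"Detected {len(errs)} failures: {', '.join(map(str, errs))}")


def _gather(tasks: Sequence[Callable[[], T]]) -> tuple[list[T], list[BaseException]]:
    # Sequential stand-in for the threaded gather: collect results and errors.
    results: list[T] = []
    errors: list[BaseException] = []
    for task in tasks:
        try:
            results.append(task())
        except Exception as err:
            errors.append(err)
    return results, errors


def strict(tasks: Sequence[Callable[[], T]]) -> list[T]:
    __tracebackhide__ = True  # pytest convention: hide this frame in tracebacks
    results, errors = _gather(tasks)
    if len(errors) == 1:
        raise errors[0]  # a single failure is raised as-is, not wrapped
    if errors:
        raise ManyError(errors)
    return results
```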

0.4.3

* Fixed marshalling & unmarshalling edge cases ([76](https://github.com/databrickslabs/blueprint/issues/76)). The serialization and deserialization methods have been updated to better handle edge cases during marshalling and unmarshalling. The `_marshal_list` method now returns an empty list instead of `None` in the relevant edge cases, and both `_unmarshal` and `_unmarshal_dict` return `None` unchanged when the input is `None`. The `_unmarshal` method now calls `_unmarshal_generic` for generic aliases instead of checking whether the type reference is a dictionary or list, and `_unmarshal_generic` itself handles `None` input. A new test case, `test_load_empty_data_class()`, in `tests/unit/test_installation.py` verifies that these edge cases are handled correctly, making serialization and deserialization more reliable. A sketch of the `None`-safe unmarshalling follows.
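
A hedged sketch of `None`-safe unmarshalling for dataclasses; `_unmarshal` matches the name in the changelog, but the body here is illustrative and handles only lists, dicts, and flat dataclasses, and `WorkspaceConfig` is a hypothetical example type:

```python
import dataclasses
import typing


def _unmarshal(raw, type_ref):
    if raw is None:
        return None  # the edge-case fix: pass None through untouched
    origin = typing.get_origin(type_ref)
    if origin is list:  # generic alias like list[str]
        (item_type,) = typing.get_args(type_ref)
        return [_unmarshal(item, item_type) for item in raw]
    if origin is dict:  # generic alias like dict[str, int]
        _, value_type = typing.get_args(type_ref)
        return {key: _unmarshal(value, value_type) for key, value in raw.items()}
    if dataclasses.is_dataclass(type_ref):
        field_types = {field.name: field.type for field in dataclasses.fields(type_ref)}
        return type_ref(**{k: _unmarshal(v, field_types[k]) for k, v in raw.items()})
    return type_ref(raw)


@dataclasses.dataclass
class WorkspaceConfig:
    inventory_database: str
    include_group_names: list[str] | None = None


print(_unmarshal({"inventory_database": "ucx"}, WorkspaceConfig))
# WorkspaceConfig(inventory_database='ucx', include_group_names=None)
```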
