databricks-labs-blueprint

Latest version: v0.6.0

0.6.0

* Added upstream wheel uploads for Databricks Workspaces without Public Internet access ([99](https://github.com/databrickslabs/blueprint/issues/99)). This commit adds support for uploading upstream wheel dependencies to Databricks Workspaces that have no public internet access. A new flag on the upload functions lets users include or exclude dependencies in the download list, and the `WheelsV2` class gained an `upload_wheel_dependencies(prefixes)` method that uploads a wheel to the Workspace File System (WSFS) only if its name starts with one of the provided prefixes. Two new tests verify uploading of the main wheel package and of dependent wheel packages, optimizing downloads for specific use cases. This makes the package easier to use in offline environments with restricted internet access, particularly Databricks Workspaces with extra layers of network security (see the sketch after this list).
* Fixed bug for double-uploading of unreleased wheels in air-gapped setups ([103](https://github.com/databrickslabs/blueprint/issues/103)). This release fixes a bug in the `upload_wheel_dependencies` method of the `WheelsV2` class that caused unreleased wheels to be uploaded twice in air-gapped setups, because the condition `if wheel.name == self._local_wheel.name` was never met for unreleased versions. A cached property `_current_version` now handles unreleased versions uploaded to air-gapped workspaces, and a new `upload_to_wsfs()` method uploads files to the workspace file system (WSFS) in the integration test. New tests verify that only the Databricks SDK is uploaded and that the number of installation files is correct: the installation files, Databricks SDK, Blueprint, and `version.json` metadata are now uploaded correctly to WSFS.
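A minimal sketch of the offline-upload flow from the two notes above. The `WorkspaceClient` wiring, the use of `Installation.assume_user_home`, and the `"databricks"` prefix are illustrative assumptions; `upload_to_wsfs()` and `upload_wheel_dependencies(prefixes)` are the methods named in these notes.

```python
from databricks.sdk import WorkspaceClient
from databricks.labs.blueprint.installation import Installation
from databricks.labs.blueprint.wheels import ProductInfo, WheelsV2

ws = WorkspaceClient()
product_info = ProductInfo(__file__)
installation = Installation.assume_user_home(ws, product_info.product_name())

with WheelsV2(installation, product_info) as wheels:
    # upload this project's freshly built wheel to WSFS
    remote_wheel = wheels.upload_to_wsfs()
    # also upload dependency wheels whose names start with the given
    # prefixes, for workspaces without public internet access
    wheels.upload_wheel_dependencies(["databricks"])
```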

0.5.0

* Added content assertion for `assert_file_uploaded` and `assert_file_dbfs_uploaded` in `MockInstallation` ([101](https://github.com/databrickslabs/blueprint/issues/101)). This commit adds a content assertion feature to the `MockInstallation` class, enhancing its testing capabilities: the `assert_file_uploaded` and `assert_file_dbfs_uploaded` methods accept an optional `expected` parameter of type `bytes`, letting users verify that the uploaded content is correct. The `_assert_upload` method accepts the same parameter and checks that the actual uploaded content matches the expected content. The new and updated methods carry informative docstrings explaining their functionality and usage, and new test cases `test_assert_file_uploaded` and `test_load_empty_data_class` in `tests/unit/test_installation.py` exercise the behavior (see the first sketch after this list).
* Added handling for partial functions in `parallel.Threads` ([93](https://github.com/databrickslabs/blueprint/issues/93)). The `parallel.Threads` module can now handle partial functions, addressing issue [#93](https://github.com/databrickslabs/blueprint/issues/93). A new static method, `_get_result_function_signature`, obtains the signature of a function, or a string representation of its arguments and keywords if it is a partial function, and the `_wrap_result` class method now logs an error message with the function's signature if an exception occurs. A new unit test, `test_odd_partial_failed`, ensures that `gather` correctly handles partial functions that raise errors (see the second sketch after this list). The required Python version remains 3.10, and `pyproject.toml` now lists `isort`, `mypy`, `types-PyYAML`, and `types-requests` among the dependencies, improving functionality and type checking in the `parallel.Threads` module.
* Align configurations with UCX project ([96](https://github.com/databrickslabs/blueprint/issues/96)). This commit brings project configurations in line with the UCX project through various fixes and updates, enhancing compatibility and streamlining collaboration. It addresses pylint configuration warnings, adjusts GitHub Actions workflows, and refines the `pyproject.toml` file. Additionally, the `NiceFormatter` class in `logger.py` has been improved for better code readability, and the versioning scheme has been updated to ensure SemVer and PEP440 compliance, making it easier to manage and understand the project's versioning. Developers adopting the project will benefit from these alignments, as they promote adherence to the project's standards and up-to-date best practices.
* Check backwards compatibility with UCX, Remorph, and LSQL ([84](https://github.com/databrickslabs/blueprint/issues/84)). This release includes an update to the dependabot configuration to check for daily updates in both the pip and github-actions package ecosystems, with a new directory parameter added for the pip ecosystem for more precise update management. Additionally, a new GitHub Actions workflow, "downstreams", has been added to ensure backwards compatibility with UCX, Remorph, and LSQL by running automated downstream checks on pull requests, merge groups, and pushes to the main branch. The workflow has appropriate permissions for writing id-tokens, reading contents, and writing pull-requests, and runs the downstreams action from the databrickslabs/sandbox repository using GITHUB_TOKEN for authentication. These changes improve the security and maintainability of the project by ensuring compatibility with downstream projects and staying up-to-date with the latest package versions, reducing the risk of potential security vulnerabilities and bugs.
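A short sketch of the new content assertion described above. The `widgets/config.json` path and its bytes are hypothetical; `upload()` is the standard installation upload, and the optional `expected` bytes are the new parameter.

```python
from databricks.labs.blueprint.installation import MockInstallation

installation = MockInstallation()
installation.upload("widgets/config.json", b'{"enabled": true}')

# passes only if the path was uploaded AND the bytes match exactly
installation.assert_file_uploaded("widgets/config.json", b'{"enabled": true}')
```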
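And a sketch of gathering partial functions with `parallel.Threads`; the `copy_table` helper and task names are hypothetical.

```python
from functools import partial
from databricks.labs.blueprint.parallel import Threads

def copy_table(source: str, target: str) -> str:
    return f"{source} -> {target}"

# partial() pre-binds the arguments; gather() runs the zero-argument
# callables in a thread pool and returns (results, errors)
tasks = [partial(copy_table, f"src.t{i}", f"dst.t{i}") for i in range(4)]
results, errors = Threads.gather("copying tables", tasks)
```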

Dependency updates:

* Bump actions/setup-python from 4 to 5 ([89](https://github.com/databrickslabs/blueprint/pull/89)).
* Bump softprops/action-gh-release from 1 to 2 ([87](https://github.com/databrickslabs/blueprint/pull/87)).
* Bump actions/checkout from 2.5.0 to 4.1.2 ([88](https://github.com/databrickslabs/blueprint/pull/88)).
* Bump codecov/codecov-action from 1 to 4 ([85](https://github.com/databrickslabs/blueprint/pull/85)).
* Bump actions/checkout from 4.1.2 to 4.1.3 ([95](https://github.com/databrickslabs/blueprint/pull/95)).
* Bump actions/checkout from 4.1.3 to 4.1.5 ([100](https://github.com/databrickslabs/blueprint/pull/100)).

0.4.4

* If `Threads.strict()` raises just one error, don't wrap it with `ManyError` ([79](https://github.com/databrickslabs/blueprint/issues/79)). The `Threads.strict` method in the `parallel.py` module of the `databricks/labs/blueprint` package changed how it handles errors. Previously, if any task in the `tasks` sequence failed, `strict` raised a `ManyError` exception containing all the errors; now, if only one error occurs, that error is raised directly without being wrapped in `ManyError`, which simplifies error handling and avoids unnecessary nesting of exceptions (see the sketch below). Additionally, the `__tracebackhide__` dunder variable was added to the method so that its frame is hidden from tracebacks, improving their readability. This update provides a more streamlined, user-friendly experience for handling errors in parallel processing tasks.
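A sketch of the new behavior, with a hypothetical failing task: a single failure is raised as-is, while multiple failures would still arrive wrapped in `ManyError`.

```python
from databricks.labs.blueprint.parallel import Threads

def boom():
    raise ValueError("boom")

try:
    # only one task fails, so the ValueError is raised directly,
    # not wrapped in ManyError
    Threads.strict("sample work", [boom])
except ValueError as err:
    print(f"caught: {err}")
```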

0.4.3

* Fixed marshalling & unmarshalling edge cases ([76](https://github.com/databrickslabs/blueprint/issues/76)). The serialization and deserialization methods now handle edge cases better: `_marshal_list` returns an empty list instead of `None` in certain edge cases, and both `_unmarshal` and `_unmarshal_dict` return `None` as-is if the input is `None`. `_unmarshal` now calls `_unmarshal_generic` when the type reference is a generic alias, instead of checking whether it is a dictionary or list, and `_unmarshal_generic` also handles `None` inputs. A new test case, `test_load_empty_data_class()`, in `tests/unit/test_installation.py` verifies this behavior (see the sketch below), making the serialization and deserialization processes more reliable.
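For illustration, a hypothetical config class with a newly added optional field: a previously saved file that lacks the field should now load with `None` instead of failing. `EvolvedConfig` and the `filename=` keyword usage are assumptions for this sketch.

```python
from dataclasses import dataclass
from databricks.labs.blueprint.installation import MockInstallation

@dataclass
class EvolvedConfig:
    initial: int
    added_in_v2: list[str] | None = None

# the stored file predates the `added_in_v2` field
installation = MockInstallation({"config.yml": {"initial": 9}})
config = installation.load(EvolvedConfig, filename="config.yml")
assert config.added_in_v2 is None
```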

0.4.2

* Fixed edge cases when loading `typing.Dict`, `typing.List`, and `typing.ClassVar` ([74](https://github.com/databrickslabs/blueprint/issues/74)). This release improves the handling of edge cases involving `typing.Dict`, `typing.List`, and `typing.ClassVar` during serialization and deserialization of dataclasses and generic types. The `_marshal` and `_unmarshal` functions now check the `__origin__` attribute to detect a `ClassVar` and skip it, and the `_marshal_dataclass` and `_unmarshal_dataclass` functions check the `__dataclass_fields__` attribute so that only dataclass fields are marshalled and unmarshalled. A new unit test loads a complex data class through `MockInstallation`: the class contains a string, a nested dictionary, a list of `Policy` objects, and a dictionary mapping string keys to `Policy` objects, and the test verifies that the instance round-trips to and from JSON according to the declared attribute types, including `typing.Dict`, `typing.List`, and `typing.ClassVar` (see the first sketch after this list). These changes make the library more reliable and robust when handling complex types from the `typing` module.
* `MockPrompts.extend()` now returns a copy ([72](https://github.com/databrickslabs/blueprint/issues/72)). Previously, the `extend()` method in the `MockPrompts` class of the `tui.py` module modified the original `MockPrompts` object, which caused problems when the same object was reused in several places within a test, since each `extend()` call altered its state. `extend()` now returns a copy of the `MockPrompts` object with the updated patterns and answers, leaving the original untouched, so it can be safely reused across test scenarios without unintended side effects (see the second sketch after this list). Additional tests verify the correct behavior of both the new and the original prompts.
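A sketch mirroring the round-trip test described above; the `Policy` and `ComplexClass` dataclasses and the `filename=` keyword usage are illustrative stand-ins.

```python
import typing
from dataclasses import dataclass, field
from databricks.labs.blueprint.installation import MockInstallation

@dataclass
class Policy:
    policy_id: str
    name: str

@dataclass
class ComplexClass:
    name: str
    spark_conf: typing.Dict[str, str]
    policies: typing.List[Policy] = field(default_factory=list)
    # ClassVar attributes are skipped during (un)marshalling
    CONST: typing.ClassVar[str] = "not serialized"

installation = MockInstallation()
installation.save(
    ComplexClass("example", {"spark.ui.enabled": "false"}, [Policy("1", "p1")]),
    filename="complex.json",
)
loaded = installation.load(ComplexClass, filename="complex.json")
assert loaded.policies[0].name == "p1"
```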
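And a sketch of the copy semantics of `extend()`: the original object is untouched, so it can be reused across tests. The prompt patterns and answers are hypothetical.

```python
from databricks.labs.blueprint.tui import MockPrompts

base = MockPrompts({r"Continue.*": "yes"})
extended = base.extend({r"Name for the new job.*": "blueprint-demo"})

# extend() returns a new MockPrompts; `base` keeps only its own pattern
assert extended is not base
assert base.question("Continue?") == "yes"
```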

0.4.1

* Fixed `MockInstallation` to emulate workspace-global setup ([69](https://github.com/databrickslabs/blueprint/issues/69)). In this release, the `MockInstallation` class in the `installation` module has been updated to better replicate a workspace-global setup, enhancing testing and development accuracy. The `is_global` method now utilizes the `product` method instead of `_product`, and a new instance variable `_is_global` with a default value of `True` is introduced in the `__init__` method. Moreover, a new `product` method is included, which consistently returns the string "mock". These enhancements resolve issue [#69](https://github.com/databrickslabs/blueprint/issues/69), "Fixed `MockInstallation` to emulate workspace-global setup", ensuring the `MockInstallation` instance behaves as a global installation, facilitating precise and reliable testing and development for our software engineering team.
* Improved `MockPrompts` with `extend()` method ([68](https://github.com/databrickslabs/blueprint/issues/68)). The `MockPrompts` class in the library's TUI module gained an `extend()` method that adds new patterns and corresponding answers to the existing list of questions and answers in a `MockPrompts` object. The added patterns are compiled as regular expressions, and the questions-and-answers list is sorted by pattern length in descending order. This is particularly useful in tests where prompt answers need to change: extending the list handles additional prompts without modifying the existing ones, keeping test code organized and maintainable (see the sketch after this list). Asking a question through a prompt that hasn't been mocked raises a `ValueError` with an appropriate error message.
* Use Hatch v1.9.4 as the build machine requirement ([70](https://github.com/databrickslabs/blueprint/issues/70)). The Hatch version required on the build machine was updated from 1.7.0 to 1.9.4. This streamlines the Hatch setup and version management: the specific installation step was removed, `hatch` is listed directly in the required field, and the pre-setup command now only runs `hatch env create`. The acceptance tool version was also updated so that the project builds and tests consistently with the specified Hatch version; the change is implemented in the acceptance workflow file and in the version of the acceptance tool used by the sandbox. The project can now benefit from the latest features and bug fixes in Hatch 1.9.4, improving the reliability and efficiency of the build process and resolving issue [#70](https://github.com/databrickslabs/blueprint/issues/70).
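A sketch of extending mocked prompts in a test, assuming hypothetical prompt texts. The catch-all `r".*"` pattern answers anything not matched by a more specific pattern.

```python
from databricks.labs.blueprint.tui import MockPrompts

# start with a catch-all answer, then add a more specific prompt
prompts = MockPrompts({r".*": ""})
prompts = prompts.extend({r"Open .* in the browser.*": "no"})

assert prompts.question("Open the job run in the browser?") == "no"
```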
