Databricks-labs-ucx

Latest version: v0.57.0


0.52.0

Not secure
* Added handling for Databricks errors during workspace listings in the table migration status refresher ([3378](https://github.com/databrickslabs/ucx/issues/3378)). This change improves error handling and stability in the table migration status refresher, resolving issue [#3262](https://github.com/databrickslabs/ucx/issues/3262), which reported Databricks errors during workspace listings. `DatabricksError` is now imported from `databricks.sdk.errors`, a new `_iter_catalogs` method lists catalogs with error handling for `DatabricksError`, and `_iter_schemas` now calls `self._iter_catalogs()` instead of `_ws.catalogs.list()`, with the same error handling. The `assessment` workflow has been updated, and new unit tests verify the logging of the `TableMigration` class when listing tables in the Databricks workspace, covering errors during catalog, schema, and table listings. These changes let the refresher degrade gracefully instead of failing outright; a minimal sketch of the pattern follows this list.
* Convert READ_METADATA to UC BROWSE permission for tables, views and databases ([3403](https://github.com/databrickslabs/ucx/issues/3403)). The `uc_grant_sql` method in `grants.py` now converts `READ_METADATA` permissions to `BROWSE` permissions for tables, views, and databases by adding new entries to the dictionary that maps permission types to their corresponding UC actions; a sketch of this mapping also follows this list. The `grant_loader` function in the `hive_metastore` module has additionally been modified to change the action type of a grant from `READ_METADATA` to `EXECUTE` in one specific case. The `test_grants.py` unit tests have been extended with a case that verifies the conversion of `READ_METADATA` to `BROWSE` for a grant on a database and for a new `udf="function"` parameter. These changes resolve issue [#2023](https://github.com/databrickslabs/ucx/issues/2023) and have been verified through manual testing and unit tests; the scope of changed functionality is limited, and the existing test suite is expected to keep passing.
* Migrates Pipelines crawled during the assessment phase ([2778](https://github.com/databrickslabs/ucx/issues/2778)). A new utility class, `PipelinesMigrator`, has been introduced to facilitate the migration of Delta Live Tables (DLT) pipelines. It is used in a new workflow that clones the DLT pipelines found during the assessment phase, with specific configurations, into new Unity Catalog (UC) pipelines; migration can be skipped for selected pipelines by listing their pipeline IDs. The migrator takes the `WorkspaceClient`, `WorkspaceContext`, `AccountClient`, and a flag for running the command as a collection, and uses a `PipelinesCrawler` and `JobsCrawler` to perform the migration. The commit also adds a new command, `migrate_dlt_pipelines`, to the CLI of the ucx package. Three test scenarios with different pipeline specifications verify the migration under various conditions, using a mock installation, unit tests, and integration tests, with no reliance on a staging environment. The tests cover an installation with two jobs, `test` and `assessment`, with job IDs `123` and `456` respectively; the installation state is recorded in a `state.json` file, and a `pipeline_mapping.csv` configuration file maps each source pipeline ID to the target catalog, schema, pipeline, and workspace names.
* Removed `try-except` around verifying the migration progress prerequisites in the `migrate-tables` cli command ([3439](https://github.com/databrickslabs/ucx/issues/3439)). In the latest release, the `ucx` package's `migrate-tables` CLI command has changed how progress-tracking prerequisites are handled. The try-except block around the verification has been removed, so the RuntimeWarning is now propagated, producing a more specific and helpful error message. If the prerequisites are not met, the `verify` method raises an exception and the migration does not proceed. The tests for `migrate_tables` have been updated accordingly, including a new test case, `test_migrate_tables_errors_out_before_assessment`, which checks that the migration does not proceed when verification fails. This change affects the existing `databricks labs ucx migrate-tables` command and brings improved precision and reliability to the migration process.
* Removed redundant internal methods from create_account_group ([3395](https://github.com/databrickslabs/ucx/issues/3395)). The `create_account_group` function's redundant internal methods have been removed, and its signature has been modified to retrieve the workspace ID from `AccountWorkspaces._workspaces()` instead of taking it as a parameter, resolving issue [#3170](https://github.com/databrickslabs/ucx/issues/3170). The `AccountWorkspaces` class now accepts a list of workspace IDs on instantiation, improving readability and eliminating redundancy. Unit tests verify that the function creates a group if it doesn't exist, throws an exception if the group already exists, filters system groups, and handles cases where a group already has the required number of members in a workspace. These changes simplify the codebase and improve the maintainability of the project.
* Updated sqlglot requirement from <25.33,>=25.5.0 to >=25.5.0,<25.35 ([3407](https://github.com/databrickslabs/ucx/issues/3407)). In this release, the upper bound of the sqlglot requirement has been raised from `<25.33` to `<25.34`, allowing the 25.33.x series with its bug fixes and new features; a toy example of how ucx exercises sqlglot follows this list. In v25.33.0, there were two breaking changes: the TIMESTAMP data type now maps to Type.TIMESTAMPTZ, and the NEXT keyword is now treated as a function keyword. New features include support for generated columns in PostgreSQL and the ability to preserve tables in the replace_table method, alongside bug fixes for BigQuery, Presto, and Spark. The v25.32.1 release contained two bug fixes related to BigQuery and one related to Presto. v25.32.0 had three breaking changes: support for ATTACH/DETACH statements, tokenization of hints as comments, and a fix to datetime coercion in the canonicalize rule; it also introduced support for TO_TIMESTAMP* variants in Snowflake, improved error messages in the Redshift transpiler, and fixes for SQL Server, MySQL, and PostgreSQL.
* Updated sqlglot requirement from <25.33,>=25.5.0 to >=25.5.0,<25.35 ([3413](https://github.com/databrickslabs/ucx/issues/3413)). In this release, the `sqlglot` upper bound has been raised to `<25.35`, with the lower bound unchanged at `>=25.5.0`. The newly allowed version includes one breaking change, an optimization of alias expansion for USING STRUCT fields, along with support for generated columns in PostgreSQL. Bug fixes address proper consumption of dashed table parts and removal of parentheses from CURRENT_USER in Presto, make TIMESTAMP map to Type.TIMESTAMPTZ, parse DEFAULT in a VALUES clause into a Var, and improve transpilation in the BigQuery and Snowflake dialects as well as JSONPathTokenizer leniency. See the `sqlglot` changelog for further details.
* Updated sqlglot requirement from <25.35,>=25.5.0 to >=25.5.0,<26.1 ([3433](https://github.com/databrickslabs/ucx/issues/3433)). In this release, the required version range of the `sqlglot` library has been widened to allow releases up to, but excluding, 26.1, which newly admits `sqlglot` v26.0.0. Because that release introduced breaking changes, the commit message reproduces its changelog, covering the breaking changes, new features, bug fixes, and other modifications, along with the list of commits merged into the `sqlglot` repository. Thorough testing is advised to ensure the updated version does not introduce regressions, and future `sqlglot` releases should be tracked to keep the project up to date.
* changing table_migration to user_isolation ([3389](https://github.com/databrickslabs/ucx/issues/3389)). In this release, the job cluster name in the Hive Metastore to Unity Catalog migration workflows has been changed from `table_migration` to `user_isolation`. The rename affects all references to the job cluster in convert_managed_table, migrate_external_tables_sync, migrate_dbfs_root_delta_tables, migrate_dbfs_root_non_delta_tables, migrate_views, migrate_hive_serde_in_place, and update_migration_status, as well as the job_task decorators that specify the job cluster. This change improves user isolation during the migration process and resolves issue [#3172](https://github.com/databrickslabs/ucx/issues/3172). Note that it is purely a rename and does not modify the functionality of the code.
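
To make the error-handling entry above concrete, here is a minimal sketch of listing catalogs while tolerating Databricks API errors. The helper name `iter_catalogs` is a stand-in for ucx's private `_iter_catalogs`; only the `DatabricksError` import and the `catalogs.list()` call are confirmed by the entry:

```python
import logging
from collections.abc import Iterator

from databricks.sdk import WorkspaceClient
from databricks.sdk.errors import DatabricksError
from databricks.sdk.service.catalog import CatalogInfo

logger = logging.getLogger(__name__)

def iter_catalogs(ws: WorkspaceClient) -> Iterator[CatalogInfo]:
    # Yield catalogs, logging (rather than propagating) Databricks API errors
    # so that one failing listing does not abort the whole status refresh.
    try:
        yield from ws.catalogs.list()
    except DatabricksError as e:
        logger.error("Cannot list catalogs", exc_info=e)
```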
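
And a sketch of the `READ_METADATA` to `BROWSE` conversion described above. The dictionary shape and function signature here are assumptions for illustration, not ucx's exact `uc_grant_sql`:

```python
# Map (Hive action, object type) -> UC privilege; READ_METADATA becomes BROWSE.
HIVE_TO_UC_ACTION = {
    ("READ_METADATA", "TABLE"): "BROWSE",
    ("READ_METADATA", "VIEW"): "BROWSE",
    ("READ_METADATA", "DATABASE"): "BROWSE",
}

def uc_grant_sql(action: str, object_type: str, object_key: str, principal: str) -> str | None:
    uc_action = HIVE_TO_UC_ACTION.get((action.upper(), object_type.upper()))
    if uc_action is None:
        return None  # grant has no Unity Catalog equivalent
    return f"GRANT {uc_action} ON {object_type} {object_key} TO `{principal}`"

# e.g. uc_grant_sql("READ_METADATA", "TABLE", "main.sales.orders", "analysts")
# -> "GRANT BROWSE ON TABLE main.sales.orders TO `analysts`"
```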
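
The repeated sqlglot bumps above matter because ucx relies on sqlglot to parse and transpile SQL during linting and migration. A toy example of the kind of call whose behavior upstream releases can change:

```python
import sqlglot

# Parse a Hive-style table reference and re-render it for Databricks SQL;
# breaking changes in sqlglot releases can alter how such expressions parse.
expression = sqlglot.parse_one(
    "SELECT * FROM hive_metastore.sales.orders",
    dialect="databricks",
)
print(expression.sql(dialect="databricks"))
```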

0.51.0

Not secure
* Added `assign-owner-group` command ([3111](https://github.com/databrickslabs/ucx/issues/3111)). The Databricks Labs UCX tool now includes a new `assign-owner-group` command, allowing users to assign an owner group to the workspace. This group is designated as the owner of all migrated tables and views, providing better control and organization of resources, and the command can be executed against a single workspace or across multiple workspaces. The implementation adds new classes, methods, and attributes in files such as `cli.py`, `config.py`, and `groups.py`, enhancing ownership management. The `assign-owner-group` command replaces the functionality of issue [#3075](https://github.com/databrickslabs/ucx/issues/3075) and addresses issue [#2890](https://github.com/databrickslabs/ucx/issues/2890), ensuring proper schema ownership and handling of crawled grants. Note that running the `migrate-tables` workflow will assign the new owner group for the Hive Metastore instance in the workspace installation.
* Added `opencensus` to known list ([3052](https://github.com/databrickslabs/ucx/issues/3052)). In this release, OpenCensus, a popular set of tools for distributed tracing and monitoring, has been added to the list of known libraries in the configuration file. This change does not affect existing functionality; it adds a new entry so that ucx's code analysis recognizes OpenCensus modules instead of flagging them as unknown, improving the experience for users who depend on this tool.
* Added default owner group selection to the installer ([3370](https://github.com/databrickslabs/ucx/issues/3370)). A new class, `AccountGroupLookup`, has been added to select the default owner group during installation, addressing issue [#3111](https://github.com/databrickslabs/ucx/issues/3111). It uses the `workspace_client` to determine the owner group and a `pick_owner_group` method to prompt the user for a selection when necessary. Ownership selection has been improved with a check in the installer's `_static_owner` method to determine whether the current user belongs to the default owner group. The `GroupManager` class has been updated to use the new `AccountGroupLookup` class and its `pick_owner_group` and `validate_owner_group` methods, and a new `default_owner_group` variable in the `ConfigureGroups` class configures groups during installation based on user input. A new unit test, `test_configure_with_default_owner_group`, demonstrates the expected workspace configuration values when a default owner group is specified during installation.
* Added handling for non UTF-8 encoded notebook error explicitly ([3376](https://github.com/databrickslabs/ucx/issues/3376)). A new enhancement has been implemented to address the issue of non-UTF-8 encoded notebooks failing to load by introducing explicit error handling for this case. A UnicodeDecodeError exception is now caught and logged as a warning, while the notebook is skipped and returned as None. This change is implemented in the load_dependency method in the loaders.py file, which is a part of the assessment workflow. Additionally, a new unit test has been added to verify the behavior of this change, and the assessment workflow has been updated accordingly. The new test function in test_loaders.py checks for different types of exceptions, specifically PermissionError and UnicodeDecodeError, ensuring that the system can handle notebooks with non-UTF-8 encoding gracefully. This enhancement resolves issue [#3374](https://github.com/databrickslabs/ucx/issues/3374), thereby improving the overall robustness of the application.
* Added migration progress documentation ([3333](https://github.com/databrickslabs/ucx/issues/3333)). In this release, the `migration-progress-experimental` workflow has been updated to track the migration progress of the subset of inventory tables that relate to workspace resources being migrated to Unity Catalog (UC). The workflow updates the inventory tables and records migration progress in the UCX catalog tables. To use it, users must attach a UC metastore to the workspace, create a UCX catalog, and ensure the assessment job has run successfully. The `Migration Progress` section of the documentation gains a new markdown file describing the migration progress dashboard and the experimental workflow, which generates historical records of inventory objects relevant to the migration. These records are stored in the UCX UC catalog, in a historical table holding the object type, object ID, data, failures, owner, and UCX version. The process also tracks dangling Hive or workspace objects that are not referenced by business resources, and progress is persisted in the UCX UC catalog, allowing cross-workspace tracking of migration progress.
* Added note about running assessment once ([3398](https://github.com/databrickslabs/ucx/issues/3398)). The README now clarifies that the UCX assessment workflow is a one-time process: it is executed once and does not update existing results on repeated runs. Instructions have been added on how to refresh the inventory and findings by uninstalling and reinstalling UCX, ensuring that the inventory and findings for a workspace stay up to date and accurate. Engineers should follow the updated instructions when using the assessment workflow.
* Allowing skipping TACLs migration during table migration ([3384](https://github.com/databrickslabs/ucx/issues/3384)). A new optional flag, `skip_tacl_migration`, has been added to the configuration file, giving users control over whether Table Access Control List (TACL) migration runs during table migration. It can be set when creating catalogs and schemas, when migrating tables, and when using the `migrate_grants` method in `application.py`; `install.py` also accepts a `skip_tacl_migration` variable that can be set to `True` during installation to skip TACL migration. New test cases verify skipping TACL migration during grants management and table migration. These changes add flexibility for users managing table migrations and TACL operations in their infrastructure, addressing issues [#3384](https://github.com/databrickslabs/ucx/issues/3384) and [#3042](https://github.com/databrickslabs/ucx/issues/3042).
* Bump `databricks-sdk` and `databricks-labs-lsql` dependencies ([3332](https://github.com/databrickslabs/ucx/issues/3332)). In this update, the `databricks-sdk` and `databricks-labs-lsql` dependencies are upgraded to versions 0.38 and 0.14.0, respectively. The `databricks-sdk` update addresses conflicts, bug fixes, and introduces new API additions and changes, notably impacting methods like `create()`, `execute_message_query()`, and others in workspace-level services. While `databricks-labs-lsql` updates ensure compatibility, its changelog and specific commits are not provided. This pull request also includes ignore conditions for the `databricks-sdk` dependency to prevent future Dependabot requests. It is strongly advised to rigorously test these updates to avoid any compatibility issues or breaking changes with the existing codebase. This pull request mirrors another ([#3329](https://github.com/databrickslabs/ucx/issues/3329)), resolving integration CI issues that prevented the original from merging.
* Explain failures when cluster encounters Py4J error ([3318](https://github.com/databrickslabs/ucx/issues/3318)). This release improves error handling when the cluster encounters Py4J errors in `databricks/labs/ucx/hive_metastore/tables.py`, raising noisy failures instead of swallowing the error with a warning. The `_all_databases()` and `_list_tables()` functions now check whether the error message contains "py4j.security.Py4JSecurityException"; if so, they log an error message with instructions to update or reinstall UCX, otherwise they log a warning and return an empty list (a sketch of this check follows this list). These changes resolve the linked issue [#3271](https://github.com/databrickslabs/ucx/issues/3271), have been verified on the labs environment, and provide more informative error messages, enhancing the overall reliability of the library.
* Rearranged job summary dashboard columns and make job_name clickable ([3311](https://github.com/databrickslabs/ucx/issues/3311)). In this update, the job summary dashboard columns have been improved and the need for the `30_3_job_details.sql` file, which contained a SQL query for selecting job details from the `inventory.jobs` table, has been eliminated. The dashboard columns have been rearranged, and the `job_name` column is now clickable, providing easy access to job details via the corresponding job ID. The changes include modifying the dashboard widget and adding new methods for making the `job_name` column clickable and linking it to the job ID. Additionally, the column titles have been updated to display more relevant information. These improvements have been manually tested and verified in a labs environment.
* Refactor refreshing of migration-status information for tables, eliminate another redundant refresh ([3270](https://github.com/databrickslabs/ucx/issues/3270)). This pull request refactors the way table records are enriched with migration-status information during encoding for the history log in the `migration-progress-experimental` workflow. It ensures that the refresh of migration-status information is explicit and under the control of the workflow, addressing a previously expressed intent. A redundant refresh of migration-status information has been eliminated and additional unit test coverage has been added to the `migration-progress-experimental` workflow. The changes include modifying the existing workflow, adding new methods for refreshing table migration status without updating the history log, and splitting the crawl and update-history-log tasks into three steps. The `TableMigrationStatusRefresher` class has been introduced to obtain the migration status of a table, and new tests have been added to ensure correctness, making the `migration-progress-experimental` workflow more efficient and reliable.
* Safe read files in more places ([3394](https://github.com/databrickslabs/ucx/issues/3394)). This release introduces significant improvements to file handling, addressing issue [#3386](https://github.com/databrickslabs/ucx/issues/3386). A new function, `safe_read_text`, reads files safely, catching and handling exceptions and returning None if reading fails; a sketch of the idea follows this list. It is used in the `is_a_notebook` function and replaces the existing `read_text` method in specific locations, enhancing error handling and robustness. The `databricks labs ucx lint-local-code` command and the `assessment` workflow have been updated accordingly. New test files and methods under `tests/integration/source_code` cover unsupported file types, encoding checks, and ignorable files.
* Track `DirectFsAccess` on `JobsProgressEncoder` ([3375](https://github.com/databrickslabs/ucx/issues/3375)). In this release, the open-source library has been updated with new features related to tracking Direct File System Access (DirectFsAccess) in the JobsProgressEncoder. This change includes the addition of a new `_direct_fs_accesses` method, which detects direct filesystem access by code used in a job and generates corresponding failure messages. The DirectFsAccessCrawler object is used to crawl and track file system access for directories and queries, providing more detailed tracking and encoding of job progress. Additionally, new methods `make_job` and `make_dashboard` have been added to create instances of Job and Dashboard, respectively, and new unit and integration tests have been added to ensure the proper functionality of the updated code. These changes improve the functionality of JobsProgressEncoder by providing more comprehensive job progress information, making the code more modular and maintainable for easier management of jobs and dashboards. This release resolves issue [#3059](https://github.com/databrickslabs/ucx/issues/3059) and enhances the tracking and encoding of job progress in the system, ensuring more comprehensive and accurate reporting of job status and issues.
* Track `UsedTables` on `TableProgressEncoder` ([3373](https://github.com/databrickslabs/ucx/issues/3373)). In this release, the tracking of `UsedTables` has been implemented on the `TableProgressEncoder` in the `tables_progress` function, addressing issue [#3061](https://github.com/databrickslabs/ucx/issues/3061). The workflow `migration-progress-experimental` has been updated to incorporate this change. New objects, `self.used_tables_crawler_for_paths` and `self.used_tables_crawler_for_queries`, have been added as instances of a class responsible for crawling used tables. A `full_name` property has been introduced as a read-only attribute for a source code class, providing a more convenient way of accessing and manipulating the full name of the source code object. A new integration test for the `TableProgressEncoder` component has also been added, specifically testing table failure scenarios. The `TableProgressEncoder` class has been updated to track `UsedTables` using the `UsedTablesCrawler` class, and a new class, `UsedTable`, has been introduced to represent the catalog, schema, and table name of a table. Two new unit tests have been added to ensure the correct functionality of this feature.
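
A minimal sketch of the Py4J error check described above; the helper name and the broad `except` are simplifications of ucx's actual code:

```python
import logging
from collections.abc import Callable, Iterable

logger = logging.getLogger(__name__)

def list_with_py4j_check(fetch: Callable[[], Iterable[str]]) -> list[str]:
    try:
        return list(fetch())
    except Exception as e:  # sketch only; ucx matches narrower error types
        if "py4j.security.Py4JSecurityException" in str(e):
            # Noisy failure: tell the user to update or reinstall UCX.
            logger.error("Spark security exception; please update or reinstall UCX: %s", e)
            raise
        logger.warning("Listing failed, skipping: %s", e)
        return []
```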
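
And a sketch of what a `safe_read_text` helper can look like; the exact set of exceptions ucx catches may differ:

```python
import logging
from pathlib import Path

logger = logging.getLogger(__name__)

def safe_read_text(path: Path) -> str | None:
    # Return the file contents, or None if the file cannot be read or decoded.
    try:
        return path.read_text()
    except (OSError, UnicodeDecodeError) as e:
        logger.warning(f"Cannot read file: {path}", exc_info=e)
        return None
```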

0.50.0

Not secure
* Added `pytesseract` to known list ([3235](https://github.com/databrickslabs/ucx/issues/3235)). The `known.json` file, which tracks packages with native code, now includes `pytesseract`, an Optical Character Recognition (OCR) tool for Python. This lets ucx's code analysis recognize `pytesseract` and its native components rather than flagging them as unknown, addressing part of issue [#1931](https://github.com/databrickslabs/ucx/issues/1931).
* Added hyperlink to database names in database summary dashboard ([3310](https://github.com/databrickslabs/ucx/issues/3310)). The recent change to the `Database Summary` dashboard includes the addition of clickable database names, opening a new tab with the corresponding database page. This has been accomplished by adding a `linkUrlTemplate` property to the `database` field in the `encodings` object within the `overrides` property of the dashboard configuration. The commit also includes tests to verify the new functionality in the labs environment and addresses issue [#3258](https://github.com/databrickslabs/ucx/issues/3258). Furthermore, the display of various other statistics, such as the number of tables, views, and grants, have been improved by converting them to links, enhancing the overall usability and navigation of the dashboard.
* Bump codecov/codecov-action from 4 to 5 ([3316](https://github.com/databrickslabs/ucx/issues/3316)). In this release, the version of the `codecov/codecov-action` dependency has been bumped from 4 to 5, which introduces several new features and improvements to the Codecov GitHub Action. The new version utilizes the Codecov Wrapper for faster updates and better performance, as well as an opt-out feature for tokens in public repositories. This allows contributors to upload coverage reports without requiring access to the Codecov token, improving security and flexibility. Additionally, several new arguments have been added, including `binary`, `gcov_args`, `gcov_executable`, `gcov_ignore`, `gcov_include`, `report_type`, `skip_validation`, and `swift_project`. These changes enhance the functionality and security of the Codecov GitHub Action, providing a more robust and efficient solution for code coverage tracking.
* Depend on a Databricks SDK release compatible with 0.31.0 ([3273](https://github.com/databrickslabs/ucx/issues/3273)). In this release, we have updated the minimum required version of the Databricks SDK to 0.31.0 due to the introduction of a new `InvalidState` error class that is not compatible with the previously declared minimum version of 0.30.0. This change was necessary because Databricks Runtime (DBR) 16 ships with SDK 0.30.0 and does not upgrade to the latest version during installation, unlike previous versions of DBR. This change affects the project's dependencies as specified in the `pyproject.toml` file. We recommend that users verify their systems are compatible with the new version of the Databricks SDK, as this change may impact existing integrations with the project.
* Eliminate redundant migration-index refresh and loads during view migration ([3223](https://github.com/databrickslabs/ucx/issues/3223)). In this pull request, we have optimized the view migration process in the `databricks/labs/ucx/hive_metastore/table_metastore.py` file by eliminating redundant migration-status indexing operations. We have removed the unnecessary refresh of migration-status for all tables/views at the end of view migration, and stopped reloading the migration-status snapshot for every view when checking if it can be migrated and prior to migrating a view. We have introduced a new class `TableMigrationIndex` and imported the `TableMigrationStatusRefresher` class. The `_migrate_views` method now takes an additional argument `migration_index`, which is used in the `ViewsMigrationSequencer` and in the `_migrate_view` method. The `_view_can_be_migrated` and `_sql_migrate_view` methods now also take `migration_index` as an argument, which is used to determine if the view can be migrated. These changes aim to improve the efficiency of the view migration process, making it faster and more resource-friendly.
* Fixed backwards compatibility breakage from Databricks SDK ([3324](https://github.com/databrickslabs/ucx/issues/3324)). In this release, we have addressed a backwards compatibility issue (Issue [#3324](https://github.com/databrickslabs/ucx/issues/3324)) caused by an update to the Databricks SDK, accommodating the new dashboard-related methods in the `databricks.sdk.service` module. Additionally, bug [#3322](https://github.com/databrickslabs/ucx/issues/3322) has been fixed, and the `create` function in `conftest.py` has been updated to use the new `dashboards` module and its `Dashboard` class; the function now returns the dashboard object as a dictionary and calls its `publish` method to publish the dashboard. The `pyproject.toml` file has also been updated, affecting the test and coverage scripts in the default environment: the required test-coverage threshold has been lowered from 90% to 89%, and the test command now includes the `--cov-fail-under=89` flag to enforce that threshold as part of continuous integration, keeping code coverage high and ensuring newly added code has sufficient test cases.
* Fixed issue with cleanup of failed `create-missing-principals` command ([3243](https://github.com/databrickslabs/ucx/issues/3243)). The `create_uc_roles` method in `databricks/labs/ucx/aws/access.py` now handles failures during role creation caused by permission issues: if a failure occurs, the method deletes any roles it created before re-raising the exception, restoring the system to its initial state and preventing the accumulation of partially created roles. A try-except block wraps the code that creates each role and attaches its policy; on a `PermissionDenied` or `NotFound` exception it logs an error message, deletes any previously created roles, and re-raises (a sketch of this rollback pattern follows this list). Unit tests cover the failure scenario and verify that the roles are deleted, improving the robustness of the `databricks labs ucx create-missing-principals` command.
* Improve error handling for `assess_workflows` task ([3255](https://github.com/databrickslabs/ucx/issues/3255)). This pull request introduces improvements to the `assess_workflows` task in the `databricks/labs/ucx` module, focusing on error handling and logging. A new error type, `DatabricksError`, has been added to handle Databricks-specific exceptions in the `_temporary_copy` method, ensuring proper handling and re-raising of Databricks-related errors as `InvalidPath` exceptions. Additionally, log levels for various errors have been updated to better reflect their severity. Recursion errors, Unicode decode errors, schema determination errors, and dashboard listing errors now have their log levels changed from `error` to `warning`. These adjustments provide more fine-grained control over error messages' severity and help avoid unnecessary alarm when these issues occur. These changes improve the robustness, error handling, and logging of the `assess_workflows` task, ensuring appropriate handling and logging of any errors that may occur during execution.
* Require at least 4 cores for UCX VMs ([3229](https://github.com/databrickslabs/ucx/issues/3229)). In this release, the selection of `node_type_id` in `policy.py` has been updated to require a minimum of 4 cores for UCX VMs, in addition to the existing requirements of local disk and at least 32 GB of memory. The change alters the `node_type_id` parameter in the instance pool definition so that only VMs with at least 4 cores are used for UCX, improving performance and reliability (a sketch of the node-type selection follows this list).
* Skip `test_feature_tables` integration test ([3326](https://github.com/databrickslabs/ucx/issues/3326)). The `test_feature_tables` integration test is now skipped, pending resolution of the problems tracked in [#3304](https://github.com/databrickslabs/ucx/issues/3304) and [#3](https://github.com/databrickslabs/ucx/issues/3).
* Speed up `update_migration_status` jobs by eliminating lots of redundant SQL queries ([3200](https://github.com/databrickslabs/ucx/issues/3200)). In this release, the `_retrieve_acls` method in the `grants.py` file has been updated to remove the `_is_migrated` method and inline its functionality, resulting in improved performance for `update_migration_status` jobs. The `_is_migrated` method previously queried the migration status index for each table, but the updated method now refreshes the index once and then uses it for all checks, eliminating redundant SQL queries. Affected workflows include `migrate-tables`, `migrate-external-hiveserde-tables-in-place-experimental`, `migrate-external-tables-ctas`, `scan-tables-in-mounts-experimental`, and `migrate-tables-in-mounts-experimental`, all of which have been updated to utilize the refreshed migration status index and remove dead code. This release also includes updates to existing unit tests and integration tests to ensure the changes' correctness.
* Tech Debt: Fixed issue with Incorrect unit test practice ([3244](https://github.com/databrickslabs/ucx/issues/3244)). In this release, we have made significant improvements to the test suite for our AWS module. Specifically, the test case for `test_get_uc_compatible_roles` in `tests/unit/aws/test_access.py` has been updated to remove mocking code and directly call the `save_uc_compatible_roles` method, improving the accuracy and reliability of the test. Additionally, the MagicMock for the `load` method in the `mock_installation` object has been removed, further simplifying the test code and making it easier to understand. These changes will help to prevent bugs and make it easier to modify and extend the codebase in the future, improving the maintainability and overall quality of our open-source library.
* Updated `migration-progress-experimental` workflow to crawl tables from the `main` cluster ([3269](https://github.com/databrickslabs/ucx/issues/3269)). In this release, we have updated the `migration-progress-experimental` workflow to crawl tables from the `main` cluster instead of the `tacl` one. This change resolves issue [#3268](https://github.com/databrickslabs/ucx/issues/3268) and addresses the problem of the Py4j bridge required for crawling not being available in the `tacl` cluster, leading to failures. The `setup_tacl` job task has been removed, and the `crawl_tables` task has been updated to no longer rely on the TACL cluster, instead refreshing the inventory directly. A new dependency has been added to ensure that the `crawl_tables` task runs after the `verify_prerequisites` task. The `refresh_table_migration_status` task and `update_tables_history_log` task have also been updated to assume that the inventory and migration status have been refreshed in the previous step. A TODO has been added to avoid triggering an implicit refresh if either the table or migration-status inventory is empty.
* Updated databricks-labs-lsql requirement from <0.13,>=0.5 to >=0.5,<0.14 ([3241](https://github.com/databrickslabs/ucx/issues/3241)). In this pull request, we have updated the `databricks-labs-lsql` requirement in the `pyproject.toml` file to a range of greater than 0.5 and less than 0.14, allowing the use of the latest version of this library. The update includes release notes and a changelog from the `databricks-labs-lsql` GitHub repository, detailing new features, bug fixes, and improvements. Notable changes include the addition of the `escape_name` and `escape_full_name` functions, various dependency updates, and modifications to the `as_dict()` method in the `Row` class. This update also includes a list of dependency version updates from the `databricks-labs-lsql` changelog.
* Updated databricks-labs-lsql requirement from <0.14,>=0.5 to >=0.5,<0.15 ([3321](https://github.com/databrickslabs/ucx/issues/3321)). In this release, the `databricks-labs-lsql` package requirement has been updated to `>=0.5,<0.15` in the pyproject.toml file. This update addresses multiple issues and includes several improvements, such as bug fixes, dependency updates, and the addition of go-git libraries. The `RuntimeBackend` component has been improved with better exception handling, and new `escape_name` and `escape_full_name` functions have been added for SQL name escaping. The `Row.as_dict()` method has been deprecated in favor of `asDict()`. The `SchemaDeployer` class now allows overwriting the default `hive_metastore` catalog, and the `MockBackend` component has been improved to properly mock the `savetable` method in `append` mode. Filter specification files have been converted from JSON to YAML format for improved readability. Additionally, the test suite has been expanded, and various methods have been updated to improve codebase readability, maintainability, and ease of use.
* Updated sqlglot requirement from <25.30,>=25.5.0 to >=25.5.0,<25.32 ([3320](https://github.com/databrickslabs/ucx/issues/3320)). In this release, the project's sqlglot dependency range has been updated, keeping the minimum required version at 25.5.0 and raising the upper bound to below 25.32. This picks up a more recent sqlglot, addressing potential bugs in the previously allowed versions and pulling in the fixes and improvements detailed in the sqlglot changelog; the individual commits are truncated in the PR and can be viewed in the compare view.
* Use internal Permissions Migration API by default ([3230](https://github.com/databrickslabs/ucx/issues/3230)). This pull request introduces support for both legacy and new permission migration workflows in the Databricks UCX project. A new configuration option, `use_legacy_permission_migration`, has been added to `WorkspaceConfig` to toggle between the two workflows. When the legacy workflow is not enabled, certain steps in `workflows.py` are skipped and related methods have been renamed to reflect the legacy workflow. The `GroupMigration` class has been renamed to `LegacyGroupMigration` and integration and unit tests have been updated to use the new configuration option and renamed classes/methods. The new workflow no longer queries the `hive_metastore`.`ucx`.`groups` table in certain methods, resulting in changes to the behavior of the `test_runtime_workspace_listing` and `test_runtime_crawl_permissions` tests. Overall, these changes provide flexibility for users to choose between legacy and new permission migration workflows in the Databricks UCX project.
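
A sketch of the clean-up-on-failure pattern the `create-missing-principals` fix applies; the `iam` object and its role/policy calls here are placeholders, not the real AWS calls ucx makes:

```python
class PermissionDenied(Exception):
    """Placeholder for the SDK's permission error in this sketch."""

def create_uc_roles(iam, role_names: list[str]) -> None:
    created: list[str] = []
    try:
        for name in role_names:
            iam.create_role(name)   # placeholder for the real role creation
            iam.put_policy(name)    # placeholder for attaching the policy
            created.append(name)
    except PermissionDenied:
        # Roll back: delete everything created so far, then re-raise so the
        # command fails loudly instead of leaving partial state behind.
        for name in reversed(created):
            iam.delete_role(name)
        raise
```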
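
The 4-core requirement maps naturally onto the Databricks SDK's node-type selector; a minimal sketch, assuming these are the keyword arguments ucx passes:

```python
from databricks.sdk import WorkspaceClient

ws = WorkspaceClient()
# Ask the SDK for the smallest node type with local disk, at least 32 GB of
# memory and, per this release, at least 4 cores.
node_type_id = ws.clusters.select_node_type(local_disk=True, min_memory_gb=32, min_cores=4)
print(node_type_id)
```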

Dependency updates:

* Updated databricks-labs-lsql requirement from <0.13,>=0.5 to >=0.5,<0.14 ([3241](https://github.com/databrickslabs/ucx/pull/3241)).
* Updated databricks-labs-lsql requirement from <0.14,>=0.5 to >=0.5,<0.15 ([3321](https://github.com/databrickslabs/ucx/pull/3321)).
* Updated sqlglot requirement from <25.30,>=25.5.0 to >=25.5.0,<25.32 ([3320](https://github.com/databrickslabs/ucx/pull/3320)).
* Bump codecov/codecov-action from 4 to 5 ([3316](https://github.com/databrickslabs/ucx/pull/3316)).

0.49.0

Not secure
* Added `MigrationSequencer` for jobs ([3008](https://github.com/databrickslabs/ucx/issues/3008)). In this commit, a `MigrationSequencer` class has been added to manage the migration sequence for various resources including jobs, job tasks, job task dependencies, job clusters, and clusters. The class builds a graph of dependencies and analyzes it to generate the migration sequence, which is returned as an iterable of `MigrationStep` objects. These objects contain information about the object type, ID, name, owner, required step IDs, and step number. The commit also includes new unit and integration tests to ensure the functionality is working correctly. The migration sequence is used in tests for assessing the sequencing feature, and it handles tasks that reference existing or non-existing clusters or job clusters, and new cluster definitions. This change is linked to issue [#1415](https://github.com/databrickslabs/ucx/issues/1415) and supersedes issue [#2980](https://github.com/databrickslabs/ucx/issues/2980). Additionally, the commit removes some unnecessary imports and fixtures from a test file.
* Added `phik` to known list ([3198](https://github.com/databrickslabs/ucx/issues/3198)). In this release, we have added `phik` to the known list in the provided JSON file. This change addresses part of issue [#1931](https://github.com/databrickslabs/ucx/issues/1931), as outlined in the linked issues. The `phik` key has been added with an empty list as its value, consistent with the structure of other keys in the JSON file. It is important to note that no existing functionality has been altered and no new methods have been introduced in this commit. The scope of the change is confined to updating the known list in the JSON file by adding the `phik` key.
* Added `pmdarima` to known list ([3199](https://github.com/databrickslabs/ucx/issues/3199)). In this release, we have added `pmdarima`, an open-source Python library for ARIMA-based time series modeling, to our known list. `pmdarima` brings R's `auto.arima` functionality to Python, with utilities for data preprocessing, model selection, and seasonality testing, making time series analysis and forecasting easier and more efficient. Recognizing the library in the known list means ucx's code analysis can handle projects that depend on it. This change partly resolves issue [#1931](https://github.com/databrickslabs/ucx/issues/1931).
* Added `preshed` to known list ([3220](https://github.com/databrickslabs/ucx/issues/3220)). A new library, `preshed`, has been added to our project's known list. `preshed` is a Cython-implemented hash table library optimized for keys that are pre-hashed machine integers, maintained as part of the spaCy ecosystem. With the inclusion of two modules, `preshed` and `preshed.about`, this addition partially resolves issue [#1931](https://github.com/databrickslabs/ucx/issues/1931), allowing ucx's code analysis to recognize projects that depend on `preshed`.
* Added `py-cpuinfo` to known list ([3221](https://github.com/databrickslabs/ucx/issues/3221)). In this release, we have added support for the `py-cpuinfo` library to our project, enabling the use of the `cpuinfo` functionality that it provides. With this addition, developers can now access detailed information about the CPU, such as the number of cores, current frequency, and vendor, which can be useful for performance tuning and optimization. This change partially resolves issue [#1931](https://github.com/databrickslabs/ucx/issues/1931) and does not affect any existing functionality or add new methods to the codebase. We believe that this improvement will enhance the capabilities of our project and enable more efficient use of CPU resources.
* Cater for empty python cells ([3212](https://github.com/databrickslabs/ucx/issues/3212)). In this release, we have resolved an issue where certain notebook cells caused crashes in the dependency builder; empty or comment-only cells were identified as the source of the problem. A check now accounts for these cases, storing an empty tree in the `_python_trees` dictionary when an input cell does not produce a valid tree, and a test verifies the fix on a previously failing repository. If a cell does not produce a tree, the `_load_children_from_tree` method is not executed for that cell, skipping the loading of any child trees. This prevents crashes caused by empty or comment-only cells and improves the overall stability and reliability of the library (an illustration of the empty-cell case follows this list).
* Create `TODO` issues every nightly run ([3196](https://github.com/databrickslabs/ucx/issues/3196)). A commit has been made to update the `acceptance` repository version in the `acceptance.yml` GitHub workflow from `acceptance/v0.4.0` to `acceptance/v0.4.2`, which affects the integration tests. The `Run nightly tests` step in the GitHub repository's workflow has also been updated to use a newer version of the `databrickslabs/sandbox/acceptance` action, from `v0.3.1` to `v0.4.2`. Software engineers should verify that the new version of the `acceptance` repository contains all necessary updates and fixes, and that the integration tests continue to function as expected. Additionally, testing the updated action is important to ensure that the nightly tests run successfully with up-to-date code and can catch potential issues.
* Fixed Integration test failure of migration_tables ([3108](https://github.com/databrickslabs/ucx/issues/3108)). This release includes a fix for two integration tests (`test_migrate_managed_table_to_external_table_without_conversion` and `test_migrate_managed_table_to_external_table_with_clone`) related to Hive Metastore table migration, addressing issues [#3054](https://github.com/databrickslabs/ucx/issues/3054) and [#3055](https://github.com/databrickslabs/ucx/issues/3055). Previously skipped due to underlying problems, these tests have now been unskipped, enhancing the migration feature's test coverage. No changes have been made to the existing functionality, as the focus is solely on including the previously skipped tests in the testing suite. The changes involve removing `pytest.mark.skip` markers from the test functions, ensuring they run and provide a more comprehensive test coverage for the Hive Metastore migration feature. In addition, this release includes an update to DirectFsAccess integration tests, addressing issues related to the removal of DFSA collectors and ensuring proper handling of different file types, with no modifications made to other parts of the codebase.
* Replace MockInstallation with MockPathLookup for testing fixtures ([3215](https://github.com/databrickslabs/ucx/issues/3215)). In this release, the unit-test fixtures have been updated to use MockPathLookup instead of MockInstallation. Specifically, the `_load_sources` function now uses MockPathLookup for loading sources, and a module-level `logger` has been introduced for more precise logging. The `_load_sources` calls in `test_notebook.py` now pass the file path directly instead of a SourceContainer object, allowing more flexible and straightforward testing of file-related functionality and fixing issue [#3115](https://github.com/databrickslabs/ucx/issues/3115).
* Updated sqlglot requirement from <25.29,>=25.5.0 to >=25.5.0,<25.30 ([3224](https://github.com/databrickslabs/ucx/issues/3224)). The open-source library `sqlglot` has been updated to version 25.29.0 with this release, incorporating several breaking changes, new features, and bug fixes. The breaking changes include transpiling `ANY` to `EXISTS`, supporting the `MEDIAN()` function, wrapping values in `NOT value IS ...`, and parsing information schema views into a single identifier. New features include support for the `JSONB_EXISTS` function in PostgreSQL, transpiling `ANY` to `EXISTS` in Spark, transpiling Snowflake's `TIMESTAMP()` function, and adding support for hexadecimal literals in Teradata. Bug fixes include handling a Move edge case in the semantic differ, adding a `NULL` filter on `ARRAY_AGG` only for columns, improving parsing of `WITH FILL ... INTERPOLATE` in Clickhouse, generating `LOG(...)` for `exp.Ln` in TSQL, and optionally parsing a Stream expression. The full changelog can be found in the pull request, which also includes a list of the commits included in this release.
* Use acceptance/v0.4.0 ([3192](https://github.com/databrickslabs/ucx/issues/3192)). A change has been made to the GitHub Actions workflow file for acceptance tests, updating the version of the `databrickslabs/sandbox/acceptance` runner to `acceptance/v0.4.0` and granting write permissions for the `issues` field in the `permissions` section. These updates will allow for the use of the latest version of the acceptance tests and provide the necessary permissions to interact with issues. A `TODO` comment has been added to indicate that the new version of the acceptance tests needs to be updated elsewhere in the codebase. This change will ensure that the acceptance tests are up-to-date and functioning properly.
* Warn about errors instead to avoid job task failure ([3219](https://github.com/databrickslabs/ucx/issues/3219)). In this change, the `refresh_report` method in `jobs.py` has been updated to log warnings instead of raising errors when certain problems are encountered during its execution. Previously, if there were any errors during the linting process, a `ManyError` exception was raised, causing the job task to fail. Now, errors are logged as warnings, allowing the job task to continue running successfully. This resolves issue [#3214](https://github.com/databrickslabs/ucx/issues/3214) and ensures that the job task will not fail due to linting errors, allowing users to be aware of any issues that occurred during the linting process while still completing the job task successfully. The updated method checks for errors during the linting process, adds them to a list, and constructs a string of error messages if there are any. This string of error messages is then logged as a warning using the `logger.warning` function, allowing the method to continue executing and the job task to complete successfully.
* [DOC] Add dashboard section ([3222](https://github.com/databrickslabs/ucx/issues/3222)). In this release, we have added a new dashboard section to the project documentation, which provides visualizations of UCX's outcomes to help users better understand and manage their UCX environment. The new section includes a table listing the available dashboards, including the Azure service principals dashboard. This dashboard displays information about Azure service principals discovered by UCX in configurations from various sources such as clusters, cluster policies, job clusters, pipelines, and warehouses. Each dashboard has text widgets that offer detailed information about the contents and are designed to help users understand UCX's results and progress in a more visual and interactive way. The Azure service principals dashboard specifically offers users valuable insights into their Azure service principals within the UCX environment.
* [DOC] README.md rewrite ([3211](https://github.com/databrickslabs/ucx/issues/3211)). The Databricks Labs UCX package offers a suite of tools for migrating data objects from the Hive metastore to Unity Catalog (UC), encompassing a comprehensive table migration process. This process consists of table mapping, data access setup, creating new UC resources, and migrating Hive metastore data objects. Table mapping is achieved using a table mapping file that defaults to mapping all tables/views to UC tables while preserving the original schema and names, but can be customized as needed. Data access setup involves creating and modifying cloud principals and credentials for UC data. New UC resources are created without affecting existing Hive metastore resources, and users can choose from various strategies for migrating tables based on their format and location. Additionally, the package provides installation resources, including a README notebook, a DEBUG notebook, debug logs, and installation configuration, as well as utility commands for viewing and repairing workflows. The migration process also includes an assessment workflow, group migration workflow, data reconciliation, and code migration commands.
* [chore] Added tests to verify linter not being stuck in the infinite loop ([3225](https://github.com/databrickslabs/ucx/issues/3225)). In this release, we have added new functional tests to ensure that the linter does not get stuck in an infinite loop, addressing a bug that was fixed in version 0.46.0 related to the default format change from Parquet to Delta in Databricks Runtime 8.0 and a SQL parse error. These tests involve creating data frames, writing them to tables, and reading from those tables, using PySpark's SQL functions and a system information schema table to demonstrate the corrected behavior. The tests also include SQL queries that select columns from a system information schema table with a specified limit, using a withColumn() method to add a new column to a data frame based on a condition. These new tests provide assurance that the linter will not get stuck in an infinite loop and that SQL queries with table parameters are supported.
* [internal] Temporarily disable integration tests due to ES-1302145 ([3226](https://github.com/databrickslabs/ucx/issues/3226)). In this release, the integration tests for moving tables, views, and aliasing tables have been temporarily disabled due to issue ES-1302145. The `test_move_tables`, `test_move_views`, and `test_alias_tables` functions were previously decorated with `retried` to handle potential `NotFound` exceptions and had a timeout of 2 minutes, but are now marked with `pytest.mark.skip("ES-1302145")`. Once the issue is resolved, the `pytest.mark.skip` decorator should be removed to re-enable the tests. The remaining code in the file, including the `test_move_tables_no_from_schema`, `test_move_tables_no_to_schema`, and `test_move_views_no_from_schema` functions, is unchanged and still functional.
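
  Re-enabling the tests is a one-line change: delete the skip marker. The pattern, sketched:

  ```python
  import pytest

  @pytest.mark.skip("ES-1302145")  # remove this decorator once the issue is resolved
  def test_move_tables():
      ...
  ```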
* use a path instance for MISSING_SOURCE_PATH and add test ([3217](https://github.com/databrickslabs/ucx/issues/3217)). In this release, the handling of MISSING_SOURCE_PATH has been improved by replacing the string representation with a Path instance using Pathlib, which simplifies checks for missing source paths and enables the addition of a new test for the DependencyProblem class. This test verifies the behavior of the newly introduced method, is_path_missing(), in the DependencyProblem class for determining if a given problem is caused by a missing path. Co-authored by Eric Vergnaud, these changes not only improve the handling and testing of missing paths but also contribute to enhancing the source code analysis functionality of the databricks/labs/ucx project.
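
  A small sketch of the idea; the sentinel value and dataclass fields are assumptions for illustration, not the exact UCX definitions:

  ```python
  from dataclasses import dataclass
  from pathlib import Path

  MISSING_SOURCE_PATH = Path("<MISSING_SOURCE_PATH>")  # assumed sentinel value

  @dataclass
  class DependencyProblem:
      code: str
      message: str
      source_path: Path = MISSING_SOURCE_PATH

      def is_path_missing(self) -> bool:
          # Comparing Path instances avoids brittle string comparisons.
          return self.source_path == MISSING_SOURCE_PATH

  problem = DependencyProblem("import-not-found", "module not found")
  assert problem.is_path_missing()
  ```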

Dependency updates:

* Updated sqlglot requirement from <25.29,>=25.5.0 to >=25.5.0,<25.30 ([3224](https://github.com/databrickslabs/ucx/pull/3224)).

0.48.0

Not secure
* Added `--dry-run` option for ACL migrate ([3017](https://github.com/databrickslabs/ucx/issues/3017)). In this release, we have added a `--dry-run` option to the `migrate-acls` command in the `labs.yml` file, enabling a preview of the migration process without executing it. This feature also introduces the `hms-fed` flag, allowing migration of HMS-FED ACLs while migrating tables. The `ACLMigrator` class in the `application.py` file has been updated to include new parameters, `sql_backend` and `inventory_database`, to perform a dry run migration of Access Control Lists (ACLs). Additionally, a new `retrieve` method has been added to the `ACLMigrator` class to retrieve a list of grants based on the source and destination objects, and a `CrawlerBase` class has been introduced for fetching grants. We have also introduced a new `inferred_grants` table in the deployment schema to store inferred grants during the migration process.
* Added `WorkspacePathOwnership` to determine transitive owners for files and notebooks ([3047](https://github.com/databrickslabs/ucx/issues/3047)). In this release, we introduce a new class `WorkspacePathOwnership` in the `owners.py` module to determine the transitive owners for files and notebooks within a workspace. This class is added as a subclass of `Ownership` and takes `AdministratorLocator` and `WorkspaceClient` as inputs. It has methods to infer the owner from the first `CAN_MANAGE` permission level in the access control list. We also added a new property `workspace_path_ownership` to the existing `HiveMetastoreContext` class, which returns a `WorkspacePathOwnership` object initialized with an `AdministratorLocator` object and a `workspace_client`. This addition enables the determination of owners for files and notebooks within the workspace. The functionality is demonstrated through new tests added to `test_owners.py`. The new tests, `test_notebook_owner` and `test_file_owner`, create a notebook and a workspace file and verify the owner of each using the `owner_of` method. The `AdministratorLocator` is used to locate the administrators group for the workspace and the `PermissionLevel` class is used to specify the permission level for the notebook permissions.
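
  Conceptually, ownership resolution walks the access control list and falls back to the workspace administrators; the sketch below uses plain stand-in data structures rather than the real SDK objects:

  ```python
  from dataclasses import dataclass

  @dataclass
  class AclEntry:
      user_name: str
      permission_level: str

  class WorkspacePathOwnership:
      def __init__(self, acl_by_path: dict[str, list[AclEntry]], admin_group: str):
          self._acl_by_path = acl_by_path
          self._admin_group = admin_group  # stands in for AdministratorLocator

      def owner_of(self, path: str) -> str:
          # The first principal holding CAN_MANAGE is treated as the owner;
          # otherwise fall back to the workspace administrators group.
          for entry in self._acl_by_path.get(path, []):
              if entry.permission_level == "CAN_MANAGE":
                  return entry.user_name
          return self._admin_group

  ownership = WorkspacePathOwnership(
      {"/Users/a@corp.com/etl.py": [AclEntry("a@corp.com", "CAN_MANAGE")]},
      admin_group="admins",
  )
  print(ownership.owner_of("/Users/a@corp.com/etl.py"))  # a@corp.com
  ```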
* Added `mosaicml-streaming` to known list ([3029](https://github.com/databrickslabs/ucx/issues/3029)). In this release, we have expanded the range of recognized packages in our system by adding several new libraries to the known list in the JSON file. The additions include `mosaicml-streaming`, `oci`, `pynacl`, `pyopenssl`, `python-snappy`, and `zstd`. Notably, `mosaicml-streaming` has two new entries, `simulation` and `streaming`, while the other packages have a single entry each. This update addresses issue [#1931](https://github.com/databrickslabs/ucx/issues/1931) and enhances the system's ability to identify and work with a wider variety of packages.
* Added `msal-extensions` to known list ([3030](https://github.com/databrickslabs/ucx/issues/3030)). In this release, we have added support for two new packages, `msal-extensions` and `portalocker`, to our project. The `msal-extensions` package includes modules for extending the Microsoft Authentication Library (MSAL), including cache lock, libsecret, osx, persistence, token cache, and windows. This addition enhances the library's authentication capabilities and provides greater flexibility when working with MSAL. The `portalocker` package offers functionalities for handling file locking with various backends such as Redis, as well as constants, exceptions, and utilities. This package enables developers to manage file locking more efficiently, preventing conflicts and ensuring data consistency. These new packages extend the range of supported packages and functionalities for handling authentication and file locking in the project, providing more options for software engineers to develop robust and secure applications.
* Added `multimethod` to known list ([3031](https://github.com/databrickslabs/ucx/issues/3031)). In this release, we have added the `multimethod` package to the known list in the `known.json` file, which partially resolves issue [#193](https://github.com/databrickslabs/ucx/issues/193).
* Added `murmurhash` to known list ([3032](https://github.com/databrickslabs/ucx/issues/3032)). A new hash function, MurmurHash, has been added to the library's supported list, addressing part of issue [#1931](https://github.com/databrickslabs/ucx/issues/1931). The package includes two entries, `murmurhash` and `murmurhash.about`, with distinct roles: `murmurhash` offers the core hashing functionality, while `murmurhash.about` contains metadata and documentation related to the package. This integration enables developers to leverage MurmurHash for data processing tasks, enhancing the library's functionality and versatility.
* Added `ninja` to known list ([3050](https://github.com/databrickslabs/ucx/issues/3050)). In this release, we have added Ninja to the known list in the `known.json` file. Ninja is a fast, lightweight build system that enables better integration and handling within the project's larger context. This change partially resolves issue [#1931](https://github.com/databrickslabs/ucx/issues/1931), which may have been caused by challenges in integrating or using Ninja. It is important to note that this change does not modify any existing functionality or introduce new methods. The alteration is limited to including Ninja in the known list, improving the management and identification of various components within the project.
* Added `nvidia-ml-py` to known list ([3051](https://github.com/databrickslabs/ucx/issues/3051)). In this release, we have added support for the `nvidia-ml-py` package to our project. This addition consists of two components: `example` and `pynvml`. The `example` component is likely a placeholder or sample usage of the package, while `pynvml` is a module that enables interaction with NVIDIA's system management library (NVML) through Python. This enhancement is a step towards resolving issue [#1931](https://github.com/databrickslabs/ucx/issues/1931), improving the project's ability to recognize NVIDIA-related tools and libraries.
* Added dashboard for tracking migration progress ([3016](https://github.com/databrickslabs/ucx/issues/3016)). This change introduces a new dashboard for tracking migration progress in a project, called `migration-progress`, which displays real-time insights into migration progress and facilitates planning and task division. A new method, `_create_dashboard`, has been added to generate the dashboard from SQL queries in a specified folder and replace database and catalog references to match the configuration settings. The changes include updating the install to replace the UCX catalog in queries, adding a new object serializer, and updating integration tests and manual testing on a staging environment. The new functionality covers the migration of tables, views, UDFs, grants, jobs, workflow problems, clusters, pipelines, and policies. Additionally, a new SQL file has been added to track the percentage of various objects migrated and display the results in the new dashboard.
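
  The query-rewriting step boils down to substituting the configured inventory location into each dashboard query; the `$inventory` placeholder below is an assumption for illustration, not necessarily the exact token UCX uses:

  ```python
  def render_query(template: str, catalog: str, schema: str) -> str:
      # Point the dashboard query at the configured inventory catalog/schema.
      return template.replace("$inventory", f"{catalog}.{schema}")

  template = "SELECT object_type, COUNT(*) AS total FROM $inventory.objects GROUP BY object_type"
  print(render_query(template, "hive_metastore", "ucx"))
  ```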
* Added grant progress encoder ([3079](https://github.com/databrickslabs/ucx/issues/3079)). A new `GrantsProgressEncoder` class has been introduced in the `progress/grants.py` file to encode `Grant` objects into `History` objects for the `migration-progress` workflow. This change includes the addition of unit tests to ensure proper functionality and handles cases where `Grant` objects fail to map to the Unity Catalog by adding a list of failures to the `History` object. The commit also modifies the `migration-progress` workflow to incorporate the new `GrantsProgressEncoder` class, enhancing the grant processing capabilities and improving the testing of this functionality. This change addresses issue [#3058](https://github.com/databrickslabs/ucx/issues/3058), which was related to grant progress encoding. The `GrantsProgressEncoder` class can encode grant properties, such as the principal, action, database, schema, table, and UDF, into a format that can be written to a backend, ensuring successful migration of grants in the database.
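
  A simplified sketch of the encoding step, reusing the READ_METADATA-to-BROWSE mapping mentioned elsewhere in this changelog; the record shapes and field names are assumptions, not UCX's exact schema:

  ```python
  from dataclasses import asdict, dataclass, field

  @dataclass
  class Grant:
      principal: str
      action_type: str
      database: str | None = None
      table: str | None = None
      udf: str | None = None

  @dataclass
  class Historical:
      object_type: str
      object_id: list[str]
      data: dict
      failures: list[str] = field(default_factory=list)

  UC_ACTIONS = {"SELECT": "SELECT", "MODIFY": "MODIFY", "READ_METADATA": "BROWSE"}

  def encode_grant(grant: Grant) -> Historical:
      failures = []
      if grant.action_type not in UC_ACTIONS:
          # Grants with no Unity Catalog equivalent are recorded as failures.
          failures.append(f"action {grant.action_type} cannot map to Unity Catalog")
      data = {k: v for k, v in asdict(grant).items() if v is not None}
      return Historical("Grant", [grant.principal, grant.action_type], data, failures)

  print(encode_grant(Grant("alice@corp.com", "SOME_LEGACY_ACTION", database="sales", table="orders")))
  ```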
* Added table progress encoder ([3083](https://github.com/databrickslabs/ucx/issues/3083)). In this release, we've added a table progress encoder to the `WorkflowTask` context to enhance the tracking of table-related operations in the `migration-progress` workflow. This new encoder, implemented in the `TableProgressEncoder` class, is connected to the `sql_backend`, `table_ownership`, and `migration_status_refresher` objects. The `GrantsProgressEncoder` class has been refactored to `GrantProgressEncoder`, with additional parameters for improved encoding of grants. We've also introduced the `refresh_table_migration_status` task to scan and record the migration status of tables and views in the inventory, storing results in the `.migration_status` inventory table. Two new unit tests have been added to ensure proper encoding and migration status handling. This change improves progress tracking and reporting in the table migration process, addressing issues [#3061](https://github.com/databrickslabs/ucx/issues/3061) and [#3064](https://github.com/databrickslabs/ucx/issues/3064).
* Combine static code analysis results with historical job snapshots ([3074](https://github.com/databrickslabs/ucx/issues/3074)). In this release, we have added a new method, `JobsProgressEncoder`, to the `WorkflowTask` class in the `databricks.labs.ucx.contexts` module. This method is used to track the progress of jobs in the context of a workflow task, replacing the existing `jobs_progress` method which only tracked the progress of grants. The `JobsProgressEncoder` method takes in additional arguments, including `inventory_database`, to provide more detailed progress tracking for jobs and is used in the `grants_progress` method to track the progress of jobs in the context of a workflow task. We have also added a new unit test for the `JobsProgressEncoder` class in the `databricks.labs.ucx` project to ensure that the encoding of job information works as expected with different types of failures and job details. Additionally, this revision introduces the ability to include workflow problem records in the historical job snapshots, providing additional context for debugging and analysis. The `JobsProgressEncoder` class is a subclass of the `ProgressEncoder` class and provides additional functionality for tracking the progress of jobs.
* Connected `WorkspacePathOwnership` with `DirectFsAccessOwnership` ([3049](https://github.com/databrickslabs/ucx/issues/3049)). In this revision, the `DirectFsAccessCrawler` class from the `databricks.labs.ucx.source_code.directfs_access` module is imported as `DirectFsAccessCrawler` and `DirectFsAccessOwnership`, and a new `cached_property` called `directfs_access_ownership` is added to the `TableCrawler` class. This property returns an instance of the `DirectFsAccessOwnership` class, which takes in `administrator_locator`, `workspace_path_ownership`, and `workspace_client` as arguments. Additionally, the `DirectFsAccessOwnership` class has been updated to determine DirectFS access ownership for a given table and connect with `WorkspacePathOwnership`, enhancing the tool's functionality by determining access ownership in DirectFS and improving overall system security and permissions management. The `test_directfs_access.py` file has also been updated to test the ownership of query and path records using the new `DirectFsAccessOwnership` object.
* Crawlers: append snapshots to history journal, if available ([2743](https://github.com/databrickslabs/ucx/issues/2743)). This commit introduces a history table to store snapshots after each crawling operation, addressing issues [#2572](https://github.com/databrickslabs/ucx/issues/2572) and [#2573](https://github.com/databrickslabs/ucx/issues/2573). The changes include the addition of a `HistoryLog` class, which handles appending inventory snapshots to the history table within a specific catalog, workspace, and run_id. The new methods also include a `TableMigrationStatus` class with a new class variable `__id_attributes__` to specify the attributes used to uniquely identify a table. The `destination()` method has been added to the `TableMigrationStatus` class to return the fully qualified name of the destination table. Additionally, unit and integration tests have been added and updated to ensure the functionality works as expected. The `Table`, `Job`, `Cluster`, and `UDF` classes have been updated with a new `history` attribute to store a string representing a problem associated with the respective class. The `__id_attributes__` class variable has also been added to these classes to specify the attributes used to uniquely identify them.
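
  The `__id_attributes__` convention lets the history journal key any record type generically; a condensed sketch under assumed field names:

  ```python
  from dataclasses import dataclass
  from typing import ClassVar

  @dataclass
  class TableMigrationStatus:
      src_schema: str
      src_table: str
      dst_catalog: str | None = None
      dst_schema: str | None = None
      dst_table: str | None = None

      # Attributes that uniquely identify a record in the history journal.
      __id_attributes__: ClassVar[tuple[str, ...]] = ("src_schema", "src_table")

      def destination(self) -> str | None:
          # Fully qualified name of the destination table, if migrated.
          if self.dst_catalog and self.dst_schema and self.dst_table:
              return f"{self.dst_catalog}.{self.dst_schema}.{self.dst_table}"
          return None

  def record_id(record) -> tuple:
      # The history journal can derive a key for any snapshot this way.
      return tuple(getattr(record, a) for a in type(record).__id_attributes__)

  status = TableMigrationStatus("sales", "orders", "main", "sales", "orders")
  print(record_id(status), status.destination())
  ```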
* Determine ownership of tables based on grants and source code ([3066](https://github.com/databrickslabs/ucx/issues/3066)). In this release, changes have been made to the `application.py` file in the `databricks/labs/ucx/contexts` directory to improve the accuracy of determining table ownership in the inventory. A new class `LegacyQueryOwnership` has been added to the `databricks.labs.ucx.framework.owners` module to determine the owner of a table based on the queries that write to it. The `TableOwnership` class has been updated to accept additional arguments for determining ownership based on grants, queries, and workspace paths. The `DirectFsAccessOwnership` class has also been updated to accept a new `legacy_query_ownership` argument. Additionally, a new method `owner_of_path` has been added to the `Ownership` class, and the `LegacyQueryOwnership` class has been added as a subclass of `Ownership`. A new file `ownership.py` has been introduced, which defines the `TableOwnership` and `TableMigrationOwnership` classes for determining ownership of tables and table migration records in the inventory. These changes provide a more accurate and consistent ownership information for tables in the inventory.
* Ensure that pipeline assessment doesn't fail if a pipeline is deleted… ([3034](https://github.com/databrickslabs/ucx/issues/3034)). In this pull request, the pipelines crawler of the DLT assessment feature has been updated to improve its resiliency in the event of a pipeline deletion during crawling. Instead of failing, the crawler now logs a warning and continues to crawl when a pipeline is deleted. A new test method, `test_pipeline_disappears_during_crawl`, has been added to verify that the crawler can handle the deletion of a pipeline after listing the pipelines but before assessing them. The `assessment` and `migration-progress-experimental` workflows have been modified, and new unit tests have been added to ensure the proper functioning of the changes. Additionally, the `test_pipeline_list_with_no_config` test case has been added to check the behavior of the pipelines crawler when there is no configuration present. This pull request aims to enhance the robustness of the assessment feature and ensure its continued operation even in the face of unexpected pipeline deletions.
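
  The resilience pattern is list-then-fetch with a tolerated `NotFound`; a sketch in which the helper names are illustrative, not the actual crawler code:

  ```python
  import logging
  from databricks.sdk.errors import NotFound

  logger = logging.getLogger(__name__)

  def crawl_pipelines(pipeline_ids: list[str], fetch_details) -> list[dict]:
      results = []
      for pipeline_id in pipeline_ids:
          try:
              results.append(fetch_details(pipeline_id))
          except NotFound:
              # The pipeline was deleted between listing and assessment.
              logger.warning(f"Pipeline disappeared during crawl: {pipeline_id}")
      return results

  def fake_details(pipeline_id: str) -> dict:
      if pipeline_id == "deleted":
          raise NotFound("pipeline was deleted")
      return {"pipeline_id": pipeline_id}

  print(crawl_pipelines(["p1", "deleted", "p2"], fake_details))  # p1 and p2 survive
  ```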
* Fixed `UnicodeDecodeError` when fetching init scripts ([3103](https://github.com/databrickslabs/ucx/issues/3103)). In this release, we have enhanced the error handling capabilities of the open-source library by fixing a `UnicodeDecodeError` issue that occurred when fetching init scripts in the `_get_init_script_data` method. To address this, we have added `UnicodeDecodeError` and `FileNotFoundError` to the list of exceptions handled in the method. Now, when any of these exceptions occur, the method will return `None` and a warning message will be logged instead of raising an unhandled exception. This change ensures that the function operates smoothly and provides better error handling in the library, without modifying the behavior of the `_check_cluster_init_script` method, which remains unchanged and continues to verify the correct setup of init scripts in the cluster.
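
  The shape of the fix, sketched with a hypothetical decode step (the real method fetches init scripts from the workspace):

  ```python
  import base64
  import logging

  logger = logging.getLogger(__name__)

  def get_init_script_data(raw_base64: bytes) -> str | None:
      try:
          return base64.b64decode(raw_base64).decode("utf-8")
      except (UnicodeDecodeError, FileNotFoundError):
          # Return None and warn instead of failing the whole assessment.
          logger.warning("Could not read init script; skipping")
          return None

  print(get_init_script_data(base64.b64encode(b"\xff\xfe")))  # None, with a warning
  ```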
* Fixed `UnknownHostException` on the specified KeyVault ([3102](https://github.com/databrickslabs/ucx/issues/3102)). In this release, we have made significant improvements to the Azure Key Vault integration, addressing issues [#3102](https://github.com/databrickslabs/ucx/issues/3102) and [#3090](https://github.com/databrickslabs/ucx/issues/3090). We have resolved an `UnknownHostException` problem in a specific KeyVault and implemented error handling for invalid Azure Key Vaults, ensuring more robust and reliable system behavior. Additionally, we have expanded `NotFound` exception handling to include the `InvalidState` exception. When the Azure Key Vault is in an invalid state, the corresponding secret will be skipped, and a warning message will be logged. This enhancement provides a more comprehensive solution to handle various exceptions that may arise when dealing with secrets stored in Azure Key Vaults.
* Fixed `Unsupported schema: XXX` error on `assess_workflows` ([3104](https://github.com/databrickslabs/ucx/issues/3104)). The recent change to the open-source library addresses the `Unsupported schema: XXX` error in the `assess_workflows` function. This was achieved by introducing a new exception class, `InvalidPath`, in the `WorkspaceCache` mixin, and substituting `ValueError` with `InvalidPath` in the `jobs.py` file. The `InvalidPath` exception provides a more specific error message for unsupported schema paths, and the `WorkspaceCache` mixin now raises it when caching workspace paths that are not supported. Additionally, the `test_cached_workspace_path.py` file has been updated to test the `WorkspaceCache` object, including a new test function that verifies `InvalidPath` is raised for non-absolute paths.
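
  In essence, a dedicated exception type makes the unsupported-schema case explicit; a minimal sketch, not the actual `WorkspaceCache` implementation:

  ```python
  class InvalidPath(ValueError):
      """Raised when a workspace path uses an unsupported schema."""

  def cache_workspace_path(path: str) -> str:
      if not path.startswith("/"):
          raise InvalidPath(f"Unsupported schema (only absolute workspace paths work): {path}")
      return path

  cache_workspace_path("/Repos/me/project/main.py")  # fine
  # cache_workspace_path("s3://bucket/main.py")      # raises InvalidPath
  ```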
* Fixed `assert curr.location is not None` ([3105](https://github.com/databrickslabs/ucx/issues/3105)). In this release, we have addressed a potential issue in the `_external_locations` method which failed to check if the location of the current Hive table is `None` before proceeding. This oversight could result in unnecessary exceptions when accessing the location of a Hive table. To rectify this, we have introduced a check for `None` that will bypass the current iteration of the loop if the location is not set, thereby improving the robustness of the code. The method continues to return a list of `ExternalLocation` objects, each representing a Hive table or partition location with the corresponding number of tables or partitions present. The `ExternalLocation` class remains unchanged in this commit. This improvement will ensure that the method functions smoothly and avoids errors when dealing with Hive tables that do not have a location set.
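
  The fix amounts to a guard clause instead of an assertion; a condensed sketch that also keeps the per-location table counts the method reports:

  ```python
  from collections import Counter

  def external_locations(tables: list[dict]) -> Counter:
      # Skip tables whose location is not set instead of asserting on it.
      return Counter(t["location"] for t in tables if t.get("location") is not None)

  print(external_locations([
      {"location": "s3://bucket/a"},
      {"location": "s3://bucket/a"},
      {"location": None},  # previously this could trip the assertion
  ]))
  ```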
* Fixed dynamic import issue ([3053](https://github.com/databrickslabs/ucx/issues/3053)). In this release, we've addressed an issue related to dynamic import inference in our open-source library. Previously, the code did not infer import names when using `importlib.import_module(some_name)`. This has been resolved by implementing a new method, `_make_sources_for_import_call_node`, which infers the import name from the provided node argument. Additionally, we've introduced new functions, `get_global(self, name: str)`, `_adjust_node_for_import_member(self, name: str, match_node: type, node: NodeNG)`, and updated the `_matches(self, node: NodeNG, depth: int)` method to handle attributes as global names. A new unit test, `test_graph_imports_dynamic_import()`, has been added to ensure the proper functioning of the dynamic import feature. Moreover, a new function `is_from_module` has been introduced to check if a given name is from a specific module. This commit, co-authored by Eric Vergnaud, significantly enhances the code's ability to infer imports in dynamic import scenarios.
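
  UCX performs this inference with `astroid`; the same idea, stripped down to the standard-library `ast` module for illustration:

  ```python
  import ast

  def inferred_dynamic_imports(source: str) -> list[str]:
      # Collect importlib.import_module("...") calls with a literal module name.
      names = []
      for node in ast.walk(ast.parse(source)):
          if (
              isinstance(node, ast.Call)
              and isinstance(node.func, ast.Attribute)
              and node.func.attr == "import_module"
              and node.args
              and isinstance(node.args[0], ast.Constant)
              and isinstance(node.args[0].value, str)
          ):
              names.append(node.args[0].value)
      return names

  code = "import importlib\nmod = importlib.import_module('pandas')"
  print(inferred_dynamic_imports(code))  # ['pandas']
  ```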
* Fixed issue with migrating `MANAGED` hive_metastore table to UC for `CONVERT_TO_EXTERNAL` scenario ([3020](https://github.com/databrickslabs/ucx/issues/3020)). This change updates the process for converting a managed Hive Metastore (HMS) table to external in the `CONVERT_TO_EXTERNAL` scenario. The functionality is split into a separate workflow task, executed from a non-Unity Catalog (UC) cluster, and is tested with unit and integration tests. The migrate-table function for external sync ensures the table is migrated as external to UC post-conversion. The changes add a new workflow, modify an existing one, and rename the `migrate_tables` function to `convert_managed_hms_to_external`. The new function handles the conversion of managed HMS tables to external and updates the `object_type` property of the table in the inventory database to `EXTERNAL` after the conversion is completed. The pull request resolves issue [#2840](https://github.com/databrickslabs/ucx/issues/2840) and removes the existing functionality of applying grants during the migration process.
* Fixed issue with table location on storage root ([3094](https://github.com/databrickslabs/ucx/issues/3094)). In this release, we have implemented changes to address an issue where a parent folder was incorrectly identified as an external location when a single table had a prefix matching that folder. Additionally, we have improved the storage and retrieval of table locations in the root directory of a storage service by adding support for additional S3 bucket URL formats in the unit tests for the Hive Metastore, including S3 bucket URLs without a specific file or path and URLs whose path does not include a file. New test cases have been added for these URL formats, and existing ones have been modified to cover them. These changes ensure correct identification of external locations and improve the functionality and flexibility of the Hive Metastore's support for external table locations.
* Fixed snapshot loading for DFSA and used-table crawlers ([3046](https://github.com/databrickslabs/ucx/issues/3046)). This commit resolves issues related to snapshot loading for the DFSA and used-table crawlers when using the spark-based lsql backend. The root cause was the use of `.as_dict()` to convert rows to dictionaries, which is unavailable in the spark-based lsql backend; the fix replaces this method with `.asDict()`. Additionally, integration and unit tests were updated to include snapshot loading for these crawlers, and a typo in a test name in the `test_queries.py` file was corrected. No new methods were added, and existing functionality changes were limited to the snapshot loading process.
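
  The one-character class of bug, illustrated with a Spark `Row`:

  ```python
  from pyspark.sql import Row

  row = Row(catalog="hive_metastore", database="sales", table="orders")
  print(row.asDict())   # camelCase method on Spark rows: works
  # row.as_dict()       # AttributeError on the spark-based lsql backend
  ```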
* Ignore failed inference codes when presenting results to Databricks Runtime ([3087](https://github.com/databrickslabs/ucx/issues/3087)). In this release, the `lsp_plugin.py` file has been updated in the `databricks/labs/ucx/source_code` directory to improve the user experience in the notebook editor. The changes include disabling certain advice codes from being propagated, specifically: 'cannot-autofix-table-reference', 'default-format-changed-in-dbr8', 'dependency-not-found', 'not-supported', 'notebook-run-cannot-compute-value', 'sql-parse-error', 'sys-path-cannot-compute-value', and 'unsupported-magic-line'. A new variable `DEBUG_MESSAGE_CODES` has been introduced to store the list of advice codes to be ignored, and the list comprehension that creates `diagnostics` in the `pylsp_lint` function has been updated to exclude these codes. These updates aim to reduce the number of unnecessary error messages and improve the accuracy of the linter for supported codes.
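
  The filtering itself is a small set-membership check; a sketch of the shape (the actual code lives in a list comprehension inside `pylsp_lint`, and the advice record layout here is an assumption):

  ```python
  DEBUG_MESSAGE_CODES = {
      "cannot-autofix-table-reference",
      "default-format-changed-in-dbr8",
      "dependency-not-found",
      "not-supported",
      "notebook-run-cannot-compute-value",
      "sql-parse-error",
      "sys-path-cannot-compute-value",
      "unsupported-magic-line",
  }

  def visible_diagnostics(advices: list[dict]) -> list[dict]:
      # Advice codes stemming from failed inference are noise in the editor.
      return [a for a in advices if a["code"] not in DEBUG_MESSAGE_CODES]

  print(visible_diagnostics([
      {"code": "sql-parse-error", "message": "could not parse"},
      {"code": "direct-filesystem-access", "message": "dbfs:/ path in use"},
  ]))  # only the direct-filesystem-access advice remains
  ```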
* Improve scan tables in mounts ([2767](https://github.com/databrickslabs/ucx/issues/2767)). In this release, the `scan-tables-in-mounts` functionality in the hive metastore has been significantly improved, providing a more robust and comprehensive solution. Previously, the implementation skipped most directories, only finding 8 tables, but this issue has been addressed, allowing the updated version to parse many more tables. The commit includes bug fixes and the addition of new unit tests. The reviewer is encouraged to refactor the code in future iterations to use the `os` module instead of `dbutils` for listing directories, enabling parallelization and improving scalability. The commit resolves issue [#2540](https://github.com/databrickslabs/ucx/issues/2540) and updates the `scan-tables-in-mounts-experimental` workflow. While manual and unit tests have been added and verified, integration tests are still pending implementation. The co-author of this commit is Dan Zafar.
* Removed `WorkflowLinter` as it is part of the `Assessment` workflow ([3036](https://github.com/databrickslabs/ucx/issues/3036)). In this release, the `WorkflowLinter` has been removed as it is now integrated into the `Assessment` workflow, addressing issue [#3035](https://github.com/databrickslabs/ucx/issues/3035). This change simplifies the codebase, removing the need for a separate linter while maintaining essential functionality for ensuring Unity Catalog compatibility. The linter's functionality has been merged with other parts of the assessment workflow, with results persisted in the `.workflow_problems` and `.directfs_in_paths` tables. The `assess_workflows` and `assess_dashboards` methods have been updated accordingly, removing `WorkflowLinter` usage. Additionally, the `ExperimentalWorkflowLinter` class has been removed from the `workflows.py` file, along with its associated methods `lint_all_workflows` and `lint_all_queries`. The `test_running_real_workflow_linter_job` function has also been removed due to the integration of the `WorkflowLinter` into the `Assessment` workflow. Manual testing has been conducted to ensure the correctness of these changes and the continued proper functioning of the assessment workflow.
* Updated permissions crawling so that it doesn't fail if a secret scope disappears during crawling ([3070](https://github.com/databrickslabs/ucx/issues/3070)). This commit enhances the open-source library by updating the permissions crawling process for secret scopes, addressing the issue of task failure when a secret scope disappears before ACL retrieval. The `assessment` workflow has been modified to incorporate these updates, and new unit tests have been added, including one that simulates the disappearance of a secret scope during crawling. The `PermissionsCrawler` class and the `Threads.gather` method have been improved to handle such cases, logging a warning instead of failing the task. The return type of the `get_crawler_tasks` method has been updated to `Iterable[Callable[[], Permissions | None]]`. These changes improve the reliability and robustness of the permissions crawling process for secret scopes, ensuring task completion in the face of unexpected scope disappearances.
* Updated sqlglot requirement from <25.26,>=25.5.0 to >=25.5.0,<25.27 ([3041](https://github.com/databrickslabs/ucx/issues/3041)). In this pull request, we have updated the sqlglot library requirement to incorporate the latest version, which includes various bug fixes, refactors, and new features. The latest version now supports the TO_DOUBLE and TRY_TO_TIMESTAMP functions in Snowflake and the EDIT_DISTANCE (Levenshtein) function in BigQuery. Moreover, we've addressed an issue with the ARRAY JOIN function in Clickhouse and made changes to the hive dialect hierarchy. We encourage users to update to this latest version to benefit from these enhancements and fixes, ensuring optimal performance and functionality of the library.
* Updated sqlglot requirement from <25.27,>=25.5.0 to >=25.5.0,<25.28 ([3048](https://github.com/databrickslabs/ucx/issues/3048)). In this release, we have updated the requirement for the `sqlglot` library to a version greater than or equal to 25.5.0 and less than 25.28. This change was made to allow for the use of the latest features and bug fixes available in 'sqlglot', while avoiding the breaking changes that were introduced in version 25.27. The new version of `sqlglot` offers several improvements, including but not limited to enhanced query optimization, expanded support for various SQL dialects, and better error handling. We recommend that all users upgrade to the latest version of `sqlglot` to take advantage of these new features and improvements.
* Updated sqlglot requirement from <25.28,>=25.5.0 to >=25.5.0,<25.29 ([3093](https://github.com/databrickslabs/ucx/issues/3093)). This release includes an update to the `sqlglot` dependency, changing the version requirement from 25.5.0 up to but excluding 25.28, to a range that includes 25.5.0 up to but excluding 25.29. This change allows for the use of the latest `sqlglot` version and includes all the updates and bug fixes from this library since the previous version. The pull request provides a list of changes made in `sqlglot` since the previous version, as well as a list of relevant commits. Dependabot has been configured to handle any merge conflicts for this pull request and includes commands to trigger various Dependabot actions. This update was made by Dependabot and is indicated by a signed-off-by line.

Dependency updates:

* Updated sqlglot requirement from <25.26,>=25.5.0 to >=25.5.0,<25.27 ([3041](https://github.com/databrickslabs/ucx/pull/3041)).
* Updated sqlglot requirement from <25.27,>=25.5.0 to >=25.5.0,<25.28 ([3048](https://github.com/databrickslabs/ucx/pull/3048)).
* Updated sqlglot requirement from <25.28,>=25.5.0 to >=25.5.0,<25.29 ([3093](https://github.com/databrickslabs/ucx/pull/3093)).

0.47.0

Not secure
* Added `mdit-py-plugins` to known list ([3013](https://github.com/databrickslabs/ucx/issues/3013)). In this release, the `mdit-py-plugins` package, a collection of plugins for the `markdown-it-py` Markdown parser, has been added to the known list in the `known.json` file. This allows UCX to recognize the package and its modules during dependency and code analysis, in line with the other known-list additions in this release.
* Added `memray` to known list ([3014](https://github.com/databrickslabs/ucx/issues/3014)). In this release, we have integrated two new libraries to enhance the project's functionality and maintainability. We have added `memray` to our list of known libraries, which allows for memory profiling and analysis within the project's environment. Additionally, we have added the `textual` library and its related modules, a TUI (Text User Interface) library, which provides a wide variety of user interface components. These additions partially resolve issue [#1931](https://github.com/databrickslabs/ucx/issues/1931), enabling the development of more sophisticated and user-friendly interfaces, and improving memory profiling capabilities.
* Added `mlflow-skinny` to known list ([3015](https://github.com/databrickslabs/ucx/issues/3015)). A new version of our library includes the addition of `mlflow-skinny` to the known packages list in a JSON file. `mlflow-skinny` is a lightweight version of the widely-used machine learning platform, MLflow. This integration enables users to utilize `mlflow-skinny` in their projects and have their runs automatically tracked and logged. Furthermore, this commit partially addresses issue [#1931](https://github.com/databrickslabs/ucx/issues/1931), hinting at a possible connection to a larger issue or feature request. Software engineers will now have access to a more streamlined MLflow package, allowing for easier and more efficient integration in their projects.
* Added handling for installing libraries multiple times in `PipResolver` ([3024](https://github.com/databrickslabs/ucx/issues/3024)). In this commit, the `PipResolver` class has been updated to handle the installation of libraries multiple times, resolving issues [#3022](https://github.com/databrickslabs/ucx/issues/3022) and [#3023](https://github.com/databrickslabs/ucx/issues/3023). The `_resolve_libraries` method has been modified to resolve pip installs as libraries or paths based on whether they are found in the path lookup or not, and whether they are already installed in the temporary virtual environment. The `_install_pip` method has also been updated to include the `--upgrade` flag to upgrade libraries if they are already installed. Code linting has been improved, and integration tests have been added to the `test_libraries.py` file to ensure the proper functioning of the updated code. These tests include installing the `pytest` library twice in a Databricks notebook and then importing it to verify its installation. These changes aim to improve the reliability and robustness of the library installation process in the context of multiple installations.
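
  The `--upgrade` behaviour can be pictured as follows; this is a sketch of the idea, not the actual `PipResolver` code:

  ```python
  import subprocess
  import sys

  def install_pip(library: str, already_installed: bool) -> int:
      cmd = [sys.executable, "-m", "pip", "install"]
      if already_installed:
          # Re-running the install upgrades the library instead of silently
          # skipping it, so installing the same library twice succeeds.
          cmd.append("--upgrade")
      cmd.append(library)
      return subprocess.run(cmd, check=False).returncode
  ```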
* Fixed errors related to unsupported cell languages ([3026](https://github.com/databrickslabs/ucx/issues/3026)). In this release, we have made significant improvements to the `_Collector` abstract base class by adding support for multiple cell languages in the `_collect_from_source` method. Previously, the implementation only supported Python and SQL languages, but with this update, we have added support for several new languages including R, Scala, Shell, Markdown, Run, and Pip. The new methods added to the class handle the source code collection for their respective languages and return an empty iterable or log a warning if a language is not supported yet. This change enhances the functionality and flexibility of the class, enabling it to handle a wider variety of cell languages. Additionally, this commit resolves the issue [#2977](https://github.com/databrickslabs/ucx/issues/2977) and includes new methods to the `DfsaCollectorWalker` class, allowing it to collect information from cells of any language. The test case `test_collector_supports_all_cell_languages` has also been added to ensure that the collector supports all cell languages. This release also includes manually tested and added unit tests, and is co-authored by Eric Vergnaud.
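
  The per-language dispatch reduces to routing on the cell language and degrading gracefully for languages without a collector yet; a sketch with hypothetical collector names:

  ```python
  import logging
  from collections.abc import Iterable

  logger = logging.getLogger(__name__)

  def collect_python(source: str) -> list[str]:
      # Hypothetical collector: flag direct filesystem access in Python cells.
      return [line for line in source.splitlines() if "dbfs:/" in line]

  def collect_sql(source: str) -> list[str]:
      # Hypothetical collector for SQL cells.
      return [source] if "dbfs:/" in source else []

  def collect_from_source(language: str, source: str) -> Iterable[str]:
      if language == "python":
          return collect_python(source)
      if language == "sql":
          return collect_sql(source)
      if language in {"r", "scala", "shell", "markdown", "run", "pip"}:
          return []  # recognized cell type with nothing to collect (yet)
      logger.warning(f"Language {language} not supported yet")
      return []

  print(collect_from_source("python", "df = spark.read.parquet('dbfs:/mnt/raw')"))
  ```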
* Preemptively fix unknown errors of Python AST parsing coming from `astroid` and `ast` libraries ([3027](https://github.com/databrickslabs/ucx/issues/3027)). A new update has been implemented in the library to improve Python AST parsing and error handling. The `maybe_parse` function has been enhanced to catch all types of exceptions using a broad exception clause, extending from the previous limitation of only catching `AstroidSyntaxError` and `SystemError`. The `_definitely_failure` function now includes the type of exception in the error message for better visibility and troubleshooting. In the test cases, the `graph_builder_parse_error` function's test has been updated to check for a `system-error` code instead of `syntax-error` to preemptively fix unknown errors from Python AST parsing. Additionally, the test for `parses_python_cell_with_magic_commands` function has been added, ensuring that any Python cell with magic commands is correctly parsed. These changes aim to increase robustness in handling exceptional cases during parsing, provide more informative error messages, and prevent potential unknown parsing errors.
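
  The defensive shape of the parse wrapper, sketched against `astroid` (only `astroid.parse` is assumed here; the real `maybe_parse` returns a richer result object):

  ```python
  import logging

  import astroid

  logger = logging.getLogger(__name__)

  def maybe_parse(code: str):
      try:
          return astroid.parse(code)
      except Exception as e:
          # Broad on purpose: astroid/ast can raise more than AstroidSyntaxError
          # (e.g. RecursionError on deeply nested code), reported as system-error.
          logger.warning(f"system-error while parsing ({type(e).__name__}): {e}")
          return None

  print(maybe_parse("def broken(:\n    pass"))  # None, logged as a system-error
  ```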
* Updated migration progress workflow to also re-lint dashboards and jobs ([3025](https://github.com/databrickslabs/ucx/issues/3025)). In this release, we have updated the table utilization documentation to include the ability to lint directFS paths and queries, and modified the `migration-progress-experimental` workflow to re-run linting tasks for dashboard queries and notebooks associated with jobs. Additionally, we have updated the `MigrationProgress` workflow to include the scanning of dashboards and jobs for migration issues, assessing SQL code in embedded widgets of dashboards and inventory & linting of jobs. To support these changes, we have added unit tests and updated existing integration tests in `test_workflows.py`. The new test function, `test_linter_runtime_refresh`, tests the linter refresh behavior for dashboard and workflow tasks. These updates aim to ensure consistent linting and maintain the accuracy of the `experimental-migration-progress` workflow for users who adopt the project.
