databricks-labs-lsql

Latest version: v0.4.2


0.4.2

* Added more `NotFound` error cases ([94](https://github.com/databrickslabs/lsql/issues/94)). The `_raise_if_needed` function in `core.py` (in the `databricks/labs/lsql` package) now raises a `NotFound` error when the underlying error message contains the phrase "does not exist". This lets the library surface missing-object SQL query failures as a typed `NotFound` error rather than a generic one, improving error handling and reporting.
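
A minimal sketch of the described mapping; the function name follows the changelog, but the surrounding logic is illustrative rather than the exact implementation:

```python
from databricks.sdk.errors import NotFound


def _raise_if_needed(message: str) -> None:
    # Illustrative: surface "does not exist" SQL failures as NotFound,
    # so callers can catch a typed error instead of parsing strings.
    if "does not exist" in message:
        raise NotFound(message)
    raise RuntimeError(message)
```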

0.4.1

* Fixed `overwrite` integration tests ([92](https://github.com/databrickslabs/lsql/issues/92)). The integration tests for the `overwrite` feature have been reworked to address a problem with write operations. Two new variables, `catalog` and `schema`, are obtained via the `env_or_skip` function and used in the `save_table` method, which is now invoked twice against the same table: once with the `append` mode and once with the `overwrite` mode. After each call, the table's contents are fetched and checked for accuracy, using the updated `Row` class with field names renamed from `name` and `id` to `first` and `second`. This verifies that the `overwrite` feature behaves correctly; a condensed sketch of such a test follows.
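
A condensed, hypothetical version of the test described above; `sql_backend`, `env_or_skip`, and `make_random` are assumed pytest fixtures, and the `Foo` dataclass stands in for the test's record type:

```python
from dataclasses import dataclass


@dataclass
class Foo:
    first: str
    second: int


def test_overwrite(sql_backend, env_or_skip, make_random):
    catalog = env_or_skip("TEST_CATALOG")  # skips the test if unset
    schema = env_or_skip("TEST_SCHEMA")
    table = f"{catalog}.{schema}.t{make_random(4).lower()}"
    sql_backend.save_table(table, [Foo("a", 1)], Foo, mode="append")
    sql_backend.save_table(table, [Foo("b", 2)], Foo, mode="overwrite")
    rows = list(sql_backend.fetch(f"SELECT * FROM {table}"))
    # Overwrite should have replaced the appended row, not added to it.
    assert len(rows) == 1 and rows[0].first == "b"
```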

0.4.0

* Added catalog and schema parameters to execute and fetch ([90](https://github.com/databrickslabs/lsql/issues/90)). In this release, we have added optional `catalog` and `schema` parameters to the `execute` and `fetch` methods in the `SqlBackend` abstract base class, allowing SQL statements to run against a specific catalog and schema. The change includes new method signatures and their respective implementations in the `SparkSqlBackend` and `DatabricksSqlBackend` classes: the parameters control the catalog and schema used by the `SparkSession` instance in `SparkSqlBackend` and by the `SqlClient` instance in `DatabricksSqlBackend`, so multi-catalog and multi-schema environments no longer need to qualify every table name. The change ships with unit tests and integration tests. For example, with a `SparkSqlBackend` instance `spark_backend`, you can execute a SQL statement in a specific catalog and schema with `spark_backend.execute("SELECT * FROM my_table", catalog="my_catalog", schema="my_schema")`; `fetch` accepts the same parameters, as sketched below.
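
Continuing the example above (`spark_backend`, `my_table`, `my_catalog`, and `my_schema` are the paragraph's placeholders), the `fetch` counterpart would look like:

```python
# catalog/schema scope the query the same way they do for execute();
# fetch() yields result rows that can be consumed lazily.
for row in spark_backend.fetch(
    "SELECT * FROM my_table",
    catalog="my_catalog",
    schema="my_schema",
):
    print(row)
```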

0.3.1

* Check UCX and LSQL for backwards compatibility ([78](https://github.com/databrickslabs/lsql/issues/78)). In this release, we introduce a new GitHub Actions workflow, downstreams.yml, which runs downstream projects' unit tests whenever the upstream project changes. The workflow runs on pull requests, merge groups, and pushes to the main branch, and sets permissions for id-token, contents, and pull-requests. Its compatibility job runs on Ubuntu, checks out the code, sets up Python, installs the toolchain, and tests downstream projects using the databrickslabs/sandbox/downstreams action. The job matrix includes two downstream projects, ucx and remorph, and a build cache speeds up the pip install step. This ensures that changes to the upstream project do not break compatibility with downstream projects, keeping the library stable and reliable for its consumers.
* Fixed `Builder` object has no attribute `sdk_config` error ([86](https://github.com/databrickslabs/lsql/issues/86)). In this release, we've resolved a "`Builder` object has no attribute `sdk_config`" error that occurred when initializing a Spark session via `DatabricksSession.builder`. The code referred to the builder's SDK configuration as `sdk_config`, an attribute that does not exist; it now uses the correct `sdkConfig` name, so the Spark session is created successfully. The `DatabricksSession` class and its methods, such as `getOrCreate`, continue to be used for interacting with Databricks clusters and workspaces, while the `WorkspaceClient` class manages Databricks resources within a workspace.
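
A minimal sketch of the corrected initialization, assuming the databricks-connect `DatabricksSession` builder API and a configured `DEFAULT` profile in `~/.databrickscfg`:

```python
from databricks.connect import DatabricksSession
from databricks.sdk.core import Config

# Build an SDK Config from a local profile (assumption: the profile
# exists), then hand it to the session builder via sdkConfig().
config = Config(profile="DEFAULT")
spark = DatabricksSession.builder.sdkConfig(config).getOrCreate()
spark.sql("SELECT 1").show()
```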

Dependency updates:

* Bump codecov/codecov-action from 1 to 4 ([84](https://github.com/databrickslabs/lsql/pull/84)).
* Bump actions/setup-python from 4 to 5 ([83](https://github.com/databrickslabs/lsql/pull/83)).
* Bump actions/checkout from 2.5.0 to 4.1.2 ([81](https://github.com/databrickslabs/lsql/pull/81)).
* Bump softprops/action-gh-release from 1 to 2 ([80](https://github.com/databrickslabs/lsql/pull/80)).

0.3.0

* Added support for `save_table(..., mode="overwrite")` to `StatementExecutionBackend` ([74](https://github.com/databrickslabs/lsql/issues/74)). In this release, the `save_table` method in the `StatementExecutionBackend` can overwrite a table when saving data. Previously, passing `mode="overwrite"` raised a `NotImplementedError`; now the method first truncates the table, by running a `TRUNCATE TABLE` SQL command through `execute`, before inserting the new rows, so the existing data is deleted and replaced with the data being written. The change is verified by a new integration test, `test_overwrite`, in `test_deployment.py`, and by two new unit test cases, `test_statement_execution_backend_save_table_overwrite_empty_table` and `test_mock_backend_overwrite`. The method signature now gives the `mode` parameter a default value of `append`, which preserves the previous behavior for existing callers. A sketch of the flow follows.
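
An illustrative sketch of the described flow, not the exact implementation; `_insert_rows` is a hypothetical helper standing in for the backend's existing insert path:

```python
class StatementExecutionBackendSketch:
    """Illustrative only; the real class lives in databricks.labs.lsql."""

    def save_table(self, full_name: str, rows: list, klass: type, mode: str = "append"):
        if mode == "overwrite":
            # Replace semantics: clear the table before inserting new rows.
            self.execute(f"TRUNCATE TABLE {full_name}")
        self._insert_rows(full_name, rows, klass)  # hypothetical insert path
```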

0.2.5

* Fixed PyPI badge ([72](https://github.com/databrickslabs/lsql/issues/72)). In this release, we have fixed the PyPI badge in the README file. The badge displays the published version of the package and serves as a quick reference for users; this fix restores its accuracy and proper rendering. The change is limited to the README and does not affect the library's functionality.
* Fixed `no-cheat` check ([71](https://github.com/databrickslabs/lsql/issues/71)). In this release, we have improved the `no-cheat` verification for new code. Previously, the check produced false positives whenever the string `# pylint: disable` appeared for reasons other than actually disabling the linter. The updated check filters out lines containing the string `CHEAT` before counting the remaining output with `wc -c`; if the count is non-zero, the script terminates with an error message. This makes the `no-cheat` check more accurate, ensuring that the linter is not silently disabled in new code (see the sketch after this list).
* Removed upper bound on `sqlglot` dependency ([70](https://github.com/databrickslabs/lsql/issues/70)). In this update, we have removed the upper bound on the `sqlglot` dependency version in the project's `pyproject.toml` file. Previously, the version constraint required `sqlglot` to be at least 22.3.1 but less than 22.5.0. With this modification, there will be no upper limit, enabling the project to utilize any version greater than or equal to 22.3.1. This change provides the project with the flexibility to take advantage of future bug fixes, performance improvements, and new features available in newer `sqlglot` package versions. Developers should thoroughly test the updated package version to ensure compatibility with the existing codebase.
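
The real `no-cheat` check is a shell pipeline; a hypothetical Python rendering of the same logic, for illustration only:

```python
import sys


def no_cheat(diff_text: str) -> None:
    # Flag lines that disable the linter, ignoring ones marked CHEAT.
    cheats = [
        line
        for line in diff_text.splitlines()
        if "# pylint: disable" in line and "CHEAT" not in line
    ]
    if cheats:
        sys.exit(f"found {len(cheats)} linter-disabling line(s) in new code")
```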
