dbt-clickhouse

Latest version: v1.8.9


1.6.2

Bug Fix
- The dbt `on_schema_change` configuration value for incremental models was effectively being ignored. This has been fixed
with a very limited implementation. Closes https://github.com/ClickHouse/dbt-clickhouse/issues/199. Because of the way that
ORDER BY/SORT BY/PARTITION BY/PRIMARY KEY work in ClickHouse, plus the complexities of correctly transforming ClickHouse data types,
`sync_all_columns` is not currently supported (although an implementation that works for non-key columns is theoretically possible,
such an enhancement is not currently planned). Accordingly, only the `ignore`, `fail`, and `append_new_columns` values are supported
for `on_schema_change`. `on_schema_change` is also not currently supported for Distributed tables.

Note that actually appending new columns requires a fallback to the `legacy` incremental strategy, which is quite inefficient,
so while theoretically possible, using `append_new_columns` is not recommended except for very small data volumes (see the sketch below).
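
As a rough illustration, an incremental model could opt into this limited schema-change handling as shown in the sketch below. The model, source, and column names are hypothetical, and the `engine`/`order_by` values are placeholders.

```sql
-- models/events_incremental.sql (hypothetical model)
{{
    config(
        materialized='incremental',
        engine='MergeTree()',
        order_by=['event_ts'],
        on_schema_change='append_new_columns'
    )
}}

select
    event_id,
    event_ts,
    payload
from {{ source('raw', 'events') }}

{% if is_incremental() %}
  -- only pull rows newer than what is already in the target table
  where event_ts > (select max(event_ts) from {{ this }})
{% endif %}
```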

1.6.1

Bug Fixes
- Identifier quoting was disabled for tables, databases, etc. This would cause failures for schemas or tables using reserved words
or containing special characters. This has been fixed, and some macros have been updated to correctly handle such identifiers.
Note that there may still be untested edge cases where nonstandard identifiers cause issues, so they are still not recommended.
Closes https://github.com/ClickHouse/dbt-clickhouse/issues/144. Thanks to [Alexandru Pisarenco](https://github.com/apisarenco) for the
report and initial PR!
- The new `allow_automatic_deduplication` setting was not being correctly propagated to the adapter, so setting it to `True`
did not have the intended effect. In addition, this setting is now ignored for older ClickHouse versions that
do not support `CREATE TABLE AS SELECT ... EMPTY`, since the automatic deduplication window is required to allow correct
inserts into Replicated tables on those older versions. Fixes https://github.com/ClickHouse/dbt-clickhouse/issues/216.

1.6.0

Improvements
- Compatible with dbt 1.6.x. Note that dbt's new `clone` feature is not supported, as ClickHouse has no native "lightweight"
clone functionality, and copying tables without actual data transfer is not possible in ClickHouse (barring file manipulation
outside ClickHouse itself).
- A new ClickHouse-specific materialized view materialization, contributed by [Rory Sawyer](https://github.com/SoryRawyer).
This creates a ClickHouse materialized view using the `TO` form with the name `<model_name>_mv` and the associated target
table `<model_name>`. It's highly recommended to fully understand how ClickHouse materialized views work before using
this materialization (a minimal sketch follows below).
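
As a non-authoritative sketch, assuming the materialization name is `materialized_view` and using hypothetical model/source/column names: for a model file like the one below, dbt-clickhouse would create the target table `daily_counts` plus a materialized view `daily_counts_mv` that writes into it.

```sql
-- models/daily_counts.sql (hypothetical model)
{{
    config(
        materialized='materialized_view',
        engine='MergeTree()',
        order_by=['event_date']
    )
}}

select
    toDate(event_ts) as event_date,
    count() as events
from {{ source('raw', 'events') }}
group by event_date
```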

1.5.2

Bug Fixes
- The `ON CLUSTER` clause was in the incorrect place for legacy incremental materializations. This has been fixed. Thanks to
[Steven Reitsma](https://github.com/StevenReitsma) for the fix!
- The `ON CLUSTER` DDL for dropping tables did not include a `SYNC` modifier, which might be the cause of some "table already exists"
errors. The `SYNC` modifier has been added to the `on_cluster` macro when dropping relations.
- Fixed a bug where using table settings such as `allow_nullable_key` would break "legacy" incremental materializations. Closes
https://github.com/ClickHouse/dbt-clickhouse/issues/209. Also see the new model `config` property `insert_settings` described
below.
- Fixed an issue where incremental materializations would incorrectly exclude duplicate inserted rows due to "automatic"
ClickHouse deduplication on replicated tables. Closes https://github.com/ClickHouse/dbt-clickhouse/issues/213. The fix consists
of always sending a `replicated_deduplication_window=0` table setting when creating the incremental relations. This
behavior can be overridden by setting the new profile parameter `allow_automatic_deduplication` to `True`, although for
general dbt operations this is probably not necessary and not recommended. Finally, thanks to [Andy](https://github.com/andy-miracl)
for the report and debugging help!

Improvements
- Added a new profile property `allow_automatic_deduplication`, which defaults to `False`. ClickHouse replicated deduplication is
now disabled for incremental inserts, but this property can be set to `True` if for some reason the default ClickHouse behavior
for inserted blocks is desired.
- Added a new model `config` property `query_settings` for any ClickHouse settings that should be sent with the `INSERT INTO`
or `DELETE FROM` queries used with materializations (see the sketch below). Note this is distinct from the existing property `settings`, which is
used for ClickHouse "table" settings in DDL statements like `CREATE TABLE ... AS`.
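
As a rough sketch of the distinction, a model could pass both kinds of settings as shown below; the model name and the specific setting values are illustrative assumptions, not recommendations.

```sql
-- models/user_actions.sql (hypothetical model)
{{
    config(
        materialized='incremental',
        engine='MergeTree()',
        order_by=['user_id'],
        -- "table" settings, rendered into the CREATE TABLE ... DDL
        settings={'allow_nullable_key': 1},
        -- per-query settings, sent with the INSERT INTO / DELETE FROM statements
        query_settings={'mutations_sync': 2}
    )
}}

select user_id, event_ts, action
from {{ source('raw', 'user_actions') }}
```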

1.5.1

Bug Fix
- Fix table materialization for compatibility with SQLFluff. Thanks to [Kristof Szaloki](https://github.com/kris947) for the PR!

1.5.0

Improvements
- Compatible with dbt 1.5.x
- Contract support (using exact column data types)

Bug Fix
- Fix s3 macro when bucket includes `https://` prefix. Closes https://github.com/ClickHouse/dbt-clickhouse/issues/192.
