dlt

Latest version: v1.4.0

0.2.0a16

What's Changed
* uses structured data types to store json by rudolfix in https://github.com/dlt-hub/dlt/pull/121
* brings back the functionality to run dbt packages by rudolfix in https://github.com/dlt-hub/dlt/pull/122

This update changes how the `complex` data type is stored. The `complex` data type is most often generated to hold `json` data when `nesting_level` is limited in the source. Complex types are stored as JSONB (postgres), SUPER (redshift), and JSON (BigQuery).
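As a rough illustration, a nested field can also be pinned to the `complex` type via a column hint, so it lands in one of those JSON columns instead of being unpacked into child tables. This is a minimal sketch; the `columns` hint shown here follows later dlt documentation and may differ slightly in this pre-release:

```python
import dlt

# assumption: the `columns` argument and the "complex" data_type hint follow
# later dlt docs; the resource and field names are made up for illustration
@dlt.resource(columns={"payload": {"data_type": "complex"}})
def events():
    # `payload` is stored as a single complex column (JSONB, SUPER or JSON
    # depending on the destination) instead of becoming child tables
    yield {"id": 1, "payload": {"user": {"name": "alice"}, "tags": ["a", "b"]}}
```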

This update also makes it easy to run `dbt` packages together with `dlt` pipelines. Most of the work went into making it user friendly and avoiding dependency conflicts. The feature is also well tested. Take a look at these examples:
https://github.com/dlt-hub/dlt/blob/devel/docs/examples/dbt_run_jaffle.py
https://github.com/dlt-hub/dlt/blob/devel/docs/examples/chess/chess_dbt.py
https://github.com/dlt-hub/dlt/tree/devel/docs/examples/chess/dbt_transform

Running a `dbt` package takes just two lines of code, and you get more control over it than with the CLI.
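A minimal sketch modeled on the linked jaffle_shop example; the `dlt.dbt.package` helper and the result fields shown here follow later dlt documentation and may differ slightly in this pre-release:

```python
import dlt

# assumption: destination and dataset names are placeholders; credentials are
# expected to come from .dlt/secrets.toml or environment variables
pipeline = dlt.pipeline(destination="postgres", dataset_name="jaffle_jaffle")

# the "two lines": build a runner for the dbt package, then run all its models
dbt = dlt.dbt.package(pipeline, "https://github.com/dbt-labs/jaffle_shop.git")
models = dbt.run_all()

for m in models:
    print(f"{m.model_name}: {m.status} in {m.time}")
```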

0.2.0a10

What's Changed
* only specific default config values are generated in `dlt init`, e.g. the BigQuery location
* the correct postgres port is generated
* sends traces to Sentry if `RUNTIME__SENTRY_DSN` is present
* sends a Slack notification if `RUNTIME__SLACK_INCOMING_HOOK` is present (see the sketch below)
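Both settings are plain runtime config values. A minimal sketch of enabling them through environment variables before a pipeline runs (the DSN and hook URLs are placeholders):

```python
import os
import dlt

# assumption: placeholder values; in practice these would come from the shell,
# CI secrets or .dlt/secrets.toml rather than being hard-coded
os.environ["RUNTIME__SENTRY_DSN"] = "https://<key>@sentry.example.com/<project>"
os.environ["RUNTIME__SLACK_INCOMING_HOOK"] = "https://hooks.slack.com/services/T000/B000/XXXX"

pipeline = dlt.pipeline(pipeline_name="traced_pipeline", destination="postgres")
# traces now go to Sentry and a Slack notification is posted about the load
pipeline.run([{"id": 1}], table_name="items")
```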

0.2.0a9

What's Changed
* `dlt init` renames sources and resources by rudolfix in https://github.com/dlt-hub/dlt/pull/106

0.2.0a8

What's Changed
* deletes all pipeline state (schemas, state, intermediate files) if the destination dataset is dropped
* synchronizes state with the destination in the `run` method
* you can opt out of state sync with `restore_from_destination=false` in, e.g., `config.toml` (see the sketch after this list)
* loads all schemas/sources into a single dataset by default. This simplifies the experience for less advanced users. You can switch back to the old behavior (a separate dataset per source/schema) with the `use_single_dataset=false` config option
* enables CTRL-C handling while running user code
* commits all files extracted from several sources after all user code has run
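A minimal sketch of the new `run` behavior, assuming a postgres destination and placeholder data; the commented config lines mirror the two opt-out options named above:

```python
import dlt

# to opt out of the new defaults, .dlt/config.toml can contain:
#   restore_from_destination=false   # skip syncing state from the destination
#   use_single_dataset=false         # keep a separate dataset per source/schema
pipeline = dlt.pipeline(
    pipeline_name="my_pipeline",
    destination="postgres",  # assumption: any configured destination works
    dataset_name="my_data",
)

# run() first synchronizes pipeline state with the destination,
# then loads everything into the single `my_data` dataset by default
load_info = pipeline.run([{"id": 1}, {"id": 2}], table_name="items")
print(load_info)
```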

0.2.0a7

What's Changed
* adds the `max_table_nesting` argument to `dlt.source` to control the depth of parent-child table nesting (see the sketch after this list)
* fixes `pipeline_name` when the runtime configuration is embedded
* reacts to signals (e.g. CTRL-C) during extraction, plus other signal handling improvements
* passes GitHub environment variables to loggers/tracers
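A minimal sketch of the new argument, using a toy source and made-up nested data:

```python
import dlt

# limit how deep nested dicts/lists are unpacked into parent-child tables;
# structures below that depth stay as complex/JSON values
@dlt.source(max_table_nesting=1)
def my_source():

    @dlt.resource
    def events():
        yield {
            "id": 1,
            "payload": {"user": {"name": "alice", "roles": ["admin"]}},
        }

    return events
```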

0.2.0a6

Fixes the module import bug in the `dlt init` command.
