Datacube


1.6rc2

Backwards Incompatible Changes

- The `helpers.write_geotiff()` function has been updated to support
files smaller than 256x256. It also no longer supports specifying
the time index. Before passing data in, select a single time slice with
`xarray_data.isel(time=<my_time_index>)`; see the first sketch after
this list. (#277)
- Removed product matching options from `datacube dataset update`
(#445). No matching is needed in this case, as all datasets are
already in the database and are associated with products.
- Removed the `--match-rules` option from `datacube dataset add` (#447)
- The seldom-used `stack` keyword argument has been removed from
`Datacube.load`. (#461)
- The behaviour of time range queries has changed to be compatible
with standard Python searches (e.g. time slicing an xarray). The
time range selection is now inclusive of any unspecified time units;
see the second sketch after this list. (#440)
- Example 1:
`time=('2008-01', '2008-03')` previously would have returned all
data from the start of 1st January, 2008 to the end of 1st of
March, 2008. Now, this query returns all data from the start
of 1st January, 2008 to 23:59:59.999 on 31st of March, 2008.

- Example 2:
To specify a search time between 1st of January and 29th of
February, 2008 (inclusive), use a search query like
`time=('2008-01', '2008-02')`. This query is equivalent to using
any of the following in the second time element:

`('2008-02-29')`
`('2008-02-29 23')`
`('2008-02-29 23:59')`
`('2008-02-29 23:59:59')`
`('2008-02-29 23:59:59.999')`
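
A minimal sketch of the new `helpers.write_geotiff()` usage, assuming a hypothetical product name `ls8_nbar_albers`:

```python
import datacube
from datacube.helpers import write_geotiff

dc = datacube.Datacube(app='geotiff-example')

# 'ls8_nbar_albers' is a hypothetical product name.
data = dc.load(product='ls8_nbar_albers', time=('2008-01', '2008-02'))

# Select a single time slice first; write_geotiff() no longer accepts
# a time index of its own.
write_geotiff('output.tif', data.isel(time=0))
```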
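
And a sketch of the new inclusive time-range semantics (same hypothetical product name):

```python
import datacube

dc = datacube.Datacube(app='time-range-example')

# Unspecified time units are now filled in inclusively, so this covers
# 2008-01-01 00:00:00.000 through 2008-03-31 23:59:59.999; previously
# the range would have ended at the end of 1 March.
data = dc.load(product='ls8_nbar_albers', time=('2008-01', '2008-03'))
```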

Changes

- A `--location-policy` option has been added to the `datacube dataset
update` command. Previously this command would always add a new
location to the list of URIs associated with a dataset. It's now
possible to specify `archive` and `forget` options, which will mark
previous locations as archived or remove them from the index
altogether. The default behaviour is unchanged. (#469)

- The masking related function `describe_variable_flags()` now returns
a pandas DataFrame by default. This will display as a table in
Jupyter Notebooks. (#422)
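
A minimal sketch, assuming a product with a pixel-quality band (the product and band names are hypothetical):

```python
import datacube
from datacube.storage.masking import describe_variable_flags

dc = datacube.Datacube(app='masking-example')

# 'ls8_pq_albers' and 'pixelquality' are hypothetical names.
data = dc.load(product='ls8_pq_albers', measurements=['pixelquality'])

# Returns a pandas DataFrame by default; renders as a table in Jupyter.
print(describe_variable_flags(data.pixelquality))
```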

- Usability improvements in `datacube dataset [add|update]` commands
(#447, #448, #398)

- Embedded documentation updates
- Deprecated `--auto-match` (it was always on anyway)
- Renamed `--dtype` to `--product` (the old name will still work,
but with a warning)
- Add option to skip lineage data when indexing (useful for saving
time when testing) (#473)

- Enable compression for metadata documents stored in NetCDFs
generated by `stacker` and `ingestor` (#452)

- Implement better handling of stacked NetCDF files (#415)

- Record the slice index as part of the dataset location URI,
using `part=<int>` syntax; the index is 0-based (see the example
after this list)
- Use this index when loading data instead of fuzzy searching by
timestamp
- Fall back to the old behaviour when `part=<int>` is missing and
the file is more than one time slice deep
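
For illustration, here is how such location URIs might look (the file path is hypothetical, and carrying the index in the URI fragment is an assumption here):

```python
# 0-based slice index recorded as part of the location URI.
uris = [
    'file:///data/ls8_2015_stacked.nc#part=0',   # first time slice
    'file:///data/ls8_2015_stacked.nc#part=11',  # twelfth time slice
]
```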

- Expose the following dataset fields and make them searchable:

- `indexed_time` (when the dataset was indexed)
- `indexed_by` (user who indexed the dataset)
- `creation_time` (creation of dataset: when it was processed)
- `label` (the label for a dataset)

(See #432 for more details)
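
A sketch of searching on the new fields, assuming a default index connection (the product name and user are hypothetical):

```python
import datacube

dc = datacube.Datacube(app='search-example')

# 'ls8_nbar_albers' and 'some_user' are hypothetical values; the new
# fields behave like any other searchable field.
for dataset in dc.index.datasets.search_eager(product='ls8_nbar_albers',
                                              indexed_by='some_user'):
    print(dataset.id)
```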

Bug Fixes

- The `.dimensions` property of a product no longer crashes when the
product is missing a `grid_spec`. It instead defaults to `time,y,x`
- Fix a regression in `v1.6rc1` which made it impossible to run
`datacube ingest` to create products which were defined in `1.5.5`
and earlier versions of ODC. (#423, #436)
- Allow specifying the chunking for string variables when writing
NetCDFs (#453)

1.6rc1

This is the first release in a while, so there are a lot of changes, including
some significant refactoring, with the potential for issues when upgrading.


Backwards Incompatible Fixes

- Drop support for Python 2. Python 3.5 is now the earliest supported Python version.
- Removed the old `ndexpr`, `analytics` and `execution engine` code. There is work underway in the [execution engine branch](https://github.com/opendatacube/datacube-core/compare/csiro/execution-engine) to replace these features.

Enhancements

- Support for third party drivers, for custom data storage and custom index implementations

- The correct way to get an Index connection in code is to use [`datacube.index.index_connect()`](http://datacube-core.readthedocs.io/en/stable/dev/api/dcindex.html#datacube.index.index_connect).
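
A minimal sketch, assuming a configured default index (the application name is just an illustrative label):

```python
from datacube.index import index_connect

# Connects using the index driver configured for the current environment.
index = index_connect(application_name='example-app')
for product in index.products.get_all():
    print(product.name)
```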

- Changes in ingestion configuration

- Must now specify the [Data Write Plug-ins](http://datacube-core.readthedocs.io/en/stable/architecture/driver.html#write-plugin) to use. For s3 ingestion there was a top level `container` specified, which has been renamed and moved under `storage`. The entire `storage` section is passed through to the [Data Write Plug-ins](http://datacube-core.readthedocs.io/en/stable/architecture/driver.html#write-plugin), so drivers requiring other configuration can include it here, e.g.:


```yaml
...
storage:
  ...
  driver: s3aio
  bucket: my_s3_bucket
  ...
```


- Added a `Dockerfile` to enable automated builds for a reference Docker image.

- Multiple environments can now be specified in one datacube config. See [PR 298](https://github.com/opendatacube/datacube-core/pulls/298) and the [Runtime Config](http://datacube-core.readthedocs.io/en/stable/user/config.html#runtime-config-doc) documentation.
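
A minimal sketch of selecting one of the configured environments at runtime (the environment name `staging` is hypothetical):

```python
import datacube

# 'staging' would be defined alongside the default environment in the
# datacube configuration file.
dc = datacube.Datacube(app='multi-env-example', env='staging')
```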

- Allow specifying which `index_driver` should be used for an environment.

- Command line tools can now output CSV or YAML. ([Issue 206](https://github.com/opendatacube/datacube-core/issues/206), [PR 390](https://github.com/opendatacube/datacube-core/pulls/390))

- Support for saving data to NetCDF using a Lambert Conformal Conic Projection ([PR 329](https://github.com/opendatacube/datacube-core/pulls/329))

- Lots of documentation updates:

- Information about [Bit Masking](http://datacube-core.readthedocs.io/en/stable/dev/api/masking.html#bit-masking).
- A description of how data is loaded.
- Some higher level architecture documentation.
- Updates on how to index new data.


Bug Fixes

- Allow creation of `datacube.utils.geometry.Geometry` objects from 3d representations. The Z axis is simply thrown away (see the sketch after this list).
- The [`datacube --config_file`](http://datacube-core.readthedocs.io/en/stable/ops/tools.html#cmdoption-datacube-c) option has been renamed to [`datacube --config`](http://datacube-core.readthedocs.io/en/stable/ops/tools.html#cmdoption-datacube-c), which is shorter and more consistent with the other options. The old name can still be used for now.
- Fix a severe performance regression when extracting and reprojecting a small region of data. ([PR 393](https://github.com/opendatacube/datacube-core/pulls/393))
- Fix for a somewhat rare bug causing read failures by attempting to read data from a negative index into a file. ([PR 376](https://github.com/opendatacube/datacube-core/pulls/376))
- Make `CRS` equality comparisons a little bit looser. Trust either a *Proj.4* based comparison or a *GDAL* based comparison. (Closed [issue 243](https://github.com/opendatacube/datacube-core/issues/243))
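
A sketch of the 3D geometry behaviour described above (coordinates are illustrative):

```python
from datacube.utils import geometry

# A 3D coordinate is accepted; the Z value (600.0 here) is discarded.
point = geometry.Geometry({'type': 'Point', 'coordinates': [148.0, -35.0, 600.0]},
                          crs=geometry.CRS('EPSG:4326'))
```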

New Data Support

- Added example prepare script for Collection 1 USGS data; improved band handling and downloads.
- Add a product specification and prepare script for indexing Landsat L2 Surface Reflectance Data ([PR 375](https://github.com/opendatacube/datacube-core/pulls/375))
- Add a product specification for Sentinel 2 ARD Data ([PR 342](https://github.com/opendatacube/datacube-core/pulls/342))

1.5.5

- Fixes to package dependencies. No code changes.

1.5.4

- Minor features backported from 2.0:

- Support for `limit` in searches

- Alternative lazy search method `find_lazy`

- Fixes:

- Improve native field descriptions

- Connection should not be held open between multi-product searches

- Disable prefetch for celery workers

- Support jsonify-ing decimals

1.5.3

- Use `cloudpickle` as the `celery` serialiser

- Allow `celery` tests to run without installing it

- Move `datacube-worker` inside the main datacube package

- Write `metadata_type` from the ingest configuration if available

- Support config parsing limitations of Python 2

- Fix #303: resolve GDAL build dependencies on Travis

- Upgrade `rasterio` to newer version

1.5.2

New Features

- Support for AWS S3 array storage
- Driver Manager support for NetCDF, S3, S3-file drivers.


Usability Improvements

- When `datacube dataset add` is unable to add a Dataset to the index, print
out the entire Dataset to make it easier to debug the problem.
- Give `datacube system check` prettier and more readable output.
- Make `celery` and `redis` optional when installing.
- Significantly reduced disk space usage for integration tests
- `Dataset` objects now have an `is_active` field to mirror `is_archived`.
- Added `index.datasets.get_archived_location_times()` to see when each
location was archived.
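
A minimal sketch, assuming the method yields (URI, archived-time) pairs and `<dataset-uuid>` stands in for a real dataset id:

```python
import datacube

dc = datacube.Datacube(app='locations-example')

# Replace '<dataset-uuid>' with the id of an indexed dataset.
for uri, archived in dc.index.datasets.get_archived_location_times('<dataset-uuid>'):
    print(uri, archived)
```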

Bug Fixes

- Fix bug when reading data in native projection but outside the `source` area. Often hit when running `datacube-stats`
- Fix error loading and fusing data using `dask`. (Fixes #276)
- When reading data, implement `skip_broken_datasets` for the `dask` case too
- Fix bug #261: unable to load Australian Rainfall Grid Data. This was a
result of the CRS/Transformation override functionality being broken when
using the latest `rasterio` version `1.0a9`
