tensorflow-gnn

Latest version: v1.0.2

1.0.3rc0

Release 1.0 is the first with a stable [public API](tensorflow_gnn/docs/api_docs/README.md).

What's Changed in r1.0

* Overall
  * Using the library with the incompatible Keras v3 raises a clear error.
  * As of release 1.0.3, the error refers to the new [Keras version guide](tensorflow_gnn/docs/guide/keras_version.md) and explains how to get Keras v2 with TF 2.16+ via `TF_USE_LEGACY_KERAS=1` (see the first sketch after this list).
  * Releases 1.0.0 to 1.0.2 had a pip package requirement of TF `<2.16` but could be made to work the same way.
  * Minimum supported TF/Keras version moved to `>=2.12`.
  * Importing the library no longer leaks private module names.
  * All parts of the `GraphSchema` protobuf are now exposed under `tfgnn.proto.*`.
  * [Model saving](tensorflow_gnn/docs/guide/model_saving.md) now clearly distinguishes export for inference (pure TF, fully supported) from miscellaneous ways of saving for model reuse (a sketch of an inference export follows this list).
  * Numerous small bug fixes.
* Subgraph sampling: major upgrade
  * New and unified sampler for [in-memory](tensorflow_gnn/docs/guide/inmemory_sampler.md) and [beam-based](tensorflow_gnn/docs/guide/beam_sampler.md) subgraph sampling.
  * Module `tfgnn.experimental.in_memory` is removed in favor of the new sampler.
  * The new console script `tfgnn_sampler` replaces the old `tfgnn_graph_sampler`.
* GraphTensor
  * Most `tfgnn.*` functions on GraphTensor now work in Keras' Functional API, including factory methods such as `GraphTensor.from_pieces(...)` (see the GraphTensor sketch after this list).
  * New static checks for GraphTensor field shapes; opt out with `tfgnn.disable_graph_tensor_validation()`.
  * New runtime checks for GraphTensor field shapes, sizes and index ranges; opt in with `tfgnn.enable_graph_tensor_validation_at_runtime()`.
  * `GraphTensor` maintains `.row_splits_dtype` separately from `.indices_dtype`.
  * The `GraphSchema` and the I/O functions for `tf.Example` now support all non-quantized, non-complex floating-point and integer types, as well as `bool` and `string`.
  * Added the convenience wrapper `tfgnn.pool_neighbors_to_node()`.
  * Misc fixes to `tfgnn.random_graph_tensor()`; it now respects component boundaries.
* Runner
  * New tasks for link prediction and node classification/regression based on structured readout.
  * Now comes with API docs.
* Models collection
  * `models/contrastive_losses` gets multiple extensions, including a triplet loss and API docs.
  * `models/multi_head_attention` replaces sigmoid with elu+1 in trained scaling.
  * Bug fixes for mixed precision.
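
As a minimal sketch of the Keras-version workaround mentioned above: with TF 2.16+ and the separately installed `tf-keras` package, setting `TF_USE_LEGACY_KERAS=1` before TensorFlow is imported keeps Keras v2 behavior. The authoritative steps are in the linked Keras version guide; the version check at the end is just an illustration.

```python
# Minimal sketch: keep Keras v2 under TF 2.16+ (assumes the tf-keras package is installed).
# The environment variable must be set before TensorFlow / tensorflow_gnn are imported.
import os
os.environ["TF_USE_LEGACY_KERAS"] = "1"

import tensorflow as tf
import tensorflow_gnn as tfgnn

print(tf.keras.__version__)  # expected to report a Keras 2.x version
```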
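
Below is a hedged sketch of the export-for-inference path, not the model-saving guide's exact code: a trained Keras model (`model`, assumed) is wrapped in a `tf.function` whose serving signature parses serialized `tf.Example` protos into a GraphTensor and is saved as a plain TF SavedModel. `graph_spec` is assumed to be the `tfgnn.GraphTensorSpec` of a single input graph.

```python
import tensorflow as tf
import tensorflow_gnn as tfgnn

# Assumptions: `model` is a trained Keras model that takes a scalar GraphTensor,
# and `graph_spec` describes one serialized input graph.

@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def serve(serialized_examples):
  graph = tfgnn.parse_example(graph_spec, serialized_examples)
  graph = graph.merge_batch_to_components()  # one contiguously indexed graph per batch
  return {"logits": model(graph, training=False)}

tf.saved_model.save(model, "/tmp/exported_model",
                    signatures={"serving_default": serve})
```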
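
To make the GraphTensor items above concrete, here is a small sketch: it opts in to the new runtime validation, builds a toy graph eagerly with `GraphTensor.from_pieces(...)`, and calls the new `tfgnn.pool_neighbors_to_node()` wrapper. The node/edge set names are made up, and the keyword arguments of `pool_neighbors_to_node()` are assumed by analogy with the other `tfgnn.pool_*` functions.

```python
import tensorflow as tf
import tensorflow_gnn as tfgnn

# Opt in to the new runtime checks of field shapes, sizes and index ranges.
tfgnn.enable_graph_tensor_validation_at_runtime()

# A toy homogeneous graph: 3 "atoms" nodes, 2 "bonds" edges, one component.
graph = tfgnn.GraphTensor.from_pieces(
    node_sets={
        "atoms": tfgnn.NodeSet.from_fields(
            sizes=tf.constant([3]),
            features={tfgnn.HIDDEN_STATE: tf.constant([[1.0], [2.0], [3.0]])}),
    },
    edge_sets={
        "bonds": tfgnn.EdgeSet.from_fields(
            sizes=tf.constant([2]),
            adjacency=tfgnn.Adjacency.from_indices(
                source=("atoms", tf.constant([0, 1])),
                target=("atoms", tf.constant([1, 2])))),
    })

# Sum each node's neighbor states across the "bonds" edge set into the TARGET nodes.
# Keyword names below are assumptions, not verified against the 1.0 API docs.
neighbor_sum = tfgnn.pool_neighbors_to_node(
    graph, "bonds", tfgnn.TARGET,
    reduce_type="sum", feature_name=tfgnn.HIDDEN_STATE)
print(neighbor_sum.shape)  # (3, 1)
```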

**Full Changelog**: https://github.com/tensorflow/gnn/compare/v0.6.1...v1.0.0

1.0.2

Release notes are identical to those of 1.0.0 below.

1.0.2rc1

Release notes are identical to those of 1.0.0 below.

1.0.2rc0

Release notes are identical to those of 1.0.0 below.

1.0.1

Release notes are identical to those of 1.0.0 below.

1.0.0

First release with a stable [public API](tensorflow_gnn/docs/api_docs/README.md).

What's Changed

* Overall
  * Supported TF/Keras versions moved to `>=2.12,<2.16`; the incompatible Keras v3 raises a clear error.
  * Importing the library no longer leaks private module names.
  * All parts of the `GraphSchema` protobuf are now exposed under `tfgnn.proto.*`.
  * [Model saving](tensorflow_gnn/docs/guide/model_saving.md) now clearly distinguishes export for inference (pure TF, fully supported) from miscellaneous ways of saving for model reuse.
  * Numerous small bug fixes.
* Subgraph sampling: major upgrade
  * New and unified sampler for [in-memory](tensorflow_gnn/docs/guide/inmemory_sampler.md) and [beam-based](tensorflow_gnn/docs/guide/beam_sampler.md) subgraph sampling.
  * Module `tfgnn.experimental.in_memory` is removed in favor of the new sampler.
  * The new console script `tfgnn_sampler` replaces the old `tfgnn_graph_sampler`.
* GraphTensor
  * Most `tfgnn.*` functions on GraphTensor now work in Keras' Functional API, including factory methods such as `GraphTensor.from_pieces(...)`.
  * New static checks for GraphTensor field shapes; opt out with `tfgnn.disable_graph_tensor_validation()`.
  * New runtime checks for GraphTensor field shapes, sizes and index ranges; opt in with `tfgnn.enable_graph_tensor_validation_at_runtime()`.
  * `GraphTensor` maintains `.row_splits_dtype` separately from `.indices_dtype`.
  * The `GraphSchema` and the I/O functions for `tf.Example` now support all non-quantized, non-complex floating-point and integer types, as well as `bool` and `string` (see the sketch after this list).
  * Added the convenience wrapper `tfgnn.pool_neighbors_to_node()`.
  * Misc fixes to `tfgnn.random_graph_tensor()`; it now respects component boundaries.
* Runner
  * New tasks for link prediction and node classification/regression based on structured readout.
  * Now comes with API docs.
* Models collection
  * `models/contrastive_losses` gets multiple extensions, including a triplet loss and API docs.
  * `models/multi_head_attention` replaces sigmoid with elu+1 in trained scaling.
  * Bug fixes for mixed precision.
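
As a short, hedged illustration of the widened `tf.Example` support and the fixed `tfgnn.random_graph_tensor()`: the snippet below assumes `graph_spec` is a scalar `tfgnn.GraphTensorSpec` (e.g., built from a `GraphSchema` with `tfgnn.create_graph_spec_from_schema_pb()`), generates a random graph that respects component boundaries, and round-trips it through a `tf.train.Example`.

```python
import tensorflow as tf
import tensorflow_gnn as tfgnn

# Assumption: `graph_spec` is a scalar tfgnn.GraphTensorSpec, e.g. derived from a
# GraphSchema via tfgnn.create_graph_spec_from_schema_pb(schema).
graph = tfgnn.random_graph_tensor(graph_spec)

example = tfgnn.write_example(graph)              # GraphTensor -> tf.train.Example
restored = tfgnn.parse_single_example(
    graph_spec, tf.constant(example.SerializeToString()))
```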

**Full Changelog**: https://github.com/tensorflow/gnn/compare/v0.6.1...v1.0.0
