MolGraph

Latest version: v0.8.1

0.5.3 – 0.5.5

Bug fixes
- `molgraph`
- MolGraph is now compatible with tf>=2.9.0; previously, only tf>=2.12.0 was supported.
- `molgraph.layers`
- `_get_reverse_edge_features()` of `edge_conv.py` is now correctly obtaining the reverse edge features.
- Missing numpy import is now added for some preprocessing layers.

0.5.8

Bug fixes
- `molgraph.layers`
- `from_config` of `molgraph.layers.gnn_layer` should now properly build/initialize the derived layer. Specifically, a `GraphTensorSpec` should now be passed to `build_from_signature()`.

Minor features and improvements
- `molgraph.models`
- `layer_names` of `molgraph.models.GradientActivationMapping` is now optional. If `None` (the default), the model's layers are searched for layers subclassed from `GNNLayer`; if none are found, an error is raised.
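The automatic layer discovery described above might be sketched as follows. This is a plain-Python illustration of the behavior, not the actual molgraph implementation; `GNNLayer`, `GCNConv`, and `DenseLayer` are stand-in classes.

```python
# Stand-in base class, mirroring molgraph's GNNLayer for illustration only.
class GNNLayer:
    def __init__(self, name):
        self.name = name

class GCNConv(GNNLayer):
    pass

class DenseLayer:  # not a GNN layer
    def __init__(self, name):
        self.name = name

def find_gnn_layer_names(layers):
    """Collect the names of all layers subclassed from GNNLayer.

    Raises a ValueError if no such layers are found, mirroring the
    described behavior when `layer_names=None`.
    """
    names = [layer.name for layer in layers if isinstance(layer, GNNLayer)]
    if not names:
        raise ValueError("No layers subclassed from GNNLayer were found.")
    return names

layers = [GCNConv("gcn_1"), DenseLayer("dense_1"), GCNConv("gcn_2")]
print(find_gnn_layer_names(layers))  # ['gcn_1', 'gcn_2']
```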

0.5.7

Breaking changes
- `molgraph`
- Optional `positional_encoding` field of `GraphTensor` is renamed to `node_position`. A (Laplacian) positional encoding is included in a `GraphTensor` instance when e.g. `positional_encoding_dim` argument of `chemistry.MolecularGraphEncoder` is not `None`. The positional encoding is still referred to as "positional" and "encoding" in `layers.LaplacianPositionalEncoding` and `chemistry.MolecularGraphEncoder`, though the actual data field added to the `GraphTensor` is `node_position`.
- `molgraph.chemistry`
- `inputs` argument replaced with `data`.

Bug fixes
- `molgraph.chemistry`
- `molgraph.chemistry.tf_records.write()` no longer leaks memory. A large dataset (about 10 million small molecules, encoded as graph tensors) is expected to be written to tf records without exceeding 3GB memory usage.

Minor features and improvements
- `molgraph.chemistry`
- `molgraph.chemistry.tf_records.write()` now accepts `None` input for `encoder`. If `None` is passed, it is assumed that `data['x']` contains `GraphTensor` instances (and not e.g. SMILES strings).
- `molgraph.tensors`
- `node_position` is now an attribute of the `GraphTensor`. Note: `positional_encoding` can still be used to access the positional encoding (now `node_position` of a `GraphTensor` instance); however, it will be deprecated in the near future.
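A deprecated alias like the one described above is commonly implemented as a property that warns and forwards to the new attribute. The sketch below illustrates that pattern with a stand-in class; it is not molgraph's actual `GraphTensor` implementation.

```python
import warnings

# Stand-in class illustrating the deprecated-alias pattern; not molgraph code.
class FakeGraphTensor:
    def __init__(self, node_position):
        self.node_position = node_position

    @property
    def positional_encoding(self):
        # Old name still works, but emits a deprecation warning.
        warnings.warn(
            "`positional_encoding` is deprecated; use `node_position` instead.",
            DeprecationWarning,
        )
        return self.node_position

gt = FakeGraphTensor(node_position=[0.1, 0.2, 0.3])
assert gt.positional_encoding == gt.node_position
```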

0.5.6

Breaking changes
- `molgraph.layers`
- `molgraph.layers.DotProductIncident` no longer takes `apply_sigmoid` as an argument. Instead it takes `normalize`, which specifies whether the dot product should be normalized, resulting in cosine similarities (values between -1 and 1).
- `molgraph.models`
- `GraphAutoEncoder` (GAE) and `GraphVariationalAutoEncoder` (GVAE) are changed. The default `loss` is now `None`, meaning a default loss function is used; this loss simply maximizes the positive edge scores and minimizes the negative edge scores. `predict` now returns the (positive) edge scores corresponding to the input `GraphTensor` instance. `get_config` now returns a dictionary, as expected. The default decoder is `molgraph.layers.DotProductIncident(normalize=True)`. Note: some work remains on GAE/GVAE, e.g. improving the `NegativeGraphSampler` and (for GVAE) improving the `beta` schedule.
- `molgraph.tensors`
- `GraphTensor.propagate()` now removes the `edge_weight` data component, as it has already been used.
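The `normalize` option of `DotProductIncident` mentioned above amounts to computing cosine similarity: normalizing both vectors before the dot product bounds the result to [-1, 1]. The plain-Python sketch below illustrates the math only; it is not the molgraph API.

```python
import math

# Illustration of what normalizing a dot product does (not molgraph code):
# the normalized dot product of two vectors is their cosine similarity.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cosine_similarity(u, v):
    return dot(u, v) / (math.sqrt(dot(u, u)) * math.sqrt(dot(v, v)))

u = [1.0, 2.0, 3.0]
print(cosine_similarity(u, [2.0, 4.0, 6.0]))    # parallel vectors -> ~1.0
print(cosine_similarity(u, [-1.0, -2.0, -3.0])) # anti-parallel -> ~-1.0
```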


Major features and improvements
- `molgraph.models`
- `GraphMasking` (alias: `MaskedGraphModeling`) is now implemented. Like the autoencoders, this model pretrains an encoder; though instead of predicting links between nodes, it predicts randomly masked node and edge features. It currently only works with tokenized node and edge features (via `chemistry.Tokenizer`). This pretraining strategy is inspired by BERT for language modeling.
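The BERT-style masking idea behind this pretraining strategy can be sketched as follows. This is a rough, generic illustration (not molgraph code): a fraction of token ids is replaced with a special mask id, and the masked positions are recorded so a model can be trained to predict the original tokens there. The `MASK_ID` value and mask rate are arbitrary choices for the example.

```python
import random

MASK_ID = 0  # hypothetical id reserved for the [MASK] token

def mask_tokens(token_ids, mask_rate=0.15, rng=None):
    """Randomly replace a fraction of token ids with MASK_ID.

    Returns the masked sequence and the list of masked positions,
    which serve as the prediction targets during pretraining.
    """
    rng = rng or random.Random()
    masked = list(token_ids)
    positions = []
    for i in range(len(masked)):
        if rng.random() < mask_rate:
            positions.append(i)
            masked[i] = MASK_ID
    return masked, positions

tokens = [5, 7, 9, 11, 13, 15]
masked, positions = mask_tokens(tokens, mask_rate=0.5, rng=random.Random(42))
```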

Bug fixes
- `molgraph.layers`
- `from_config` now works as expected for all gnn layers. Consequently, `gnn_model.from_config(gnn_model.get_config())` now works fine.
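The `get_config`/`from_config` round trip that this fix restores follows the standard Keras serialization pattern, sketched below with a stand-in class (not an actual molgraph layer):

```python
# Minimal sketch of the Keras-style config round trip; ToyLayer is a
# stand-in, not a molgraph GNN layer.
class ToyLayer:
    def __init__(self, units, activation="relu"):
        self.units = units
        self.activation = activation

    def get_config(self):
        # Everything needed to reconstruct the layer.
        return {"units": self.units, "activation": self.activation}

    @classmethod
    def from_config(cls, config):
        return cls(**config)

layer = ToyLayer(32, activation="tanh")
clone = ToyLayer.from_config(layer.get_config())
assert clone.get_config() == layer.get_config()
```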

Minor features and improvements
- `molgraph.layers`
- `_build_from_vocabulary_size()` is removed from `EmbeddingLookup`; `self.embedding` is now created in `adapt()` or `build()` instead.

0.5.2

Breaking changes

- `molgraph.models`
- DGIN and DMPNN are updated; these models now work as expected.

0.5.1

- `molgraph`
- TensorFlow/Keras functions are replaced to make MolGraph compatible with TensorFlow 2.13.0; e.g., `keras.utils.register_keras_serializable` is replaced with `tf.keras.saving.register_keras_serializable`.
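A version-compatibility lookup like the one described can be sketched as below: prefer the new `tf.keras.saving` location and fall back to the old `keras.utils` one. TensorFlow is not imported here; the fake module layouts built with `SimpleNamespace` are stand-ins for demonstration.

```python
from types import SimpleNamespace

def resolve_register_fn(tf_module):
    """Return register_keras_serializable from the newest available location."""
    saving = getattr(tf_module.keras, "saving", None)
    if saving is not None and hasattr(saving, "register_keras_serializable"):
        return saving.register_keras_serializable  # TF >= 2.13 layout
    return tf_module.keras.utils.register_keras_serializable  # older layout

# Fake "new" TensorFlow layout (stand-in, for demonstration only):
new_tf = SimpleNamespace(
    keras=SimpleNamespace(
        saving=SimpleNamespace(register_keras_serializable=lambda: "new"),
        utils=SimpleNamespace(register_keras_serializable=lambda: "old"),
    )
)
assert resolve_register_fn(new_tf)() == "new"
```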
