pytorch-widedeep

Latest version: v1.6.5

1.3.0

* Added new functionality to access feature importance via attention weights for all DL models for tabular data except the `TabPerceiver`. Feature importances are available via the `feature_importance` attribute in the trainer (computed during training with a sample of observations) and at predict time via the `explain` method (see the sketch after this list).
* Fixed the restore-weights capabilities in all forms of training. These capabilities live in two callbacks, `EarlyStopping` and `ModelCheckpoint`; prior to this release a bug prevented the weights from being restored.
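
A minimal sketch of how the two access points might be used, assuming a binary classification setup with an attention-based tabular model. The synthetic data, column names, hyperparameters, and the `with_attention` preprocessing flag are illustrative assumptions, not part of the release notes:

```python
import numpy as np
import pandas as pd

from pytorch_widedeep import Trainer
from pytorch_widedeep.models import TabTransformer, WideDeep
from pytorch_widedeep.preprocessing import TabPreprocessor

# tiny synthetic dataset; all column names here are illustrative
df = pd.DataFrame(
    {
        "age": np.random.randint(18, 90, 256),
        "hours_per_week": np.random.randint(10, 60, 256),
        "workclass": np.random.choice(["private", "gov"], 256),
        "education": np.random.choice(["hs", "college"], 256),
        "income": np.random.randint(0, 2, 256),
    }
)
target = df["income"].values

# with_attention=True prepares the inputs for attention-based models
tab_preprocessor = TabPreprocessor(
    cat_embed_cols=["workclass", "education"],
    continuous_cols=["age", "hours_per_week"],
    with_attention=True,
)
X_tab = tab_preprocessor.fit_transform(df)

tab_transformer = TabTransformer(
    column_idx=tab_preprocessor.column_idx,
    cat_embed_input=tab_preprocessor.cat_embed_input,
    continuous_cols=["age", "hours_per_week"],
)
model = WideDeep(deeptabular=tab_transformer)

trainer = Trainer(model, objective="binary")
trainer.fit(X_tab=X_tab, target=target, n_epochs=2, batch_size=64)

# importances computed during training with a sample of observations
print(trainer.feature_importance)

# per-observation importances at predict time
importances = trainer.explain(X_tab)
```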

joss_paper_package_version_v1.2.0

1.2.2

1. Fixed a bug related to the option of adding a fully connected (FC) head on top of the "backbone" models (see the sketch after this list)
2. Added a notebook illustrating how one could use a Hugging Face model along with any other model in the library
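
For context, a minimal sketch of the FC-head option the fix relates to; `head_hidden_dims` is the `WideDeep` argument that adds such a head. The hand-rolled inputs below are illustrative (normally they come from a fitted `TabPreprocessor`):

```python
from pytorch_widedeep.models import TabMlp, WideDeep

# illustrative inputs; normally produced by a fitted TabPreprocessor
column_idx = {"workclass": 0, "education": 1, "age": 2, "hours_per_week": 3}
cat_embed_input = [("workclass", 2, 8), ("education", 2, 8)]  # (col, n_cats, embed_dim)

tab_mlp = TabMlp(
    column_idx=column_idx,
    cat_embed_input=cat_embed_input,
    continuous_cols=["age", "hours_per_week"],
)

# head_hidden_dims adds the FC head on top of the "backbone" model(s)
model = WideDeep(deeptabular=tab_mlp, head_hidden_dims=[128, 64])
```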

1.2.1

A simple minor release fixing the implementation of the additive attention (see issue 110).

1.2.0

There are a number of changes and new features in this release; here is a summary:

1. Refactored the code related to the 3 forms of training in the library (see the sketch after this list):
   - Supervised Training (via the `Trainer` class)
   - Self-Supervised Pre-Training: we have implemented two routines for self-supervised pre-training:
     - Encoder-Decoder Pre-Training (via the `EncoderDecoderTrainer` class), inspired by the [TabNet paper](https://arxiv.org/abs/1908.07442)
     - Contrastive-Denoising Pre-Training (via the `ContrastiveDenoisingTrainer` class), inspired by the [SAINT paper](https://arxiv.org/abs/2106.01342)
   - Bayesian or Probabilistic Training (via the `BayesianTrainer` class), inspired by the paper [Weight Uncertainty in Neural Networks](https://arxiv.org/abs/1505.05424)
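
A minimal sketch of the encoder-decoder routine, reusing the synthetic `df` from the first sketch above; the encoder here is a `TabMlp` and the hyperparameters are illustrative:

```python
from pytorch_widedeep.models import TabMlp
from pytorch_widedeep.preprocessing import TabPreprocessor
from pytorch_widedeep.self_supervised_training import EncoderDecoderTrainer

tab_preprocessor = TabPreprocessor(
    cat_embed_cols=["workclass", "education"],
    continuous_cols=["age", "hours_per_week"],
)
X_tab = tab_preprocessor.fit_transform(df)

tab_mlp = TabMlp(
    column_idx=tab_preprocessor.column_idx,
    cat_embed_input=tab_preprocessor.cat_embed_input,
    continuous_cols=["age", "hours_per_week"],
)

# pre-train without labels; the encoder can afterwards be wrapped in
# WideDeep and fine-tuned with the supervised Trainer
ec_trainer = EncoderDecoderTrainer(encoder=tab_mlp)
ec_trainer.pretrain(X_tab, n_epochs=2, batch_size=64)
```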

1.1.2

This release simply updates all the documentation.

1.1.0

This release fixes some minor bugs but mainly brings a couple of new functionalities:

1. New experimental attentive models, namely `ContextAttentionMLP` and `SelfAttentionMLP`.
2. Two probabilistic models based on Bayes by Backprop (BBP), as described in [Weight Uncertainty in Neural Networks](https://arxiv.org/abs/1505.05424), namely `BayesianTabMlp` and `BayesianWide` (see the sketch after this list).
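
A minimal sketch of the probabilistic route, reusing the `tab_preprocessor`, `X_tab` and `target` from the sketches above; the import paths and hyperparameters are assumptions based on the library's documentation, so treat them as illustrative:

```python
from pytorch_widedeep.bayesian_models import BayesianTabMlp
from pytorch_widedeep.training import BayesianTrainer

bayes_tab_mlp = BayesianTabMlp(
    column_idx=tab_preprocessor.column_idx,
    cat_embed_input=tab_preprocessor.cat_embed_input,
    continuous_cols=["age", "hours_per_week"],
)

# Bayesian models are trained with their own trainer, not wrapped in WideDeep
bayes_trainer = BayesianTrainer(bayes_tab_mlp, objective="binary")
bayes_trainer.fit(X_tab=X_tab, target=target, n_epochs=2, batch_size=64)
```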

1.0.10

This minor release simply fixes issue 53, related to the fact that `SAINT`, the `FT-Transformer` and the `TabFastFormer` failed when the input data had no categorical columns.
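
For reference, a minimal sketch of the previously failing setup: a transformer-family model built from continuous columns only, reusing the synthetic `df` from the first sketch (the model choice and column names are illustrative):

```python
from pytorch_widedeep.models import FTTransformer
from pytorch_widedeep.preprocessing import TabPreprocessor

# input data with no categorical columns (the case fixed in this release)
tab_preprocessor = TabPreprocessor(
    continuous_cols=["age", "hours_per_week"], with_attention=True
)
X_tab = tab_preprocessor.fit_transform(df)

ft_transformer = FTTransformer(
    column_idx=tab_preprocessor.column_idx,
    continuous_cols=["age", "hours_per_week"],
)
```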
