Neuralnetlib

Latest version: v4.3.4

4.0.2

- feat(ensemble): add AdaBoost (see the sketch after this list)
- feat(ensemble): add GradientBoostingMachine
- ci: bump version to 4.0.2
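
The AdaBoost addition is the classic reweighting ensemble: each round fits a weak learner against per-sample weights, then up-weights the examples that learner misclassified so the next one focuses on them. A minimal NumPy sketch with one-feature threshold stumps; the names here are illustrative, not neuralnetlib's actual API:

```python
import numpy as np

def fit_stump(X, y, w):
    """Best single-feature threshold stump under sample weights w."""
    best = (np.inf, 0, 0.0, 1)  # (weighted error, feature, threshold, polarity)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = pol * np.where(X[:, j] < t, 1, -1)
                err = w[pred != y].sum()
                if err < best[0]:
                    best = (err, j, t, pol)
    return best

def adaboost(X, y, n_rounds=50):
    """y must be in {-1, +1}. Returns a list of (alpha, stump) pairs."""
    w = np.full(len(y), 1.0 / len(y))
    ensemble = []
    for _ in range(n_rounds):
        err, j, t, pol = fit_stump(X, y, w)
        err = max(err, 1e-10)                  # guard against division by zero
        alpha = 0.5 * np.log((1 - err) / err)  # learner weight
        pred = pol * np.where(X[:, j] < t, 1, -1)
        w *= np.exp(-alpha * y * pred)         # up-weight misclassified samples
        w /= w.sum()
        ensemble.append((alpha, (j, t, pol)))
    return ensemble

def predict(ensemble, X):
    score = sum(a * p * np.where(X[:, j] < t, 1, -1) for a, (j, t, p) in ensemble)
    return np.sign(score)
```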

4.0.1

- docs: add gif into notebook
- feat(GAN): auto sizing for epoch plot
- feat: add data imputation
- feat(ensemble): add IsolationForest
- feat(ensemble): add RandomForest
- feat(cluster): add KMeans (sketched below)
- feat(cluster): add DBSCAN
- docs: update readme
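
KMeans here is Lloyd's algorithm: alternately assign each point to its nearest centroid, then move each centroid to the mean of its assigned points, until the assignments stabilize. A minimal NumPy sketch of the algorithm itself rather than neuralnetlib's interface:

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]  # init from data points
    for _ in range(n_iter):
        # Assignment step: nearest centroid by squared Euclidean distance.
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # Update step: move each centroid to the mean of its cluster;
        # keep a centroid in place if its cluster went empty.
        new = np.array([X[labels == c].mean(axis=0) if (labels == c).any()
                        else centroids[c] for c in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids, labels
```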

4.0.0

- feat: add GAN
- fix(GAN): weights and backpropagation
- refactor(BatchNormalization): more stable training
- fix(GAN): some fixes and general improvements
- feat(Loss): add WGAN loss
- feat(WGAN): improve generation
- ci: bump version to 3.3.9
- fix(WGAN): discriminator training on both real and fake data
- feat(GAN): improve image generation
- fix(Transformer): yet another mode collapse fix
- feat(layers): better gradient scaling and stability
- feat(Tokenizer): add BPE tokenize mode (see the sketch after this list)
- docs: update readme
- ci: bump version to 4.0.0
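
The BPE tokenize mode refers to byte-pair encoding: count adjacent symbol pairs across the corpus, merge the most frequent pair into a single new symbol, and repeat. A minimal sketch of the merge-learning loop, in the generic textbook formulation rather than neuralnetlib's Tokenizer API:

```python
from collections import Counter

def learn_bpe(word_freqs, n_merges):
    """word_freqs: {word: count}. Returns the ordered list of learned merges."""
    vocab = {tuple(w): f for w, f in word_freqs.items()}  # words as symbol tuples
    merges = []
    for _ in range(n_merges):
        pairs = Counter()
        for sym, f in vocab.items():
            for a, b in zip(sym, sym[1:]):
                pairs[(a, b)] += f
        if not pairs:
            break
        best = max(pairs, key=pairs.get)  # most frequent adjacent pair
        merges.append(best)
        new_vocab = {}
        for sym, f in vocab.items():      # apply the merge inside every word
            out, i = [], 0
            while i < len(sym):
                if i + 1 < len(sym) and (sym[i], sym[i + 1]) == best:
                    out.append(sym[i] + sym[i + 1])
                    i += 2
                else:
                    out.append(sym[i])
                    i += 1
            new_vocab[tuple(out)] = f
        vocab = new_vocab
    return merges

print(learn_bpe({"low": 5, "lower": 2, "newest": 6, "widest": 3}, n_merges=10))
```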

3.3.9

- feat: add GAN
- fix(GAN): weights and backpropagation
- refactor(BatchNormalization): more stable training
- fix(GAN): some fixes and general improvements
- feat(Loss): add WGAN loss (sketched after this list)
- feat(WGAN): improve generation
- ci: bump version to 3.3.9
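
The WGAN loss replaces the standard GAN log-loss with a Wasserstein estimate: the critic maximizes the gap between its scores on real and fake samples (hence the fix training it on both), while the generator pushes fake scores up. A minimal NumPy sketch of the loss terms, assuming the original weight-clipping variant of WGAN:

```python
import numpy as np

def critic_loss(real_scores, fake_scores):
    # The critic minimizes E[f(fake)] - E[f(real)]:
    # push scores on real samples up and scores on fakes down.
    return fake_scores.mean() - real_scores.mean()

def generator_loss(fake_scores):
    # The generator minimizes -E[f(fake)]: it wants fakes scored high.
    return -fake_scores.mean()

def clip_weights(weights, c=0.01):
    # Weight clipping keeps the critic roughly 1-Lipschitz,
    # as in the original WGAN paper.
    return [np.clip(w, -c, c) for w in weights]
```

In the original recipe the critic takes several update steps per generator step, with its weights clipped after each one.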

3.3.8

- docs: add embedding to debug notebook
- refactor: major simplification of sce to cels, which led to a higher BLEU score (see the sketch after this list)
- fix(Transformer): random state
- fix: some fixes and improvements
- fix: some fixes and improvements
- fix(MultiHeadAttention): attention weights now fall within the correct range [-0.7, 1.0]
- fix: some fixes and improvements
- fix: some fixes and improvements
- feat: new examples
- docs: update readme
- ci: bump version to 3.3.8
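
Assuming "cels" abbreviates cross-entropy with label smoothing, the refactor above boils the sequence loss down to a per-token smoothed cross-entropy: the target puts 1 - eps on the true class and spreads eps over the remaining classes, which discourages over-confident predictions. A minimal NumPy sketch (the pad_id convention is an assumption):

```python
import numpy as np

def cross_entropy_label_smoothing(logits, targets, eps=0.1, pad_id=0):
    """logits: (n_tokens, vocab) float array; targets: (n_tokens,) int ids."""
    v = logits.shape[1]
    # Smoothed target: 1 - eps on the true class, eps spread over the others.
    smooth = np.full(logits.shape, eps / (v - 1))
    smooth[np.arange(len(targets)), targets] = 1.0 - eps
    # Numerically stable log-softmax.
    log_probs = logits - logits.max(axis=1, keepdims=True)
    log_probs -= np.log(np.exp(log_probs).sum(axis=1, keepdims=True))
    loss = -(smooth * log_probs).sum(axis=1)
    mask = targets != pad_id  # ignore padding positions
    return loss[mask].mean()
```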

3.3.7

- docs: remove useless comments
- fix(Transformer): too many fixes to list
- feat: higher floating-point precision for metrics and loss
- refactor: special tokens now passed via __init__ for Transformer
- feat: enhance beam search and token prediction mechanisms
- docs: update readme
- fix(Transformer): vanishing gradient fix
- fix(Transformer): still on it (wip)
- fix(Transformer): another fix
- fix(Transformer): special token indices
- fix(Transformer): normalization IS the issue
- docs: update readme
- fix(Transformer): cross attention weights
- fix: LearningRateScheduler
- fix: LearningRateScheduler
- fix: normalization in data preparation
- fix: different vocab size for different tokenizations
- fix(PositionalEncoding): scaling
- fix(AddNorm): better normalization
- fix(TransformerEncoderLayer): huge improvements
- perf(SequenceCrossEntropy): add vectorization
- fix(Tokenizer+Transformer): tokenization alignment for special tokens
- fix(transformer): investigate and address gradient instability and explosion
- fix(sce): label smoothing
- refactor: gradient clipping
- fix(Transformer): gradient explosion
- fix(Transformer): tokens padding and max sequence
- test: tried with a better dataset
- fix(sce): y_pred treated as logits instead of probs
- fix(TransformerEncoderLayer): remove arbitrary scaling
- fix(Transformer): sce won't ignore sos and eos tokens
- fix: sce now extends LossFunction
- fix(sce): softmax not necessary
- feat: add BLEU, ROUGE-L and ROUGE-N scores
- fix: validation data in fit method and shuffle in train_test_split
- docs: modify example to use validation split and BLEU score
- fix(PositionalEncoding): better positional scaling (sketched after this list)
- ci: bump version to 3.3.7
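
Several fixes above target PositionalEncoding scaling. In the standard Transformer recipe the token embeddings are multiplied by sqrt(d_model) before the sinusoidal encoding is added, so the encoding does not swamp the embeddings; getting that factor wrong is a classic source of the gradient instability these entries chase. A sketch of the textbook scheme, not necessarily neuralnetlib's exact code:

```python
import numpy as np

def positional_encoding(max_len, d_model):
    """Sinusoidal encoding; assumes d_model is even."""
    # PE[pos, 2i]   = sin(pos / 10000^(2i / d_model))
    # PE[pos, 2i+1] = cos(pos / 10000^(2i / d_model))
    pos = np.arange(max_len)[:, None]
    i = np.arange(0, d_model, 2)[None, :]
    angles = pos / np.power(10000.0, i / d_model)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

def embed(token_ids, embedding_table):
    d_model = embedding_table.shape[1]
    x = embedding_table[token_ids] * np.sqrt(d_model)  # scale embeddings up first
    return x + positional_encoding(len(token_ids), d_model)
```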
