Neuralnetlib

Latest version: v4.0.4


Page 2 of 9

3.3.8

- docs: add embedding to debug notebook
- refactor: huge simplification of sce to cels, which led to a higher BLEU score
- fix(Transformer): random state
- fix: some fixes and improvements
- fix: some fixes and improvements
- fix(MultiHeadAttention): attention weights now have the correct range [-0.7; 1.0]
- fix: some fixes and improvements
- fix: some fixes and improvements
- feat: new examples
- docs: update readme
- ci: bump version to 3.3.8
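
The BLEU gain mentioned above can be illustrated with a minimal sentence-level BLEU: clipped n-gram precisions combined geometrically, times a brevity penalty. This is a generic sketch of the metric, not neuralnetlib's implementation.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def sentence_bleu(reference, candidate, max_n=2):
    """Geometric mean of clipped n-gram precisions, times a brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(ngrams(candidate, n))
        ref = Counter(ngrams(reference, n))
        # Clip each candidate n-gram count by its count in the reference.
        clipped = sum(min(c, ref[g]) for g, c in cand.items())
        precisions.append(clipped / max(sum(cand.values()), 1))
    if min(precisions) == 0:
        return 0.0
    # Brevity penalty: punish candidates shorter than the reference.
    if len(candidate) >= len(reference):
        bp = 1.0
    else:
        bp = math.exp(1 - len(reference) / max(len(candidate), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

A perfect match scores 1.0; a shortened candidate is penalized by the brevity term even when its n-gram precisions are perfect.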

3.3.7

- docs: remove useless comments
- fix(Transformer): too many things to tell
- feat: even more precise floating point for metrics and loss
- refactor: special tokens now passed via __init__ for Transformer
- feat: enhance beam search and token prediction mechanisms
- docs: update readme
- fix(Transformer): vanishing gradient fix
- fix(Transformer): still on it (wip)
- fix(Transformer): another fix
- fix(Transformer): special token indices
- fix(Transformer): normalization IS the issue
- docs: update readme
- fix(Transformer): cross attention weights
- fix: LearningRateScheduler
- fix: LearningRateScheduler
- fix: normalization in data preparation
- fix: different vocab size for different tokenizations
- fix(PositionalEncoding): scaling
- fix(AddNorm): better normalization
- fix(TransformerEncoderLayer): huge improvements
- perf(SequenceCrossEntropy): add vectorization
- fix(Tokenizer+Transformer): tokenization alignment for special tokens
- fix(transformer): investigate and address gradient instability and explosion
- fix(sce): label smoothing
- refactor: gradient clipping
- fix(Transformer): gradient explosion
- fix(Transformer): tokens padding and max sequence
- test: tried with a better dataset
- fix(sce): y_pred treated as logits instead of probs
- fix(TransformerEncoderLayer): remove arbitrary scaling
- fix(Transformer): sce won't ignore sos and eos tokens
- fix: sce extending lossfunction
- fix(sce): softmax not necessary
- feat: add BLEU, ROUGE-L and ROUGE-N scores
- fix: validation data in fit method and shuffle in train_test_split
- docs: modify example to use validation split and BLEU score
- fix(PositionalEncoding): better positional scaling
- ci: bump version to 3.3.7
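
Among the fixes above, the label-smoothing change to the sequence cross-entropy is easy to picture: instead of one-hot targets, the true class keeps probability 1 − eps and the remaining eps mass is spread uniformly over the other classes. A minimal generic sketch (not neuralnetlib's code; `smooth_labels` is a hypothetical helper):

```python
import numpy as np

def smooth_labels(targets, vocab_size, eps=0.1):
    """Smoothed one-hot targets: true class gets 1 - eps, the remaining
    eps is spread uniformly over the other vocab_size - 1 classes."""
    smooth = np.full((len(targets), vocab_size), eps / (vocab_size - 1))
    smooth[np.arange(len(targets)), targets] = 1.0 - eps
    return smooth
```

Each row still sums to 1, so the cross-entropy stays well defined while over-confident logits are penalized, which helps against the gradient explosions chased throughout this release.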

3.3.6

- feat: add Transformer model and layer architecture (wip)
- fix(Transformer): gradient propagation between layers
- fix(Transformer): tokenization, sequence handling and shapes
- fix(callbacks): now compatible with every model architecture
- fix_later: find why the Transformer output won't work
- ci: bump version to 3.3.6

3.3.5

- feat(autoencoder): add VAE image generation
- refactor: imports organization
- refactor: examples folder tree organization
- docs: fix typo
- feat(preprocessing): add ImageDataGenerator
- ci: bump version to 3.3.5
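
The ImageDataGenerator entry refers to on-the-fly data augmentation. As one stand-in example, random horizontal flipping over NHWC batches can be sketched as below (`flip_augment` is a hypothetical illustration, not neuralnetlib's API):

```python
import numpy as np

def flip_augment(batch, rng=None):
    """Randomly flip each NHWC image horizontally with probability 0.5."""
    if rng is None:
        rng = np.random.default_rng(0)
    out = batch.copy()
    mask = rng.random(len(batch)) < 0.5
    # Axis 2 is width in NHWC layout, so ::-1 mirrors the image left-right.
    out[mask] = out[mask][:, :, ::-1, :]
    return out
```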

3.3.4

- docs: update readme
- docs: remove useless comments
- fix(convolution): stride parameter
- feat(layer): add UpSampling2D
- docs: update readme
- perf: changed NCHW to NHWC for CPU efficiency
- docs: update readme
- perf: switch from NCL to NLC for CPU efficiency
- ci: bump version to 3.3.4
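
The NCHW-to-NHWC change above is a pure axis reordering: NHWC keeps each pixel's channels contiguous in memory, which tends to be friendlier to CPU caches and vectorized inner loops. A minimal sketch of the conversion (generic numpy, not neuralnetlib's code):

```python
import numpy as np

def nchw_to_nhwc(x):
    """Reorder (batch, channels, height, width) to (batch, height, width, channels)."""
    return np.transpose(x, (0, 2, 3, 1))
```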

3.3.3

- fix(example): weight init
- docs(examples): fresh run
- docs: update readme
- fix(layers): encoder and decoder layers
- fix(conv2d): align output shape calculation between im2col and convolve
- ci: bump version to 3.3.3
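
The conv2d alignment fix concerns the standard convolution output-size formula, which an im2col path and a direct convolve loop must both use so their shapes agree. A generic sketch of that formula (not neuralnetlib's code):

```python
def conv2d_output_shape(h, w, kernel, stride=1, padding=0):
    """Standard convolution output size: floor((dim + 2p - k) / s) + 1,
    applied independently to height and width."""
    out_h = (h + 2 * padding - kernel) // stride + 1
    out_w = (w + 2 * padding - kernel) // stride + 1
    return out_h, out_w
```

With kernel 3, stride 1, padding 1 ("same" convolution), a 32x32 input stays 32x32; with kernel 5, stride 2, no padding, a 28x28 input shrinks to 12x12.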


© 2024 Safety CLI Cybersecurity Inc. All Rights Reserved.