Flambe

Latest version: v0.4.18


0.4.6

New features
- The flambé configuration can now be viewed in the web UI
- `Embeddings` now supports loading locally saved `gensim` embeddings
- Better language modeling support with the `Enwiki8` and `Wikitext103` datasets, and the new `CorpusSampler`
- `Experiment` now has an optional fail-fast flag that stops execution of later stages when an error occurs
- Added a `BPETokenizer`, a simple wrapper around `FastBPE`
- Added a new `TextField` option to drop unknown tokens when processing an example (see the sketch below)
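
A minimal sketch combining the tokenizer and field additions above; the module paths and the `drop_unknown` parameter name are assumptions for illustration, not the documented API:

```python
# Hypothetical usage sketch; module paths and parameter names are
# assumptions based on the notes above, not the documented flambe API.
from flambe.field import TextField          # module path assumed
from flambe.tokenizer import BPETokenizer   # module path assumed

field = TextField(
    tokenizer=BPETokenizer(),  # thin wrapper around FastBPE (per the notes);
                               # constructor arguments omitted here
    drop_unknown=True,         # hypothetical name for the new unknown-dropping option
)
```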

Bug fixes
- Fixed a bug that prevented resources from being synced to all hosts
- Updated the `sru` version to fix performance bugs
- Fixed a bug where a GPU-trained model could not be loaded onto a CPU when using `load_state_from_file` (see the sketch below)
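
The fix boils down to the standard PyTorch remapping shown below; this illustrates the underlying mechanism rather than flambe's internals:

```python
import torch

# A checkpoint saved on a GPU machine stores CUDA tensors; loading it
# on a CPU-only machine requires remapping storages to the CPU.
state = torch.load("model.pt", map_location=torch.device("cpu"))
```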

0.4.5

New features
* Everything is now batch-first: samplers generate tensors whose first dimension is the batch.
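
For illustration (the sizes here are arbitrary), the convention change looks like this:

```python
import torch

# Old convention: tensors shaped (seq_len, batch_size).
# New convention: batch first, i.e. (batch_size, seq_len).
batch = torch.zeros(32, 50)  # 32 examples, 50 tokens each
assert batch.size(0) == 32   # the first dimension is now the batch
```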

Bug fixes
* Tensors are now serialized correctly when put inside the state: the stash is handled with `torch.save` and `torch.load` instead of plain `pickle` (see the sketch after this list).
* The console in the website now works more robustly.
* Fixed a serialization bug that prevented objects that originally had links from loading correctly.
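
A minimal sketch of why `torch.save`/`torch.load` is the right tool for the stash; the stash contents here are invented for illustration:

```python
import torch

# Plain pickle can mishandle tensor storages (especially CUDA tensors);
# torch.save / torch.load serialize and restore them correctly.
stash = {"running_loss": torch.tensor(0.42), "step": 1000}
torch.save(stash, "stash.pt")
restored = torch.load("stash.pt", map_location="cpu")
```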

0.4.4

New features
-------------
- New link mechanism: when using links inside the configuration format (`!`), there is now a distinct syntax for schematic paths and attribute paths. Reference objects as shown in the config with bracket notation (e.g. `trainer[model][encoder]`), and refer to object attributes with dot notation (e.g. `trainer[model][encoder].input_dim`). See the documentation for more information.
- New objects: `Transformer`, `TransformerEncoder` and `TransformerDecoder`
- New objects: `TransformerSRU`, `TransformerSRUEncoder`, `TransformerSRUDecoder` (a variant of the Transformer architecture where the FFN is replaced with an SRU cell)
- New pytorch-transformers integration with all the available objects: `BertEmbedder`, `RobertaEmbedder`, `TransfoXLEmbedder`, `XLNetEmbedder`, etc. The equivalent text field objects are also provided. These objects are meant for loading pretrained models and optionally fine-tuning them, not for training from scratch. You can load any pretrained model with a simple alias: `BertEmbedder('bert-base-uncased')` (see the sketch after this list)
- Positional encodings were added as an option to the `Embeddings` object; they can be fixed or learned
- The `Embedder` now supports custom pooling. Currently implemented: `AvgPooling`, `SumPooling`, `LastPooling`, `FirstPooling`.
- There is now a way to get the current trial output directory for custom outputs: `flambe.logging.get_trial_dir`
- `WordTokenizer` now tokenizes punctuation
- `NGramTokenizer` now supports stop words removal
- The `Trainer` now evaluates before starting training
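
A short sketch of two of the additions above; the alias call and the `flambe.logging.get_trial_dir` path come from these notes, while the embedder's module path is an assumption:

```python
import os

from flambe.logging import get_trial_dir
from flambe.nlp.transformers import BertEmbedder  # module path assumed

# Load a pretrained BERT by alias; meant for fine-tuning, not for
# training from scratch (per the notes above).
embedder = BertEmbedder('bert-base-uncased')

# Write custom outputs into the current trial's output directory.
with open(os.path.join(get_trial_dir(), 'notes.txt'), 'w') as f:
    f.write('custom trial output\n')
```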


Breaking changes
------------------
- The new linking mechanism may cause some errors. For example, the common `train.model.trainable_parameters` link in configurations will not work anymore, as the trainer is not initialized yet in this scenario. The correct link is now `train[model].trainable_parameters`
- `PooledRNNEncoder` is deprecated but can still be used in this version
- The `autogen_val_test` method in `TabularDataset` was renamed `autogen` and now supports a `test_path` input

Bug Fixes
----------
- Fixed a bug where the AUC for `max_fpr` < 1 was computed incorrectly (see the sketch after this list)
- Fixed a bug that prevented loading saved components containing extensions
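
For reference, here is the partial-AUC behavior in question, illustrated with scikit-learn's implementation rather than flambe's own metric:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 0, 1, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8])

# AUC restricted to false-positive rates below 0.5, standardized so
# that 0.5 still corresponds to a chance-level classifier.
partial_auc = roc_auc_score(y_true, y_score, max_fpr=0.5)
print(partial_auc)
```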

0.4.3

New Features
------------
- `LabelField` now has new attributes for label frequencies that can be used, for example, to weight loss functions
- Volume type (`gp2` or `io1`) is now an `AWSCluster` parameter
- Object tags are verified when launching a config
- New Binary metrics
- Volumes in AWS now have tags as well
- New utility to build config templates using jinja
- `TabularDataset` has a new factory method to automate train/dev/test splitting (see the sketch after this list)
- The `Trainer` now has gradient clipping options
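
A hypothetical sketch of the new factory method; the module path and arguments are assumptions, not the documented API (per the 0.4.4 notes above, the method was later renamed `autogen`):

```python
# Hypothetical sketch; the module path and arguments are assumptions.
from flambe.dataset import TabularDataset  # module path assumed

# Automatically split a single file into train/dev/test sets.
dataset = TabularDataset.autogen_val_test('data/all.csv')  # signature assumed
```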


Bug Fixes
----------
- Timeouts in AWS now work more robustly:
  * The ray cluster is shut down on all instances so that the AWS alarms can trigger correctly.
  * Fixed a bug that triggered the newly created alarms even when the machine was idle.
- More stable execution of remote commands through tmux
- `rsync`ing flambe to the instances in `dev` mode is now much faster, as it ignores everything matched by `.gitignore`
- Fields now correctly dump the vocabulary when saved

0.4.2

New Features
-------------
- `AWSCluster` can now take a region name

Bug Fixes
----------
- `model` is now correctly imported at the top level
- Better flow when installing extensions remotely

Documentation
----------------
- Improved "Using Custom Code in Flambé" tutorial
- Fixed typos in docs

0.4.1

Documentation
---------------
- New AWS cluster tutorial

Bug Fixes
----------
- Fixed an issue with accessing the correct role in `AWSCluster`
- Correctly remove output folders for variants that did not run due to a reduce operation in a previous stage
