Delu

Latest version: v0.0.23

0.0.23

This is a minor release.

- Various improvements in the documentation.
- `delu.nn.named_sequential` is deprecated.
- `delu.utils.data.Enumerate` is deprecated.
- `delu.utils.data.IndexDataset` is deprecated.

0.0.22

This release improves the documentation website (both style and content).

0.0.21

This is a relatively big release after v0.0.18.

Breaking changes
- `delu.iter_batches`: now, `shuffle` is a keyword-only argument (see the sketch after this list)
- `delu.nn.Lambda`
  - now, this module accepts only functions from the `torch` module or methods of `torch.Tensor`
  - now, the passed callable is not accessible as a public attribute
- `delu.random.seed`: the algorithm computing the library- and device-specific seeds changed, so the results can differ from those in previous versions
- In the following functions, the first arguments are now positional-only:
  - `delu.to`
  - `delu.cat`
  - `delu.iter_batches`
  - `delu.Timer.format`
  - `delu.data.Enumerate`
  - `delu.nn.Lambda`
  - `delu.random.seed`
  - `delu.random.set_state`
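
A brief sketch of the updated calling conventions (a hypothetical illustration; the batch size being the second positional argument of `delu.iter_batches` is an assumption):

    import torch
    import delu

    data = torch.randn(100, 8)

    # `shuffle` is now keyword-only:
    for batch in delu.iter_batches(data, 16, shuffle=True):
        ...

    # delu.iter_batches(data, 16, True)  # TypeError: `shuffle` must be passed by keyword
    # Likewise, the first arguments of the functions listed above must be passed
    # positionally, not by keyword.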

New features

- Added `delu.tools` -- a new home for `EarlyStopping`, `Timer` and other general tools.

- Added `delu.nn.NLinear` -- a module representing N linear layers that are applied to N different inputs:
`(*B, *N, D1) -> (*B, *N, D2)`, where `*B` are the batch dimensions.
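
  A minimal usage sketch (the constructor signature `NLinear(n, in_features, out_features)` is an assumption; see the documentation for the exact API):

      import torch
      import delu

      m = delu.nn.NLinear(3, 4, 7)     # 3 independent linear layers: 4 -> 7 features
      x = torch.randn(2, 3, 4)         # (batch, N, D1)
      assert m(x).shape == (2, 3, 7)   # (batch, N, D2)
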
- Added `delu.nn.named_sequential` -- a shortcut for creating `torch.nn.Sequential` with named modules without `OrderedDict`:

      sequential = delu.nn.named_sequential(
          ('linear1', nn.Linear(10, 20)),
          ('activation', nn.ReLU()),
          ('linear2', nn.Linear(20, 1)),
      )


- `delu.nn.Lambda`: now, the constructor accepts keyword arguments for the callable:

      m = delu.nn.Lambda(torch.squeeze, dim=1)


- `delu.random.seed`
  - the algorithm computing random seeds for all libraries was improved
  - now, `None` is allowed as `base_seed`; in this case, an unpredictable seed generated by the OS will be used **and returned**:

        truly_random_seed = delu.random.seed(None)

- `delu.random.set_state`: now, the `'torch.cuda'` key can be omitted to avoid setting the states of the CUDA RNGs (see the sketch below)
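
  A minimal sketch (assuming `delu.random.get_state` returns a dictionary in which the CUDA RNG states live under the `'torch.cuda'` key):

      import delu

      state = delu.random.get_state()
      state.pop('torch.cuda', None)  # drop the CUDA entry
      delu.random.set_state(state)   # the CUDA RNG states are left untouched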

Deprecations & Renamings
- `delu.data` was renamed to `delu.utils.data`. The old name is now a deprecated alias.
- `delu.Timer` and `delu.EarlyStopping` were moved to the new `delu.tools` submodule. The old names are now deprecated aliases.

Dependencies
- Now, `torch >=1.8,<3`

Documentation
- Updated logo
- Simplified structure
- Removed the only (and not particularly representative) end-to-end example

Project
- Migrated from sphinx doctest to xdoctest

0.0.18

Add support for PyTorch 2

0.0.17

Breaking changes
- `delu.cat`: now, the input must be a list (before, any iterable was allowed)
- `delu.Timer`: now, `print(timer)`, `str(timer)`, `f'{timer}'` etc. return the full-precision representation (without rounding to seconds)

New features
- `delu.EarlyStopping`: a simpler alternative to `delu.ProgressTracker` that does not track the best score. The usage is very similar; see the documentation.
- `delu.cat`: now supports nested collections, e.g. the input can be a list of `tuple[Tensor, dict[str, tuple[Tensor, Tensor]]]` (see the sketch below)
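
  A minimal sketch, assuming `delu.cat` concatenates the corresponding leaf tensors of the nested collections along the first dimension:

      import torch
      import delu

      # Two "batches", each a (tensor, dict) pair:
      batches = [
          (torch.randn(2, 3), {'labels': torch.zeros(2)}),
          (torch.randn(2, 3), {'labels': torch.ones(2)}),
      ]
      x, extra = delu.cat(batches)
      assert x.shape == (4, 3)
      assert extra['labels'].shape == (4,)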

Deprecations
- `delu.ProgressTracker`: instead, use `delu.EarlyStopping`
- `delu.data.FnDataset`: no alternatives are provided

0.0.15

Breaking changes

- `delu.iter_batches` is now powered by `torch.arange`/`torch.randperm`, and the interface was changed accordingly
- `delu.Timer`: the methods `add` and `sub` are removed


New features

- `delu.to`: like `torch.Tensor.to`, but for (nested) collections of tensors (see the sketch after this list)
- `delu.cat`: like `torch.cat`, but for collections of tensors
- `delu.iter_batches` is now faster and has a better interface
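
A minimal sketch of the new helpers (the exact signatures are assumptions; see the documentation):

    import torch
    import delu

    data = {'x': torch.randn(100, 8), 'y': torch.randn(100)}

    # delu.to: move/cast every tensor in a (nested) collection,
    # with the same arguments as torch.Tensor.to:
    data = delu.to(data, torch.float64)

    # delu.iter_batches: iterate over batches of a (nested) collection of tensors:
    for batch in delu.iter_batches(data, 16, shuffle=True):
        ...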

Deprecations

- `delu.concat` is deprecated in favor of `delu.cat`
- `delu.hardware.free_memory` is now a deprecated alias to `delu.cuda.free_memory`
- deprecate `delu.data.Stream`
- deprecate `delu.data.collate`
  - instead, use `torch.utils.data.dataloader.default_collate`
- deprecate `delu.data.make_index_dataloader`
  - instead, use `delu.data.IndexDataset` + `torch.utils.data.DataLoader` (see the sketch after this list)
- deprecate `delu.evaluation`
  - instead, use `torch.nn.Module.eval` + `torch.inference_mode`
- deprecate `delu.hardware.get_gpus_info`
  - instead, use the corresponding functions from `torch.cuda`
- deprecate `delu.improve_reproducibility`
  - instead, use `delu.random.seed` and manually apply the settings mentioned [here](https://pytorch.org/docs/stable/notes/randomness.html)
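
A hypothetical migration sketch for `delu.data.make_index_dataloader`, assuming `delu.data.IndexDataset(size)` simply yields the indices `0..size-1`:

    import torch
    import delu

    dataset = delu.data.IndexDataset(1000)
    loader = torch.utils.data.DataLoader(dataset, batch_size=64, shuffle=True)
    for batch_indices in loader:
        ...  # use the indices to slice the actual data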

Documentation
- many improved explanations and examples

Dependencies

- require `python>=3.8`
- remove `tqdm` and `pynvml` from dependencies

Project
- switch from flake8 to ruff
- move the tool settings for coverage, isort and mypy from setup.cfg to pyproject.toml
- freeze versions in requirements_dev.txt
