NeuralCompression

Latest version: v0.3.1


0.2.1

This release covers a few small fixes from PRs 171 and 172.

Dependencies
- To retrieve versioning information, we now use `importlib.metadata`. This is included only with Python >= 3.8, so NeuralCompression now requires Python 3.8 or later (171).
- Install requirements are flexible, whereas dev requirements are pinned (171). This should improve CI stability while giving researchers the flexibility to tune their research environments when using NeuralCompression.
- `torch` has been removed as a build dependency (172).
- Other build dependencies have been modified to be flexible (172).
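As an illustration of the `importlib`-based approach described above, here is a minimal sketch of retrieving an installed distribution's version with `importlib.metadata` (in the standard library since Python 3.8). The fallback string and function name are illustrative assumptions, not NeuralCompression's actual code:

```python
# Sketch: single-sourcing a package version via importlib.metadata
# (standard library, Python >= 3.8).
from importlib import metadata


def get_version(dist_name):
    """Return the installed version string, or a fallback when the
    distribution is not installed (e.g. running from a source tree)."""
    try:
        return metadata.version(dist_name)
    except metadata.PackageNotFoundError:
        # Fallback value is an arbitrary placeholder for illustration.
        return "0.0.0+unknown"
```

A package would typically call this once at import time, e.g. `__version__ = get_version("neuralcompression")`.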

Build System
- C++ code from `_pmf_to_quantized_cdf` introduced compilation requirements when running `setup.py`. Since we didn't configure our build system to handle specific operating systems, this caused a failed release upload to PyPI. The build system has been altered to use `torch.utils.cpp_extension.load`, which defers compilation to the user after package installation. We would like to improve this further at some point, but the modifications from 171 get the package stable. Note: there is a reasonable chance this could fail on non-Linux operating systems such as Windows. Those users will still be able to use other package features that don't rely on `_pmf_to_quantized_cdf`.
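For reference, the quantized-CDF construction that `_pmf_to_quantized_cdf` performs can be sketched in pure Python. This is a simplified illustration of the general technique (scale the PMF to integer frequencies summing to `2**precision`, keeping every symbol encodable, then take a cumulative sum), not the package's C++ implementation, and it does not handle every pathological input:

```python
def pmf_to_quantized_cdf(pmf, precision=16):
    """Quantize a PMF to integer frequencies summing to 2**precision,
    then return the cumulative distribution (length len(pmf) + 1)."""
    total = 1 << precision
    # Scale probabilities to integer counts, forcing every symbol to keep
    # a nonzero frequency so it remains representable by the entropy coder.
    freqs = [max(1, round(p * total)) for p in pmf]
    # Redistribute any rounding surplus/deficit onto the largest bin.
    freqs[freqs.index(max(freqs))] += total - sum(freqs)
    cdf = [0]
    for f in freqs:
        cdf.append(cdf[-1] + f)
    return cdf
```

For example, a PMF of `[0.5, 0.25, 0.25]` at 4 bits of precision yields the quantized CDF `[0, 8, 12, 16]`.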

Other
- Fixed a linting issue where CI was not running `isort` to check that imports were properly sorted (171).
- Fixed a flaky test (171).

0.2.0

NeuralCompression is a PyTorch-based Python package intended to simplify neural network-based compression research. It is similar to (and shares some functionality with) fantastic libraries like [TensorFlow Compression](https://tensorflow.github.io/compression/) and [CompressAI](https://interdigitalinc.github.io/CompressAI/).

The major theme of the v0.2.0 release is **autoencoders**, particularly features useful for implementing existing models by Ballé et al. and features useful for expanding on these models in forthcoming research. In addition, 0.2.0 includes some code organization changes and published [documentation](https://neuralcompression.readthedocs.io/en/latest/). I recommend reading the new “[Image Compression](https://neuralcompression.readthedocs.io/en/latest/examples/plot_image_compression.html)” example to see some of these changes.

API Additions

Data (`neuralcompression.data`)

* `CLIC2020Image`: [Challenge on Learned Image Compression (CLIC) 2020](http://compression.cc/tasks/) Image Dataset
* `CLIC2020Video`: [Challenge on Learned Image Compression (CLIC) 2020](http://compression.cc/tasks/) Video Dataset

Distributions (`neuralcompression.distributions`)

* `NoisyNormal`: normal distribution with additive independent and identically distributed (i.i.d.) uniform noise.
* `UniformNoise`: adapts a continuous distribution via additive independent and identically distributed (i.i.d.) uniform noise.
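A minimal sketch of the idea behind these noisy distributions: convolving a normal density with `Uniform(-0.5, 0.5)` noise yields a density equal to the normal CDF evaluated over a unit-width window. The pure-Python version below (function names are illustrative, not the package's API) uses `math.erf` for the normal CDF:

```python
import math


def std_normal_cdf(x):
    """Standard normal CDF expressed via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))


def noisy_normal_pdf(y, loc=0.0, scale=1.0):
    """Density of Normal(loc, scale) + Uniform(-0.5, 0.5).

    The convolution with a unit-width box integrates the normal
    density over [y - 0.5, y + 0.5], giving a CDF difference.
    """
    upper = (y + 0.5 - loc) / scale
    lower = (y - 0.5 - loc) / scale
    return std_normal_cdf(upper) - std_normal_cdf(lower)
```

This CDF-difference form is what makes additive uniform noise convenient in compression models: the likelihood of a quantized value stays tractable and differentiable.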

Functional (`neuralcompression.functional`)

* `estimate_tails`: estimates approximate tail quantiles.
* `log_cdf`: logarithm of the distribution’s cumulative distribution function (CDF).
* `log_expm1`: logarithm of `e^{x} - 1`.
* `log_ndtr`: logarithm of the normal cumulative distribution function (CDF).
* `log_survival_function`: logarithm of a distribution’s survival function evaluated at `x`.
* `lower_bound`: `torch.maximum` with a gradient for `x < bound`.
* `lower_tail`: approximates lower tail quantile for range coding.
* `ndtr`: the normal cumulative distribution function (CDF).
* `pmf_to_quantized_cdf`: transforms a probability mass function (PMF) into a quantized cumulative distribution function (CDF) for entropy coding.
* `quantization_offset`: computes a distribution-dependent quantization offset.
* `soft_round_conditional_mean`: conditional mean of `x` given noisy soft rounded values.
* `soft_round_inverse`: inverse of `soft_round`.
* `soft_round`: differentiable approximation of `torch.round`.
* `survival_function`: survival function of `x`. Generally defined as `1 - distribution.cdf(x)`.
* `upper_tail`: approximates upper tail quantile for range coding.
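As a pure-Python illustration of the `soft_round` family above, here is the differentiable rounding approximation of Agustsson and Theis (2020) for scalars; the package version operates on tensors, so this is a sketch of the formula rather than the library's implementation:

```python
import math


def soft_round(x, alpha):
    """Differentiable approximation of rounding:

        s(x) = floor(x) + 0.5 * tanh(alpha * r) / tanh(alpha / 2) + 0.5,
        where r = x - floor(x) - 0.5.

    As alpha -> inf, s(x) approaches hard rounding; as alpha -> 0,
    s(x) approaches the identity. Integers are exact fixed points.
    """
    m = math.floor(x)
    r = x - m - 0.5
    return m + 0.5 * math.tanh(alpha * r) / math.tanh(alpha / 2) + 0.5
```

For instance, with a large `alpha` like 100, `soft_round(2.3, 100.0)` is essentially 2.0, while with a tiny `alpha` it stays close to 2.3.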

Layers (`neuralcompression.layers`)

* `AnalysisTransformation2D`: applies the 2D analysis transformation over an input signal.
* `ContinuousEntropy`: base class for continuous entropy layers.
* `GeneralizedDivisiveNormalization`: applies generalized divisive normalization for each channel across a batch of data.
* `HyperAnalysisTransformation2D`: applies the 2D hyper analysis transformation over an input signal.
* `HyperSynthesisTransformation2D`: applies the 2D hyper synthesis transformation over an input signal.
* `NonNegativeParameterization`: the parameter is subjected to an invertible transformation that slows down the learning rate for small values.
* `RateMSEDistortionLoss`: rate-distortion loss.
* `SynthesisTransformation2D`: applies the 2D synthesis transformation over an input signal.
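To illustrate what a rate-distortion objective like `RateMSEDistortionLoss` computes, here is a pure-Python sketch of the standard `R + λ·D` trade-off. The function name, signature, and inputs are illustrative assumptions, not the layer's actual API:

```python
def rate_mse_loss(latent_log2_likelihoods, x, x_hat, lmbda, num_pixels):
    """Rate-distortion objective L = bpp + lambda * MSE.

    Rate: negative log2-likelihood of the latents, normalized to
    bits per pixel. Distortion: mean squared error between the input
    and its reconstruction. lmbda trades off the two terms (larger
    values favor higher quality at the cost of more bits).
    """
    bpp = -sum(latent_log2_likelihoods) / num_pixels
    mse = sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)
    return bpp + lmbda * mse
```

Training an autoencoder against this objective jointly optimizes the entropy model (rate term) and the reconstruction quality (distortion term).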

Models (`neuralcompression.models`)

End-to-end Optimized Image Compression
Johannes Ballé, Valero Laparra, Eero P. Simoncelli

0.1.0

This release publishes the project to PyPI and tests the GitHub action for releases.
