Piq

Latest version: v0.8.0




0.8.0

PyTorch Image Quality (PIQ) v0.8.0 Release Notes
- CLIP-IQA (331)
- Fixes (359, 364, 368)

New Features
CLIP-IQA
With this release, we introduce an implementation of [CLIP-IQA](https://arxiv.org/abs/2207.12396). In contrast to other available implementations, our approach is standalone (no additional dependencies) and delivers estimates that match the official implementation.

The image quality is measured based on a general notion of text-to-image similarity learned by the [CLIP model](https://arxiv.org/pdf/2103.00020.pdf) during its large-scale pre-training on paired texts and images.
This approach follows the idea that two antonyms (“Good photo” and “Bad photo”) can serve as anchors in the text embedding space, representing good and bad images in terms of their image quality.
After the anchors are defined, one can use them to determine the quality of a given image in the following way:
1. Compute the image embedding of the image of interest using the pre-trained CLIP model;
2. Compute the text embeddings of the selected anchor antonyms;
3. Compute the cosine similarity between the image embedding (1) and both text embeddings (2);
4. Compute the Softmax of cosine similarities (3) -> CLIP-IQA score.
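The last two steps above can be sketched in a few lines. The function below is a hypothetical illustration (the names and the CLIP logit scale of 100 are assumptions, not PIQ's API) of how the cosine similarities against the two anchors turn into a score:

```python
import numpy as np

def clip_iqa_score(image_emb, good_emb, bad_emb, logit_scale=100.0):
    """Turn CLIP embeddings into a CLIP-IQA score (steps 3-4 above).

    Hypothetical sketch: `logit_scale` mimics CLIP's learned temperature.
    """
    # Normalise embeddings so the dot product equals cosine similarity.
    image_emb = image_emb / np.linalg.norm(image_emb)
    good_emb = good_emb / np.linalg.norm(good_emb)
    bad_emb = bad_emb / np.linalg.norm(bad_emb)

    # Step 3: cosine similarity with both anchors ("Good photo" / "Bad photo").
    logits = logit_scale * np.array([image_emb @ good_emb, image_emb @ bad_emb])

    # Step 4: softmax over the two similarities; the probability assigned
    # to the "good" anchor is the CLIP-IQA score in [0, 1].
    exp = np.exp(logits - logits.max())
    return exp[0] / exp.sum()
```

An image whose embedding sits close to the “Good photo” anchor receives a score near 1; one close to the “Bad photo” anchor, a score near 0.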

Fixes
- Added proper meshgrid indexing (359);
- Fixed usage of metrics on GPU (364);
- Added documentation for new measures (368);

**Full Changelog**: https://github.com/photosynthesis-team/piq/compare/v0.7.1...v0.8.0

**Contributors:** rbischof, zakajd, snk4tr, denproc.

0.7.1

PyTorch Image Quality (PIQ) v0.7.1 Release Notes: Optimisations
- Enhancements (317, 330, 325, 326, 334, 339, 340, 337, 338, 341, 342, 356)


Enhancements
- Added SR-SIM and SR-SIMc to benchmark script (317);
- Updated Github CI/CD (330, 339, 340);
- Added batch-wise computation of features for feature-based metrics (325);
- Updated Readme ([26d044e](https://github.com/photosynthesis-team/piq/commit/26d044e28231cd286b4a7e9e0e6c704d1ed39398), #326);
- Optimised CPU usage for FSIM and VSI (334, 342);
- Optimised setup (337);
- Added Conda deployment for generic environment (338);
- Added Manifest (356);

**Contributors:** pooya-mohammadi, zakajd, snk4tr, denproc.

**Full Changelog**: https://github.com/photosynthesis-team/piq/compare/v0.7.0...v0.7.1

0.7.0

PyTorch Image Quality (PIQ) v0.7.0 Release Notes
- Information Content Weighted Structural Similarity (IW-SSIM) index (301, 311);
- Pre-commit Hooks (293);
- Enhancements (273, 276, 286, 285, 290, 296, 292, 302, 305, 313, 270);

New Features
Information Content Weighted Structural Similarity (IW-SSIM) Index (301, 311)
The new release of PIQ introduces metric and loss function interfaces for the Information Content Weighted Structural Similarity (IW-SSIM) index. The PIQ implementation of IW-SSIM is standalone: it doesn't require any additional packages to compute the index and includes an optimised use of Laplacian pyramids. The implementation matches the reference SRCC and KRCC estimates on the [TID2013](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7442122) and [KADID10k](https://arxiv.org/pdf/2001.08113.pdf) datasets.

IW-SSIM was proposed by [Zhou Wang and Qiang Li](https://ece.uwaterloo.ca/~z70wang/publications/IWSSIM.pdf). IW-SSIM takes local information content into account for image quality assessment, estimating the content with advanced statistical models of natural images. Their experiments showed that information content weighting consistently improves IQA performance, with IW-SSIM achieving the best overall results.
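As a simplified sketch of the core idea (not PIQ's implementation, which operates over a Laplacian pyramid at several scales), the local SSIM map is pooled with information-content weights instead of a plain average:

```python
import numpy as np

def information_weighted_pool(ssim_map, info_weights):
    """Weighted pooling at the heart of IW-SSIM (a simplified, single-scale sketch).

    Plain SSIM averages its local quality map uniformly; IW-SSIM instead
    weights each location by its estimated information content, so busy,
    informative regions dominate the final score.
    """
    return float((ssim_map * info_weights).sum() / info_weights.sum())
```

With uniform weights this reduces exactly to the ordinary SSIM mean, which is why IW-SSIM can be seen as a generalisation of SSIM pooling.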


Pre-Commit Hooks (293)
With this release, pre-commit hooks were added to the project. Pre-commit hooks enable automatic validation of code before it is committed to the project. This approach automates code style checks locally and reduces the number of commits needed to fix code style issues. To enable the feature on your machine, please follow the [contribution guide](https://piq.readthedocs.io/en/latest/contributing.html).
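A minimal, hypothetical `.pre-commit-config.yaml` might look like the following; the hooks the project actually uses are defined in the repository and described in the contribution guide:

```yaml
# Hypothetical example config; actual hooks and versions may differ.
repos:
  - repo: https://github.com/pycqa/flake8
    rev: 6.1.0
    hooks:
      - id: flake8
  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v1.8.0
    hooks:
      - id: mypy
```

After installing the `pre-commit` package and running `pre-commit install`, these checks run automatically on every `git commit`.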


Enhancements
- Updated library version parsers to meet `semver` and `PEP404` formats (273, 290);
- Updates for `github/actions` (296);
- Upgrade for variable creation depending on input type (292);
- Upgrade Benchmarking for metrics delivered with PIQ package (270);
- Update README file (270);

Bug Fix
- Fix for documentation description of feature-based metrics (276);
- Fix for MDSI loss computation (286);
- Fix for Precision and Recall metrics (285);
- Fix for PSNR description (302);
- Fix exceptions for some metrics (305);
- Fix BRISQUE interpolation to match MATLAB resize (313);

**Contributors:** merunes-goldman, zakajd, snk4tr, denproc.

0.6.0

PyTorch Image Quality (PIQ) v0.6.0 Release Notes
- Spectral Residual based Similarity (SR-SIM, SR-SIMc) Metric (202)
- DCT Subbands Similarity (DSS) Metric (225, 268)
- Benchmark on PIPAL dataset (269)

New Features
Spectral Residual based Similarity (202)
With the current release, we added the Spectral Residual based Similarity (SR-SIM) measure. The metric was [introduced](https://sse.tongji.edu.cn/linzhang/ICIP12/ICIP-SR-SIM.pdf) based on a specific visual saliency model: spectral residual visual saliency.
In addition, we also implemented SR-SIMc, a chromatic version of SR-SIM.
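The spectral residual saliency model underlying SR-SIM can be sketched as follows. This is a simplified single-channel illustration (the function name and the 3×3 smoothing choice are assumptions), not PIQ's implementation:

```python
import numpy as np

def spectral_residual_saliency(image):
    """Spectral residual visual saliency map (the saliency model behind SR-SIM).

    Sketch of the spectral residual construction for a single-channel float
    image; SR-SIM combines such saliency maps with gradient similarity.
    """
    fft = np.fft.fft2(image)
    log_amp = np.log(np.abs(fft) + 1e-8)
    phase = np.angle(fft)

    # Spectral residual: log-amplitude minus its local (3x3 box) average.
    padded = np.pad(log_amp, 1, mode="wrap")
    local_avg = sum(
        padded[i:i + log_amp.shape[0], j:j + log_amp.shape[1]]
        for i in range(3) for j in range(3)
    ) / 9.0
    residual = log_amp - local_avg

    # Back to the spatial domain; squared magnitude gives the saliency map.
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return saliency
```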

DCT Subbands Similarity (DSS) (225, 268)
DCT Subbands Similarity (DSS) is a [recently presented](http://sipl.eelabs.technion.ac.il/wp-content/uploads/sites/6/2016/09/paper15-Image-Quality-Assessment-Based-on-DCT-Subband-Similarity.pdf) visual quality metric that correlates well with human visual perception. The measure evaluates changes in structural information across sub-bands in the DCT domain, exploiting properties of the human visual system. DSS showed great results in public image dataset benchmarks while being computationally efficient.
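To illustrate the ingredients, the sketch below (hypothetical helper names, not PIQ's implementation) computes an 8×8 block DCT and a DSS-style similarity between corresponding subbands through the variance of their coefficients:

```python
import numpy as np

def dct2_blocks(image, block=8):
    """8x8 block DCT-II of a single-channel image (sketch; sides divisible by 8)."""
    n = block
    k = np.arange(n)
    # Orthonormal DCT-II matrix: rows index frequency, columns index samples.
    D = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    D[0] /= np.sqrt(2.0)
    h, w = image.shape
    blocks = image.reshape(h // n, n, w // n, n).transpose(0, 2, 1, 3)
    return D @ blocks @ D.T  # DCT applied to rows and columns of each block

def subband_similarity(x, y, c=1e-3):
    """DSS-style similarity between one DCT subband of two images,
    compared through the variance of the subband coefficients."""
    vx, vy = x.var(), y.var()
    return (2 * np.sqrt(vx * vy) + c) / (vx + vy + c)
```

The full metric aggregates such per-subband similarities with perceptual weights; identical subbands yield a similarity of 1.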

PIPAL Benchmark (269)
In this release we added another public image dataset benchmark. [PIPAL](https://www.jasongt.com/projectpages/pipal.html) is the largest human-rated set of images to date and the only one containing a rich variety of realistic distortions produced by GAN models. Benchmarking metric performance on this set gives a good estimate of their usefulness for evaluating GANs.
Benchmark results are available in [README.rst](https://github.com/photosynthesis-team/piq#benchmark) and the [documentation](https://piq.readthedocs.io/en/latest/overview.html#benchmark).

Bug Fix
- Fixed readme formatting on [pypi](https://pypi.org/project/piq/) (#263);
- Added type check for `ContentLoss` before copying weights tensor (264);
- Fixed bug with layers/weighs length in the `ContentLoss` (259);

**Contributors:** zakajd, snk4tr, denproc, leihuayi.

0.5.5

PyTorch Image Quality (PIQ) v0.5.5 Release Notes
- Precision-Recall (P&R) Metric (247)
- Documentation (217)
- Enhancements (211, 219, 220, 229, 230, 233, 234)
- Bug Fix (213, 237, 244, 238, 246, 250, 243)

New Features
Precision-Recall (247)
In this release, we added the new Precision-Recall metric. The metric was [introduced](https://arxiv.org/pdf/1904.06991.pdf) for assessing generative models, i.e. estimating the quality and coverage of the generated samples. It can separately and reliably measure both of these aspects in image generation tasks by forming explicit, non-parametric representations of the manifolds of real and generated data.
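The k-nearest-neighbour manifold estimate behind the metric can be sketched as follows. This is a brute-force illustration with assumed names, not PIQ's implementation:

```python
import numpy as np

def knn_radii(feats, k=3):
    """Distance from each point to its k-th nearest neighbour (excluding itself)."""
    d = np.linalg.norm(feats[:, None] - feats[None, :], axis=-1)
    return np.sort(d, axis=1)[:, k]  # column 0 is the zero self-distance

def precision_recall(real, fake, k=3):
    """Non-parametric precision/recall between two feature sets: each
    manifold is approximated by the union of k-NN balls around its samples."""
    r_real, r_fake = knn_radii(real, k), knn_radii(fake, k)
    d = np.linalg.norm(fake[:, None] - real[None, :], axis=-1)
    # Precision: fraction of fake samples inside some real sample's k-NN ball.
    precision = (d <= r_real[None, :]).any(axis=1).mean()
    # Recall: fraction of real samples inside some fake sample's k-NN ball.
    recall = (d.T <= r_fake[None, :]).any(axis=1).mean()
    return precision, recall
```

High precision indicates realistic samples (quality); high recall indicates the generator covers the real data manifold (coverage).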

Documentation (217)
With this release we introduce [piq.readthedocs.io](https://piq.readthedocs.io/en/latest/) documentation to provide assistance in using our library. Installation and usage guides help you start using the [PIQ](https://github.com/photosynthesis-team/piq) framework in your projects. The documentation includes relevant information about metric interfaces and metric-specific differences. To keep the documentation up-to-date, we use an automatic pipeline for generation and deployment to [piq.readthedocs.io](https://piq.readthedocs.io/en/latest/). In addition, the pipeline updated in #233 allows `README.rst` and the documentation to be updated simultaneously.

Enhancements
- Updated README with benchmarking results and references (211);
- Updated PR template with check list for new metrics (219);
- Added BibTex Citation (220);
- Added guide for documentation upgrades (229);
- Unified tensor names for all metrics, measures and losses (230);
- Upgraded documentation pipeline to keep both documentation and `README.rst` up-to-date (233);
- Upgraded input validation to simplify the interface and allow advanced users to turn off validation (234);


Bug Fix
- Fixed missing average pooling in SSIM, implementation enhancements (213);
- Removed rarely used library dependencies from `requirements.txt` (237);
- Added exceptional import to MSID due to updated `requirements.txt` (244);
- Added downsampling to DISTS boosting the performance (238);
- Fixed use of FFT due to interface changes introduced in `torch==1.8.1` (246);
- Fixed the condition for FFT due to interface changes introduced in `torch==1.8.1` (250);
- Unified documentation style across all docstrings (243);

**Contributors:** zakajd, snk4tr, denproc, hecoding.

0.5.4

PyTorch Image Quality (PIQ) v0.5.4 Release Notes
New metric, readme update, usability enhancements and bug fixes.

New Features
New metric - PieAPP (184)

Documentation
- README update and small readme-related interface changes (204, 206)

Enhancements
- Unified tensor descriptions (172) bring more consistency between different metrics and measures
- More flexibility with respect to input data with a new `allow_negative` flag for some metrics, which support negative inputs (169)
- MyPy tests, bringing more reliability, are now part of our CI pipeline (180)
- More efficient computations of FID (no additional CPU load while computed on GPU) (186)
- New tests for correct `data_range` for virtually all present metrics (195)
- Corrections in the `fid_inception` in case of `normalize_mode=False` (191)
- Bug fixes in FSIMc (200) and VIFp (210)

**Contributors:** zakajd, snk4tr, akamaus, bes-dev, denproc.

