Albumentations


1.4.11

- Support our work
- Transforms
- Core functionality
- Deprecations
- Improvements and bug fixes

Support Our Work
1. Love the library? You can contribute to its development by becoming a [sponsor for the library](https://github.com/sponsors/albumentations-team). Your support is invaluable, and every contribution makes a difference.
2. Haven't starred our repo yet? Show your support with a ⭐! It takes just [one click](https://github.com/albumentations-team/albumentations).
3. Got ideas or facing issues? We'd love to hear from you. Share your thoughts in our [issues](https://github.com/albumentations-team/albumentations/issues) or join the conversation on our [Discord server for Albumentations](https://discord.gg/AmMnDBdzYs)

Transforms
Added OverlayElements transform

Allows pasting a set of images and corresponding masks onto the target image.
It is not a full `CopyAndPaste`, as "masks", "bounding boxes", and "keypoints" are not yet supported, but it is a step in that direction.

[Example Notebook](https://github.com/albumentations-team/albumentations_examples/blob/main/notebooks/example_OverlayElements.ipynb)

![image](https://github.com/albumentations-team/albumentations/assets/5481618/2e0ebf10-fda4-430f-bfd2-b00d5825d9b2)
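The core idea can be sketched with plain NumPy (an illustrative sketch, not the library's API; `paste_overlay`, its arguments, and the offset convention are all hypothetical):

```python
import numpy as np

def paste_overlay(image, overlay_img, overlay_mask, x, y):
    # Paste overlay_img onto image at offset (x, y), but only where
    # overlay_mask is nonzero; pixels outside the mask stay untouched.
    h, w = overlay_img.shape[:2]
    region = image[y:y + h, x:x + w]
    region[overlay_mask > 0] = overlay_img[overlay_mask > 0]
    return image
```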

Affine
Added balanced sampling for `scale_limit`

From [FAQ](https://albumentations.ai/docs/faq/#how-to-perform-balanced-scaling):

The default scaling logic in `RandomScale`, `ShiftScaleRotate`, and `Affine` transformations is biased towards upscaling.

For example, if `scale_limit = (0.5, 2)`, a user might expect that the image will be scaled down in half of the cases and scaled up in the other half. However, in reality, the image will be scaled up in 75% of the cases and scaled down in only 25% of the cases. This is because the default behavior samples uniformly from the interval `[0.5, 2]`, and the interval `[0.5, 1]` is three times smaller than `[1, 2]`.

To achieve balanced scaling, you can use `Affine` with `balanced_scale=True`, which ensures that the probability of scaling up and scaling down is equal.

```python
import albumentations as A

balanced_scale_transform = A.Compose([A.Affine(scale=(0.5, 2), balanced_scale=True)])
```
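The balanced behavior can be sketched in a few lines (an illustrative sketch, not the library's implementation; `balanced_scale` is a hypothetical helper):

```python
import random

def balanced_scale(low, high):
    # Pick the downscale interval [low, 1] or the upscale interval [1, high]
    # with equal probability, then sample uniformly within the chosen one.
    if random.random() < 0.5:
        return random.uniform(low, 1.0)
    return random.uniform(1.0, high)
```

With `low=0.5, high=2`, roughly half of the samples fall below 1, unlike uniform sampling over `[0.5, 2]`, which lands below 1 only about 25% of the time.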


by ternaus

RandomSizedBBoxSafeCrop
Added support for keypoints

by ternaus

BBoxSafeRandomCrop

Added support for keypoints

by ternaus


RandomToneCurve

1. Can now sample noise per channel
2. Works with any number of channels
3. Works not just with `uint8` but also with `float32` images

by zakajd

ISONoise
1. Fixed a bug where the transform returned zeros
2. Now works not just with `uint8` but also with `float32` images

by ternaus

Core
Added `strict` parameter to Compose

If `strict=True`, only expected targets may be passed.
If `strict=False`, the user can pass data with extra keys; such data will not be affected by the transforms.

The request came from users who use pipelines of the form:
```python
transform = A.Compose([...])

data = transform(**data)
```
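A minimal pure-Python sketch of the key-filtering logic (not the actual `Compose` internals; `check_targets` is hypothetical):

```python
def check_targets(data, expected_keys, strict):
    # With strict=True, unexpected keys raise an error; with strict=False,
    # extra keys are passed through and simply not transformed.
    extra = set(data) - set(expected_keys)
    if strict and extra:
        raise ValueError(f"Unexpected keys in data: {sorted(extra)}")
    return data
```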


by ayasyrev

Refactoring
The crop module was heavily refactored; all tests and checks pass, but we will see.


Deprecations
Grid Dropout

Old way:
```python
GridDropout(
    holes_number_x=XXX,
    holes_number_y=YYY,
    unit_size_min=ZZZ,
    unit_size_max=PPP,
)
```

New way:

```python
GridDropout(
    holes_number_xy=(XXX, YYY),
    unit_size_range=(ZZZ, PPP),
)
```


by ternaus

RandomSunFlare
Old way:

```python
RandomSunFlare(
    num_flare_circles_lower=XXX,
    num_flare_circles_upper=YYY,
)
```

New way:
```python
RandomSunFlare(num_flare_circles_range=(XXX, YYY))
```


Bugfixes
- Bugfix in `ISONoise`, which returned zeros. by ternaus
- Bugfix in `Affine`: during rotation, the image, mask, and keypoints shared one rotation center while bounding boxes used another, so two separate affine matrices are now created. by ternaus
- Small fix in an error message by philipp-fischer
- Bugfix affecting many transforms where users passed the probability positionally rather than as `p=number`. For `VerticalFlip(0.5)` you might expect a 50% chance, but 0.5 was assigned not to `p` but to `always_apply`, which meant that the transform was always applied. by ayasyrev
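The pitfall is easy to reproduce with a simplified constructor (a sketch; it assumes the old base-class signature was `(always_apply=False, p=0.5)`):

```python
class FakeTransform:
    # Simplified stand-in for a transform whose constructor takes
    # (always_apply=False, p=0.5), as the old Albumentations base class did.
    def __init__(self, always_apply=False, p=0.5):
        self.always_apply = always_apply
        self.p = p

wrong = FakeTransform(0.5)    # 0.5 binds to always_apply (truthy) -> always applied
right = FakeTransform(p=0.5)  # 50% chance, as intended
```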

1.4.10

Hotfix release that addresses issues introduced in 1.4.9

There were two issues in GaussNoise that this release addresses:

- The default value of `noise_scale_factor` was 0.5, which differed from the behavior before version 1.4.9. The default is now 1, which means random noise is created for every pixel independently.
- Noise was truncated before being added to the image, so that `gauss >= 0`. Fixed.
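For intuition, zero-mean Gaussian noise should darken pixels as often as it brightens them; the truncated version could only brighten (a quick NumPy check, unrelated to the library's internals):

```python
import numpy as np

rng = np.random.default_rng(0)
gauss = rng.normal(0, 10, size=100_000)

# Untruncated noise has both signs in roughly equal proportion...
frac_negative = (gauss < 0).mean()

# ...while the buggy truncation (gauss >= 0) is never negative,
# so it can only ever brighten the image.
truncated = np.clip(gauss, 0, None)
```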

1.4.9

- Support our work
- New transforms
- Integrations
- Speedups
- Deprecations
- Improvements and bug fixes

Support Our Work
1. Love the library? You can contribute to its development by becoming a [sponsor for the library](https://github.com/sponsors/albumentations-team). Your support is invaluable, and every contribution makes a difference.
2. Haven't starred our repo yet? Show your support with a ⭐! It's just [only one mouse click](https://github.com/albumentations-team/albumentations).
3. Got ideas or facing issues? We'd love to hear from you. Share your thoughts in our [issues](https://github.com/albumentations-team/albumentations/issues) or join the conversation on our [Discord server for Albumentations](https://discord.gg/AmMnDBdzYs)

Transforms

PlanckianJitter

New transform, based on

- Paper: [https://arxiv.org/abs/2202.07993](https://arxiv.org/abs/2202.07993)
- Repo: [https://github.com/TheZino/PlanckianJitter](https://github.com/TheZino/PlanckianJitter)

<img width="634" alt="Screenshot 2024-06-17 at 17 53 00" src="https://github.com/albumentations-team/albumentations/assets/5481618/d042299a-3fcd-47e2-a2f8-c023646659d1">

Statements from the paper on why [PlanckianJitter](https://albumentations.ai/docs/api_reference/full_reference/#albumentations.augmentations.transforms.PlanckianJitter) is superior to [ColorJitter](https://albumentations.ai/docs/api_reference/full_reference/#albumentations.augmentations.transforms.ColorJitter):

1. **Realistic Color Variations:** [PlanckianJitter](https://albumentations.ai/docs/api_reference/full_reference/#albumentations.augmentations.transforms.PlanckianJitter) applies physically realistic illuminant variations based on Planck’s Law for black-body radiation. This leads to more natural and realistic variations in chromaticity compared to the arbitrary changes in hue, saturation, brightness, and contrast applied by ColorJitter​​.

2. **Improved Representation for Color-Sensitive Tasks:** The transformations in [PlanckianJitter](https://albumentations.ai/docs/api_reference/full_reference/#albumentations.augmentations.transforms.PlanckianJitter) maintain the ability to discriminate image content based on color information, making it particularly beneficial for tasks where color is a crucial feature, such as classifying natural objects like birds or flowers. [ColorJitter](https://albumentations.ai/docs/api_reference/full_reference/#albumentations.augmentations.transforms.ColorJitter), on the other hand, can significantly alter colors, potentially degrading the quality of learned color features​​.

3. **Robustness to Illumination Changes:** [PlanckianJitter](https://albumentations.ai/docs/api_reference/full_reference/#albumentations.augmentations.transforms.PlanckianJitter) produces models that are robust to illumination changes commonly observed in real-world images. This robustness is advantageous for applications where lighting conditions can vary widely​​.

4. **Enhanced Color Sensitivity:** Models trained with [PlanckianJitter](https://albumentations.ai/docs/api_reference/full_reference/#albumentations.augmentations.transforms.PlanckianJitter) show a higher number of color-sensitive neurons, indicating that these models retain more color information compared to those trained with [ColorJitter](https://albumentations.ai/docs/api_reference/full_reference/#albumentations.augmentations.transforms.ColorJitter), which tends to induce color invariance​​.

by zakajd

GaussNoise
Added option to approximate [GaussNoise](https://albumentations.ai/docs/api_reference/full_reference/#albumentations.augmentations.transforms.GaussNoise).

Generating random noise for large images is slow.

Added a scaling factor for noise generation. The value should be in the range `(0, 1]`. When set to 1, noise is sampled for each pixel independently. If less than 1, noise is sampled at a smaller size and resized to fit the shape of the image. Smaller values make the transform much faster. Default: 0.5
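The idea can be sketched as follows (illustrative only; `approx_gauss_noise` is hypothetical and uses nearest-neighbor resizing to stay dependency-free, whereas the library interpolates):

```python
import numpy as np

def approx_gauss_noise(shape, sigma, scale_factor=0.5, seed=None):
    # Sample noise on a grid shrunk by scale_factor, then resize it back to
    # the full image shape. scale_factor=1 samples every pixel independently.
    rng = np.random.default_rng(seed)
    h, w = shape
    sh = max(1, int(h * scale_factor))
    sw = max(1, int(w * scale_factor))
    small = rng.normal(0.0, sigma, size=(sh, sw))
    rows = np.arange(h) * sh // h  # nearest-neighbor row indices
    cols = np.arange(w) * sw // w
    return small[np.ix_(rows, cols)]
```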

Integrations

Added integration with the Hugging Face Hub. Now you can save an augmentation pipeline to the Hub and load it back later, reuse it, or share it with others.

[Notebook with documentation](https://albumentations.ai/docs/examples/example_hfhub/)

```python
import albumentations as A

transform = A.Compose([
    A.RandomCrop(256, 256),
    A.HorizontalFlip(),
    A.RandomBrightnessContrast(),
    A.RGBShift(),
    A.Normalize(),
])

evaluation_transform = A.Compose([
    A.PadIfNeeded(256, 256),
    A.Normalize(),
])

# Saves the transform to the directory "qubvel-hf/albu"
# with the filename "albumentations_config_train.json"
transform.save_pretrained("qubvel-hf/albu", key="train")

# Same as above, and also pushes the transform to the Hub repository "qubvel-hf/albu"
transform.save_pretrained("qubvel-hf/albu", key="train", push_to_hub=True)

# Pushes the transform to the Hub repository "qubvel-hf/albu" without saving it locally
transform.push_to_hub("qubvel-hf/albu", key="train")

# Loads the transform from the local folder if it exists,
# otherwise from the Hub repository "qubvel-hf/albu"
loaded_transform = A.Compose.from_pretrained("qubvel-hf/albu", key="train")

# Saves the evaluation transform with the filename "albumentations_config_eval.json"
# and pushes it to the Hub
evaluation_transform.save_pretrained("qubvel-hf/albu", key="eval", push_to_hub=True)

# Loads the evaluation transform from the Hub repository "qubvel-hf/albu"
loaded_evaluation_transform = A.Compose.from_pretrained("qubvel-hf/albu", key="eval")
```

by qubvel

Speedups
These transforms should be faster for all image types, but speedups were measured only for three-channel `uint8` images.

- [RGBShift](https://albumentations.ai/docs/api_reference/full_reference/#albumentations.augmentations.transforms.RGBShift): **2X (+106%)**
- [GaussNoise](https://albumentations.ai/docs/api_reference/full_reference/#albumentations.augmentations.transforms.GaussNoise): **3.3X (+ 236%)**

[Full updated benchmark](https://albumentations.ai/docs/benchmarking_results/)

Deprecations

Deprecated `always_apply`

For years we had two parameters in constructors: `p` (probability) and `always_apply`. The interplay between them is not always obvious, and intuitively `always_apply=True` should be equivalent to `p=1`.

`always_apply` is deprecated now. `always_apply=True` still works, but it will be removed in the future. Use `p=1` instead.

by ayasyrev


RandomFog
Updated interface for [RandomFog](https://albumentations.ai/docs/api_reference/full_reference/#albumentations.augmentations.transforms.RandomFog)

Old way:
```python
RandomFog(fog_coef_lower=0.3, fog_coef_upper=1)
```

New way:
```python
RandomFog(fog_coef_range=(0.3, 1))
```


by ternaus

Improvements and bugfixes

Disable check for updates

When the Albumentations library is imported, it checks whether the installed version is the latest.

To disable this check, set the environment variable `NO_ALBUMENTATIONS_UPDATE` to `1`.
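For example, the variable can be set from Python before the import (assuming the check reads it at import time):

```python
import os

# Must be set before `import albumentations`, since the version check
# runs at import time.
os.environ["NO_ALBUMENTATIONS_UPDATE"] = "1"
```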

by lerignoux

Fix for deprecation warnings
For a set of transforms we were throwing deprecation warnings even when the modern version of the interface was used. Fixed. by ternaus

Albucore


We moved low-level operations like add, multiply, normalize, etc. to a separate library: https://github.com/albumentations-team/albucore

There are numerous ways to perform such operations in OpenCV and NumPy, and there is no clear winner; results depend on the image type.

A separate library gives us confidence that we picked the fastest version for each image type.
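As an example of why the fastest route depends on the image type: for `uint8` images a 256-entry lookup table can outperform elementwise multiplication, since every possible pixel value is precomputed (a sketch; `multiply_lut` is hypothetical, not the albucore API):

```python
import numpy as np

def multiply_lut(img, factor):
    # Precompute the result for all 256 possible uint8 values, then index.
    # For float images this trick is unavailable, so a different code path wins.
    lut = np.clip(np.arange(256) * factor, 0, 255).astype(np.uint8)
    return lut[img]
```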

by ternaus

Bugfixes

Various bug fixes by ayasyrev and immortalCO

1.4.8

- Support our work
- Documentation
- Deprecations
- Improvements and bug fixes

Support Our Work
1. Love the library? You can contribute to its development by becoming a [sponsor for the library](https://github.com/sponsors/albumentations-team). Your support is invaluable, and every contribution makes a difference.
2. Haven't starred our repo yet? Show your support with a ⭐! It takes just [one click](https://github.com/albumentations-team/albumentations).
3. Got ideas or facing issues? We'd love to hear from you. Share your thoughts in our [issues](https://github.com/albumentations-team/albumentations/issues) or join the conversation on our [Discord server for Albumentations](https://discord.gg/AmMnDBdzYs)

Documentation
Added links in the documentation to the UI on Hugging Face for exploring hyperparameters visually.

<div style="display: flex; justify-content: space-around; align-items: center;">
<img width="730" alt="Screenshot 2024-05-28 at 16 27 09" src="https://github.com/albumentations-team/albumentations/assets/5481618/525ca812-a2ad-46cb-9fb2-b89ec3a119a3">
<img width="885" alt="Screenshot 2024-05-28 at 16 28 03" src="https://github.com/albumentations-team/albumentations/assets/5481618/ff81c193-4355-4aee-962c-77459c8a1292">
</div>


Deprecations
RandomSnow
Updated interface:

Old way:

```python
transform = A.Compose([A.RandomSnow(
    snow_point_lower=0.1,
    snow_point_upper=0.3,
    p=0.5,
)])
```

New way:
```python
transform = A.Compose([A.RandomSnow(
    snow_point_range=(0.1, 0.3),
    p=0.5,
)])
```


by MarognaLorenzo

RandomRain
Old way:
```python
transform = A.Compose([A.RandomRain(
    slant_lower=-10,
    slant_upper=10,
    p=0.5,
)])
```

New way:
```python
transform = A.Compose([A.RandomRain(
    slant_range=(-10, 10),
    p=0.5,
)])
```


by MarognaLorenzo

Improvements
Created a library with core functions: [albucore](https://github.com/albumentations-team/albucore). Moved a few helper functions there.
We need this library to be sure that the core functions are:
1. At least as fast as `numpy` and `opencv`. For some functions it is possible to be faster than both of them.
2. Easier to debug.
3. Usable in other projects not related to Albumentations.

Bugfixes
- Bugfix in `check_for_updates`. The pipeline no longer throws an error when the update check fails, regardless of the reason.
- Bugfix in `RandomShadow`. No longer creates an unexpected purple color on bright white regions under the shadow overlay.
- Bugfix in `Compose`. `Compose([])` no longer throws an error and simply works as a `NoOp` by ayasyrev
- Bugfix in `min_max` normalization. Now returns 0 instead of NaN on constant images. by ternaus
- Bugfix in `CropAndPad`. Pad/crop values can now be sampled for all sides with an interface like `((-0.1, -0.2), (-0.2, -0.3), (0.3, 0.4), (0.4, 0.5))` by christian-steinmeyer
- Small refactoring to decrease tech debt by ternaus and ayasyrev
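The `min_max` fix can be illustrated with a small epsilon in the denominator (a sketch; an epsilon guard is one way to avoid the 0/0 division on constant images, not necessarily the library's exact fix):

```python
import numpy as np

def min_max_normalize(img, eps=1e-8):
    # For a constant image, max == min; the eps guard makes the result 0
    # instead of NaN from a 0/0 division.
    img = img.astype(np.float32)
    return (img - img.min()) / (img.max() - img.min() + eps)
```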

1.4.7

- Support our work
- Documentation
- Deprecations
- Improvements and bug fixes

Support Our Work
1. Love the library? You can contribute to its development by becoming a [sponsor for the library](https://github.com/sponsors/albumentations-team). Your support is invaluable, and every contribution makes a difference.
2. Haven't starred our repo yet? Show your support with a ⭐! It takes just [one click](https://github.com/albumentations-team/albumentations).
3. Got ideas or facing issues? We'd love to hear from you. Share your thoughts in our [issues](https://github.com/albumentations-team/albumentations/issues) or join the conversation on our [Discord server for Albumentations](https://discord.gg/AmMnDBdzYs)

Documentation

- Added a [website tutorial](https://albumentations.ai/docs/integrations/huggingface/object_detection/) on how to use Albumentations with Hugging Face for object detection. Based on the [tutorial](https://huggingface.co/docs/transformers/main/en/tasks/object_detection) by qubvel

Deprecations
ImageCompression

Old way:

```python
transform = A.Compose([A.ImageCompression(
    quality_lower=75,
    quality_upper=100,
    p=0.5,
)])
```

New way:

```python
transform = A.Compose([A.ImageCompression(
    quality_range=(75, 100),
    p=0.5,
)])
```

by MarognaLorenzo

Downscale

Old way:
```python
transform = A.Compose([A.Downscale(
    scale_min=0.25,
    scale_max=1,
    interpolation={"downscale": cv2.INTER_AREA, "upscale": cv2.INTER_CUBIC},
    p=0.5,
)])
```

New way:
```python
transform = A.Compose([A.Downscale(
    scale_range=(0.25, 1),
    interpolation_pair={"downscale": cv2.INTER_AREA, "upscale": cv2.INTER_CUBIC},
    p=0.5,
)])
```

Both ways currently work and produce the same result, but the old interface will be removed in a later release.

by ternaus

Improvements
- Bugfix in `Blur`.
- Bugfix in bbox clipping; it may be unintuitive, but boxes should be clipped to `height, width` and not `height - 1, width - 1` by ternaus
- `Compose` now accepts only the keys that are required; any extra, unnecessary key raises an error by ayasyrev
- In `PadIfNeeded`, if the `value` parameter is not None but the border mode is a reflection mode, the border mode is changed to `cv2.BORDER_CONSTANT` by ternaus
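The clipping convention from the bbox fix can be sketched as follows (illustrative; `clip_bboxes` is hypothetical and assumes pixel-coordinate `[x_min, y_min, x_max, y_max]` boxes):

```python
import numpy as np

def clip_bboxes(bboxes, height, width):
    # Clip to the full extent (width, height): a box touching the right edge
    # of a 100px-wide image ends at x = 100, not x = 99.
    boxes = np.asarray(bboxes, dtype=float)
    boxes[:, [0, 2]] = boxes[:, [0, 2]].clip(0, width)
    boxes[:, [1, 3]] = boxes[:, [1, 3]].clip(0, height)
    return boxes
```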

1.4.6

This is an out-of-schedule release with a bugfix for an issue introduced in version 1.4.5

In version 1.4.5 there was a bug that went unnoticed: if you used a pipeline that consisted only of `ImageOnly` transforms but passed bounding boxes into it, you would get an error.

If such a pipeline had at least one non-`ImageOnly` transform, say `HorizontalFlip` or `Crop`, everything worked as expected.

We fixed the issue and added tests to make sure it will not happen in the future.
