Albumentations


1.4.9

- Support our work
- New transforms
- Integrations
- Speedups
- Deprecations
- Improvements and bug fixes

Support Our Work
1. Love the library? You can contribute to its development by becoming a [sponsor for the library](https://github.com/sponsors/albumentations-team). Your support is invaluable, and every contribution makes a difference.
2. Haven't starred our repo yet? Show your support with a ⭐! It's just [one click away](https://github.com/albumentations-team/albumentations).
3. Got ideas or facing issues? We'd love to hear from you. Share your thoughts in our [issues](https://github.com/albumentations-team/albumentations/issues) or join the conversation on our [Discord server for Albumentations](https://discord.gg/AmMnDBdzYs)

Transforms

PlanckianJitter

A new transform, based on:

- Paper: [https://arxiv.org/abs/2202.07993](https://arxiv.org/abs/2202.07993)
- Repo: [https://github.com/TheZino/PlanckianJitter](https://github.com/TheZino/PlanckianJitter)

<img width="634" alt="Screenshot 2024-06-17 at 17 53 00" src="https://github.com/albumentations-team/albumentations/assets/5481618/d042299a-3fcd-47e2-a2f8-c023646659d1">

Statements from the paper on why [PlanckianJitter](https://albumentations.ai/docs/api_reference/full_reference/#albumentations.augmentations.transforms.PlanckianJitter) is superior to [ColorJitter](https://albumentations.ai/docs/api_reference/full_reference/#albumentations.augmentations.transforms.ColorJitter):

1. **Realistic Color Variations:** [PlanckianJitter](https://albumentations.ai/docs/api_reference/full_reference/#albumentations.augmentations.transforms.PlanckianJitter) applies physically realistic illuminant variations based on Planck’s Law for black-body radiation. This leads to more natural and realistic variations in chromaticity compared to the arbitrary changes in hue, saturation, brightness, and contrast applied by ColorJitter​​.

2. **Improved Representation for Color-Sensitive Tasks:** The transformations in [PlanckianJitter](https://albumentations.ai/docs/api_reference/full_reference/#albumentations.augmentations.transforms.PlanckianJitter) maintain the ability to discriminate image content based on color information, making it particularly beneficial for tasks where color is a crucial feature, such as classifying natural objects like birds or flowers. [ColorJitter](https://albumentations.ai/docs/api_reference/full_reference/#albumentations.augmentations.transforms.ColorJitter), on the other hand, can significantly alter colors, potentially degrading the quality of learned color features​​.

3. **Robustness to Illumination Changes:** [PlanckianJitter](https://albumentations.ai/docs/api_reference/full_reference/#albumentations.augmentations.transforms.PlanckianJitter) produces models that are robust to illumination changes commonly observed in real-world images. This robustness is advantageous for applications where lighting conditions can vary widely​​.

4. **Enhanced Color Sensitivity:** Models trained with [PlanckianJitter](https://albumentations.ai/docs/api_reference/full_reference/#albumentations.augmentations.transforms.PlanckianJitter) show a higher number of color-sensitive neurons, indicating that these models retain more color information compared to those trained with [ColorJitter](https://albumentations.ai/docs/api_reference/full_reference/#albumentations.augmentations.transforms.ColorJitter), which tends to induce color invariance​​.

by zakajd

GaussNoise
Added an option to approximate [GaussNoise](https://albumentations.ai/docs/api_reference/full_reference/#albumentations.augmentations.transforms.GaussNoise).

Generating random noise for large images is slow.

We added a scaling factor for noise generation. The value should be in the range `(0, 1]`. When set to `1`, noise is sampled independently for each pixel. With smaller values, noise is sampled at a smaller size and resized to fit the shape of the image, which makes the transform much faster. Default: `0.5`
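The idea behind the approximation can be sketched in plain NumPy (an illustrative sketch, not the library's internal code; the function and parameter names here are made up for the example):

```python
import numpy as np

def approx_gauss_noise(shape, sigma=25.0, scale=0.5, rng=None):
    """Sample noise on a smaller grid and upscale it to `shape` (nearest neighbor)."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = shape
    small_h = max(1, int(h * scale))
    small_w = max(1, int(w * scale))
    noise = rng.normal(0.0, sigma, size=(small_h, small_w))
    # Map every output pixel back to its source pixel in the small grid
    rows = np.arange(h) * small_h // h
    cols = np.arange(w) * small_w // w
    return noise[np.ix_(rows, cols)]

full = approx_gauss_noise((512, 512), scale=1.0)   # one sample per pixel
fast = approx_gauss_noise((512, 512), scale=0.25)  # ~16x fewer random samples
```

Sampling at a quarter of the resolution draws 16x fewer random numbers, which is where the speedup comes from; the upscaled noise is blockier but visually close for typical sigma values.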

Integrations

Added integration with the HuggingFace Hub. Now you can save an augmentation pipeline to HuggingFace and load it later to reuse it or share it with others.

[Notebook with documentation](https://albumentations.ai/docs/examples/example_hfhub/)

```python
import albumentations as A
import numpy as np

transform = A.Compose([
    A.RandomCrop(256, 256),
    A.HorizontalFlip(),
    A.RandomBrightnessContrast(),
    A.RGBShift(),
    A.Normalize(),
])

evaluation_transform = A.Compose([
    A.PadIfNeeded(256, 256),
    A.Normalize(),
])

# Save the transform to the directory "qubvel-hf/albu"
# with the filename "albumentations_config_train.json"
transform.save_pretrained("qubvel-hf/albu", key="train")

# Save the transform locally as above, and also push it
# to the Hub repository "qubvel-hf/albu"
transform.save_pretrained("qubvel-hf/albu", key="train", push_to_hub=True)

# Push the transform to the Hub repository "qubvel-hf/albu"
# without saving it locally
transform.push_to_hub("qubvel-hf/albu", key="train")

# Load the transform from the local folder if it exists,
# otherwise from the Hub repository "qubvel-hf/albu"
loaded_transform = A.Compose.from_pretrained("qubvel-hf/albu", key="train")

# Save the evaluation transform with the filename
# "albumentations_config_eval.json" and push it to the Hub
evaluation_transform.save_pretrained("qubvel-hf/albu", key="eval", push_to_hub=True)

# Load the evaluation transform from the Hub repository "qubvel-hf/albu"
loaded_evaluation_transform = A.Compose.from_pretrained("qubvel-hf/albu", key="eval")
```

by qubvel

Speedups
These transforms should be faster for all image types, but the speedups were measured only for three-channel `uint8` images:

- [RGBShift](https://albumentations.ai/docs/api_reference/full_reference/#albumentations.augmentations.transforms.RGBShift): **2X (+106%)**
- [GaussNoise](https://albumentations.ai/docs/api_reference/full_reference/#albumentations.augmentations.transforms.GaussNoise): **3.3X (+236%)**

[Full updated benchmark](https://albumentations.ai/docs/benchmarking_results/)

Deprecations

Deprecated `always_apply`

For years, transform constructors had two parameters: `p` (probability) and `always_apply`. The interplay between them is not always obvious, and intuitively `always_apply=True` should be equivalent to `p=1`.

`always_apply` is now deprecated. `always_apply=True` still works, but it will be removed in the future. Use `p=1` instead.

by ayasyrev


RandomFog
Updated interface for [RandomFog](https://albumentations.ai/docs/api_reference/full_reference/#albumentations.augmentations.transforms.RandomFog)

Old way:
```python
RandomFog(fog_coef_lower=0.3, fog_coef_upper=1)
```

New way:

```python
RandomFog(fog_coef_range=(0.3, 1))
```


by ternaus

Improvements and bugfixes

Disable check for updates

When you import the Albumentations library, it checks whether the installed version is the latest one.

To disable this check, set the environment variable `NO_ALBUMENTATIONS_UPDATE` to `1`.

by lerignoux

Fix for deprecation warnings
For a set of transforms, we were throwing deprecation warnings even when the modern version of the interface was used. Fixed by ternaus.

Albucore


We moved low-level operations such as add, multiply, and normalize to a separate library: https://github.com/albumentations-team/albucore

There are numerous ways to perform such operations in OpenCV and NumPy, and there is no clear winner: the results depend on the image type.

A separate library gives us confidence that we picked the fastest version for every image type.

by ternaus

Bugfixes

Various bugfixes by ayasyrev and immortalCO

1.4.8

- Support our work
- Documentation
- Deprecations
- Improvements and bug fixes

Support Our Work
1. Love the library? You can contribute to its development by becoming a [sponsor for the library](https://github.com/sponsors/albumentations-team). Your support is invaluable, and every contribution makes a difference.
2. Haven't starred our repo yet? Show your support with a ⭐! It's just [one click away](https://github.com/albumentations-team/albumentations).
3. Got ideas or facing issues? We'd love to hear from you. Share your thoughts in our [issues](https://github.com/albumentations-team/albumentations/issues) or join the conversation on our [Discord server for Albumentations](https://discord.gg/AmMnDBdzYs)

Documentation
Added links in the documentation to the HuggingFace UI for exploring hyperparameters visually.

<div style="display: flex; justify-content: space-around; align-items: center;">
<img width="730" alt="Screenshot 2024-05-28 at 16 27 09" src="https://github.com/albumentations-team/albumentations/assets/5481618/525ca812-a2ad-46cb-9fb2-b89ec3a119a3">
<img width="885" alt="Screenshot 2024-05-28 at 16 28 03" src="https://github.com/albumentations-team/albumentations/assets/5481618/ff81c193-4355-4aee-962c-77459c8a1292">
</div>


Deprecations
RandomSnow
Updated interface:

Old way:

```python
transform = A.Compose([A.RandomSnow(
    snow_point_lower=0.1,
    snow_point_upper=0.3,
    p=0.5,
)])
```

New way:

```python
transform = A.Compose([A.RandomSnow(
    snow_point_range=(0.1, 0.3),
    p=0.5,
)])
```


by MarognaLorenzo

RandomRain
Old way:

```python
transform = A.Compose([A.RandomRain(
    slant_lower=-10,
    slant_upper=10,
    p=0.5,
)])
```

New way:

```python
transform = A.Compose([A.RandomRain(
    slant_range=(-10, 10),
    p=0.5,
)])
```


by MarognaLorenzo

Improvements
Created a library with core functions, [albucore](https://github.com/albumentations-team/albucore), and moved a few helper functions there.
We need this library to make sure that the core functions:
1. Are at least as fast as `numpy` and `opencv`. For some functions it is possible to be faster than both of them.
2. Are easier to debug.
3. Can be used in other projects not related to Albumentations.

Bugfixes
- Bugfix in `check_for_updates`: the pipeline no longer throws an error if the update check fails, regardless of the reason.
- Bugfix in `RandomShadow`: no longer creates an unexpected purple color on bright white regions under the shadow overlay.
- Bugfix in `Compose`: `Compose([])` no longer throws an error and works as `NoOp`. by ayasyrev
- Bugfix in `min_max` normalization: now returns 0 instead of NaN on constant images. by ternaus
- Bugfix in `CropAndPad`: pad/crop values can now be sampled for all sides with an interface like `((-0.1, -0.2), (-0.2, -0.3), (0.3, 0.4), (0.4, 0.5))`. by christian-steinmeyer
- Small refactoring to decrease tech debt by ternaus and ayasyrev

1.4.7

- Support our work
- Documentation
- Deprecations
- Improvements and bug fixes

Support Our Work
1. Love the library? You can contribute to its development by becoming a [sponsor for the library](https://github.com/sponsors/albumentations-team). Your support is invaluable, and every contribution makes a difference.
2. Haven't starred our repo yet? Show your support with a ⭐! It's just [one click away](https://github.com/albumentations-team/albumentations).
3. Got ideas or facing issues? We'd love to hear from you. Share your thoughts in our [issues](https://github.com/albumentations-team/albumentations/issues) or join the conversation on our [Discord server for Albumentations](https://discord.gg/AmMnDBdzYs)

Documentation

- Added a [website tutorial](https://albumentations.ai/docs/integrations/huggingface/object_detection/) on how to use Albumentations with HuggingFace for object detection. Based on the [tutorial](https://huggingface.co/docs/transformers/main/en/tasks/object_detection) by qubvel

Deprecations
ImageCompression

Old way:

```python
transform = A.Compose([A.ImageCompression(
    quality_lower=75,
    quality_upper=100,
    p=0.5,
)])
```

New way:

```python
transform = A.Compose([A.ImageCompression(
    quality_range=(75, 100),
    p=0.5,
)])
```

by MarognaLorenzo

Downscale

Old way:
```python
transform = A.Compose([A.Downscale(
    scale_min=0.25,
    scale_max=1,
    interpolation={"downscale": cv2.INTER_AREA, "upscale": cv2.INTER_CUBIC},
    p=0.5,
)])
```

New way:

```python
transform = A.Compose([A.Downscale(
    scale_range=(0.25, 1),
    interpolation_pair={"downscale": cv2.INTER_AREA, "upscale": cv2.INTER_CUBIC},
    p=0.5,
)])
```


As of now both ways work and will provide the same result, but old functionality will be removed in later releases.

by ternaus

Improvements
- Bugfix in `Blur`.
- Bugfix in bbox clipping: it may not be intuitive, but boxes should be clipped to `height, width` and not `height - 1, width - 1`. by ternaus
- `Compose` now accepts only the keys that are required; any extra unnecessary key raises an error. by ayasyrev
- In `PadIfNeeded`, if the `value` parameter is not None but the border mode is a reflection, the border mode is changed to `cv2.BORDER_CONSTANT`. by ternaus

1.4.6

This is an out-of-schedule release with a fix for a bug introduced in version 1.4.5.

In version 1.4.5, a bug went unnoticed: if you used a pipeline that consisted only of `ImageOnly` transforms but passed bounding boxes into it, you would get an error.

If such a pipeline contained at least one non-`ImageOnly` transform, say `HorizontalFlip` or `Crop`, everything worked as expected.

We fixed the issue and added tests to make sure it does not happen again.

1.4.5

- Support our work
- Highlights
- Deprecations
- Improvements and bug fixes

Support Our Work
1. Love the library? You can contribute to its development by becoming a [sponsor for the library](https://github.com/sponsors/albumentations-team). Your support is invaluable, and every contribution makes a difference.
2. Haven't starred our repo yet? Show your support with a ⭐! It's just [one click away](https://github.com/albumentations-team/albumentations).
3. Got ideas or facing issues? We'd love to hear from you. Share your thoughts in our [issues](https://github.com/albumentations-team/albumentations/issues) or join the conversation on our [Discord server for Albumentations](https://discord.gg/AmMnDBdzYs)

Highlights

Bbox clipping

Before version 1.4.5, it was assumed that bounding boxes fed into the augmentation pipeline do not extend outside of the image.

We added an option to clip boxes to the image size before augmenting them. This makes the pipeline more robust to inaccurate labeling.

**Example:**

Will fail if boxes extend outside of the image:

```python
transform = A.Compose([
    A.HorizontalFlip(p=0.5)
], bbox_params=A.BboxParams(format='coco'))
```

Clipping bounding boxes to the image size:

```python
transform = A.Compose([
    A.HorizontalFlip(p=0.5)
], bbox_params=A.BboxParams(format='coco', clip=True))
```


by ternaus
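Conceptually, clipping `coco` boxes (`x, y, w, h`) amounts to the following (an illustrative NumPy sketch, not the library's implementation):

```python
import numpy as np

def clip_coco_boxes(boxes, height, width):
    """Clip (x, y, w, h) boxes to the image bounds; illustrative only."""
    boxes = np.asarray(boxes, dtype=float)
    x1 = np.clip(boxes[:, 0], 0, width)
    y1 = np.clip(boxes[:, 1], 0, height)
    x2 = np.clip(boxes[:, 0] + boxes[:, 2], 0, width)
    y2 = np.clip(boxes[:, 1] + boxes[:, 3], 0, height)
    return np.stack([x1, y1, x2 - x1, y2 - y1], axis=1)

# A box hanging over the left and bottom edges of a 100x100 image
clipped = clip_coco_boxes([[-5, 10, 20, 200]], height=100, width=100)
```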

SelectiveChannelTransform

Added [SelectiveChannelTransform](https://albumentations.ai/docs/api_reference/full_reference/?#albumentations.core.composition.SelectiveChannelTransform), which allows applying transforms to a selected set of channels.

For example, it can be helpful when working with multispectral images, where RGB is a subset of the overall multispectral stack, which is common in satellite imagery.

Example:

```python
aug = A.Compose(
    [
        A.HorizontalFlip(p=0.5),
        A.SelectiveChannelTransform(
            transforms=[A.ColorJitter(p=0.5), A.ChromaticAberration(p=0.5)],
            channels=[1, 2, 18],
            p=1,
        ),
    ],
)
```

Here `HorizontalFlip` is applied to the whole multispectral image, while the pipeline of `ColorJitter` and `ChromaticAberration` is applied only to channels `[1, 2, 18]`.

by ternaus

Deprecations

CoarseDropout

Old way:
```python
transform = A.Compose([A.CoarseDropout(
    min_holes=5,
    max_holes=8,
    min_width=3,
    max_width=12,
    min_height=4,
    max_height=5,
)])
```

New way:

```python
transform = A.Compose([A.CoarseDropout(
    num_holes_range=(5, 8),
    hole_width_range=(3, 12),
    hole_height_range=(4, 5),
)])
```


As of now both ways work and will provide the same result, but old functionality will be removed in later releases.

by ternaus

Improvements and bug fixes
- A number of fixes and speedups in the core of the library (`Compose` and `BasicTransform`) by ayasyrev
- Extended `Contributor's guide` by ternaus
- Can use `random` for `fill_value` in `CoarseDropout` by ternaus
- Fix in [ToGray](https://albumentations.ai/docs/api_reference/full_reference/?#albumentations.augmentations.transforms.ToGray) docstring by wilderrodrigues
- BugFix in [D4](https://albumentations.ai/docs/api_reference/full_reference/?#albumentations.augmentations.geometric.transforms.D4) - now works not only with square but with rectangular images as well. by ternaus
- BugFix in [RandomCropFromBorders](https://albumentations.ai/docs/api_reference/full_reference/?#albumentations.augmentations.crops.transforms.RandomCropFromBorders) by ternaus

1.4.4

- Support our work
- Highlights
- Transforms
- Improvements and bug fixes


Support Our Work
1. Love the library? You can contribute to its development by becoming a [sponsor for the library](https://github.com/sponsors/albumentations-team). Your support is invaluable, and every contribution makes a difference.
2. Haven't starred our repo yet? Show your support with a ⭐! It's just [one click away](https://github.com/albumentations-team/albumentations).
3. Got ideas or facing issues? We'd love to hear from you. Share your thoughts in our [issues](https://github.com/albumentations-team/albumentations/issues) or join the conversation on our [Discord server for Albumentations](https://discord.gg/AmMnDBdzYs)

Transforms
Added [**D4 transform**](https://albumentations.ai/docs/api_reference/full_reference/?h=d4#albumentations.augmentations.geometric.transforms.D4)

![image](https://github.com/albumentations-team/albumentations/assets/5481618/3ad12afa-991d-4b96-a976-066b59e8eea9)

Applies one of the eight possible D4 dihedral group transformations to a square-shaped input, maintaining the square shape. These transformations correspond to the symmetries of a square, including rotations and reflections. by ternaus

The D4 group transformations include:
- `e` (identity): No transformation is applied.
- `r90` (rotation by 90 degrees counterclockwise)
- `r180` (rotation by 180 degrees)
- `r270` (rotation by 270 degrees counterclockwise)
- `v` (reflection across the vertical midline)
- `hvt` (reflection across the anti-diagonal)
- `h` (reflection across the horizontal midline)
- `t` (reflection across the main diagonal)

Can be applied to:
- image
- mask
- bounding boxes
- key points

Does not generate interpolation artifacts as there is no interpolation.

Provides the most value in tasks where data is invariant to rotations and reflections like:
- Top view drone and satellite imagery
- Medical images

Example:

<img width="831" alt="Screenshot 2024-04-16 at 19 00 05" src="https://github.com/albumentations-team/albumentations/assets/5481618/141a778e-33d5-4804-8a96-167b9bcbe621">
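For intuition, the eight symmetries can be sketched with plain NumPy (an illustrative sketch only; `A.D4` additionally applies the same symmetry consistently to masks, boxes, and keypoints):

```python
import numpy as np

def d4_group(img):
    """Return the eight D4 symmetries of a 2D array."""
    rotations = [np.rot90(img, k) for k in range(4)]      # e, r90, r180, r270
    reflections = [np.rot90(img.T, k) for k in range(4)]  # transpose + rotations = the four reflections
    return rotations + reflections

variants = d4_group(np.arange(9).reshape(3, 3))
```

Since there is no interpolation in any of these operations, no interpolation artifacts can appear.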

Added new normalization options to the [Normalize](https://albumentations.ai/docs/api_reference/augmentations/transforms/?#albumentations.augmentations.transforms.Normalize) transform:

- `standard` - subtract a fixed `mean`, divide by a fixed `std`
- `image` - the same as `standard`, but `mean` and `std` are computed for each image independently
- `image_per_channel` - the same as `image`, but per channel
- `min_max` - subtract `min(image)` and divide by `max(image) - min(image)`
- `min_max_per_channel` - the same as `min_max`, but per channel

by ternaus
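As a sketch of the math behind the `min_max` modes (not the library's code):

```python
import numpy as np

img = np.arange(48, dtype=np.float32).reshape(4, 4, 3)

# min_max: one min/max over the whole image
mm = (img - img.min()) / (img.max() - img.min())

# min_max_per_channel: min/max computed independently per channel
ch_min = img.min(axis=(0, 1), keepdims=True)
ch_max = img.max(axis=(0, 1), keepdims=True)
mmpc = (img - ch_min) / (ch_max - ch_min)
```

Both results land in `[0, 1]`; the per-channel variant equalizes the range of every channel separately.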

Changes in the interface of [RandomShadow](https://albumentations.ai/docs/api_reference/full_reference/?#albumentations.augmentations.transforms.RandomShadow)

The new, preferred way is to use `num_shadows_limit` instead of `num_shadows_lower` / `num_shadows_upper`. by ayasyrev

Improvements and bug fixes

Added checks for input parameters to transforms with Pydantic.
Now all input parameters are validated and prepared with Pydantic. This prevents bugs where transforms were initialized without errors using parameters outside the allowed ranges.
by ternaus

Updates in [RandomGridShuffle](https://albumentations.ai/docs/api_reference/full_reference/#albumentations.augmentations.transforms.RandomGridShuffle)
1. Bugfix by ayasyrev
2. Transform updated to work even if the image side is not divisible by the number of tiles. by ternaus

Example:
![image](https://github.com/albumentations-team/albumentations/assets/5481618/fd89826f-a457-4b5f-bf84-3f8cdb7fc4ee)

New way to add additional targets
The standard way uses `additional_targets`:

```python
transform = A.Compose(
    transforms=[A.Rotate(limit=(90.0, 90.0), p=1.0)],
    keypoint_params=A.KeypointParams(
        angle_in_degrees=True,
        check_each_transform=True,
        format="xyas",
        label_fields=None,
        remove_invisible=False,
    ),
    additional_targets={"keypoints2": "keypoints"},
)
```

Now you can also add them using `add_targets`:

```python
transform = A.Compose(
    transforms=[A.Rotate(limit=(90.0, 90.0), p=1.0)],
    keypoint_params=A.KeypointParams(
        angle_in_degrees=True,
        check_each_transform=True,
        format="xyas",
        label_fields=None,
        remove_invisible=False,
    ),
)
transform.add_targets({"keypoints2": "keypoints"})
```

by ayasyrev

Small fixes

* Small speedup in the code for transforms that use `add_weighted` function by gogetron
* Fix in error message in [Affine transform](https://albumentations.ai/docs/api_reference/full_reference/?#albumentations.augmentations.geometric.transforms.Affine) by matsumotosan
* Bugfix in [Sequential](https://albumentations.ai/docs/api_reference/full_reference/?h=sequential#albumentations.core.composition.Sequential) by ayasyrev

Documentation
* Updated Contributor's guide. by ternaus
* Added [example notebook on how to apply D4](https://albumentations.ai/docs/examples/example_d4/) to images, masks, bounding boxes and key points. by ternaus
* Added [example notebook on how to apply RandomGridShuffle](https://albumentations.ai/docs/examples/example_gridshuffle/) to images, masks and keypoints. by ternaus
