Albumentations

Latest version: v2.0.4


1.4.23

* Support Our Work
* Core
* Transforms
* Bugfixes

Support Our Work
1. **Help Us Grow** - If you find value in Albumentations, consider [becoming a sponsor](https://github.com/sponsors/albumentations-team). Every contribution, no matter the size, helps us maintain and improve the library for everyone.
2. **Show Your Support** - If you enjoy using Albumentations, consider giving us a ⭐ on [GitHub](https://github.com/albumentations-team/albumentations). It helps others discover the library and motivates our team.
3. **Join Our Community** - Have suggestions or ran into issues? We welcome your input! Share your experience in our [GitHub issues](https://github.com/albumentations-team/albumentations/issues) or connect with us on [Discord](https://discord.gg/AmMnDBdzYs).

Core
Target `images` as numpy array
Now supports numpy arrays with shape `(num_images, height, width, num_channels)` or `(num_images, height, width)` as the `images` target in `Compose` (see the sketch below the list)
- Ideal for video processing applications
- Same transform applies to all images in the array
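
A minimal sketch of the batched `images` target; the flip transform below is purely illustrative:

```python
import numpy as np
import albumentations as A

transform = A.Compose([A.HorizontalFlip(p=0.5)])

# A batch of 10 RGB frames, e.g. from a video clip
frames = np.random.randint(0, 256, (10, 256, 256, 3), dtype=np.uint8)

# The same sampled parameters are applied to every frame in the batch
transformed_frames = transform(images=frames)["images"]
```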

New 3D Data Support

- **volume**: `(depth, height, width)` or `(depth, height, width, num_channels)`
- **mask3d:** `(depth, height, width)` or `(depth, height, width, num_channels)`
- **volumes:** `(num_volumes, depth, height, width)` for batch processing
- **masks3d:** `(num_volumes, depth, height, width)` for batch processing

```python
import numpy as np
import albumentations as A

# Any 3D-capable pipeline works here; RandomCrop3D is used purely for illustration
transform = A.Compose([A.RandomCrop3D(size=(64, 128, 128), p=1.0)])

volume = np.random.rand(96, 256, 256)  # Your 3D medical volume
mask = np.zeros((96, 256, 256))        # Your 3D segmentation mask
transformed = transform(volume=volume, mask3d=mask)
transformed_volume = transformed['volume']
transformed_mask = transformed['mask3d']
```


Transforms
Added 3D transforms by ternaus

Padding & Cropping

- **Pad3D**: Pad 3D volumes with flexible padding options
- **PadIfNeeded3D**: Conditional padding to meet minimum dimensions or divisibility requirements
- **CenterCrop3D**: Center cropping for 3D volumes
- **RandomCrop3D**: Random cropping of 3D volumes

```python
import numpy as np
import albumentations as A

transform = A.Compose([
    # Crop volume to a fixed size for memory efficiency
    A.RandomCrop3D(size=(64, 128, 128), p=1.0),
    # Randomly remove cubic regions to simulate occlusions
    A.CoarseDropout3D(
        num_holes_range=(2, 6),
        hole_depth_range=(0.1, 0.3),
        hole_height_range=(0.1, 0.3),
        hole_width_range=(0.1, 0.3),
        p=0.5,
    ),
])

volume = np.random.rand(96, 256, 256)  # Your 3D medical volume
mask = np.zeros((96, 256, 256))        # Your 3D segmentation mask
transformed = transform(volume=volume, mask3d=mask)
transformed_volume = transformed['volume']
transformed_mask = transformed['mask3d']
```


Augmentation

- **CoarseDropout3D**: Random cuboid dropout regions for occlusion simulation
- **CubicSymmetry**: 48 possible cube symmetry transformations (24 rotations + 24 rotoreflections)

Fixes
- Added flexible brightness in [RandomSunFlare](https://explore.albumentations.ai/transform/RandomSunFlare) by momincks
- Bugfix in [CenterCrop](https://explore.albumentations.ai/transform/CenterCrop), [RandomCrop](https://explore.albumentations.ai/transform/RandomCrop) by iRyoka
- Fix in [Normalize](https://explore.albumentations.ai/transform/Normalize) docstring by mennohofste

1.4.22

* Support Our Work
* Transforms
* Core
* Bugfixes


Support Our Work
1. **Help Us Grow** - If you find value in Albumentations, consider [becoming a sponsor](https://github.com/sponsors/albumentations-team). Every contribution, no matter the size, helps us maintain and improve the library for everyone.
2. **Show Your Support** - If you enjoy using Albumentations, consider giving us a ⭐ on [GitHub](https://github.com/albumentations-team/albumentations). It helps others discover the library and motivates our team.
3. **Join Our Community** - Have suggestions or ran into issues? We welcome your input! Share your experience in our [GitHub issues](https://github.com/albumentations-team/albumentations/issues) or connect with us on [Discord](https://discord.gg/AmMnDBdzYs).

Transforms
[Elastic Transform](https://explore.albumentations.ai/transform/ElasticTransform)

1. Added the `noise_distribution` argument, which allows sampling displacement fields from either a `gaussian` or a `uniform` distribution.
2. Deprecated the parameters `border_mode`, `value`, and `mask_value`: you can still pass them, but they no longer have any effect.
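
A minimal sketch of the new `noise_distribution` argument:

```python
import albumentations as A

# Sample the displacement field from a uniform distribution
# ("gaussian" is the other accepted value)
transform = A.Compose([A.ElasticTransform(noise_distribution="uniform", p=1.0)])
```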

New transform [ShotNoise](https://explore.albumentations.ai/transform/ShotNoise)
<img width="831" alt="Screenshot 2024-12-06 at 10 34 34" src="https://github.com/user-attachments/assets/b1fd6ffc-ed35-4065-bafa-9ea679eea176">

```
Apply shot noise to the image by modeling photon counting as a Poisson process.

Shot noise (also known as Poisson noise) occurs in imaging due to the quantum nature of light.
When photons hit an imaging sensor, they arrive at random times following Poisson statistics.
This transform simulates this physical process in linear light space by:
1. Converting to linear space (removing gamma)
2. Treating each pixel value as an expected photon count
3. Sampling actual photon counts from a Poisson distribution
4. Converting back to display space (reapplying gamma)

The noise characteristics follow real camera behavior:
- Noise variance equals signal mean in linear space (Poisson statistics)
- Brighter regions have more absolute noise but less relative noise
- Darker regions have less absolute noise but more relative noise
- Noise is generated independently for each pixel and color channel
```


[RandomGridShuffle](https://explore.albumentations.ai/transform/RandomGridShuffle)
Added support for bounding boxes

<img width="823" alt="Screenshot 2024-12-06 at 10 38 44" src="https://github.com/user-attachments/assets/e7fbeac0-f92b-4097-838f-d5ddaab9c68f">

CoarseDropout

Added an option to inpaint holes using `inpaint_ns` and `inpaint_telea` from OpenCV

GridDropout

Added an option to inpaint holes using `inpaint_ns` and `inpaint_telea` from OpenCV

MaskDropout

Added an option to inpaint holes using `inpaint_ns` and `inpaint_telea` from OpenCV

XYMasking

Added an option to inpaint holes using `inpaint_ns` and `inpaint_telea` from OpenCV
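
As a hedged sketch for CoarseDropout (the other three transforms follow the same pattern), assuming the inpainting mode is selected through the fill-value parameter (`fill` here, following the unified naming described in Core below):

```python
import albumentations as A

# Dropped regions are reconstructed with OpenCV inpainting
# instead of being filled with a constant value (parameter name assumed).
transform = A.Compose([
    A.CoarseDropout(fill="inpaint_telea", p=1.0),  # or fill="inpaint_ns"
])
```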

New transform [TimeReverse](https://explore.albumentations.ai/transform/TimeReverse)

```
Reverse the time axis of a spectrogram image, also known as time inversion.

Time inversion of a spectrogram is analogous to the random flip of an image,
an augmentation technique widely used in the visual domain. This can be relevant
in the context of audio classification tasks when working with spectrograms.
The technique was successfully applied in the AudioCLIP paper, which extended
CLIP to handle image, text, and audio inputs.

This transform is implemented as a subclass of HorizontalFlip since reversing
time in a spectrogram is equivalent to flipping the image horizontally.
```


New transform [TimeMasking](https://explore.albumentations.ai/transform/TimeMasking)

```
Apply masking to a spectrogram in the time domain.

This transform masks random segments along the time axis of a spectrogram,
implementing the time masking technique proposed in the SpecAugment paper.
Time masking helps in training models to be robust against temporal variations
and missing information in audio signals.

This is a specialized version of XYMasking configured for time masking only.
For more advanced use cases (e.g., multiple masks, frequency masking, or custom
fill values), consider using XYMasking directly.
```

New transform [FrequencyMasking](https://explore.albumentations.ai/transform/FrequencyMasking)

```
Apply masking to a spectrogram in the frequency domain.

This transform masks random segments along the frequency axis of a spectrogram,
implementing the frequency masking technique proposed in the SpecAugment paper.
Frequency masking helps in training models to be robust against frequency variations
and missing spectral information in audio signals.

This is a specialized version of XYMasking configured for frequency masking only.
For more advanced use cases (e.g., multiple masks, time masking, or custom
fill values), consider using XYMasking directly.
```


It is a specialized version of [XYMasking](https://explore.albumentations.ai/transform/XYMasking) with an API similar to [FrequencyMasking from torchaudio](https://pytorch.org/audio/main/generated/torchaudio.transforms.FrequencyMasking.html).
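
A hedged usage sketch; by analogy with the torchaudio transform, the maximum mask width is assumed to be controlled by `freq_mask_param`:

```python
import albumentations as A

# Mask a random band of up to 30 frequency bins (rows) of the spectrogram image
transform = A.Compose([A.FrequencyMasking(freq_mask_param=30, p=1.0)])
```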

New Transform [Pad](https://explore.albumentations.ai/transform/Pad)

<img width="1192" alt="Screenshot 2024-12-06 at 11 19 42" src="https://github.com/user-attachments/assets/60d597ac-9c3a-4324-9b30-d66c37c6dd18">

```
Pad the sides of an image by specified number of pixels.

Args:
    padding (int, tuple[int, int] or tuple[int, int, int, int]): Padding values. Can be:
        * int - pad all sides by this value
        * tuple[int, int] - (pad_x, pad_y) to pad left/right by pad_x and top/bottom by pad_y
        * tuple[int, int, int, int] - (left, top, right, bottom) specific padding per side
```


This is a generalization of the [torchvision transform with the same name](https://pytorch.org/vision/main/generated/torchvision.transforms.Pad.html).
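
A minimal usage sketch based on the documented `padding` argument:

```python
import albumentations as A

# Pad every side by 16 pixels
transform = A.Compose([A.Pad(padding=16, p=1.0)])

# Pad left/right by 8 and top/bottom by 32 pixels
transform = A.Compose([A.Pad(padding=(8, 32), p=1.0)])
```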

New Transform [Erasing](https://explore.albumentations.ai/transform/Erasing)
<img width="1199" alt="Screenshot 2024-12-06 at 11 23 25" src="https://github.com/user-attachments/assets/8bf42b14-7c09-4bb3-8e61-2ea7b1ea16e7">

This is a generalization of the [similar torchvision transform](https://pytorch.org/vision/main/generated/torchvision.transforms.RandomErasing.html).

```
Randomly erases rectangular regions in an image, following the Random Erasing Data Augmentation technique.

This augmentation helps improve model robustness by randomly masking out rectangular regions in the image,
simulating occlusions and encouraging the model to learn from partial information. It's particularly
effective for image classification and person re-identification tasks.
```


New Transform [AdditiveNoise](https://explore.albumentations.ai/transform/AdditiveNoise)
<img width="1198" alt="Screenshot 2024-12-06 at 11 26 17" src="https://github.com/user-attachments/assets/557c6dff-01a7-4fe2-a0fd-9073568bcd87">

```
Apply random noise to image channels using various noise distributions.

This transform generates noise using different probability distributions and applies it
to image channels. The noise can be generated in three spatial modes and supports
multiple noise distributions, each with configurable parameters.

Args:
    noise_type: Type of noise distribution to use. Options:
        - "uniform": Uniform distribution, good for simple random perturbations
        - "gaussian": Normal distribution, models natural random processes
        - "laplace": Similar to Gaussian but with heavier tails, good for outliers
        - "beta": Flexible bounded distribution, can be symmetric or skewed

    spatial_mode: How to generate and apply the noise. Options:
        - "constant": One noise value per channel, fastest
        - "per_pixel": Independent noise value for each pixel and channel, slowest
        - "shared": One noise map shared across all channels, medium speed
```


[Sharpen](https://explore.albumentations.ai/transform/Sharpen)
Added 'gaussian' method for image sharpening.
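
A hedged sketch; the parameter name `method` is assumed from the release note rather than confirmed:

```python
import albumentations as A

# Use the new Gaussian-based sharpening (parameter name assumed)
transform = A.Compose([A.Sharpen(method="gaussian", p=1.0)])
```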

New transform [SaltAndPepper](https://explore.albumentations.ai/transform/SaltAndPepper)

<img width="1181" alt="Screenshot 2024-12-06 at 11 52 54" src="https://github.com/user-attachments/assets/b93c1863-7db1-4aac-ba18-97cefad43dad">


```
Apply salt and pepper noise to the input image.

Salt and pepper noise is a form of impulse noise that randomly sets pixels to either maximum value (salt)
or minimum value (pepper). The amount and proportion of salt vs pepper noise can be controlled.
```


New transform [PlasmaBrightnessContrast](https://explore.albumentations.ai/transform/PlasmaBrightnessContrast)
<img width="1169" alt="Screenshot 2024-12-06 at 11 54 34" src="https://github.com/user-attachments/assets/b783a2ad-3757-401d-8964-29728d829dd3">

```
Apply plasma fractal pattern to modify image brightness and contrast.

This transform uses the Diamond-Square algorithm to generate organic-looking fractal patterns
that are then used to create spatially-varying brightness and contrast adjustments.
The result is a natural-looking, non-uniform modification of the image.
```


New Transform [PlasmaShadow](https://explore.albumentations.ai/transform/PlasmaShadow)
<img width="1180" alt="Screenshot 2024-12-06 at 11 56 21" src="https://github.com/user-attachments/assets/fc4e6ab9-54e0-4442-9088-cb1e62d2cb8c">

```
Apply plasma-based shadow effect to the image.

Creates organic-looking shadows using plasma fractal noise pattern.
The shadow intensity varies smoothly across the image, creating natural-looking
darkening effects that can simulate shadows, shading, or lighting variations.
```


New args in [MotionBlur](https://explore.albumentations.ai/transform/MotionBlur)

Added `angle_range` and `direction_range` parameters.

```
Apply motion blur to the input image using a directional kernel.

This transform simulates motion blur effects that occur during image capture,
such as camera shake or object movement. It creates a directional blur using
a line-shaped kernel with controllable angle, direction, and position.

Args:
    blur_limit (int | tuple[int, int]): Maximum kernel size for blurring.
        Should be in range [3, inf).
        - If int: kernel size will be randomly chosen from [3, blur_limit]
        - If tuple: kernel size will be randomly chosen from [min, max]
        Larger values create stronger blur effects.
        Default: (3, 7)

    angle_range (tuple[float, float]): Range of possible angles in degrees.
        Controls the rotation of the motion blur line:
        - 0°: Horizontal motion blur →
        - 45°: Diagonal motion blur ↗
        - 90°: Vertical motion blur ↑
        - 135°: Diagonal motion blur ↖
        Default: (0, 360)

    direction_range (tuple[float, float]): Range for motion bias.
        Controls how the blur extends from the center:
        - -1.0: Blur extends only backward (←)
        - 0.0: Blur extends equally in both directions (←→)
        - 1.0: Blur extends only forward (→)
        For example, with angle=0:
        - direction=-1.0: ←•
        - direction=0.0: ←•→
        - direction=1.0: •→
        Default: (-1.0, 1.0)
```
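
A usage sketch combining the documented arguments; the specific ranges are illustrative:

```python
import albumentations as A

# Near-horizontal motion blur, biased toward extending forward from the center
transform = A.Compose([
    A.MotionBlur(
        blur_limit=(3, 7),
        angle_range=(0, 15),          # near-horizontal motion
        direction_range=(0.5, 1.0),   # mostly forward blur
        p=1.0,
    ),
])
```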


New Transform [ThinPlateSpline](https://explore.albumentations.ai/transform/ThinPlateSpline)
<img width="1207" alt="Screenshot 2024-12-06 at 12 00 10" src="https://github.com/user-attachments/assets/8b334b83-83ad-4397-9628-6bcad21fb7d0">

```
Apply Thin Plate Spline (TPS) transformation to create smooth, non-rigid deformations.

Imagine the image printed on a thin metal plate that can be bent and warped smoothly:
- Control points act like pins pushing or pulling the plate
- The plate resists sharp bending, creating smooth deformations
- The transformation maintains continuity (no tears or folds)
- Areas between control points are interpolated naturally

The transform works by:
1. Creating a regular grid of control points (like pins in the plate)
2. Randomly displacing these points (like pushing/pulling the pins)
3. Computing a smooth interpolation (like the plate bending)
4. Applying the resulting deformation to the image
```


New transform [Illumination](https://explore.albumentations.ai/transform/Illumination)

```
Apply various illumination effects to the image.

This transform simulates different lighting conditions by applying controlled
illumination patterns. It can create effects like:
- Directional lighting (linear mode)
- Corner shadows/highlights (corner mode)
- Spotlights or local lighting (gaussian mode)

These effects can be used to:
- Simulate natural lighting variations
- Add dramatic lighting effects
- Create synthetic shadows or highlights
- Augment training data with different lighting conditions

Args:
    mode (Literal["linear", "corner", "gaussian"]): Type of illumination pattern:
        - 'linear': Creates a smooth gradient across the image,
          simulating directional lighting like sunlight through a window
        - 'corner': Applies gradient from any corner,
          simulating light source from a corner
        - 'gaussian': Creates a circular spotlight effect,
          simulating local light sources
        Default: 'linear'
```
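
A minimal sketch of the documented `mode` argument:

```python
import albumentations as A

# Circular, spotlight-like illumination pattern
transform = A.Compose([A.Illumination(mode="gaussian", p=1.0)])
```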


[OpticalDistortion](https://explore.albumentations.ai/transform/OpticalDistortion)

- Added `fisheye` method.
- Deprecated `border_mode`, `value`, `mask_value`

New transform [AutoContrast](https://explore.albumentations.ai/transform/AutoContrast)

<img width="1177" alt="Screenshot 2024-12-06 at 12 11 54" src="https://github.com/user-attachments/assets/71d85dd7-2036-4e74-b544-8b2db6277af4">

```
Apply random auto contrast to images.

Auto contrast enhances image contrast by stretching the intensity range
to use the full range while preserving relative intensities. For each
color channel:
1. Compute histogram
2. Find cumulative percentiles
3. Clip and scale intensities to full range
```


Core
Unified naming for the border-mode and fill parameters:

- `value`, `fill_value`, `cval`, `pad_val` => `fill`
- `mask_value`, `cval_mask`, `fill_mask_value`, `pad_mask_value` => `fill_mask`
- `pad_mode`, `mode` => `border_mode`

by ternaus
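
A hedged before/after sketch of the renaming, using `PadIfNeeded` as an example (the exact argument sets shown are illustrative):

```python
import cv2
import albumentations as A

# Before 1.4.22 (argument names varied between transforms):
# A.PadIfNeeded(min_height=1024, min_width=1024,
#               border_mode=cv2.BORDER_CONSTANT, value=0, mask_value=0)

# From 1.4.22 on, the same intent is expressed with the unified names:
transform = A.PadIfNeeded(
    min_height=1024,
    min_width=1024,
    border_mode=cv2.BORDER_CONSTANT,
    fill=0,
    fill_mask=0,
)
```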

Bugfixes
- Bugfix in [RandomShadow](https://explore.albumentations.ai/transform/RandomShadow): now a larger intensity parameter corresponds to a darker shadow
- Bugfix in Load From Hub by qubvel
- Bugfix in MotionBlur by huuquan1994

1.4.21

* Support Our Work
* Transforms
* Core
* Benchmark
* Speedups

Support Our Work
1. **Love the library?** You can contribute to its development by becoming a [sponsor for the library](https://github.com/sponsors/albumentations-team). Your support is invaluable, and every contribution makes a difference.
2. **Haven't starred our repo yet?** Show your support with a ⭐! It's [only one mouse click away](https://github.com/albumentations-team/albumentations).
3. **Got ideas or facing issues?** We'd love to hear from you. Share your thoughts in our [issues](https://github.com/albumentations-team/albumentations/issues) or join the conversation on our [Discord server](https://discord.gg/AmMnDBdzYs)

Transforms
Auto padding in crops

Added an option to pad the image if the requested crop size is larger than the image size.

Old way
```python
[
    A.PadIfNeeded(min_height=1024, min_width=1024, p=1),
    A.RandomCrop(height=1024, width=1024, p=1),
]
```


New way:

```python
A.RandomCrop(height=1024, width=1024, p=1, pad_if_needed=True)
```


Works for:

- [RandomCrop](https://explore.albumentations.ai/transform/RandomCrop)
- [CenterCrop](https://explore.albumentations.ai/transform/CenterCrop)
- [Crop](https://explore.albumentations.ai/transform/Crop)

You may also use it to pad the image to a desired size.

Core
Random state
Now the random state of a pipeline does not depend on the global random state.

Before
```python
random.seed(seed)
np.random.seed(seed)

transform = A.Compose(...)
```


Now

```python
transform = A.Compose(..., seed=seed)
```


or

```python
transform = A.Compose(...)
transform.set_random_seed(seed)
```


Saving used parameters
Now you can get the exact parameters that were applied to a given sample:

```python
transform = A.Compose(..., save_applied_params=True)

result = transform(image=image, bboxes=bboxes, mask=mask, keypoints=keypoints)

print(result["applied_transforms"])
```


Benchmark
Moved benchmark to a separate repo

https://github.com/albumentations-team/benchmark/


Current result for uint8 images:

| Transform | albumentations<br>1.4.20 | augly<br>1.0.0 | imgaug<br>0.4.0 | kornia<br>0.7.3 | torchvision<br>0.20.0 |
|:------------------|:---------------------------|:-----------------|:------------------|:------------------|:------------------------|
| HorizontalFlip | **8325 ± 955** | 4807 ± 818 | 6042 ± 788 | 390 ± 106 | 914 ± 67 |
| VerticalFlip | **20493 ± 1134** | 9153 ± 1291 | 10931 ± 1844 | 1212 ± 402 | 3198 ± 200 |
| Rotate | **1272 ± 12** | 1119 ± 41 | 1136 ± 218 | 143 ± 11 | 181 ± 11 |
| Affine | **967 ± 3** | - | 774 ± 97 | 147 ± 9 | 130 ± 12 |
| Equalize | **961 ± 4** | - | 581 ± 54 | 152 ± 19 | 479 ± 12 |
| RandomCrop80 | **118946 ± 741** | 25272 ± 1822 | 11503 ± 441 | 1510 ± 230 | 32109 ± 1241 |
| ShiftRGB | **1873 ± 252** | - | 1582 ± 65 | - | - |
| Resize | **2365 ± 153** | 611 ± 78 | 1806 ± 63 | 232 ± 24 | 195 ± 4 |
| RandomGamma | **8608 ± 220** | - | 2318 ± 269 | 108 ± 13 | - |
| Grayscale | **3050 ± 597** | 2720 ± 932 | 1681 ± 156 | 289 ± 75 | 1838 ± 130 |
| RandomPerspective | 410 ± 20 | - | **554 ± 22** | 86 ± 11 | 96 ± 5 |
| GaussianBlur | **1734 ± 204** | 242 ± 4 | 1090 ± 65 | 176 ± 18 | 79 ± 3 |
| MedianBlur | **862 ± 30** | - | 813 ± 30 | 5 ± 0 | - |
| MotionBlur | **2975 ± 52** | - | 612 ± 18 | 73 ± 2 | - |
| Posterize | **5214 ± 101** | - | 2097 ± 68 | 430 ± 49 | 3196 ± 185 |
| JpegCompression | **845 ± 61** | 778 ± 5 | 459 ± 35 | 71 ± 3 | 625 ± 17 |
| GaussianNoise | 147 ± 10 | 67 ± 2 | **206 ± 11** | 75 ± 1 | - |
| Elastic | 171 ± 15 | - | **235 ± 20** | 1 ± 0 | 2 ± 0 |
| Clahe | **423 ± 10** | - | 335 ± 43 | 94 ± 9 | - |
| CoarseDropout | **11288 ± 609** | - | 671 ± 38 | 536 ± 87 | - |
| Blur | **4816 ± 59** | 246 ± 3 | 3807 ± 325 | - | - |
| ColorJitter | **536 ± 41** | 255 ± 13 | - | 55 ± 18 | 46 ± 2 |
| Brightness | **4443 ± 84** | 1163 ± 86 | - | 472 ± 101 | 429 ± 20 |
| Contrast | **4398 ± 143** | 736 ± 79 | - | 425 ± 52 | 335 ± 35 |
| RandomResizedCrop | **2952 ± 24** | - | - | 287 ± 58 | 511 ± 10 |
| Normalize | **1016 ± 84** | - | - | 626 ± 40 | 519 ± 12 |
| PlankianJitter | **1844 ± 208** | - | - | 813 ± 211 | - |

Speedups
* Speedup in [PlanckianJitter](https://explore.albumentations.ai/transform/PlanckianJitter) in uint8 mode
* Replaced `cv2.addWeighted` with `wsum` from [simsimd](https://github.com/ashvardanian/SimSIMD) package

1.4.20

Hotfix version.

- Fix in check_version
- Fix in [PiecewiseAffine](https://explore.albumentations.ai/transform/PiecewiseAffine)
- Fix in [RandomSizedCrop](https://explore.albumentations.ai/transform/RandomSizedCrop) and [RandomResizedCrop](https://explore.albumentations.ai/transform/RandomResizedCrop)
- Fix in `RandomOrder`

1.4.19

* Support Our Work
* Transforms
* Core
* Bug Fixes

Support Our Work
1. **Love the library?** You can contribute to its development by becoming a [sponsor for the library](https://github.com/sponsors/albumentations-team). Your support is invaluable, and every contribution makes a difference.
2. **Haven't starred our repo yet?** Show your support with a ⭐! It's [only one mouse click away](https://github.com/albumentations-team/albumentations).
3. **Got ideas or facing issues?** We'd love to hear from you. Share your thoughts in our [issues](https://github.com/albumentations-team/albumentations/issues) or join the conversation on our [Discord server](https://discord.gg/AmMnDBdzYs)

Transforms
Added a `mask_interpolation` parameter to all transforms that interpolate masks (a usage example follows the list), including:

- [RandomSizedCrop](https://explore.albumentations.ai/transform/RandomSizedCrop)
- [RandomResizedCrop](https://explore.albumentations.ai/transform/RandomResizedCrop)
- [RandomSizedBBoxSafeCrop](https://explore.albumentations.ai/transform/RandomSizedBBoxSafeCrop)
- [CropAndPad](https://explore.albumentations.ai/transform/CropAndPad)
- [Resize](https://explore.albumentations.ai/transform/Resize)
- [RandomScale](https://explore.albumentations.ai/transform/RandomScale)
- [LongestMaxSize](https://explore.albumentations.ai/transform/LongestMaxSize)
- [SmallestMaxSize](https://explore.albumentations.ai/transform/SmallestMaxSize)
- [Rotate](https://explore.albumentations.ai/transform/Rotate)
- [SafeRotate](https://explore.albumentations.ai/transform/SafeRotate)
- [OpticalDistortion](https://explore.albumentations.ai/transform/OpticalDistortion)
- [GridDistortion](https://explore.albumentations.ai/transform/GridDistortion)
- [ElasticTransform](https://explore.albumentations.ai/transform/ElasticTransform)
- [Perspective](https://explore.albumentations.ai/transform/Perspective)
- [PiecewiseAffine](https://explore.albumentations.ai/transform/PiecewiseAffine)

by ternaus
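
For example, a sketch using `Resize` (interpolation flags are standard OpenCV constants):

```python
import cv2
import albumentations as A

# Resize the image bilinearly, but keep label masks free of interpolation
# artifacts by using nearest-neighbor interpolation for the mask.
transform = A.Compose([
    A.Resize(
        height=512,
        width=512,
        interpolation=cv2.INTER_LINEAR,
        mask_interpolation=cv2.INTER_NEAREST,
    ),
])
```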

Core
- Minimum supported Python version is 3.9
- Removed the dependency on scikit-image
- Updated the random number generator from `np.random.RandomState` to `np.random.Generator`. The latter is about 50% faster, which speeds up all transforms that rely heavily on random number generation
- Where possible, moved from `cv2.LUT` to the LUT implementation in `stringzilla`
- Added a `mask_interpolation` parameter to `Compose` that overrides the mask interpolation value in every transform in that `Compose`. You can now use the more accurate `cv2.INTER_NEAREST_EXACT` for semantic segmentation, or cubic, area, linear, etc. interpolation for depth and heatmap estimation (see the sketch below)
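
A sketch of overriding mask interpolation for an entire pipeline (the transform inside is illustrative):

```python
import cv2
import albumentations as A

# Every transform in this Compose will use INTER_NEAREST_EXACT for masks,
# regardless of its own mask_interpolation setting.
transform = A.Compose(
    [A.RandomScale(scale_limit=0.2, p=1.0)],
    mask_interpolation=cv2.INTER_NEAREST_EXACT,
)
```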



BugFixes
- Bugfix in [ISONoise](https://explore.albumentations.ai/transform/ISONoise)
- Bugfix: Ensure that transforms masks are contiguous arrays, by Callidior
- Bugfix in [Solarize](https://explore.albumentations.ai/transform/Solarize)
- Bugfix in bounding box filtering
- Bugfix in [OpticalDistortion](https://explore.albumentations.ai/transform/OpticalDistortion)
- Bugfix in balanced scale in [Affine](https://explore.albumentations.ai/transform/Affine)

1.4.18

* Support Our Work
* Transforms
* Core
* Deprecations
* Bugfixes

Support Our Work
1. **Love the library?** You can contribute to its development by becoming a [sponsor for the library](https://github.com/sponsors/albumentations-team). Your support is invaluable, and every contribution makes a difference.
2. **Haven't starred our repo yet?** Show your support with a ⭐! It's [only one mouse click away](https://github.com/albumentations-team/albumentations).
3. **Got ideas or facing issues?** We'd love to hear from you. Share your thoughts in our [issues](https://github.com/albumentations-team/albumentations/issues) or join the conversation on our [Discord server](https://discord.gg/AmMnDBdzYs)

Transforms
[GridDistortion](https://explore.albumentations.ai/transform/GridDistortion)
![Screenshot 2024-10-08 at 15 06 03](https://github.com/user-attachments/assets/68239933-3441-4417-b691-535136a2e7a2)

Added support for `keypoints`

[GridDropout](https://explore.albumentations.ai/transform/GridDropout)

![Screenshot 2024-10-08 at 15 08 04](https://github.com/user-attachments/assets/4cc91b8c-1a31-4e3a-9e5e-bd5690a4a213)

Added support for `keypoints` and `bounding boxes`

[GridElasticDeform](https://explore.albumentations.ai/transform/GridElasticDeform)

![Screenshot 2024-10-08 at 15 10 24](https://github.com/user-attachments/assets/a5807c47-6e91-4362-b15a-7a19874ba1df)

Added support for `keypoints` and `bounding boxes`

[MaskDropout](https://explore.albumentations.ai/transform/MaskDropout)

![Screenshot 2024-10-08 at 15 11 53](https://github.com/user-attachments/assets/d112c4b5-323f-4a8e-9d8f-e89845274d23)

Added support for `keypoints` and `bounding boxes`

[Morphological](https://explore.albumentations.ai/transform/Morphological)

![Screenshot 2024-10-08 at 15 13 36](https://github.com/user-attachments/assets/8d558125-653b-482e-8a08-779731fa6556)

Added support for `bounding boxes` and `keypoints`

[OpticalDistortion](https://explore.albumentations.ai/transform/OpticalDistortion)

![Screenshot 2024-10-08 at 15 18 23](https://github.com/user-attachments/assets/4afb9b67-86da-46ac-b5a4-243caa8809bf)

Added support for `keypoints`

[PixelDropout](https://explore.albumentations.ai/transform/PixelDropout)

![Screenshot 2024-10-08 at 15 19 46](https://github.com/user-attachments/assets/bdb2e7f6-74da-42fa-93bf-5d2b6dd4fd59)

Added support for `keypoints` and `bounding boxes`

[XYMasking](https://explore.albumentations.ai/transform/XYMasking)

![Screenshot 2024-10-08 at 15 21 52](https://github.com/user-attachments/assets/ae493a9c-b074-4df1-8fb5-bf3bfb4197fd)

Added support for `bounding boxes` and `keypoints`

Core

Added support for masks as numpy arrays of the shape `(num_masks, height, width)`

Now you can apply transforms to masks as:

```python
masks = ...  # numpy array with shape (num_masks, height, width)

transform(image=image, masks=masks)
```


Deprecations
Removed MixUp, as it did almost exactly the same thing as [TemplateTransform](https://explore.albumentations.ai/transform/TemplateTransform)

Bugfixes
* Bugfix in [RandomFog](https://explore.albumentations.ai/transform/RandomFog)
* Bugfix in [PlanckianJitter](https://explore.albumentations.ai/transform/PlanckianJitter)
* Several people reported an issue with masks passed as a list of numpy arrays. It appears to have been fixed as part of other work, since it can no longer be reproduced; tests for that case were added just in case.
