Torchvision


0.4.1

This minor release provides binaries compatible with PyTorch 1.3.

Compared to version 0.4.0, it contains a single bugfix for the `HMDB51` and `UCF101` datasets, fixed in https://github.com/pytorch/vision/pull/1240

0.4.0

This release adds support for video models and datasets, and brings several improvements.


**Note**: torchvision 0.4 requires PyTorch 1.2 or newer

Highlights

Video and IO

Video is now a first-class citizen in torchvision. The 0.4 release includes:

* efficient IO primitives for reading and writing video files (see the sketch after this list)
* Kinetics-400, HMDB51 and UCF101 datasets for action recognition, which are compatible with `torch.utils.data.DataLoader`
* Pre-trained models for action recognition, trained on Kinetics-400
* Training and evaluation scripts for reproducing the training results.
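
As a quick illustration of the new IO primitives, here is a minimal sketch using `torchvision.io.read_video`; the `video.mp4` path is a placeholder, and the exact keyword arguments may vary slightly across 0.4.x releases.

```python
from torchvision.io import read_video

# Read a (hypothetical) video file from disk. read_video returns the
# frames as a uint8 tensor of shape (T, H, W, C), the audio samples,
# and a dict of metadata such as the frame rate.
video, audio, info = read_video("video.mp4")
print(video.shape)        # e.g. torch.Size([T, H, W, 3])
print(info["video_fps"])  # frames per second of the source video
```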

Writing your own video dataset is easy. We provide a utility class, `VideoClips`, that simplifies the task of enumerating all possible clips of fixed size in a list of video files by creating an index of all clips in a set of videos. It additionally allows specifying a fixed frame rate for the videos.

```python
from torchvision.datasets.video_utils import VideoClips

class MyVideoDataset(object):
    def __init__(self, video_paths):
        self.video_clips = VideoClips(video_paths,
                                      clip_length_in_frames=16,
                                      frames_between_clips=1,
                                      frame_rate=15)

    def __getitem__(self, idx):
        video, audio, info, video_idx = self.video_clips.get_clip(idx)
        return video, audio

    def __len__(self):
        return self.video_clips.num_clips()
```


We provide pre-trained models for action recognition, trained on Kinetics-400, which reproduce the results of the papers in which they were first introduced, along with the corresponding training scripts.

|model |clip acc@1 |
|--- |--- |
|r3d_18 |52.748 |
|mc3_18 |53.898 |
|r2plus1d_18 |57.498 |
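
Loading one of these models is a one-liner; below is a minimal sketch, assuming the `torchvision.models.video` namespace introduced in this release:

```python
import torch
from torchvision.models.video import r3d_18

# Load an 18-layer R3D model pre-trained on Kinetics-400.
model = r3d_18(pretrained=True)
model.eval()

# Video models expect batches shaped (N, C, T, H, W); a random clip of
# 16 RGB frames at 112x112 stands in for real data here.
clip = torch.rand(1, 3, 16, 112, 112)
with torch.no_grad():
    scores = model(clip)
print(scores.shape)  # torch.Size([1, 400]), one score per Kinetics-400 class
```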

Bugfixes

* change aspect ratio calculation formula in `references/detection` (1194)
* bug fixes in ImageNet (1149)
* fix save_image when height or width equals 1 (1059)
* Fix STL10 `__repr__` (969)
* Fix wrong behavior of `GeneralizedRCNNTransform` in Python2. (960)

Datasets

New

* Add USPS dataset (961)(1117)
* Added support for the QMNIST dataset (995)
* Add HMDB51 and UCF101 datasets (1156)
* Add Kinetics400 dataset (1077)

Improvements

* Miscellaneous dataset fixes (1174)
* Standardize str argument verification in datasets (1167)
* Always pass `transform` and `target_transform` to abstract dataset (1126)
* Remove duplicate transform assignment in FakeDataset (1125)
* Automatic extraction for Cityscapes Dataset (1066) (1068)
* Use joint transform in Cityscapes (1024)(1045)
* CelebA: track attr names, support split="all", code cleanup (1008)
* Add folds option to STL10 (914)

Models

New

* Add pretrained Wide ResNet (912)
* Memory efficient densenet (1003) (1090)
* Implementation of the MNASNet family of models (829)(1043)(1092)
* Add VideoModelZoo models (1130)

Improvements

* Fix resnet fpn backbone for resnet18 and resnet34 (1147)
* Add checks to `roi_heads` in detection module (1091)
* Make shallow copy of input list in `GeneralizedRCNNTransform` (1085)(1111)(1084)
* Make MobileNetV2 number of channels divisible by 8 (1005)
* typo fix: ouput -> output in Inception and GoogleNet (1034)
* Remove empty proposals from the RPN (1026)
* Remove empty boxes before NMS (1019)
* Reduce code duplication in segmentation models (1009)
* allow user to define residual settings in MobileNetV2 (965)
* Use `flatten` instead of `view` (1134)

Documentation

* Consistency in detection box format (1110)
* Fix Mask R-CNN docs (1089)
* Add paper references to VGG and Resnet variants (1088)
* Doc, Test Fixes in `Normalize` (1063)
* Add transforms doc to more datasets (1038)
* Corrected typo: 5 to 0.5 (1041)
* Update doc for `torchvision.transforms.functional.perspective` (1017)
* Improve documentation for `fillcolor` option in `RandomAffine` (994)
* Fix `COCO_INSTANCE_CATEGORY_NAMES` (991)
* Added models information to documentation. (985)
* Add missing import in `faster_rcnn.py` documentation (979)
* Improve `make_grid` docs (964)

Tests

* Add test for SVHN (1086)
* Add tests for Cityscapes Dataset (1079)
* Update CI to Python 3.6 (1044)
* Make `test_save_image` more robust (1037)
* Add a generic test for the datasets (1015)
* moved fakedata generation to separate module (1014)
* Create imagenet fakedata on-the-fly (1012)
* Minor test refactorings (1011)
* Add test for CIFAR10(0) (1010)
* Mock MNIST download for less flaky tests (1004)
* Add test for ImageNet (976)(1006)
* Add tests for datasets (966)

Transforms

New

* Add Random Erasing for image augmentation (909) (1060) (1087) (1095)
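
A short sketch of the new transform; it operates on tensor images, so it goes after `ToTensor`, and the probability below is just the documented default:

```python
from torchvision import transforms

# RandomErasing blanks out a random rectangle of the tensor image with
# probability p, as a form of data augmentation.
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.RandomErasing(p=0.5),
])
```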

Improvements

* Allowing 'F' mode for 1 channel FloatTensor in `ToPILImage` (1100)
* Add shear parallel to y-axis (1070)
* fix error message in `to_tensor` (1000)
* Fix TypeError in `RandomResizedCrop.get_params` (1036)
* Fix `normalize` for different `dtype` than `float32` (1021)

Ops

* Renamed `vision.h` files to `vision_cpu.h` and `vision_cuda.h` (1051)(1052)
* Optimize `nms_cuda` by avoiding extra `torch.cat` call (945)

Reference scripts

* Expose data-path in the detection reference scripts (1109)
* Make `utils.py` work with pytorch-cpu (1023)
* Add mixed precision training with Apex (972)(1124)
* Add reference code for similarity learning (1101)

Build

* Add windows build steps and wheel build scripts (998)
* add packaging scripts (996)
* Allow forcing GPU build with `FORCE_CUDA=1` (927)

Misc

* Misc lint fixes (1020)
* Reraise error on failed downloading (1013)
* add more hub models (974)
* make C extension lazy-import (971)

0.3.0

This release brings several new features to torchvision, including models for semantic segmentation, object detection, instance segmentation and person keypoint detection, and custom C++ / CUDA ops specific to computer vision.

**Note: torchvision 0.3 requires PyTorch 1.1 or newer**

Highlights

Reference training / evaluation scripts

Under the `references/` folder, we now provide scripts for training and evaluation of the following tasks: classification, semantic segmentation, object detection, instance segmentation and person keypoint detection.
Their purpose is twofold:

* serve as a log of how to train a specific model.
* provide baseline training and evaluation scripts to bootstrap research

They all have an entry-point `train.py` which performs both training and evaluation for a particular task. Other helper files, specific to each training script, are also present in the folder, and they might get integrated into the torchvision library in the future.

We expect users to copy-paste these reference scripts and modify them for their own needs.

TorchVision Ops

TorchVision now contains custom C++ / CUDA operators in `torchvision.ops`. Those operators are specific to computer vision, and make it easier to build object detection models.
Those operators currently do not support PyTorch script mode, but support for it is planned for future releases.

List of supported ops

* `roi_pool` (and the module version `RoIPool`)
* `roi_align` (and the module version `RoIAlign`)
* `nms`, for non-maximum suppression of bounding boxes (see the sketch after this list)
* `box_iou`, for computing the intersection over union metric between two sets of bounding boxes
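
For instance, `nms` takes boxes in `[x0, y0, x1, y1]` format together with per-box scores and returns the indices of the boxes that are kept; a minimal sketch:

```python
import torch
import torchvision

# Two heavily-overlapping boxes and one disjoint box, with scores.
boxes = torch.tensor([[0., 0., 100., 100.],
                      [5., 5., 105., 105.],
                      [200., 200., 300., 300.]])
scores = torch.tensor([0.9, 0.8, 0.7])

# Suppress boxes whose IoU with a higher-scoring kept box exceeds 0.5.
keep = torchvision.ops.nms(boxes, scores, 0.5)
print(keep)  # tensor([0, 2]): the second box overlaps the first too much
```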

All the other ops present in `torchvision.ops` and its subfolders are experimental, in particular:

* `FeaturePyramidNetwork` is a module that adds an FPN on top of a module that returns a set of feature maps.
* `MultiScaleRoIAlign` is a wrapper around `roi_align` that works with multiple feature map scales

Here are a few examples of using torchvision ops:
```python
import torch
import torchvision

# create 10 random boxes
boxes = torch.rand(10, 4) * 100
# they need to be in [x0, y0, x1, y1] format
boxes[:, 2:] += boxes[:, :2]
# create a random image
image = torch.rand(1, 3, 200, 200)
# extract regions in `image` defined in `boxes`, rescaling
# them to have a size of 3x3
pooled_regions = torchvision.ops.roi_align(image, [boxes], output_size=(3, 3))
# check the size
print(pooled_regions.shape)
# torch.Size([10, 3, 3, 3])

# or compute the intersection over union between
# all pairs of boxes
print(torchvision.ops.box_iou(boxes, boxes).shape)
# torch.Size([10, 10])
```


Models for more tasks

The 0.3 release of torchvision includes pre-trained models for tasks other than image classification on ImageNet.
We include two new categories of models: region-based models, like Faster R-CNN, and dense pixelwise prediction models, like DeepLabV3.

Object Detection, Instance Segmentation and Person Keypoint Detection models

**Warning: The API is currently experimental and might change in future versions of torchvision**

The 0.3 release contains pre-trained models for Faster R-CNN, Mask R-CNN and Keypoint R-CNN, all of them using a ResNet-50 backbone with FPN.
They have been trained on COCO train2017 following the reference scripts in `references/`, and give the following results on COCO val2017:

Network | box AP | mask AP | keypoint AP
-- | -- | -- | --
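
Running one of these detection models takes only a few lines; here is a minimal sketch, with a random tensor standing in for a real image:

```python
import torch
import torchvision

# Faster R-CNN with a ResNet-50 FPN backbone, pre-trained on COCO train2017.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

# Detection models take a list of 3xHxW tensors and return, per image,
# a dict with 'boxes', 'labels' and 'scores'.
images = [torch.rand(3, 300, 400)]
with torch.no_grad():
    predictions = model(images)
print(predictions[0]["boxes"].shape)  # (num_detections, 4)
```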

0.2.2

This version introduces several improvements and fixes.

Support for arbitrary input sizes for models

It is now possible to feed images larger than 224x224 into the torchvision models.
We added an adaptive pooling layer just before the classifier, which adapts the size of the feature maps before the last layer and thus allows for larger input images.
Relevant PRs: 744 747 746 672 643
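
As a small sketch of what this enables, a classifier can now consume inputs larger than 224x224 directly:

```python
import torch
import torchvision

# Thanks to the adaptive pooling before the classifier, inputs larger
# than 224x224 work out of the box.
model = torchvision.models.resnet18()
out = model(torch.rand(1, 3, 320, 320))
print(out.shape)  # torch.Size([1, 1000])
```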

Bugfixes

* Fix invalid argument error when using lsun method in windows (508)
* Fix FashionMNIST loading MNIST (640)
* Fix inception v3 input transform for trace & onnx (621)

Datasets

* Add support for webp and tiff images in ImageFolder 736 724
* Add K-MNIST dataset 687
* Add Cityscapes dataset 695 725 739 700
* Add Flickr8k and Flickr30k datasets 674
* Add VOCDetection and VOCSegmentation datasets 663
* Add SBU Captioned Photo Dataset (665)
* Updated URLs for EMNIST 726
* MNIST and FashionMNIST now have their own 'raw' and 'processed' folder 601
* Add metadata to some datasets (501)

Improvements

* Allow RandomCrop to crop in the padded region 564
* ColorJitter now supports min/max values 548
* Generalize resnet to use block.expansion 487
* Move area calculation out of for loop in RandomResizedCrop 641
* Add option to zero-init the residual branch in resnet (498)
* Improve error messages in to_pil_image 673
* Add support in `to_tensor` for 2-dimensional numpy arrays (686)
* Optimize _find_classes in DatasetFolder via scandir in Python3 (559)
* Add padding_mode to RandomCrop (489 512)
* Make DatasetFolder more generic (527)
* Add in-place option to normalize (699)
* Add Hamming and Box interpolations to transforms.py (693)
* Added support for 2-channel image modes such as 'LA', and for specifying a mode with 4-channel modes (688)
* Improve support for 'P' image mode in pad (683)
* Make torchvision depend on pillow-simd if already installed (522)
* Make tests run faster (745)
* Add support for non-square crops in RandomResizedCrop (715)

Breaking changes

* `save_image` now rounds to the nearest integer 754

Misc

* Added code coverage to travis 703
* Add downloads and docs badge to README (702)
* Add progress to download_url 497 524 535
* Replace 'residual' with 'identity' in resnet.py (679)
* Consistency changes in the models
* Refactored MNIST and CIFAR to have data and target fields 578 594
* Update torchvision to newer versions of PyTorch
* Relax assertion in `transforms.Lambda.__init__` (637)
* Cast MNIST target to int (605)
* Change default target type of FakeDataset to long (581)
* Improve docs of functional transforms (602)
* Docstring improvements
* Add is_image_file to folder_dataset (507)
* Add deprecation warning in MNIST train[test]_labels[data] (742)
* Mention TORCH_MODEL_ZOO in models documentation. (624)
* Add scipy as a dependency to setup.py (675)
* Added size information for inception v3 (719)

0.2.1

This version introduces several fixes and improvements to the previous version.

Better printing of Datasets and Transforms

* Add descriptions to Transform objects.
```python
# Now T.Compose([T.RandomHorizontalFlip(), T.RandomCrop(224), T.ToTensor()]) prints
Compose(
    RandomHorizontalFlip(p=0.5)
    RandomCrop(size=(224, 224), padding=0)
    ToTensor()
)
```

* Add descriptions to Datasets
```python
# now torchvision.datasets.MNIST('~') prints
Dataset MNIST
    Number of datapoints: 60000
    Split: train
    Root Location: /private/home/fmassa
    Transforms (if any): None
    Target Transforms (if any): None
```

New transforms

* Add RandomApply, RandomChoice, RandomOrder transformations 402 (see the combined sketch after this list)
* RandomApply: applies a list of transformations with a given probability
* RandomChoice: randomly picks a single transformation from a list
* RandomOrder: applies transformations in a random order
* Add random affine transformation 411

* Add reflect, symmetric and edge padding to `transforms.pad` 460
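
A brief sketch combining the new transforms (parameter values are illustrative, not recommendations):

```python
from torchvision import transforms as T

transform = T.Compose([
    # Apply the color jitter with probability 0.3.
    T.RandomApply([T.ColorJitter(brightness=0.2)], p=0.3),
    # Pick exactly one of the two crops at random.
    T.RandomChoice([T.RandomCrop(224), T.CenterCrop(224)]),
    # Apply these two transforms in a random order.
    T.RandomOrder([T.RandomHorizontalFlip(), T.RandomGrayscale(p=0.1)]),
    T.ToTensor(),
])
```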

Performance improvements

* Speed up MNIST preprocessing by a factor of 1000x
* Make weight initialization optional to speed up VGG construction. This makes loading pre-trained VGG models much faster
* Accelerate `transforms.adjust_gamma` by using PIL's point function instead of a custom numpy-based implementation

New Datasets

* EMNIST - an extension of MNIST for hand-written letters
* OMNIGLOT - a dataset for one-shot learning, with 1623 different handwritten characters from 50 different alphabets
* Add a DatasetFolder class - generalization of ImageFolder

Miscellaneous improvements

* FakeData accepts a seed argument, so having multiple different FakeData instances is now possible
* Use consistent datatypes in Dataset targets. Now all datasets that return labels will have them as int
* Add probability parameter in `RandomHorizontalFlip` and `RandomVerticalFlip`
* Replace `np.random` by `random` in transforms - improves reproducibility in multi-threaded environments with default arguments
* Detect tif images in ImageFolder
* Add `pad_if_needed` to `RandomCrop`, so that if the crop size is larger than the image, the image is automatically padded
* Add support in `transforms.ToTensor` for PIL Images with mode '1'

Bugfixes

* Fix passing list of tensors to `utils.save_image`
* single images passed to `make_grid` are now also normalized
* Fix PIL img close warnings
* Added missing weight initializations to densenet
* Avoid division by zero in `make_grid` when the image is constant
* Fix `ToTensor` when PIL Image has mode F
* Fix bug with `to_tensor` when the input is numpy array of type np.float32.

0.2.0

This version introduced a functional interface to the transforms, allowing for joint random transformation of inputs and targets. We also introduced a few breaking changes to some datasets and transforms (see below for more details).

Transforms
We have introduced a functional interface for the torchvision transforms, available under `torchvision.transforms.functional`. This now makes it possible to do joint random transformations on inputs and targets, which is especially useful in tasks like object detection, segmentation and super resolution. For example, you can now do the following:

```python
from torchvision import transforms
import torchvision.transforms.functional as F
import random

def my_segmentation_transform(input, target):
    i, j, h, w = transforms.RandomCrop.get_params(input, (100, 100))
    input = F.crop(input, i, j, h, w)
    target = F.crop(target, i, j, h, w)
    if random.random() > 0.5:
        input = F.hflip(input)
        target = F.hflip(target)
    input, target = F.to_tensor(input), F.to_tensor(target)
    return input, target
```

The following transforms have also been added:
- [`F.vflip` and `RandomVerticalFlip`](http://pytorch.org/docs/master/torchvision/transforms.html#torchvision.transforms.RandomVerticalFlip)
- [FiveCrop](http://pytorch.org/docs/master/torchvision/transforms.html#torchvision.transforms.FiveCrop) and [TenCrop](http://pytorch.org/docs/master/torchvision/transforms.html#torchvision.transforms.TenCrop) (see the sketch after this list)
- Various color transformations:
- [`ColorJitter`](http://pytorch.org/docs/master/torchvision/transforms.html#torchvision.transforms.ColorJitter)
- `F.adjust_brightness`
- `F.adjust_contrast`
- `F.adjust_saturation`
- `F.adjust_hue`
- `LinearTransformation` for applications such as whitening
- `Grayscale` and `RandomGrayscale`
- `Rotate` and `RandomRotation`
- `ToPILImage` now supports `RGBA` images
- `ToPILImage` now accepts a `mode` argument so you can specify which colorspace the image should be
- `RandomResizedCrop` now accepts `scale` and `ratio` ranges as input parameters
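
`FiveCrop` and `TenCrop` return a tuple of images rather than a single image, so they are typically combined with a `Lambda` that stacks the crops into one tensor; a small sketch:

```python
import torch
from torchvision import transforms as T

# FiveCrop yields the four corner crops plus the center crop; the Lambda
# converts each crop to a tensor and stacks them along a new dimension.
transform = T.Compose([
    T.FiveCrop(224),
    T.Lambda(lambda crops: torch.stack([T.ToTensor()(c) for c in crops])),
])
```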

Documentation
Documentation is now auto-generated and published to [pytorch.org](http://pytorch.org/docs/master/torchvision/index.html)

Datasets:
- Added the SEMEION dataset of handwritten digits
- Patches for the Phototour dataset computed via multi-scale Harris corners are now available by setting `name` to `notredame_harris`, `yosemite_harris` or `liberty_harris`

Bug fixes:
- Pre-trained densenet models are now CPU compatible 251

Breaking changes:
This version also introduced some breaking changes:
- The `SVHN` dataset has now been made consistent with other datasets by making the label for the digit 0 be 0, instead of 10 (as it was previously) (see 194 for more details)
- the `labels` for the unlabelled `STL10` dataset are now an array filled with `-1`
- the order of the input args to the deprecated `Scale` transform has changed from `(width, height)` to `(height, width)` to be consistent with other transforms
