Torch-xla

Latest version: v2.3.0


2.3.0

Highlights
We are excited to announce the release of PyTorch XLA 2.3! PyTorch/XLA 2.3 offers experimental support for SPMD auto-sharding on a single TPU host, which lets users shard their models on TPU with a single config change. We also add experimental support for Pallas custom kernels for inference, which enables users to take advantage of popular custom kernels such as flash attention and paged attention on TPU.
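
For orientation, here is a minimal sketch of how these two features are typically enabled; `xr.use_spmd(auto=True)` and the `torch_xla.experimental.custom_kernel.flash_attention` path are assumptions based on the 2.3 docs and should be verified against your installed version:

```python
import torch
import torch_xla.core.xla_model as xm
import torch_xla.runtime as xr
from torch_xla.experimental.custom_kernel import flash_attention  # assumed path

# SPMD auto-sharding on a single TPU host (assumes use_spmd accepts auto=True).
xr.use_spmd(auto=True)

# Pallas flash attention kernel; shapes are (batch, heads, seq_len, head_dim).
device = xm.xla_device()
q = torch.randn(1, 2, 128, 64, device=device)
k = torch.randn(1, 2, 128, 64, device=device)
v = torch.randn(1, 2, 128, 64, device=device)
out = flash_attention(q, k, v)  # runs the custom kernel on TPU
```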

Stable Features
PJRT
- Experimental GPU PJRT Plugin ([#6240](https://github.com/pytorch/xla/pull/6240))
- Define PJRT plugin interface in C++ ([#6360](https://github.com/pytorch/xla/pull/6360))
- Add limit to max inflight TPU computations ([#6533](https://github.com/pytorch/xla/pull/6533))
- Remove TPU_C_API device type ([#6435](https://github.com/pytorch/xla/pull/6435))

GSPMD
- Introduce global mesh ([#6498](https://github.com/pytorch/xla/pull/6498))
- Introduce xla_distribute_module for DTensor integration ([#6683](https://github.com/pytorch/xla/pull/6683))

Torch Compile
- Support activation sharding within torch.compile ([#6524](https://github.com/pytorch/xla/pull/6524))
- Do not cache FX input args in dynamo bridge to avoid memory leak ([#6553](https://github.com/pytorch/xla/pull/6553))
- Ignore non-XLA nodes and their direct dependents ([#6170](https://github.com/pytorch/xla/pull/6170))

Export
- Support implicit broadcasting with unbounded dynamism ([#6219](https://github.com/pytorch/xla/pull/6219))
- Support multiple StableHLO Composite outputs ([#6295](https://github.com/pytorch/xla/pull/6295))
- Add dynamism support for add ([#6443](https://github.com/pytorch/xla/pull/6443))
- Enable unbounded dynamism on conv, softmax, addmm, slice ([#6494](https://github.com/pytorch/xla/pull/6494))
- Handle constant variable ([#6510](https://github.com/pytorch/xla/pull/6510))

Beta Features
CoreAtenOpSet
Support all Core Aten Ops used by `torch.export`
- Lower reflection_pad1d, reflection_pad1d_backward, reflection_pad3d, and reflection_pad3d_backward ([#6588](https://github.com/pytorch/xla/pull/6588))
- Lower replication_pad3d and replication_pad3d_backward ([#6566](https://github.com/pytorch/xla/pull/6566))
- Lower the embedding op ([#6495](https://github.com/pytorch/xla/pull/6495))
- Lower _pdist_forward ([#6507](https://github.com/pytorch/xla/pull/6507))
- Support mixed precision for torch.where ([#6303](https://github.com/pytorch/xla/pull/6303))

Benchmark
- Unify PyTorch/XLA and PyTorch torchbench model configuration using the same [torchbench.yaml](https://github.com/pytorch/pytorch/blob/c797fbc4e1a5829c51630e72e8f55ae67a11cc16/benchmarks/dynamo/torchbench.yaml) ([#6881](https://github.com/pytorch/xla/pull/6881/))
- Align model data precision settings with the [PyTorch HUD](https://hud.pytorch.org/benchmark/compilers) ([#6447](https://github.com/pytorch/xla/pull/6447/), [#6518](https://github.com/pytorch/xla/pull/6518), [#6555](https://github.com/pytorch/xla/pull/6550))
- Fix some torchbench model configurations to make them runnable with XLA ([#6509](https://github.com/pytorch/xla/pull/6509), [#6542](https://github.com/pytorch/xla/pull/6542), [#6558](https://github.com/pytorch/xla/pull/6558), [#6612](https://github.com/pytorch/xla/pull/6612))

FSDP via SPMD
- Make FSDPv2 use the global mesh API ([#6500](https://github.com/pytorch/xla/pull/6500))
- Enable auto-wrapping ([#6499](https://github.com/pytorch/xla/pull/6499))

Distributed Checkpoint
- Add process group documentation for SPMD ([#6469](https://github.com/pytorch/xla/pull/6469))

Usability
- Support `torch_xla.device` ([#6571](https://github.com/pytorch/xla/pull/6571))
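
A minimal sketch of the new helper, assuming `torch_xla.device()` resolves to the default XLA device:

```python
import torch
import torch_xla

# Assumes torch_xla.device() returns the default XLA device (e.g. a TPU core).
t = torch.randn(2, 2, device=torch_xla.device())
print(t.device)
```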

GPU
- Fix global_device_count() and local_device_count() for a single process on CUDA ([#6022](https://github.com/pytorch/xla/pull/6022))
- Automatically use XLA:GPU if on a GPU machine ([#6605](https://github.com/pytorch/xla/pull/6605))
- Add SPMD on GPU instructions ([#6684](https://github.com/pytorch/xla/pull/6684))
- Build XLA:GPU as a separate plugin ([#6825](https://github.com/pytorch/xla/pull/6825))

Distributed
- Support tensor bucketing for all-gather and reduce-scatter for ZeRO1 ([#6025](https://github.com/pytorch/xla/pull/6025))

Experimental Features
Pallas
- Introduce Flash Attention kernel using Pallas ([#6827](https://github.com/pytorch/xla/pull/6827))
- Support Flash Attention kernel with causal mask ([#6837](https://github.com/pytorch/xla/pull/6837))
- Support Flash Attention kernel with `torch.compile` ([#6875](https://github.com/pytorch/xla/pull/6875))
- Support Pallas kernel ([#6340](https://github.com/pytorch/xla/pull/6340))
- Support programmatically extracting the payload from Pallas kernel ([#6696](https://github.com/pytorch/xla/pull/6696))
- Support Pallas kernel with `torch.compile` ([#6477](https://github.com/pytorch/xla/pull/6477))
- Introduce helper to convert Pallas kernel to PyTorch/XLA callable ([#6713](https://github.com/pytorch/xla/pull/6713))

GSPMD Auto-Sharding
- Support auto-sharding for single host TPU ([6719](https://github.com/pytorch/xla/pull/6719))
- Auto construct auto-sharding mesh ids ([6770](https://github.com/pytorch/xla/pull/6770))

Input Output Aliasing
- Support torch.compile for `dynamo_set_buffer_donor`
- Use XLA’s new API to alias graph input and output ([6855](https://github.com/pytorch/xla/pull/6855))

While Loop
- Support `torch._higher_order_ops.while_loop` with simple examples ([6532](https://github.com/pytorch/xla/pull/6532), [#6603](https://github.com/pytorch/xla/pull/6603))
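
A minimal sketch of the experimental support, following the simple counter-style examples in the linked PRs; the exact constraints on the carried inputs may differ by release:

```python
import torch
import torch_xla.core.xla_model as xm
from torch._higher_order_ops.while_loop import while_loop

device = xm.xla_device()

def cond_fn(i, x):
    return i < 10          # keep looping while the counter is below the bound

def body_fn(i, x):
    return i + 1, x + 1.0  # increment the counter and update the carried tensor

init_i = torch.tensor(0, device=device)
init_x = torch.zeros(4, device=device)
final_i, final_x = while_loop(cond_fn, body_fn, (init_i, init_x))
```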

Bug Fixes and Improvements
- Propagate requires_grad over to AllReduce output ([#6326](https://github.com/pytorch/xla/pull/6326))
- Avoid fallback for avg_pool ([#6409](https://github.com/pytorch/xla/pull/6409))
- Fix output tensor shape for argmin and argmax where keepdim=True and dim=None ([#6536](https://github.com/pytorch/xla/pull/6536))
- Fix preserve_rng_state for activation checkpointing ([#4690](https://github.com/pytorch/xla/pull/4690))
- Allow int data type for Embedding indices ([#6718](https://github.com/pytorch/xla/pull/6718))
- Don't terminate the whole process when compilation fails ([#6707](https://github.com/pytorch/xla/pull/6707))
- Fix an incorrect assert on frame count for PT_XLA_DEBUG=1 ([#6466](https://github.com/pytorch/xla/pull/6466))
- Refactor nms into the TorchVision variant ([#6814](https://github.com/pytorch/xla/pull/6814))

2.2.0

Cloud TPUs now support the [PyTorch 2.2 release](https://github.com/pytorch/pytorch/releases), via PyTorch/XLA integration. On top of the underlying improvements and bug fixes in the PyTorch 2.2 release, this release introduces several features and PyTorch/XLA-specific bug fixes.

Installing PyTorch and PyTorch/XLA 2.2.0 wheel:


pip install torch~=2.2.0 torch_xla[tpu]~=2.2.0 -f https://storage.googleapis.com/libtpu-releases/index.html

Please note that you might have to re-install libtpu on your TPU VM, depending on your previous installation:

pip install torch_xla[tpu] -f https://storage.googleapis.com/libtpu-releases/index.html


* Note: If you encounter the error `RuntimeError: operator torchvision::nms does not exist` when using torchvision in the 2.2.0 docker image, please run the following command to fix the issue:

pip uninstall torch -y; pip install torch==2.2.0


Stable Features
PJRT
- `PJRT_DEVICE=GPU` has been renamed to `PJRT_DEVICE=CUDA` (https://github.com/pytorch/xla/pull/5754).
- `PJRT_DEVICE=GPU` will be removed in the 2.3 release.
- Optimize **Host to Device** transfer (https://github.com/pytorch/xla/pull/5772) and device to host transfer (https://github.com/pytorch/xla/pull/5825).
- Miscellaneous low-level refactoring and performance improvements ([5799](https://github.com/pytorch/xla/pull/5799), [#5737](https://github.com/pytorch/xla/pull/5737), [#5794](https://github.com/pytorch/xla/pull/5794), [#5793](https://github.com/pytorch/xla/pull/5793), [#5546](https://github.com/pytorch/xla/pull/5546)).

Beta Features
GSPMD
- Support **DTensor API** integration and move GSPMD out of experimental ([5776](https://github.com/pytorch/xla/pull/5776)).
- Enable debug visualization func `visualize_tensor_sharding` ([5742](https://github.com/pytorch/xla/pull/5742)), added [doc](https://github.com/pytorch/xla/blob/master/docs/spmd.md#spmd-debugging-tool).
- Support `mark_shard` scalar tensors ([6158](https://github.com/pytorch/xla/pull/6158)).
- Add `apply_backward_optimization_barrier` ([6157](https://github.com/pytorch/xla/pull/6157)).

Export
- Handled lifted constants in torch export (https://github.com/pytorch/xla/pull/6111).
- Run decomp before processing (https://github.com/pytorch/xla/pull/5713).
- Support export to `tf.saved_model` for models with unused params (https://github.com/pytorch/xla/pull/5694).
- Add an option to not save the weights ([5964](https://github.com/pytorch/xla/pull/5964)).
- Experimental support for dynamic dimension sizes in torch export to StableHLO ([5790](https://github.com/pytorch/xla/pull/5790), [openxla/xla#6897](https://github.com/openxla/xla/pull/6897)).

CoreAtenOpSet
- PyTorch/XLA aims to support all PyTorch core ATen ops in the 2.3 release. We’re actively working on this; the remaining issues to be closed can be found in the [issue list](https://github.com/pytorch/xla/issues?q=is%3Aopen+is%3Aissue+label%3A%22core+aten+opset%22).

Benchmark
- Support of benchmark running automation and metric report analysis on both TPU and GPU ([doc](https://github.com/pytorch/xla/blob/r2.2/benchmarks/README.md)).

Experimental Features
FSDP via SPMD
- Introduce **FSDP** via **SPMD**, or **FSDPv2** ([6187](https://github.com/pytorch/xla/pull/6187)). The RFC can be found in [#6379](https://github.com/pytorch/xla/issues/6379).
- Add **FSDPv2** user guide ([6386](https://github.com/pytorch/xla/pull/6386)).

Distributed Op
- Support **all-gather** coalescing (https://github.com/pytorch/xla/pull/5950).
- Support **reduce-scatter** coalescing (https://github.com/pytorch/xla/pull/5956).

Persistent Compilation
- Enable persistent compilation caching (https://github.com/pytorch/xla/pull/6065).
- Document and introduce `xr.initialize_cache` python API (https://github.com/pytorch/xla/pull/6046).
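
A minimal sketch of enabling the persistent cache with the new API; the cache directory is illustrative:

```python
import torch_xla.runtime as xr

# Persist compiled programs across process restarts; readonly=False allows this
# process to both read existing cache entries and write new ones.
xr.initialize_cache('/tmp/xla_compile_cache', readonly=False)
```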

Checkpointing
- Support auto checkpointing for TPU preemption (https://github.com/pytorch/xla/pull/5753).
- Support **Async** checkpointing through **CheckpointManager** (https://github.com/pytorch/xla/pull/5697).
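
A minimal sketch of asynchronous checkpointing through CheckpointManager; the constructor arguments and method names follow the distributed-checkpoint docs and should be treated as assumptions:

```python
import torch
import torch_xla.core.xla_model as xm
import torch_xla.experimental.distributed_checkpoint as xc

# Assumed API: CheckpointManager(path, save_interval) with save/save_async/restore.
ckpt_mgr = xc.CheckpointManager('/tmp/checkpoints', save_interval=10)

model = torch.nn.Linear(16, 16).to(xm.xla_device())
state_dict = {'model': model.state_dict()}

for step in range(100):
    # ... training work would go here ...
    # save_async returns quickly and writes eligible steps (per save_interval)
    # from a background thread.
    ckpt_mgr.save_async(step, state_dict)
```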

Usability
- Document Compilation/Execution analysis (https://github.com/pytorch/xla/pull/6039).
- Add profiler API for async capture (https://github.com/pytorch/xla/pull/5969).

Quantization
- Lower **quant/dequant** torch op to StableHLO (https://github.com/pytorch/xla/pull/5763).

GPU
- Document **multihost** gpu training (https://github.com/pytorch/xla/pull/5704).
- Support **multinode** training via `torchrun` (https://github.com/pytorch/xla/pull/5657).

Bug Fixes and Improvements
- Pow precision issue (https://github.com/pytorch/xla/pull/6103).
- Handle negative dim for **Diagonal Scatter** (https://github.com/pytorch/xla/pull/6123).
- Fix `as_strided` for inputs smaller than the arguments specification (https://github.com/pytorch/xla/pull/5914).
- Fix **squeeze** op lowering issue when dim is not in sorted order (https://github.com/pytorch/xla/pull/5751).
- Optimize **RNG seed dtype** for better memory utilization (https://github.com/pytorch/xla/pull/5710).

Lowering
- `_prelu_kernel_backward` (https://github.com/pytorch/xla/pull/5724).

2.1.0

Cloud TPUs now support the [PyTorch 2.1 release](https://github.com/pytorch/pytorch/releases), via PyTorch/XLA integration. On top of the underlying improvements and bug fixes in the PyTorch 2.1 release, this release introduces several features and PyTorch/XLA-specific bug fixes.

PJRT is now PyTorch/XLA's officially supported runtime! PJRT brings improved performance, superior usability, and broader device support. PyTorch/XLA r2.1 will be the last release with XRT available as a legacy runtime. Our main release build will not include XRT, but it will be available in a separate package. In most cases, we expect the migration to PJRT to require minimal changes. For more information, see our [PJRT documentation](https://github.com/pytorch/xla/blob/r2.1/docs/pjrt.md).

GSPMD support has been added as an experimental feature to the PyTorch/XLA 2.1 release. GSPMD transforms a single-device program into a partitioned one with the proper collectives, based on user-provided sharding hints. This feature allows developers to write PyTorch programs as if they were on a single large device, without any custom sharded computation ops and/or collective communications, and scale them out. We published a [blog post](https://pytorch.org/blog/pytorch-xla-spmd/) explaining the technical details and expected usage; you can also find more detail in this [user guide](https://github.com/pytorch/xla/blob/r2.1/docs/spmd.md).
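
As a rough illustration of the sharding-hint workflow described above, here is a minimal sketch based on the r2.1 SPMD user guide; the `XLA_USE_SPMD` flag and the `torch_xla.experimental.xla_sharding` module path are assumptions to verify against that guide:

```python
import os
os.environ['XLA_USE_SPMD'] = '1'  # assumed flag to enable SPMD mode in r2.1

import numpy as np
import torch
import torch_xla.core.xla_model as xm
import torch_xla.runtime as xr
import torch_xla.experimental.xla_sharding as xs
from torch_xla.experimental.xla_sharding import Mesh

# Build a 2D logical mesh over all addressable runtime devices.
num_devices = xr.global_runtime_device_count()
mesh = Mesh(np.arange(num_devices), (num_devices, 1))

# Sharding hint: split dim 0 across mesh axis 0, replicate dim 1 on axis 1.
t = torch.randn(16, 128).to(xm.xla_device())
xs.mark_sharding(t, mesh, (0, 1))
```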

PyTorch/XLA has transitioned from depending on TensorFlow to depending on the new OpenXLA repo. This allows us to reduce our binary size and simplify our build system. Starting from 2.1, PyTorch/XLA releases its TPU wheels on [PyPI](https://pypi.org/project/torch-xla/).

To install PyTorch/XLA 2.1.0 wheels, please find the installation instructions below.

Installing PyTorch and PyTorch/XLA 2.1.0 wheel:

pip install torch~=2.1.0 torch_xla[tpu]~=2.1.0 -f https://storage.googleapis.com/libtpu-releases/index.html


Please note that you might have to re-install libtpu on your TPU VM, depending on your previous installation:

pip install torch_xla[tpu] -f https://storage.googleapis.com/libtpu-releases/index.html


Stable Features
OpenXLA
* Migrate to pulling XLA from OpenXLA instead of TensorFlow; the TensorFlow pin dependency has been sunset ([5202](https://github.com/pytorch/xla/pull/5202))
* Instructions to build PyTorch/XLA with OpenXLA can be found in [this doc](https://github.com/pytorch/xla/blob/r2.1/CONTRIBUTING.md#building-manually).
PjRt Runtime
* Move PJRT APIs from experimental to `torch_xla.runtime` ([5011](https://github.com/pytorch/xla/pull/5011))
* Enable PJRT C API Client and other changes for Neuron ([5428](https://github.com/pytorch/xla/pull/5428))
* Enable PJRT C API Client for Intel XPU ([4891](https://github.com/pytorch/xla/pull/4891))
* Change pjrt:// init method to xla:// ([5560](https://github.com/pytorch/xla/pull/5560))
* Make TPU detection more robust ([5271](https://github.com/pytorch/xla/pull/5271))
* Add runtime.host_index ([5283](https://github.com/pytorch/xla/pull/5283))
Functionalization
* Functionalization integration ([4158](https://github.com/pytorch/xla/pull/4158))
* Add support for XLA_DISABLE_FUNCTIONALIZATION flag ([4792](https://github.com/pytorch/xla/pull/4792))
Improvements and additions
* Op Lowering
  * squeeze_copy.dims ([5286](https://github.com/pytorch/xla/pull/5286))
  * native_dropout ([5643](https://github.com/pytorch/xla/pull/5643))
  * native_dropout_backward ([5642](https://github.com/pytorch/xla/pull/5642))
  * count_nonzero ([5137](https://github.com/pytorch/xla/pull/5137))
* Build System
  * Migrate the build system to Bazel ([4528](https://github.com/pytorch/xla/pull/4528))

Beta Features
AMP (Automatic Mixed Precision)
* Added bfloat16 support on TPUs. ([5161](https://github.com/pytorch/xla/pull/5161))
* Documentation can be found in [amp.md](https://github.com/pytorch/xla/blob/r2.1/docs/amp.md)

TorchDynamo
* Support CPU eager fallback in Dynamo bridge ([5000](https://github.com/pytorch/xla/pull/5000))
* Support `torch.compile` with SPMD for inference ([5002](https://github.com/pytorch/xla/pull/5002))
* Update the dynamo backend name to `openxla` and `openxla_eval` ([5402](https://github.com/pytorch/xla/pull/5402))
* Inference optimization for SPMD inference + `torch.compile` ([5447](https://github.com/pytorch/xla/pull/5447), [#5446](https://github.com/pytorch/xla/pull/5446))
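
For reference, here is a minimal sketch of compiling an XLA-resident module with the renamed Dynamo backend; the model and shapes are illustrative only:

```python
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()
model = torch.nn.Linear(128, 10).to(device)

# 'openxla' is the general backend name; 'openxla_eval' targets inference-only graphs.
compiled_model = torch.compile(model, backend='openxla')

x = torch.randn(32, 128, device=device)
out = compiled_model(x)
```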

Traceable Collectives
* Adopts traceable `all_reduce` ([4915](https://github.com/pytorch/xla/pull/4915))
* Make xm.all_gather a single graph in Dynamo ([4922](https://github.com/pytorch/xla/pull/4922))

Experimental Features
GSPMD
* Add SPMD [user guide](https://github.com/pytorch/xla/blob/r2.1/docs/spmd.md)
* Enable Input-output aliasing ([5320](https://github.com/pytorch/xla/pull/5320))
* Introduce `global_runtime_device_count` to query the runtime device count ([5129](https://github.com/pytorch/xla/pull/5129))
* Support partial replication ([5411](https://github.com/pytorch/xla/pull/5411))
* Support tuple partition spec ([5488](https://github.com/pytorch/xla/pull/5488))
* Support mark_sharding on IRs ([5301](https://github.com/pytorch/xla/pull/5301))
* Make IR sharding custom sharding op ([5433](https://github.com/pytorch/xla/pull/5433))
* Introduce Hybrid Device mesh creation ([5147](https://github.com/pytorch/xla/pull/5147))
* Introduce SPMD-friendly patched nn.Linear ([5491](https://github.com/pytorch/xla/pull/5491))
* Allow dumping post optimizations HLO ([5302](https://github.com/pytorch/xla/pull/5302))
* Allow sharding n-d tensor on (n+1)-d Mesh ([5268](https://github.com/pytorch/xla/pull/5268))
* Support synchronous distributed checkpointing ([5130](https://github.com/pytorch/xla/pull/5130), [#5170](https://github.com/pytorch/xla/pull/5170))

Serving Support
* SavedModel
  * Added a script stablehlo-to-saved-model ([5493](https://github.com/pytorch/xla/pull/5493))
  * Docs: https://github.com/pytorch/xla/blob/r2.1/docs/stablehlo.md#convert-saved-stablehlo-for-serving

StableHLO
* Add StableHLO user guide ([5523](https://github.com/pytorch/xla/pull/5523))
* Add save_as_stablehlo and save_torch_model_as_stablehlo APIs ([5493](https://github.com/pytorch/xla/pull/5493))
* Make StableHLO executable ([5476](https://github.com/pytorch/xla/pull/5476))
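
A minimal sketch of the two new export entry points; the signatures (`save_torch_model_as_stablehlo(model, args, path)`, `save_as_stablehlo(exported_program, path)`) follow the StableHLO guide and should be double-checked against it:

```python
import torch
from torch.export import export
from torch_xla.stablehlo import save_as_stablehlo, save_torch_model_as_stablehlo

model = torch.nn.Linear(8, 2).eval()
sample_args = (torch.randn(4, 8),)

# One-step helper: trace the model and write StableHLO artifacts to a directory.
save_torch_model_as_stablehlo(model, sample_args, '/tmp/linear_stablehlo')

# Or export explicitly first, then serialize the resulting ExportedProgram.
exported = export(model, sample_args)
save_as_stablehlo(exported, '/tmp/linear_stablehlo_exported')
```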

Ongoing Development
TorchDynamo
* Enable single step graph for training
* Avoid inter-graph reshapes from aot_autograd
* Support GSPMD for activation checkpointing

GSPMD
* Support auto-sharding
* Benchmark and improving GSPMD for XLA:GPU
* Integrating to PyTorch’s Distributed Tensor API

GPU
* Support Multi-host GPU for PJRT runtime
* Improve performance on torchbench models

Quantization
* Support PyTorch PT2E quantization workflow

Bug Fixes and Improvements
* Fix unexpected Dynamo crash due to `clear_pending_ir` call ([5582](https://github.com/pytorch/xla/pull/5582))
* Fix FSDP for Models with Frozen Weights ([5484](https://github.com/pytorch/xla/pull/5484))
* Fix data type in Pow with Scalar base and Tensor exponent ([5467](https://github.com/pytorch/xla/pull/5467))
* Fix the inplace op crash when applied on self tensors in dynamo ([5309](https://github.com/pytorch/xla/pull/5309))

2.0.0

Cloud TPUs now support the [PyTorch 2.0 release](https://github.com/pytorch/pytorch/releases), via PyTorch/XLA integration. On top of the underlying improvements and bug fixes in PyTorch's 2.0 release, this release introduces several features, and PyTorch/XLA specific bug fixes.

Beta Features
PJRT runtime
* Check out our newest [document](https://github.com/pytorch/xla/blob/r2.0/docs/pjrt.md); PJRT is the default runtime in 2.0.
* New implementation of xm.rendezvous with XLA collective communication, which scales better ([4181](https://github.com/pytorch/xla/pull/4181))
* New PJRT TPU backend through the C-API ([4077](https://github.com/pytorch/xla/pull/4077))
* Default to PJRT if no runtime is configured ([4599](https://github.com/pytorch/xla/pull/4599))
* Experimental support for torch.distributed and DDP on TPU v2 and v3 ([4520](https://github.com/pytorch/xla/pull/4520))

FSDP
* Add auto_wrap_policy into XLA FSDP for automatic wrapping ([4318](https://github.com/pytorch/xla/pull/4318))
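
A minimal sketch of automatic wrapping with XLA FSDP; the `size_based_auto_wrap_policy` import path and the threshold follow the FSDP README and are assumptions to verify:

```python
from functools import partial

import torch
import torch_xla.core.xla_model as xm
from torch_xla.distributed.fsdp import XlaFullyShardedDataParallel as FSDP
from torch_xla.distributed.fsdp.wrap import size_based_auto_wrap_policy  # assumed path

model = torch.nn.Sequential(
    torch.nn.Linear(1024, 1024),
    torch.nn.ReLU(),
    torch.nn.Linear(1024, 1024),
).to(xm.xla_device())

# Automatically wrap submodules whose parameter count exceeds the threshold,
# so each wrapped unit is sharded separately.
policy = partial(size_based_auto_wrap_policy, min_num_params=10**6)
fsdp_model = FSDP(model, auto_wrap_policy=policy)
```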

Stable Features
Lazy Tensor Core Migration
* Migration is completed; check out this [dev discussion](https://dev-discuss.pytorch.org/t/pytorch-xla-2022-q4-dev-update/961) for more detail.
* Naively inherits LazyTensor ([4271](https://github.com/pytorch/xla/pull/4271))
* Adopt even more LazyTensor interfaces ([4317](https://github.com/pytorch/xla/pull/4317))
* Introduce XLAGraphExecutor ([4270](https://github.com/pytorch/xla/pull/4270))
* Inherits LazyGraphExecutor ([4296](https://github.com/pytorch/xla/pull/4296))
* Adopt more LazyGraphExecutor virtual interfaces ([4314](https://github.com/pytorch/xla/pull/4314))
* Rollback to use xla::Shape instead of torch::lazy::Shape ([4111](https://github.com/pytorch/xla/pull/4111))
* Use TORCH_LAZY_COUNTER/METRIC ([4208](https://github.com/pytorch/xla/pull/4208))

Improvements & Additions
* Add an option to increase the worker thread efficiency for data loading ([4727](https://github.com/pytorch/xla/pull/4727))
* Improve numerical stability of torch.sigmoid ([4311](https://github.com/pytorch/xla/pull/4311))
* Add an API to clear counters and metrics ([4109](https://github.com/pytorch/xla/pull/4109))
* Add met.short_metrics_report to display a more concise metrics report ([4148](https://github.com/pytorch/xla/pull/4148)); see the sketch after this list
* Document environment variables ([4273](https://github.com/pytorch/xla/pull/4273))
* Op Lowering
  * _linalg_svd ([4537](https://github.com/pytorch/xla/pull/4537))
  * Upsample_bilinear2d with scale ([4464](https://github.com/pytorch/xla/pull/4464))
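
A minimal sketch of the metrics helpers mentioned above; `met.clear_all()` is assumed to be the clearing API referenced in the bullet:

```python
import torch
import torch_xla.core.xla_model as xm
import torch_xla.debug.metrics as met

device = xm.xla_device()
y = torch.randn(4, 4, device=device) @ torch.randn(4, 4, device=device)
xm.mark_step()  # materialize the pending lazy graph so counters/metrics are recorded

print(met.short_metrics_report())  # concise summary instead of the full report
met.clear_all()                    # reset counters and metrics between experiments
```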

Experimental Features
TorchDynamo (torch.compile) support
* Check out our newest [doc](https://github.com/pytorch/xla/blob/r2.0/docs/dynamo.md).
* Dynamo bridge python binding ([4119](https://github.com/pytorch/xla/pull/4119))
* Dynamo bridge backend implementation ([4523](https://github.com/pytorch/xla/pull/4523))
* Training optimization: make execution async ([4425](https://github.com/pytorch/xla/pull/4425))
* Training optimization: reduce graph execution per step ([4523](https://github.com/pytorch/xla/pull/4523))

PyTorch/XLA GSPMD on single host
* Preserve parameter sharding with sharded data placeholder ([4721](https://github.com/pytorch/xla/pull/4721))
* Transfer shards from server to host ([4508](https://github.com/pytorch/xla/pull/4508))
* Store the sharding annotation within XLATensor ([4390](https://github.com/pytorch/xla/pull/4390))
* Use d2d replication for more efficient input sharding ([4336](https://github.com/pytorch/xla/pull/4336))
* Mesh to support custom device order ([4162](https://github.com/pytorch/xla/pull/4162))
* Introduce virtual SPMD device to avoid unpartitioned data transfer ([4091](https://github.com/pytorch/xla/pull/4091))

Ongoing development
Ongoing Dynamic Shape implementation
* Implement missing `XLASymNodeImpl::Sub` ([4551](https://github.com/pytorch/xla/pull/4551))
* Make empty_symint support dynamism. ([4550](https://github.com/pytorch/xla/pull/4550))
* Add dynamic shape support to SigmoidBackward ([4322](https://github.com/pytorch/xla/pull/4322))
* Add a forward pass NN model with dynamism test ([4256](https://github.com/pytorch/xla/pull/4256))
Ongoing SPMD multi-host execution ([4573](https://github.com/pytorch/xla/pull/4573))

Bug fixes & improvements
* Support int as index type ([4602](https://github.com/pytorch/xla/pull/4602))
* Only alias inputs and outputs when force_ltc_sync == True ([4575](https://github.com/pytorch/xla/pull/4575))
* Fix race condition between execution and buffer tear down on GPU when using bfc_allocator ([4542](https://github.com/pytorch/xla/pull/4542))
* Release the GIL during TransferFromServer ([4504](https://github.com/pytorch/xla/pull/4504))
* Fix type annotations in FSDP ([4371](https://github.com/pytorch/xla/pull/4371))

1.13.0

Cloud TPUs now support the [PyTorch 1.13 release](https://github.com/pytorch/pytorch/releases), via PyTorch/XLA integration. The release has daily automated testing for the supported models: [Torchvision ResNet](https://cloud.google.com/tpu/docs/tutorials/resnet-pytorch), [FairSeq Transformer](https://cloud.google.com/tpu/docs/tutorials/transformer-pytorch) and [RoBERTa](https://cloud.google.com/tpu/docs/tutorials/roberta-pytorch), [HuggingFace GLUE and LM](https://github.com/huggingface/transformers), and [Facebook Research DLRM](https://cloud.google.com/tpu/docs/tutorials/pytorch-dlrm).

On top of the underlying improvements and bug fixes in PyTorch's 1.13 release, this release adds several features and PyTorch/XLA-specific bug fixes.

New Features
- GPU enhancement
  - Add upsample_nearest/bilinear implementation for CPU and GPU ([3990](https://github.com/pytorch/xla/pull/3990))
  - Set three_fry as the default RNG for GPU ([3951](https://github.com/pytorch/xla/pull/3951))
- FSDP enhancement
  - Allow FSDP wrapping and sharding over modules on CPU devices ([3992](https://github.com/pytorch/xla/pull/3992))
  - Support param sharding dim and pinning memory ([3830](https://github.com/pytorch/xla/pull/3830))
- Lower torch::einsum using xla::einsum, which provides a significant speedup ([3843](https://github.com/pytorch/xla/pull/3843))
- Support large models with >3200 graph inputs on TPU + PJRT ([3920](https://github.com/pytorch/xla/pull/3920))

Experimental Features
- PJRT experimental support on Cloud TPU v4
  - Check the instructions and example code [here](https://github.com/pytorch/xla/blob/r1.13/docs/pjrt.md)
- DDP experimental support on Cloud TPU and GPU
  - Check the instructions, analysis, and example code [here](https://github.com/pytorch/xla/blob/r1.13/docs/ddp.md)

Ongoing development
- Ongoing Dynamic Shape implementation (POC completed)
- Ongoing SPMD implementation (POC completed)
- Ongoing LTC migration

Bug fixes and improvements
- Make XLA_HLO_DEBUG populate the scope metadata ([3985](https://github.com/pytorch/xla/pull/3985))

1.12.0

Cloud TPUs now support the [PyTorch 1.12 release](https://github.com/pytorch/pytorch/releases), via PyTorch/XLA integration. The release has daily automated testing for the supported models: [Torchvision ResNet](https://cloud.google.com/tpu/docs/tutorials/resnet-pytorch), [FairSeq Transformer](https://cloud.google.com/tpu/docs/tutorials/transformer-pytorch) and [RoBERTa](https://cloud.google.com/tpu/docs/tutorials/roberta-pytorch), [HuggingFace GLUE and LM](https://github.com/huggingface/transformers), and [Facebook Research DLRM](https://cloud.google.com/tpu/docs/tutorials/pytorch-dlrm).

On top of the underlying improvements and bug fixes in PyTorch's 1.12 release, this release adds several features and PyTorch/XLA-specific bug fixes.

New feature
- FSDP
  - Check the instructions and example code [here](https://github.com/pytorch/xla/blob/r1.12/torch_xla/distributed/fsdp/README.md)
  - FSDP support for PyTorch/XLA (https://github.com/pytorch/xla/pull/3431)
  - Bfloat16 and float16 support in FSDP (https://github.com/pytorch/xla/pull/3617)
  - PyTorch/XLA gradient checkpoint API (https://github.com/pytorch/xla/pull/3524); see the sketch after this list
  - `optimization_barrier`, which enables gradient checkpointing (https://github.com/pytorch/xla/pull/3482)
- Ongoing LTC migration
  - Device lock position optimization to speed up tracing (https://github.com/pytorch/xla/pull/3457)
- Experimental support for PJRT TPU client (https://github.com/pytorch/xla/pull/3550)
- Send/Recv CC op support (https://github.com/pytorch/xla/pull/3494)
- Performance profiling tool enhancement (https://github.com/pytorch/xla/pull/3498)
- TPU-V4 pod official support (https://github.com/pytorch/xla/pull/3440)
- Roll lowering (https://github.com/pytorch/xla/pull/3505)
- Celu, celu_, selu, selu_ lowering (https://github.com/pytorch/xla/pull/3547)
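
A minimal sketch of the gradient checkpointing API referenced in the FSDP bullets above, assuming the wrapper lives at `torch_xla.utils.checkpoint.checkpoint` as an XLA-friendly drop-in for `torch.utils.checkpoint.checkpoint`:

```python
import torch
import torch_xla.core.xla_model as xm
from torch_xla.utils.checkpoint import checkpoint  # assumed module path

device = xm.xla_device()
layer = torch.nn.Linear(256, 256).to(device)
x = torch.randn(8, 256, device=device, requires_grad=True)

# Recompute the layer's activations during backward instead of storing them; the
# XLA variant applies optimization_barrier so the recomputation is not optimized away.
out = checkpoint(layer, x)
out.sum().backward()
```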


Bug fixes and improvements
- Fixed a view bug that created unnecessary IR graphs (https://github.com/pytorch/xla/pull/3411)
