TensorFlow

Latest version: v2.18.0

2.19.0

TensorFlow

<INSERT SMALL BLURB ABOUT RELEASE FOCUS AREA AND POTENTIAL TOOLCHAIN CHANGES>

Breaking Changes

* <DOCUMENT BREAKING CHANGES HERE>
* <THIS SECTION SHOULD CONTAIN API, ABI AND BEHAVIORAL BREAKING CHANGES>

Known Caveats

* <CAVEATS REGARDING THE RELEASE (BUT NOT BREAKING CHANGES).>
* <ADDING/BUMPING DEPENDENCIES SHOULD GO HERE>
* <KNOWN LACK OF SUPPORT ON SOME PLATFORM, SHOULD GO HERE>

Major Features and Improvements

* <INSERT MAJOR FEATURE HERE, USING MARKDOWN SYNTAX>
* <IF RELEASE CONTAINS MULTIPLE FEATURES FROM SAME AREA, GROUP THEM TOGETHER>

Bug Fixes and Other Changes

* <SIMILAR TO ABOVE SECTION, BUT FOR OTHER IMPORTANT CHANGES / BUG FIXES>
* <IF A CHANGE CLOSES A GITHUB ISSUE, IT SHOULD BE DOCUMENTED HERE>
* <NOTES SHOULD BE GROUPED PER AREA>

Keras

<INSERT SMALL BLURB ABOUT RELEASE FOCUS AREA AND POTENTIAL TOOLCHAIN CHANGES>

Breaking Changes

* <DOCUMENT BREAKING CHANGES HERE>
* <THIS SECTION SHOULD CONTAIN API, ABI AND BEHAVIORAL BREAKING CHANGES>

Known Caveats

* <CAVEATS REGARDING THE RELEASE (BUT NOT BREAKING CHANGES).>
* <ADDING/BUMPING DEPENDENCIES SHOULD GO HERE>
* <KNOWN LACK OF SUPPORT ON SOME PLATFORM, SHOULD GO HERE>

Major Features and Improvements

* <INSERT MAJOR FEATURE HERE, USING MARKDOWN SYNTAX>
* <IF RELEASE CONTAINS MULTIPLE FEATURES FROM SAME AREA, GROUP THEM TOGETHER>

Bug Fixes and Other Changes

* <SIMILAR TO ABOVE SECTION, BUT FOR OTHER IMPORTANT CHANGES / BUG FIXES>
* <IF A CHANGE CLOSES A GITHUB ISSUE, IT SHOULD BE DOCUMENTED HERE>
* <NOTES SHOULD BE GROUPED PER AREA>

Thanks to our Contributors

This release contains contributions from many people at Google, as well as:

<INSERT>, <NAME>, <HERE>, <USING>, <GITHUB>, <HANDLE>

2.18.0

TensorFlow

Breaking Changes

* `tf.lite`
* C API:
* An optional fourth parameter was added to `TfLiteOperatorCreate` as a step toward a cleaner API for `TfLiteOperator`. `TfLiteOperatorCreate` was added recently, in TensorFlow Lite version 2.17.0 (released on 7/11/2024), so we do not expect much code to be using it yet. Any code breakage can be resolved by passing `nullptr` as the new fourth parameter.

* TensorRT support is disabled in CUDA builds for code health improvement.

* TensorFlow now supports and is compiled with NumPy 2.0 by default. Please see the [NumPy 2 release notes](https://numpy.org/doc/stable/release/2.0.0-notes.html) and the [NumPy 2 migration guide](https://numpy.org/devdocs/numpy_2_0_migration_guide.html#numpy-2-migration-guide).
* Note that NumPy's type promotion rules have changed (see [NEP 50](https://numpy.org/neps/nep-0050-scalar-promotion.html#nep50) for details). This may change the precision at which computations happen, leading either to type errors or to numerical changes in results; a small sketch of the difference follows below.
* TensorFlow will continue to support NumPy 1.26 until 2025, in line with the community-standard deprecation timeline described [here](https://scientific-python.org/specs/spec-0000/).
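
To make the NEP 50 difference above concrete, here is a minimal sketch, assuming NumPy 1.26 vs. NumPy 2.0 semantics as described in the linked notes; the exact warning emitted on overflow may vary.

```python
import numpy as np

x = np.float32(1.0)

# NumPy 1.x value-based promotion: 1e300 does not fit in float32, so the
# result is silently promoted to float64.
# NumPy 2 (NEP 50): the Python scalar is "weak", the result stays float32
# and overflows to inf (typically with a RuntimeWarning).
y = x + 1e300
print(y.dtype, y)
```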

* Hermetic CUDA support is added.

Hermetic CUDA uses a specific downloadable version of CUDA instead of the user’s locally installed CUDA. Bazel will download CUDA, cuDNN, and NCCL distributions, and then use the CUDA libraries and tools as dependencies in various Bazel targets. This enables more reproducible builds for Google ML projects and supported CUDA versions.

* Remove the `EnumNamesXNNPackFlags` function in `tensorflow/lite/acceleration/configuration/configuration_generated.h`.

This change is a bug fix in automatically generated code: the file is now produced by the new FlatBuffers generator. The flatbuffers library was updated to 24.3.25 in https://github.com/tensorflow/tensorflow/commit/c17d64df85a83c1bd0fd7dcc0b1230812b0d3d48; the new version includes https://github.com/google/flatbuffers/pull/7813, which fixed an underlying FlatBuffers code generator bug.


Known Caveats

Major Features and Improvements

* `tf.lite`:
* The LiteRT [repo](https://github.com/google-ai-edge/LiteRT) is live (see [announcement](https://developers.googleblog.com/en/tensorflow-lite-is-now-litert/)), which means that in the coming months there will be changes to the development experience for TFLite. The TF Lite Runtime source will be moved later this year, and sometime after that we will start accepting contributions through that repo.
* SignatureRunner is now supported for models with no signatures.

Bug Fixes and Other Changes

* `tf.data`
* Add optional `synchronous` argument to `map`, to specify that the `map` should run synchronously, as opposed to being parallelizable when `options.experimental_optimization.map_parallelization=True`. This saves memory compared to setting `num_parallel_calls=1`.
* Add optional `use_unbounded_threadpool` argument to `map`, to specify that the `map` should use an unbounded threadpool instead of the default pool that is sized by the number of cores on the machine. This can improve throughput for map functions that perform I/O or otherwise release the CPU. A usage sketch of both arguments follows this list.
* Add [`tf.data.experimental.get_model_proto`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/get_model_proto), which lets users peek into the analytical model inside a dataset iterator.
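
A minimal sketch of the `tf.data` additions above; the mapped lambdas are arbitrary, and passing the iterator directly to `get_model_proto` is an assumption based on the description rather than a confirmed signature.

```python
import tensorflow as tf

ds = tf.data.Dataset.range(1_000)

# Run this map inline (no parallelization), trading throughput for memory.
ds = ds.map(lambda x: x * 2, synchronous=True)

# Use an unbounded threadpool for maps that mostly wait on I/O.
ds = ds.map(lambda x: x + 1, use_unbounded_threadpool=True)

it = iter(ds)
next(it)

# Peek at the analytical (autotuning) model of the running iterator.
model_proto = tf.data.experimental.get_model_proto(it)
print(type(model_proto))
```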

* `tf.lite`
* `Dequantize` op supports `TensorType_INT4`.
* This change includes per-channel dequantization.
* Add support for `stablehlo.composite`.
* `EmbeddingLookup` op supports per-channel quantization and `TensorType_INT4` values.
* `FullyConnected` op supports `TensorType_INT16` activation and `TensorType_INT4` weight per-channel quantization.
* Enable per-tensor quantization support in dynamic range quantization of the `TRANSPOSE_CONV` layer. Fixes a TFLite converter [bug](https://github.com/tensorflow/tensorflow/issues/76624).

* `tf.tensor_scatter_update`, `tf.tensor_scatter_add`, and other reduce variants
* Support `bad_indices_policy`. A hedged usage sketch follows below.
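
A hedged sketch of `bad_indices_policy`: the `_nd_` op name used below and the `"IGNORE"` policy string are assumptions made for illustration, not confirmed by the note above.

```python
import tensorflow as tf

base = tf.zeros([4], dtype=tf.float32)

# Index 10 is out of range for a length-4 tensor. "IGNORE" is assumed to
# skip bad indices instead of raising an error.
result = tf.tensor_scatter_nd_update(
    base,
    indices=[[1], [10]],
    updates=[1.0, 2.0],
    bad_indices_policy="IGNORE",
)
print(result.numpy())  # expected: [0. 1. 0. 0.]
```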

Thanks to our Contributors

This release contains contributions from many people at Google, as well as:

Akhil Goel, akhilgoe, Alexander Pivovarov, Amir Samani, Andrew Goodbody, Andrey Portnoy, Anthony Platanios, bernardoArcari, Brett Taylor, buptzyb, Chao, Christian Clauss, Cocoa, Daniil Kutz, Darya Parygina, dependabot[bot], Dimitris Vardoulakis, Dragan Mladjenovic, Elfie Guo, eukub, Faijul Amin, flyingcat, Frédéric Bastien, ganyu.08, Georg Stefan Schmid, Grigory Reznikov, Harsha H S, Harshit Monish, Heiner, Ilia Sergachev, Jan, Jane Liu, Jaroslav Sevcik, Kaixi Hou, Kanvi Khanna, Kristof Maar, Kristóf Maár, LakshmiKalaKadali, Lbertho-Gpsw, lingzhi98, MarcoFalke, Masahiro Hiramori, Mmakevic-Amd, mraunak, Nobuo Tsukamoto, Notheisz57, Olli Lupton, Pearu Peterson, pemeliya, Peyara Nando, Philipp Hack, Phuong Nguyen, Pol Dellaiera, Rahul Batra, Ruturaj Vaidya, sachinmuradi, Sergey Kozub, Shanbin Ke, Sheng Yang, shengyu, Shraiysh, Shu Wang, Surya, sushreebarsa, Swatheesh-Mcw, syzygial, Tai Ly, terryysun, tilakrayal, Tj Xu, Trevor Morris, Tzung-Han Juang, wenchenvincent, wondertx, Xuefei Jiang, Ye Huang, Yimei Sun, Yunlong Liu, Zahid Iqbal, Zhan Lu, Zoranjovanovic-Ns, Zuri Obozuwa

2.17.1

Bug Fixes and Other Changes

* Add necessary header files in the AAR library. These are needed if developers build apps with header files unpacked from TFLite AAR files from Maven.
* Implement `Name()` for `GCSWritableFile` to fix profiler trace viewer cache file generation.
* Fix the `cstring.h` missing-file issue with the Libtensorflow archive.

2.17.0

TensorFlow

Breaking Changes

* GPU
* Support for NVIDIA GPUs with compute capability 5.x (Maxwell generation) has been removed from TF binary distributions (Python wheels).

Major Features and Improvements

* Add `is_cpu_target_available`, which indicates whether or not TensorFlow was built with support for a given CPU target. This can be useful for skipping target-specific tests if a target is not supported.

* `tf.data`
* Support `tf.data.experimental.distributed_save`. `distributed_save` uses the [tf.data service](https://www.tensorflow.org/api_docs/python/tf/data/experimental/service) to write distributed dataset snapshots. The call is non-blocking and returns without waiting for the snapshot to finish. Passing `wait=True` to `tf.data.Dataset.load` allows the snapshots to be read while they are being written; a usage sketch follows below.
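
A minimal sketch of the distributed-snapshot flow above; the dispatcher address, snapshot path, and positional argument order are assumptions for illustration.

```python
import tensorflow as tf

ds = tf.data.Dataset.range(1_000)

# Non-blocking: kicks off a distributed snapshot via the tf.data service.
tf.data.experimental.distributed_save(
    ds, "/tmp/snapshots/range_ds", "grpc://dispatcher:5050")

# With wait=True, the snapshot can be read while it is still being written.
loaded = tf.data.Dataset.load("/tmp/snapshots/range_ds", wait=True)
for x in loaded.take(3):
    print(x.numpy())
```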

Bug Fixes and Other Changes

* GPU
* Support for NVIDIA GPUs with compute capability 8.9 (e.g. L4 & L40) has been added to TF binary distributions (Python wheels).
* Replace `DebuggerOptions` of the TensorFlow Quantizer and migrate to `DebuggerConfig` of the StableHLO Quantizer.
* Add the TensorFlow-to-StableHLO converter to the TensorFlow pip package.
* TensorRT support: this is the last release supporting TensorRT. It will be removed in the next release.
* NumPy 2.0 support: TensorFlow is going to support NumPy 2.0 in the next release. It may break some edge cases of TensorFlow API usage.

* `tf.lite`
* Quantization for the `FullyConnected` layer is switched from per-tensor to per-channel scales for the dynamic range quantization use case (`float32` inputs/outputs and `int8` weights). The change enables the new quantization schema globally in the converter and inference engine. The new behavior can be disabled via the experimental flag `converter._experimental_disable_per_channel_quantization_for_dense_layers = True` (see the sketch after this list).
* C API:
* The experimental `TfLiteRegistrationExternal` type has been renamed as `TfLiteOperator`, and likewise for the corresponding API functions.
* The Python TF Lite Interpreter bindings now have an option `experimental_default_delegate_latest_features` to enable all default delegate features.
* Flatbuffer version update:
* `GetTemporaryPointer()` bug fixed.
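
A hedged sketch of the two TFLite Python knobs mentioned above; the SavedModel path is a placeholder, and treating `experimental_default_delegate_latest_features` as an `Interpreter` constructor argument is an assumption.

```python
import tensorflow as tf

# Dynamic range quantization; opt out of the new per-channel scheme for
# dense layers via the experimental flag named above.
converter = tf.lite.TFLiteConverter.from_saved_model("/tmp/my_saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter._experimental_disable_per_channel_quantization_for_dense_layers = True
tflite_model = converter.convert()

# Enable all default delegate features in the Python interpreter bindings.
interpreter = tf.lite.Interpreter(
    model_content=tflite_model,
    experimental_default_delegate_latest_features=True,
)
interpreter.allocate_tensors()
```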

* `tf.data`
* Add `wait` to `tf.data.Dataset.load`. If `True`, for snapshots written with `distributed_save`, it reads the snapshot while it is being written. For snapshots written with regular `save`, it waits until the snapshot is finished. The default is `False` for backward compatibility. Users of `distributed_save` are recommended to set it to `True`.

* `tf.tpu.experimental.embedding.TPUEmbeddingV2`
* Add `compute_sparse_core_stats` so that SparseCore users can profile their data to get `max_ids` and `max_unique_ids`. These numbers are needed to configure the SparseCore embedding mid-level API.
* Remove the `preprocess_features` method since that's no longer needed.

Thanks to our Contributors

This release contains contributions from many people at Google, as well as:

Abdulaziz Aloqeely, Ahmad-M-Al-Khateeb, Akhil Goel, akhilgoe, Alexander Pivovarov, Amir Samani, Andrew Goodbody, Andrey Portnoy, Ashiq Imran, Ben Olson, Chao, Chase Riley Roberts, Clemens Giuliani, dependabot[bot], Dimitris Vardoulakis, Dragan Mladjenovic, ekuznetsov139, Elfie Guo, Faijul Amin, Gauri1 Deshpande, Georg Stefan Schmid, guozhong.zhuang, Hao Wu, Haoyu (Daniel), Harsha H S, Harsha Hs, Harshit Monish, Ilia Sergachev, Jane Liu, Jaroslav Sevcik, Jinzhe Zeng, Justin Dhillon, Kaixi Hou, Kanvi Khanna, LakshmiKalaKadali, Learning-To-Play, lingzhi98, Lu Teng, Matt Bahr, Max Ren, Meekail Zain, Mmakevic-Amd, mraunak, neverlva, nhatle, Nicola Ferralis, Olli Lupton, Om Thakkar, orangekame3, ourfor, pateldeev, Pearu Peterson, pemeliya, Peng Sun, Philipp Hack, Pratik Joshi, prrathi, rahulbatra85, Raunak, redwrasse, Robert Kalmar, Robin Zhang, RoboSchmied, Ruturaj Vaidya, sachinmuradi, Shawn Wang, Sheng Yang, Surya, Thibaut Goetghebuer-Planchon, Thomas Preud'Homme, tilakrayal, Tj Xu, Trevor Morris, wenchenvincent, Yimei Sun, zahiqbal, Zhu Jianjiang, Zoranjovanovic-Ns

2.16.2

Bug Fixes and Other Changes

* Fixed: Incorrect dependency metadata in TensorFlow Python packages causing installation failures with certain package managers such as Poetry.

2.16.1

TensorFlow

* TensorFlow Windows Build:

* Clang is now the default compiler for building TensorFlow CPU wheels on the
Windows platform, starting with this release. The currently supported
version is LLVM/Clang 17. The official wheels published on PyPI will be
based on Clang; however, users retain the option to build wheels using
the MSVC compiler by following the steps at
https://www.tensorflow.org/install/source_windows, as has been the case
before.

Breaking Changes

* `tf.summary.trace_on` now takes a `profiler_outdir` argument. This must be
set if the `profiler` arg is set to `True`.

* `tf.summary.trace_export`'s `profiler_outdir` arg is now a no-op.
Enabling the profiler now requires setting `profiler_outdir` in
`trace_on` (see the sketch below).
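
A minimal sketch of the new tracing flow, assuming a local log directory; the traced function and writer setup are standard `tf.summary` usage rather than anything specific to this release.

```python
import tensorflow as tf

logdir = "/tmp/tb_logs"  # placeholder log directory
writer = tf.summary.create_file_writer(logdir)

# profiler_outdir must now be passed to trace_on; passing it to
# trace_export no longer has any effect.
tf.summary.trace_on(graph=True, profiler=True, profiler_outdir=logdir)

@tf.function
def square(x):
    return x * x

square(tf.constant(3.0))

with writer.as_default():
    tf.summary.trace_export(name="square_trace", step=0)
```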

* `tf.estimator`

* The tf.estimator API is removed.

* Keras 3.0 will be the default Keras version. You may need to update your
script to use Keras 3.0.

* Please refer to the new Keras documentation for Keras 3.0
(https://keras.io/keras_3).

* To continue using Keras 2.0, do the following (a short sketch follows this
list):

1. Install tf-keras via `pip install tf-keras~=2.16`.

2. To switch tf.keras to use Keras 2 (tf-keras), set the environment variable
`TF_USE_LEGACY_KERAS=1` directly, or in your Python program via
`import os; os.environ["TF_USE_LEGACY_KERAS"] = "1"`. Note that this will
set it for all packages in your Python runtime.

3. Change the Keras import: replace `import tensorflow.keras as keras` or
`import keras` with `import tf_keras as keras`.
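
A minimal sketch of step 2 above, assuming the environment variable must be set before TensorFlow is first imported for it to take effect.

```python
import os

# Must be set before importing TensorFlow so tf.keras resolves to tf-keras.
os.environ["TF_USE_LEGACY_KERAS"] = "1"

import tensorflow as tf

# With tf-keras installed, this is expected to report a 2.x version.
print(tf.keras.__version__)
```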
* **Apple Silicon users:** If you previously installed TensorFlow using
`pip install tensorflow-macos`, please update your installation method. Use
`pip install tensorflow` from now on.
* **Mac x86 users:** Mac x86 builds are being deprecated and will no longer be
released as a Pip package from TF 2.17 onwards.

Known Caveats

* Full aarch64 Linux and Arm64 macOS wheels are now published to the
`tensorflow` PyPI repository and no longer redirect to a separate package.

Major Features and Improvements

* Support for Python 3.12 has been added.
* [tensorflow-tpu](https://pypi.org/project/tensorflow-tpu/) package is now
available for easier TPU based installs.
* TensorFlow pip packages are now built with CUDA 12.3 and cuDNN 8.9.7.
* Added experimental support for float16 auto-mixed precision using the new AMX-FP16 instruction set on X86 CPUs.


Bug Fixes and Other Changes

* `tf.lite`
* Added support for `stablehlo.gather`.
* Added support for `stablehlo.add`.
* Added support for `stablehlo.multiply`.
* Added support for `stablehlo.maximum`.
* Added support for `stablehlo.minimum`.
* Added boolean parameter support for `tfl.gather_nd`.
* C API:
* New API functions:
* `tensorflow/lite/c/c_api_experimental.h`:
* `TfLiteInterpreterGetVariableTensorCount`
* `TfLiteInterpreterGetVariableTensor`
* `TfLiteInterpreterGetBufferHandle`
* `TfLiteInterpreterSetBufferHandle`
* `tensorflow/lite/c/c_api_opaque.h`:
* `TfLiteOpaqueTensorSetAllocationTypeToDynamic`
* API functions promoted from experimental to stable:
* `tensorflow/lite/c/c_api.h`:
* `TfLiteInterpreterOptionsEnableCancellation`
* `TfLiteInterpreterCancel`
* C++ API:
* New virtual methods in the `tflite::SimpleDelegateInterface` class in `tensorflow/lite/delegates/utils/simple_delegate.h`,
and likewise in the `tflite::SimpleOpaqueDelegateInterface` class in `tensorflow/lite/delegates/utils/simple_opaque_delegate.h`:
* `CopyFromBufferHandle`
* `CopyToBufferHandle`
* `FreeBufferHandle`

* `tf.train.CheckpointOptions` and `tf.saved_model.SaveOptions`
* These now take in a new argument called `experimental_sharding_callback`.
This is a callback function wrapper that will be executed to determine how
tensors will be split into shards when the saver writes the checkpoint
shards to disk. `tf.train.experimental.ShardByTaskPolicy` is the default
sharding behavior, but `tf.train.experimental.MaxShardSizePolicy` can be
used to shard the checkpoint with a maximum shard file size. Users with
advanced use cases can also write their own custom
`tf.train.experimental.ShardingCallback`s. A usage sketch follows the next
item.

* `tf.train.CheckpointOptions`
* Added `experimental_skip_slot_variables` (a boolean option) to skip
restoring of optimizer slot variables in a checkpoint.
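
A hedged sketch combining the two `tf.train.CheckpointOptions` additions above; the shard-size limit and paths are arbitrary, and the `max_shard_size` keyword name is an assumption.

```python
import tensorflow as tf

ckpt = tf.train.Checkpoint(v=tf.Variable([1.0, 2.0, 3.0]))

# Shard the checkpoint files by a maximum size (10 MiB here, arbitrarily).
save_opts = tf.train.CheckpointOptions(
    experimental_sharding_callback=tf.train.experimental.MaxShardSizePolicy(
        max_shard_size=10 * 1024 * 1024))
path = ckpt.save("/tmp/ckpt/model", options=save_opts)

# Skip restoring optimizer slot variables, e.g. for inference-only reloads.
restore_opts = tf.train.CheckpointOptions(experimental_skip_slot_variables=True)
ckpt.restore(path, options=restore_opts)
```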

* `tf.saved_model.SaveOptions`

* `SaveOptions` now takes a new argument called
`experimental_debug_stripper`. When enabled, this strips the debug nodes
from both the node defs and the function defs of the graph. Note that
this currently only strips the `Assert` nodes from the graph and
converts them into `NoOp`s instead. A usage sketch follows below.
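
A minimal sketch of the debug stripper option above; `tf.Module()` stands in for any trackable object and the export path is a placeholder.

```python
import tensorflow as tf

model = tf.Module()  # placeholder trackable object to export

options = tf.saved_model.SaveOptions(experimental_debug_stripper=True)
tf.saved_model.save(model, "/tmp/exported_model", options=options)
```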

* `tf.data`

* `tf.data` now has an `autotune_options.initial_parallelism` option to
control the initial parallelism setting used by autotune before the data
pipeline has started running. The default is 16. A lower value reduces
initial memory usage, while a higher value improves startup time; a short
sketch follows below.
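
A hedged sketch of setting the option above through `tf.data.Options`; the attribute path `options.autotune.initial_parallelism` is an assumption based on how other autotune options are exposed.

```python
import tensorflow as tf

ds = tf.data.Dataset.range(10_000).map(
    lambda x: x * 2, num_parallel_calls=tf.data.AUTOTUNE)

options = tf.data.Options()
# Start autotuning from a lower parallelism than the default of 16 to
# reduce initial memory usage.
options.autotune.initial_parallelism = 4
ds = ds.with_options(options)

for x in ds.take(2):
    print(x.numpy())
```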

Keras

* `keras.layers.experimental.DynamicEmbedding`
* Added `DynamicEmbedding` Keras layer
* Added `UpdateEmbeddingCallback`
* `DynamicEmbedding` layer allows for the continuous updating of the
vocabulary and embeddings during the training process. This layer
maintains a hash table to track the most up-to-date vocabulary based on
the inputs received by the layer and the eviction policy. When this layer
is used with an `UpdateEmbeddingCallback`, which is a time-based callback,
the vocabulary lookup tensor is updated at the time interval set in the
`UpdateEmbeddingCallback` based on the most up-to-date vocabulary hash
table maintained by the layer. If this layer is not used in conjunction
with `UpdateEmbeddingCallback`, the behavior of the layer would be the same as
`keras.layers.Embedding`.
* `keras.optimizers.Adam`
* Added an option to set an adaptive epsilon, to match the JAX and PyTorch
Adam implementations.

Thanks to our Contributors

This release contains contributions from many people at Google, as well as:

Aakar Dwivedi, Akhil Goel, Alexander Grund, Alexander Pivovarov, Andrew Goodbody, Andrey Portnoy, Aneta Kaczyńska, AnetaKaczynska, ArkadebMisra, Ashiq Imran, Ayan Moitra, Ben Barsdell, Ben Creech, Benedikt Lorch, Bhavani Subramanian, Bianca Van Schaik, Chao, Chase Riley Roberts, Connor Flanagan, David Hall, David Svantesson, David Svantesson-Yeung, dependabot[bot], Dr. Christoph Mittendorf, Dragan Mladjenovic, ekuznetsov139, Eli Kobrin, Eugene Kuznetsov, Faijul Amin, Frédéric Bastien, fsx950223, gaoyiyeah, Gauri1 Deshpande, Gautam, Giulio C.N, guozhong.zhuang, Harshit Monish, James Hilliard, Jane Liu, Jaroslav Sevcik, jeffhataws, Jerome Massot, Jerry Ge, jglaser, jmaksymc, Kaixi Hou, kamaljeeti, Kamil Magierski, Koan-Sin Tan, lingzhi98, looi, Mahmoud Abuzaina, Malik Shahzad Muzaffar, Meekail Zain, mraunak, Neil Girdhar, Olli Lupton, Om Thakkar, Paul Strawder, Pavel Emeliyanenko, Pearu Peterson, pemeliya, Philipp Hack, Pierluigi Urru, Pratik Joshi, radekzc, Rafik Saliev, Ragu, Rahul Batra, rahulbatra85, Raunak, redwrasse, Rodrigo Gomes, ronaghy, Sachin Muradi, Shanbin Ke, shawnwang18, Sheng Yang, Shivam Mishra, Shu Wang, Strawder, Paul, Surya, sushreebarsa, Tai Ly, talyz, Thibaut Goetghebuer-Planchon, Tj Xu, Tom Allsop, Trevor Morris, Varghese, Jojimon, weihanmines, wenchenvincent, Wenjie Zheng, Who Who Who, Yasir Ashfaq, yasiribmcon, Yoshio Soma, Yuanqiang Liu, Yuriy Chernyshov
