TensorFlow

1.15.0

This is the last 1.x release for TensorFlow. We do not expect to update the 1.x
branch with features, although we will issue patch releases to fix
vulnerabilities for at least one year.

Major Features and Improvements

* As
[announced](https://groups.google.com/a/tensorflow.org/forum/#!topic/developers/iRCt5m4qUz0),
the `tensorflow` pip package now includes GPU support by default (same as
`tensorflow-gpu`) for the platforms for which we currently have GPU support
(Linux and Windows). It works on machines with and without Nvidia GPUs.
`tensorflow-gpu` is still available, and CPU-only packages can be
downloaded as `tensorflow-cpu` for users who are concerned about package
size.
* TensorFlow 1.15 contains a complete implementation of the 2.0 API in its
`compat.v2` module. It contains a copy of the 1.15 main module (without
`contrib`) in the `compat.v1` module. TensorFlow 1.15 can emulate 2.0
behavior using the `enable_v2_behavior()` function. This enables writing
forward-compatible code: by explicitly importing either
`tensorflow.compat.v1` or `tensorflow.compat.v2`, you can ensure that your
code works without modification against an installation of 1.15 or 2.0
(see the sketch after this list).
* EagerTensor now supports numpy buffer interface for tensors.
* Add toggles `tf.enable_control_flow_v2()` and `tf.disable_control_flow_v2()`
for enabling/disabling v2 control flow.
* Enable v2 control flow as part of `tf.enable_v2_behavior()` and
`TF2_BEHAVIOR=1`.
* AutoGraph translates Python control flow into TensorFlow expressions,
allowing users to write regular Python inside `tf.function`-decorated
functions. AutoGraph is also applied in functions used with `tf.data`,
`tf.distribute` and `tf.keras` APIs.
* Adds `enable_tensor_equality()`, which switches the behavior such that:
* Tensors are no longer hashable.
* Tensors can be compared with `==` and `!=`, yielding a Boolean Tensor
with element-wise comparison results. This will be the default behavior
in 2.0.
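
As a minimal sketch of the forward-compatibility story above (the tensor
values are illustrative), a script can pin itself to the 2.0 API surface and
run unchanged against a 1.15 or 2.0 install:

```python
# Sketch: use the 2.0 API surface from a TF 1.15 installation.
import tensorflow.compat.v2 as tf

tf.enable_v2_behavior()  # eager execution plus v2 control flow, as noted above

a = tf.constant([1.0, 2.0, 3.0])
print(a.numpy() * 2)  # EagerTensor exposes the NumPy buffer interface

# The same file runs on TensorFlow 2.0, since only compat.v2 symbols are used.
```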

Breaking Changes

* TensorFlow code now produces two different pip packages: `tensorflow_core`
containing all the code (in the future it will contain only the private
implementation) and `tensorflow` which is a virtual pip package doing
forwarding to `tensorflow_core` (and in the future will contain only the
public API of tensorflow). We don't expect this to be breaking, unless you
were importing directly from the implementation.
* TensorFlow 1.15 is built using devtoolset7 (GCC7) on Ubuntu 16. This may
lead to ABI incompatibilities with extensions built against earlier versions
of TensorFlow.
* Deprecated the use of `constraint=` and `.constraint` with ResourceVariable.
* `tf.keras`:
* `OMP_NUM_THREADS` is no longer used by the default Keras config. To
configure the number of threads, use `tf.config.threading` APIs.
* `tf.keras.models.save_model` and `model.save` now default to saving a
TensorFlow SavedModel.
* `keras.backend.resize_images` (and consequently,
`keras.layers.Upsampling2D`) behavior has changed: a bug in the resizing
implementation was fixed.
* Layers now default to `float32`, and automatically cast their inputs to
the layer's dtype. If you had a model that used `float64`, it will
probably silently use `float32` in TensorFlow 2, and a warning will be
issued that starts with `Layer "layer-name" is casting an input tensor
from dtype float64 to the layer's dtype of float32`. To fix, either set
the default dtype to `float64` with
`tf.keras.backend.set_floatx('float64')`, or pass `dtype='float64'` to
each Layer constructor (see the sketch after this list). See
`tf.keras.layers.Layer` for more information.
* Some `tf.assert_*` methods now raise assertions at operation creation
time (i.e. when the Python line executes) if the input tensors' values
are known at that time, not during `session.run()`. When this happens, a
no-op is returned and the input tensors are marked non-feedable. In other
words, if they are used as keys in the `feed_dict` argument to
`session.run()`, an error will be raised. Also, because some assert ops
no longer make it into the graph, the graph structure changes. A
different graph can result in different per-op random seeds when seeds
are not given explicitly.
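
A minimal sketch of the two dtype remedies described above; the layer shape
and input values are illustrative:

```python
import tensorflow as tf

# Remedy 1: restore float64 as the global default dtype.
tf.keras.backend.set_floatx('float64')

# Remedy 2: opt in per layer instead of globally.
layer = tf.keras.layers.Dense(4, dtype='float64')

x = tf.constant([[1.0, 2.0, 3.0]], dtype=tf.float64)
y = layer(x)  # no silent cast to float32, and no warning
```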

Bug Fixes and Other Changes

* `tf.estimator`:
* `tf.keras.estimator.model_to_estimator` now supports exporting to
`tf.train.Checkpoint` format, which allows the saved checkpoints to be
compatible with `model.load_weights`.
* Fix tests in canned estimators.
* Expose Head as public API.
* Fixes critical bugs that help with `DenseFeatures` usability in TF2.
* `tf.data`:
* Promoted `unbatch` from experimental to the core API (see the sketch
after this list).
* Adding support for datasets as inputs to `from_tensors` and
`from_tensor_slices` and batching and unbatching of nested datasets.
* `tf.keras`:
* `tf.keras.estimator.model_to_estimator` now supports exporting to
`tf.train.Checkpoint` format, which allows the saved checkpoints to be
compatible with `model.load_weights`.
* Saving a Keras Model using `tf.saved_model.save` now saves the list of
variables, trainable variables, regularization losses, and the call
function.
* Deprecated `tf.keras.experimental.export_saved_model` and
`tf.keras.experimental.function`. Please use
`tf.keras.models.save_model(..., save_format='tf')` and
`tf.keras.models.load_model` instead.
* Add an `implementation=3` mode for `tf.keras.layers.LocallyConnected2D`
and `tf.keras.layers.LocallyConnected1D` layers using `tf.SparseTensor`
to store weights, allowing a dramatic speedup for large sparse models.
* Enable the Keras compile API `experimental_run_tf_function` flag by
default. This flag enables a single training/eval/predict execution path.
With this: 1. all input types are converted to `Dataset`; 2. when a
distribution strategy is not specified, execution goes through the no-op
distribution strategy path; 3. execution is wrapped in `tf.function`
unless `run_eagerly=True` is set in compile.
* Raise error if `batch_size` argument is used when input is
dataset/generator/keras sequence.
* `tf.lite`
* Add `GATHER` support to NN API delegate.
* tflite object detection script has a debug mode.
* Add delegate support for `QUANTIZE`.
* Added evaluation script for COCO minival.
* Add delegate support for `QUANTIZED_16BIT_LSTM`.
* Converts hardswish subgraphs into atomic ops.
* Add support for defaulting the value of `cycle_length` argument of
`tf.data.Dataset.interleave` to the number of schedulable CPU cores.
* `parallel_for`: Add converter for `MatrixDiag`.
* Add `narrow_range` attribute to `QuantizeAndDequantizeV2` and V3.
* Added new op: `tf.strings.unsorted_segment_join`.
* Add HW acceleration support for `topK_v2`.
* Add new `TypeSpec` classes.
* CloudBigtable version updated to v0.10.0.
* Expose `Head` as public API.
* Update docstring for gather to properly describe the non-empty `batch_dims`
case.
* Added `tf.sparse.from_dense` utility function (also shown in the sketch
after this list).
* Improved ragged tensor support in `TensorFlowTestCase`.
* Makes the a-normal form transformation in Pyct configurable as to which
nodes are converted to variables and which are not.
* `ResizeInputTensor` now works for all delegates.
* Add `EXPAND_DIMS` support to NN API delegate.
* `tf.cond` emits a StatelessIf op if the branch functions are stateless and
do not touch any resources.
* `tf.cond`, `tf.while` and `if` and `while` in AutoGraph now accept a
nonscalar predicate if it has a single element. This does not affect
non-V2 control flow.
* `tf.while_loop` emits a StatelessWhile op if the cond and body functions are
stateless and do not touch any resources.
* Refactors code in Quant8 LSTM support to reduce TFLite binary size.
* Add support for local soft device placement for eager ops.
* Add HW acceleration support for `LogSoftMax`.
* Added a function `nested_value_rowids` for ragged tensors.
* Add guard to avoid acceleration of L2 Normalization with input rank != 4.
* Add `tf.math.cumulative_logsumexp` operation.
* Add `tf.ragged.stack`.
* Fix memory allocation problem when calling `AddNewInputConstantTensor`.
* Delegate application failure leaves interpreter in valid state.
* Add check for correct memory alignment to
`MemoryAllocation::MemoryAllocation()`.
* Extracts `NNAPIDelegateKernel` from nnapi_delegate.cc.
* Added support for `FusedBatchNormV3` in converter.
* Added a ragged-to-dense op for directly calculating tensors.
* Fix accidental quadratic graph construction cost in graph-mode
`tf.gradients()`.
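
A short sketch of two of the additions above, the core `unbatch` method and
`tf.sparse.from_dense`; the tensor values are illustrative:

```python
import tensorflow as tf

# `unbatch` is now a core Dataset method rather than an experimental transform.
ds = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]])
ds = ds.batch(2).unbatch()  # round-trips back to single elements

# Build a SparseTensor directly from a dense tensor.
sp = tf.sparse.from_dense(tf.constant([[0, 1], [2, 0]]))
print(sp.indices)  # indices of the non-zero entries
```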

Thanks to our Contributors

This release contains contributions from many people at Google, as well as:

a6802739, Aaron Ma, Abdullah Selek, Abolfazl Shahbazi, Ag Ramesh, Albert Z. Guo,
Albin Joy, Alex Itkes, Alex Sergeev, Alexander Pivovarov, Alexey Romanov,
alhkad, Amit Srivastava, amoitra, Andrew Lihonosov, Andrii Prymostka, Anuj
Rawat, Astropeak, Ayush Agrawal, Bairen Yi, Bas Aarts, Bastian Eichenberger, Ben
Barsdell, Benjamin Peterson, bhack, Bharat Raghunathan, Bhavani Subramanian,
Bryan Cutler, candy.dc, Cao Zongyan, Captain-Pool, Casper Da Costa-Luis, Chen
Guoyin, Cheng Chang, chengchingwen, Chong Yan, Choong Yin Thong, Christopher
Yeh, Clayne Robison, Coady, Patrick, Dan Ganea, David Norman, Denis Khalikov,
Deven Desai, Diego Caballero, Duncan Dean, Duncan Riach, Dwight J Lyle, Eamon
Ito-Fisher, eashtian3, EFanZh, ejot, Elroy Ashtian Jr, Eric Schweitz, Fangjun
Kuang, Fei Hu, fo40225, formath, Fred Reiss, Frederic Bastien, Fredrik Knutsson,
G. Hussain Chinoy, Gabriel, gehring, George Grzegorz Pawelczak, Gianluca
Varisco, Gleb Popov, Greg Peatfield, Guillaume Klein, Gurpreet Singh, Gustavo
Lima Chaves, haison, Haraldur TóMas HallgríMsson, HarikrishnanBalagopal, HåKon
Sandsmark, I-Hong, Ilham Firdausi Putra, Imran Salam, Jason Zaman, Jason
Zavaglia, jayhpark530, jefby, Jeff Daily, Jeffrey Poznanovic, Jekyll Lai, Jeroen
BéDorf, Jerry Shih, jerryyin, jiakai, JiangXIAO, Joe Bowser, Joel Shapiro, Johan
Gunnarsson, Jojimon Varghese, Joon, Josh Beal, Julian Niedermeier, Jun Wan,
Junqin Zhang, Junyuan Xie, Justin Tunis, Kaixi Hou, Karl Lessard, Karthik
Muthuraman, Kbhute-Ibm, khanhlvg, Koock Yoon, kstuedem, Kyuwon Kim, Lakshay
Tokas, leike666666, leonard951, Leslie-Fang, Leslie-Fang-Intel, Li, Guizi, Lukas
Folle, Lukas Geiger, Mahmoud Abuzaina, Manraj Singh Grover, Margaret
Maynard-Reid, Mark Ryan, Matt Conley, Matthew Bentham, Matthew Denton, mbhuiyan,
mdfaijul, Mei Jie, merturl, MichaelKonobeev, Michal W. Tarnowski, minds, mpppk,
musikisomorphie, Nagy Mostafa, Nayana Thorat, Neil, Niels Ole Salscheider,
Niklas SilfverströM, Niranjan Hasabnis, ocjosen, olramde, Pariksheet Pinjari,
Patrick J. Lopresti, Patrik Gustavsson, per1234, PeterLee, Phan Van Nguyen Duc,
Phillip Kravtsov, Pooya Davoodi, Pranav Marathe, Putra Manggala, Qingqing Cao,
Rajeshwar Reddy T, Ramon ViñAs, Rasmus Diederichsen, Reuben Morais, richardbrks,
robert, RonLek, Ryan Jiang, saishruthi, Saket Khandelwal, Saleem Abdulrasool,
Sami Kama, Sana-Damani, Sergii Khomenko, Severen Redwood, Shubham Goyal, Sigrid
Keydana, Siju Samuel, sleighsoft, smilu97, Son Tran, Srini511,
srinivasan.narayanamoorthy, Sumesh Udayakumaran, Sungmann Cho, Tae-Hwan Jung,
Taehoon Lee, Takeshi Watanabe, TengLu, terryky, TheMindVirus, ThisIsIsaac, Till
Hoffmann, Timothy Liu, Tomer Gafner, Tongxuan Liu, Trent Lo, Trevor Morris, Uday
Bondhugula, Vasileios Lioutas, vbvg2008, Vishnuvardhan Janapati, Vivek
Suryamurthy, Wei Wang, Wen-Heng (Jack) Chung, wenxizhu, William D. Irons,
winstonq, wyzhao, Xiaoming (Jason) Cui, Xinan Jiang, Xinping Wang, Yann-Yy,
Yasir Modak, Yong Tang, Yongfeng Gu, Yuchen Ying, Yuxin Wu, zyeric, 王振华 (Zhenhua
Wang)

1.14.0

Major Features and Improvements

* This is the first 1.x release containing the `compat.v2` module. This
module is required to allow libraries to publish code which works in
both 1.x and 2.x. After this release, no backwards-incompatible changes
are allowed in the 2.0 Python API (see the sketch after this list).
* Turn on MKL-DNN contraction kernels by default. MKL-DNN dynamically
dispatches the best kernel implementation based on the CPU vector
architecture. To disable them, build with
`--define=tensorflow_mkldnn_contraction_kernel=0`.

Behavioral changes

* Set default loss reduction as `AUTO` for improving reliability of loss
scaling with distribution strategy and custom training loops. `AUTO`
indicates that the reduction option will be determined by the usage
context. For almost all cases this defaults to `SUM_OVER_BATCH_SIZE`.
When used in distribution strategy scope, outside of built-in training
loops such as `tf.keras` `compile` and `fit`, we expect the reduction
value to be `NONE` or `SUM`. Using other values will raise an error (see
the sketch after this list).
* Wraps losses passed to the `compile` API (strings and v1 losses) which
are not instances of the v2 `Loss` class in a `LossWrapper` class. As a
result, all losses will now use `SUM_OVER_BATCH_SIZE` reduction as
default.
* Disable `run_eagerly` and distribution strategy if there are symbolic
tensors added to the model using `add_metric` or `add_loss`.
* `tf.linspace(start, stop, num)` now always uses `stop` as the last value
(for `num > 1`).
* `ResourceVariable` and `Variable` no longer accept `constraint` in the
constructor, nor expose it as a property.
* The behavior of `tf.gather` is now correct when `axis=None` and
`batch_dims<0`.
* Only create a GCS directory object if the object does not already exist.
* In `map_vectorization` optimization, reduce the degree of parallelism in the
vectorized map node.
* Bug fix: loss and gradients should now more reliably be correctly scaled
w.r.t. the global batch size when using a `tf.distribute.Strategy`.
* Updated cosine similarity loss: removed the negation sign from cosine
similarity.
* `DType` is no longer convertible to an int. Use `dtype.as_datatype_enum`
instead of `int(dtype)` to get the same result.
* Changed default for gradient accumulation for TPU embeddings to true.
* Callbacks now log values in eager mode when a deferred build model is used.
* Transitive dependencies on :pooling_ops were removed. Some users may need to
add explicit dependencies on :pooling_ops if they reference the operators
from that library.
* `tf.keras.optimizers` default learning rate changes:
* Adadelta: 1.000 to 0.001
* Adagrad: 0.01 to 0.001
* Adamax: 0.002 to 0.001
* NAdam: 0.002 to 0.001
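
A sketch of spelling out the reduction in a custom training loop under a
distribution strategy, where `AUTO` raises an error; this assumes the
2.0-style `tf.keras.losses.Reduction` enum, and the global batch size is
illustrative:

```python
import tensorflow as tf

GLOBAL_BATCH_SIZE = 64  # illustrative

# Request per-example losses, then reduce manually w.r.t. the global batch.
loss_obj = tf.keras.losses.BinaryCrossentropy(
    reduction=tf.keras.losses.Reduction.NONE)

def compute_loss(labels, predictions):
    per_example = loss_obj(labels, predictions)
    return tf.reduce_sum(per_example) / GLOBAL_BATCH_SIZE
```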

Bug Fixes and Other Changes

* Documentation
* Deprecations and Symbol renames.
* Remove unused `StringViewVariantWrapper`.
* Delete unused `Fingerprint64Map` op registration.
* SignatureDef util functions have been deprecated.
* Renamed `tf.image` functions to remove duplicate "image" where it is
redundant.
* `tf.keras.experimental.export` renamed to
`tf.keras.experimental.export_saved_model`.
* Standardize the LayerNormalization API by replacing the args `norm_axis`
and `params_axis` with `axis`.
* `Tensor::UnsafeCopyFromInternal` deprecated in favor of
`Tensor::BitcastFrom`.
* Keras & Python API
* Add v2 module aliases for:
* `tf.initializers` => `tf.keras.initializers`
* `tf.losses` => `tf.keras.losses` & `tf.metrics` => `tf.keras.metrics`
* `tf.optimizers` => `tf.keras.optimizers`
* Add `tf.keras.layers.AbstractRNNCell` as the preferred implementation of
RNN cell for TF v2. Users can use it to implement RNN cells with custom
behavior.
* Adding `clear_losses` API to be able to clear losses at the end of a
forward pass in a custom training loop in eager mode.
* Add support for passing list of lists to the `metrics` param in Keras
`compile`.
* Added top-k support to precision and recall in Keras metrics.
* Adding public APIs for `cumsum` and `cumprod` keras backend functions.
* Fix: `model.add_loss(symbolic_tensor)` should work in ambient eager.
* Add `name` argument to `tf.string_split` and `tf.strings_split`.
* Minor change to SavedModels exported from Keras using
`tf.keras.experimental.export`. (The SignatureDef key for evaluation mode
is now "eval" instead of "test".) This will be reverted back to "test" in
the near future.
* Updates binary cross entropy logic in Keras when input is probabilities.
Instead of converting probabilities to logits, we are using the cross
entropy formula for probabilities.
* Raw TensorFlow functions can now be used in conjunction with the Keras
Functional API during model creation. This obviates the need for users
to create Lambda layers in most cases when using the Functional API.
Like Lambda layers, TensorFlow functions that result in Variable
creation or assign ops are not supported.
* Keras training and validation curves are shown on the same plot.
* Introduce `dynamic` constructor argument in Layer and Model, which
should be set to True when using imperative control flow in the `call`
method.
* Removed `dtype` from the constructors of initializers and
`partition_info` from their call signature.
* New ops and improved op functionality
* Add OpKernels for some stateless maps.
* Add v2 APIs for `AUCCurve` and `AUCSummationMethod` enums.
* Add `tf.math.nextafter` op.
* Add `CompositeTensor` base class.
* Add `tf.linalg.tridiagonal_solve` op.
* Add opkernel templates for common table operations.
* Added support for TFLite in TensorFlow 2.0.
* Adds summary trace API for collecting graph and profile information.
* Add `batch_dims` argument to `tf.gather`.
* Add support for `add_metric` in the graph function mode.
* Add C++ Gradient for `BatchMatMulV2`.
* Added `tf.random.binomial`.
* Added gradient for `SparseToDense` op.
* Add legacy string flat hash map op kernels.
* Add a ragged size op and register it to the op dispatcher.
* Add broadcasting support to `tf.matmul`.
* Add ellipsis (...) support for `tf.einsum()`.
* Added `LinearOperator.adjoint` and `LinearOperator.H` (alias).
* Added GPU implementation of `tf.linalg.tridiagonal_solve`.
* Added `strings.byte_split`.
* Add `RaggedTensor.placeholder()`.
* Add a new `result_type` parameter to `tf.strings.split`.
* `add_update` can now be passed a zero-arg callable in order to support
turning off the update when setting `trainable=False` on a Layer of a
Model compiled with `run_eagerly=True`.
* Add variant wrapper for `absl::string_view`.
* Add `expand_composites` argument to all `nest.*` methods.
* Add pfor converter for `Squeeze`.
* Bug fix for `tf.tile` gradient.
* Expose `CriticalSection` in core as `tf.CriticalSection`.
* Update `Fingerprint64Map` to use aliases.
* `ResourceVariable` support for `gather_nd`.
* `ResourceVariable`'s gather op supports batch dimensions.
* Variadic reduce is supported on CPU.
* Extend `tf.function` with basic support for `CompositeTensor` arguments
(such as `SparseTensor` and `RaggedTensor`).
* Add templates and interfaces for creating lookup tables.
* Post-training quantization tool supports quantizing weights shared by
multiple operations. The models made with versions of this tool will use
INT8 types for weights and will only be executable by interpreters from
this version onwards.
* Malformed gif images could result in an access out of bounds in the
color palette of the frame. This has now been fixed.
* `image.resize` now considers proper pixel centers and has new kernels
(incl. anti-aliasing).
* Added an isotonic regression solver (`tf.nn.isotonic_regression`).
* Performance
* Turn on MKL-DNN contraction kernels by default. MKL-DNN dynamically
dispatches the best kernel implementation based on the CPU vector
architecture. To disable them, build with
`--define=tensorflow_mkldnn_contraction_kernel=0`.
* Support for multi-host ncclAllReduce in Distribution Strategy.
* Expose a flag that allows the number of threads to vary across Python
benchmarks.
* TensorFlow 2.0 Development
* Add v2 sparse categorical crossentropy metric.
* Allow non-Tensors through v2 losses.
* Add `UnifiedGRU` as the new GRU implementation for TF 2.0. Change the
default recurrent activation function for GRU from 'hard_sigmoid' to
'sigmoid', and 'reset_after' to True in 2.0. Historically the recurrent
activation was 'hard_sigmoid' since it is faster than 'sigmoid'. With the
new unified backend between CPU and GPU modes, since the CuDNN kernel
uses sigmoid, we change the default for CPU mode to sigmoid as well. With
that, the default GRU is compatible with both the CPU and GPU kernels.
This enables users with GPUs to use the CuDNN kernel by default and get a
10x performance boost in training. Note that this is a
checkpoint-breaking change. To use a 1.x pre-trained checkpoint,
construct the layer with `GRU(recurrent_activation='hard_sigmoid',
reset_after=False)` to fall back to 1.x behavior (see the sketch after
this list).
* TF 2.0 - Update metric names to always reflect what the user has given
in compile. This affects the following cases: 1. when the name is given
as 'accuracy'/'crossentropy'; 2. when an aliased function name is used,
e.g. 'mse'; 3. removing the `weighted` prefix from weighted metric names.
* Begin adding Go wrapper for C Eager API.
* `image.resize` in 2.0 now supports gradients for the new resize kernels.
* Removed `tf.string_split` from v2 API.
* Expose `tf.contrib.proto.*` ops in `tf.io` (they will exist in TF2).
* "Updates the TFLiteConverter API in 2.0. Changes from_concrete_function
to from_concrete_functions."
* Enable `tf.distribute.experimental.MultiWorkerMirroredStrategy` to work
in eager mode.
* Support both binary and -1/1 label input in v2 hinge and squared hinge
losses.
* TensorFlow Lite
* "Adds support for tflite_convert in 2.0."
* "Remove lite.OpHint, lite.experimental, and lite.constant from 2.0 API."
* tf.contrib
* Added Neural Turing Machine implementation as described in
https://arxiv.org/abs/1807.08518.
* Remove `tf.contrib.timeseries` dependency on TF distributions.
* tf.data
* Add `num_parallel_reads` and support for passing in a Dataset containing
filenames into `TextLineDataset` and `FixedLengthRecordDataset`.
* Going forward we operate in TF 2.0. This change is part of the effort to
slowly convert `XYZDataset` to the `DatasetV2` type, the official version
to be used in TF 2.0, and was motivated by a compatibility issue found
when moving `contrib.bigtable` to `tensorflow_io`: `_BigtableXYZDataset`
(of type `DatasetV2`) does not implement the `_as_variant_tensor()` of
`DatasetV1`. Converting to `DatasetV2` removes the overhead of
maintaining V1 while we are moving to TF 2.0.
* Add dataset ops to the graph (or create kernels in Eager execution)
during the Python Dataset object creation instead of doing it at
Iterator creation time.
* Add support for TensorArrays to tf.data Dataset.
* Switching tf.data functions to use `defun`, providing an escape hatch to
continue using the legacy `Defun`.
* Toolchains
* `CUDNN_INSTALL_PATH`, `TENSORRT_INSTALL_PATH`, `NCCL_INSTALL_PATH`, and
`NCCL_HDR_PATH` are deprecated. Use `TF_CUDA_PATHS` instead, which
supports a comma-separated list of base paths that are searched to find
CUDA libraries and headers.
* TF code now resides in `tensorflow_core` and `tensorflow` is just a
virtual pip package. No code changes are needed for projects using
TensorFlow; the change is transparent.
* XLA
* XLA HLO graphs can now be inspected with the `interactive_graphviz` tool.
* Estimator
* Use `tf.compat.v1.estimator.inputs` instead of `tf.estimator.inputs`.
* Replace contrib references with `tf.estimator.experimental.*` for APIs
in early_stopping.py.
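
A sketch of the GRU default change called out above; the layer width is
illustrative:

```python
import tensorflow as tf

# New 2.0 defaults: recurrent_activation='sigmoid' and reset_after=True,
# which make the layer compatible with the CuDNN kernel on GPU.
gru = tf.keras.layers.GRU(64)

# To load a 1.x pre-trained checkpoint, reconstruct with the old defaults.
legacy_gru = tf.keras.layers.GRU(
    64, recurrent_activation='hard_sigmoid', reset_after=False)
```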

Thanks to our Contributors

This release contains contributions from many people at Google, as well as:

1e100, 4d55397500, a6802739, abenmao, Adam Weiss, Ag Ramesh, Alan Du, Albin Joy,
Alex, Aman Patel, Amit, Amit Kumar Jaiswal, Amit Srivastava, Andreas Eberle,
Andy Craze, Anthony Platanios, Armen Poghosov, armenpoghosov, arp95, Arpit Shah,
Ashwin Ramaswami, Aurelien Geron, AuréLien Geron, aweers, awesomealex1, Ayush
Agrawal, Ben Barsdell, Bharat Raghunathan, Bhavani Subramanian, blairhan,
BléNesi Attila, Brandon Carter, candy.dc, Chao Liu, chenchc, chie8842, Christian
Hansen, Christian Sigg, Clayne Robison, crafet, csukuangfj, ctiijima, Dan
Jarvis, Dan Lazewatsky, Daniel Ingram, Daniel Salvadori, Dave Airlie, David
Norman, Dayananda V, Dayananda-V, delock, Denis Khalikov, Deven Desai, Dheeraj
Rajaram Reddy, dmitrievanthony, Donovan Ong, Drew Szurko, Duncan Riach, Dustin
Neighly, Edward Forgacs, EFanZh, Fei Hu, Felix Lemke, Filip Matzner, fo40225,
frreiss, Gautam, gehring, Geoffrey Irving, Grzegorz George Pawelczak, Grzegorz
Pawelczak, Gyoung-Yoon Ryoo, HanGuo97, Hanton Yang, Hari Shankar, hehongliang,
Heungsub Lee, Hoeseong Kim, I-Hong Jhuo, Ilango R, Innovimax, Irene Dea, Jacky
Ko, Jakub Lipinski, Jason Zaman, jcf94, Jeffrey Poznanovic, Jens Elofsson,
Jeroen BéDorf, Jia Qingtong, Jiankang, Joe Q, Joe Quadrino, Joeran Beel, Jonas
Rauber, Jonathan, Jonathan Kyl, Joppe Geluykens, Joseph Friedman, jtressle, jwu,
K Yasaswi Sri Chandra Gandhi, K. Hodges, Kaixi Hou, Karl Lessard, Karl
Weinmeister, Karthik Muthuraman, Kashif Rasul, KDR, Keno Fischer, Kevin Mader,
kjopek, Koan-Sin Tan, kouml, ktaebum, Lakshay Tokas, Laurent Le Brun, Letian
Kang, Li, Guizi, Loo Rong Jie, Lucas Hendren, Lukas Geiger, Luke Han, luxupu,
Ma, Guokai, Mahmoud Abuzaina, Mandar Deshpande, manhyuk, Marco Gaido, Marek
Drozdowski, Mark Collier, Mark Ryan, mars20, Mateusz Chudyk, Matt Conley,
MattConley, mbhuiyan, mdfaijul, Melissa Grueter, Michael KäUfl, MickaëL
Schoentgen, Miguel Morin, Mihail Salnikov, Mike Arpaia, Mike Holcomb, monklof,
Moses Marin, Mshr-H, nammbash, Natalia Gimelshein, Nayana-Ibm, neargye, Neeraj
Pradhan, Nehal J Wani, Nick, Niels Ole Salscheider, Niranjan Hasabnis, nlewycky,
Nuka-137, Nutti, olicht, P Sudeepam, Palmer Lao, Pan Daoxin, Pariksheet Pinjari,
Pavel Samolysov, PENGWA, Pooya Davoodi, R S Nikhil Krishna, Rohit Gupta, Roman
Soldatow, rthadur, Ruizhe, Ryan Jiang, Samantha Andow, Sami Kama, Sana-Damani,
Saurabh Deoras, sdamani, seanshpark, Sebastien Iooss, Serv-Inc, Shahzad Lone,
Shashank Gupta, Shashi, shashvat, shashvatshahi1998, Siju, Siju Samuel,
Snease-Abq, Spencer Schaber, sremedios, srinivasan.narayanamoorthy, Steve Lang,
Steve Nesae, Sumesh Udayakumaran, Supriya Rao, Taylor Jakobson, Taylor Thornton,
Ted Chang, ThisIsPIRI, Thomas Deegan, Thomas Hagebols, tianyapiaozi, Tim Zaman,
tomguluson92, Tongxuan Liu, TungJerry, v1incent, Vagif, vcarpani, Vikram Tiwari,
Vishwak Srinivasan, Vitor-Alves, wangsiyu, wateryzephyr, WeberXie, WeijieSun,
Wen-Heng (Jack) Chung, wenxizhu, Will Battel, William D. Irons, wyzhao, Xin,
Yasuhiro Matsumoto, ymodak, Yong Tang, Younes Khoudli, Yuan Lin, Yves-Noel
Weweler, Zantares, zjjott, 卜居, 王振华 (Wang Zhenhua), 黄鑫

1.13.0

Major Features and Improvements

* TensorFlow Lite has moved from contrib to core. This means that Python
modules are under `tf.lite` and source code is now under
`tensorflow/lite` rather than `tensorflow/contrib/lite` (see the sketch
after this list).
* TensorFlow GPU binaries are now built against CUDA 10 and TensorRT 5.0.
* Support for Python 3.7 on all operating systems.
* Moved NCCL to core.
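
A minimal sketch of the new import path; the SavedModel directory and output
filename are placeholders:

```python
import tensorflow as tf

# TensorFlow Lite now lives at tf.lite rather than tf.contrib.lite.
converter = tf.lite.TFLiteConverter.from_saved_model('/path/to/saved_model')
tflite_model = converter.convert()

with open('model.tflite', 'wb') as f:
    f.write(tflite_model)
```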

Behavioral changes

* Disallow conversion of Python floating types to uint32/64 (matching the
behavior of other integer types) in `tf.constant` (see the sketch after
this list).
* Make the `gain` argument of convolutional orthogonal initializers
(`convolutional_delta_orthogonal`, `convolutional_orthogonal_1D`,
`convolutional_orthogonal_2D`, `convolutional_orthogonal_3D`) have
consistent behavior with the `tf.initializers.orthogonal` initializer, i.e.
scale the output l2-norm by `gain` and NOT by `sqrt(gain)`. (Note that these
functions are currently in `tf.contrib` which is not guaranteed backward
compatible).
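
A short sketch of the uint32/64 conversion change; the exact exception type
raised is an assumption here:

```python
import tensorflow as tf

tf.constant(3, dtype=tf.uint32)  # integer values still convert

try:
    tf.constant(1.5, dtype=tf.uint32)  # Python floats are now disallowed
except TypeError as e:  # assumed to match the other integer types
    print('rejected:', e)
```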

Bug Fixes and Other Changes

* Documentation
* Update the doc with the details about the rounding mode used in
`quantize_and_dequantize_v2`.
* Clarify that `tensorflow::port::InitMain()` *should* be called before
using the TensorFlow library. Programs failing to do this are not
portable to all platforms.
* Deprecations and Symbol renames.
* Removing deprecations for the following endpoints: `tf.acos`,
`tf.acosh`, `tf.add`, `tf.as_string`, `tf.asin`, `tf.asinh`, `tf.atan`,
`tf.atan2`, `tf.atanh`, `tf.cos`, `tf.cosh`, `tf.equal`, `tf.exp`,
`tf.floor`, `tf.greater`, `tf.greater_equal`, `tf.less`,
`tf.less_equal`, `tf.log`, `tf.log1p`, `tf.logical_and`,
`tf.logical_not`, `tf.logical_or`, `tf.maximum`, `tf.minimum`,
`tf.not_equal`, `tf.sin`, `tf.sinh`, `tf.tan`
* Deprecate `tf.data.Dataset.shard`.
* Deprecate `saved_model.loader.load`, which is replaced by
`saved_model.load`, and `saved_model.main_op`, which will be replaced by
`saved_model.main_op` in V2.
* Deprecate `tf.QUANTIZED_DTYPES`. The official new symbol is
`tf.dtypes.QUANTIZED_DTYPES`.
* Update sklearn imports for deprecated packages.
* Deprecate `Variable.count_up_to` and `tf.count_up_to` in favor of
`Dataset.range`.
* Export `confusion_matrix` op as `tf.math.confusion_matrix` instead of
`tf.train.confusion_matrix`.
* Add `tf.dtypes.` endpoint for every constant in dtypes.py. Moving
endpoints in versions.py to corresponding endpoints in `tf.sysconfig.`
and `tf.version.`. Moving all constants under `tf.saved_model`
submodules to `tf.saved_model` module. New endpoints are added in V1 and
V2 but existing endpoint removals are only applied in V2.
* Deprecates behavior where device assignment overrides collocation
constraints inside a collocation context manager.
* Keras & Python API
* Add to Keras functionality analogous to
`tf.register_tensor_conversion_function`.
* Subclassed Keras models can now be saved through
`tf.contrib.saved_model.save_keras_model`.
* `LinearOperator.matmul` now returns a new `LinearOperator`.
* New ops and improved op functionality
* Add a Nearest Neighbor Resize op.
* Add an `ignore_unknown` argument to `parse_values` which suppresses
ValueError for unknown hyperparameter types. Such values are ignored.
* Add `tf.linalg.matvec` convenience function.
* `tf.einsum()` raises `ValueError` for unsupported equations like
`"ii->"`.
* Add DCT-I and IDCT-I in `tf.signal.dct` and `tf.signal.idct`.
* Add LU decomposition op.
* Add quantile loss to gradient boosted trees in estimator.
* Add `round_mode` to `QuantizeAndDequantizeV2` op to select rounding
algorithm.
* Add `unicode_encode`, `unicode_decode`, `unicode_decode_with_offsets`,
`unicode_split`, `unicode_split_with_offset`, and `unicode_transcode`
ops. Amongst other things, these ops add the ability to encode, decode,
and transcode a variety of input text encoding formats into the main
Unicode encodings (UTF-8, UTF-16-BE, UTF-32-BE); see the sketch after
this list.
* Add "unit" attribute to the substr op, which allows obtaining the
substring of a string containing unicode characters.
* Broadcasting support for Ragged Tensors.
* `SpaceToDepth` supports uint8 data type.
* Support multi-label quantile regression in estimator.
* We now use "div" as the default partition_strategy in
`tf.nn.safe_embedding_lookup_sparse`, `tf.nn.sampled_softmax` and
`tf.nn.nce_loss`. hyperparameter are ignored.
* Performance
* Improve performance of GPU cumsum/cumprod by up to 300x.
* Added support for weight decay in most TPU embedding optimizers,
including AdamW and MomentumW.
* TensorFlow 2.0 Development
* Add a command line tool to convert to TF 2.0, `tf_upgrade_v2`.
* Merge `tf.spectral` into `tf.signal` for TensorFlow 2.0.
* Change the default recurrent activation function for LSTM from
'hard_sigmoid' to 'sigmoid' in 2.0. Historically the recurrent activation
was 'hard_sigmoid' since it is faster than 'sigmoid'. With the new
unified backend between CPU and GPU modes, since the CuDNN kernel uses
sigmoid, we change the default for CPU mode to sigmoid as well. With
that, the default LSTM is compatible with both the CPU and GPU kernels.
This enables users with GPUs to use the CuDNN kernel by default and get a
10x performance boost in training. Note that this is a
checkpoint-breaking change. To use a 1.x pre-trained checkpoint,
construct the layer with `LSTM(recurrent_activation='hard_sigmoid')` to
fall back to 1.x behavior.
* TensorFlow Lite
* Move from `tensorflow/contrib/lite` to `tensorflow/lite`.
* Add experimental Java API for injecting TensorFlow Lite delegates.
* Add support for strings in TensorFlow Lite Java API.
* `tf.contrib`:
* Add Apache Ignite Filesystem plugin to support accessing Apache IGFS.
* Dropout now takes a `rate` argument; `keep_prob` is deprecated.
* Occurrences of `tf.contrib.estimator` references were replaced with
`tf.estimator`:
* `tf.contrib.estimator.BaselineEstimator` with
`tf.estimator.BaselineEstimator`
* `tf.contrib.estimator.DNNLinearCombinedEstimator` with
`tf.estimator.DNNLinearCombinedEstimator`
* `tf.contrib.estimator.DNNEstimator` with `tf.estimator.DNNEstimator`
* `tf.contrib.estimator.LinearEstimator` with
`tf.estimator.LinearEstimator`
* `tf.contrib.estimator.InMemoryEvaluatorHook` with
`tf.estimator.experimental.InMemoryEvaluatorHook`.
* `tf.contrib.estimator.make_stop_at_checkpoint_step_hook` with
`tf.estimator.experimental.make_stop_at_checkpoint_step_hook`.
* Expose `tf.distribute.Strategy` as the new name for
`tf.contrib.distribute.DistributionStrategy`.
* Migrate linear optimizer from contrib to core.
* Move `tf.contrib.signal` to `tf.signal` (preserving aliases in
`tf.contrib.signal`).
* Users of `tf.contrib.estimator.export_all_saved_models` and related
should switch to
`tf.estimator.Estimator.experimental_export_all_saved_models`.
* `tf.data`:
* Add `tf.data.experimental.StatsOptions()`, to configure options to
collect statistics from a `tf.data.Dataset` pipeline using
`StatsAggregator`. Add a nested option, `experimental_stats` (which takes
a `tf.data.experimental.StatsOptions` object), to `tf.data.Options`.
Deprecates `tf.data.experimental.set_stats_aggregator`.
* Performance optimizations:
* Add `tf.data.experimental.OptimizationOptions()`, to configure options
to enable `tf.data` performance optimizations. Add a nested option,
`experimental_optimization` (which takes a
`tf.data.experimental.OptimizationOptions` object), to
`tf.data.Options`. Remove performance optimization options from
`tf.data.Options`, and add them under
`tf.data.experimental.OptimizationOptions` instead.
* Enable `map_and_batch_fusion` and `noop_elimination` optimizations by
default. They can be disabled by configuring
`tf.data.experimental.OptimizationOptions` to set `map_and_batch =
False` or `noop_elimination = False` respectively. To disable all
default optimizations, set `apply_default_optimizations = False`.
* Support parallel map in `map_and_filter_fusion`.
* Disable static optimizations for input pipelines that use non-resource
`tf.Variable`s.
* Add NUMA-aware MapAndBatch dataset.
* Deprecate `tf.data.Dataset.make_one_shot_iterator()` in V1, removed it
from V2, and added `tf.compat.v1.data.make_one_shot_iterator()`.
* Deprecate `tf.data.Dataset.make_initializable_iterator()` in V1, removed
it from V2, and added `tf.compat.v1.data.make_initializable_iterator()`.
* Enable nested dataset support in core `tf.data` transformations.
* For `tf.data.Dataset` implementers: Added the
`tf.data.Dataset._element_structure` property to replace
`Dataset.output_{types,shapes,classes}`.
* Make `num_parallel_calls` of `tf.data.Dataset.interleave` and
`tf.data.Dataset.map` work in Eager mode.
* Toolchains
* Fixed OpenSSL compatibility by avoiding `EVP_MD_CTX_destroy`.
* Added bounds checking to printing deprecation warnings.
* Upgraded CUDA dependency to 10.0.
* To build with Android NDK r14b, add `#include <linux/compiler.h>` to
`android-ndk-r14b/platforms/android-14/arch-*/usr/include/linux/futex.h`.
* Removed `:android_tensorflow_lib_selective_registration*` targets, use
`:android_tensorflow_lib_lite*` targets instead.
* XLA
* Move `RoundToEven` function to xla/client/lib/math.h.
* A new environment variable `TF_XLA_DEBUG_OPTIONS_PASSTHROUGH`, set to "1"
or "true", allows the debug options passed within an XRTCompile op to be
passed directly to the XLA compilation backend. If the variable is not
set (service side), only a restricted set will be passed through.
* Allow the XRTCompile op to return the ProgramShape resulting from the
XLA compilation as a second return argument.
* XLA HLO graphs can now be rendered as SVG/HTML.
* Estimator
* Replace all occurrences of `tf.contrib.estimator.BaselineEstimator` with
`tf.estimator.BaselineEstimator`
* Replace all occurrences of
`tf.contrib.estimator.DNNLinearCombinedEstimator` with
`tf.estimator.DNNLinearCombinedEstimator`
* Replace all occurrences of `tf.contrib.estimator.DNNEstimator` with
`tf.estimator.DNNEstimator`
* Replace all occurrences of `tf.contrib.estimator.LinearEstimator` with
`tf.estimator.LinearEstimator`
* Users of `tf.contrib.estimator.export_all_saved_models` and related
should switch to
`tf.estimator.Estimator.experimental_export_all_saved_models`.
* Update `regression_head` to the new Head API for Canned Estimator V2.
* Switch `multi_class_head` to Head API for Canned Estimator V2.
* Replace all occurrences of `tf.contrib.estimator.InMemoryEvaluatorHook`
and `tf.contrib.estimator.make_stop_at_checkpoint_step_hook` with
`tf.estimator.experimental.InMemoryEvaluatorHook` and
`tf.estimator.experimental.make_stop_at_checkpoint_step_hook`
* Migrate linear optimizer from contrib to core.
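
A small sketch of the new unicode ops named above; the sample string is
illustrative, and under 1.13 graph mode the results would still need a
session to evaluate:

```python
import tensorflow as tf

# Decode UTF-8 bytes into Unicode code points, then re-encode them.
codepoints = tf.strings.unicode_decode([u'héllo'.encode('utf-8')], 'UTF-8')
bytes_again = tf.strings.unicode_encode(codepoints, 'UTF-8')
```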

Thanks to our Contributors

This release contains contributions from many people at Google, as well as:

Abhinav Upadhyay, Ag Ramesh, akikaaa, Alexis Louis, Anders Huss, Andreas Madsen,
Andrew Banchich, Andy Craze, Anton Dmitriev, Artem Malykh, Avijit-Nervana,
Balint Cristian, Benjamin Tan Wei Hao, Bhavani Subramanian, Brendan Finan, Brian
Nemsick, Bryan Cutler, By Shen, Cao Zongyan, Castiel, Chris Antaki, Christian
Goll, Cibifang, Clayne Robison, Codrut Grosu, Cong Xu, Dalmo Cirne, Daniel
Hunter, Dougal J. Sutherland, Edvard Fagerholm, EFanZh, Erik Smistad, Evgeniy
Polyakov, Feiyang Chen, franklin5, Fred Reiss, Gautam, gehring, Geoffrey Irving,
George Sterpu, Gitea, Grzegorz George Pawelczak, Guozhong Zhuang, himkt,
Hoeseong Kim, Huan Li (李卓桓), HuiyangFei, hyunyoung, Isaac Burbank, jackonan,
Jacky Ko, Jason Furmanek, Jason Zaman, Javier Luraschi, Jiang,Zhoulong, joaak,
John Lin, Jonathan Wyatt Hoech, josephyearsley, Josh Gordon, Julian Niedermeier,
Karl Lessard, Keno Fischer, lanhin, Leon Graser, leondgarse, Li, Guizi, Li,
Yiqiang, lxl910915, Mahmoud Abuzaina, manhyuk, Marcela Morales Quispe,
margaretmz, Matt Conley, Max Pumperla, mbhuiyan, mdfaijul, Meng, Peng, Michael,
Michael Gielda, mrTsjolder, Muhammad Wildan, neargye, Nehal J Wani, NEWPLAN,
Niranjan Hasabnis, Nutti, olicht, Pan Daoxin, Pedro Monreal, Peng Yu,
pillarpond, Pooya Davoodi, qiezi, Rholais Lii, Richard Yu, Rin Arakaki, Roger
Iyengar, sahilbadyal, Sami Kama, Sandip Giri, Scott Leishman, Serge Panev,
Seunghoon Park, Shafi Dayatar, shengfuintel, Shimin Guo, Siju, silent567, Stefan
Dyulgerov, steven, Tao Wei, Thor Johnsen, Tingbo Lu, tomguluson92, Tongxuan Liu,
Trevor Morris, Ubuntu, Vadim Borisov, vanderliang, wangsiyu, Wen Yun, Wen-Heng
(Jack) Chung, wenxizhu, William D. Irons, Xiaoming (Jason) Cui, Yan Facai (颜发才),
Yanbo Liang, Yaniv Blumenfeld, Yash Gaurkar, Yicheng Fan, Yong Tang, Yongjoon
Lee, Yuan (Terry) Tang, Yuxin Wu, zldrobit

1.12.3

Bug Fixes and Other Changes

* Updates `png_archive` dependency to 1.6.37 to not be affected by
CVE-2019-7317, CVE-2018-13785, and CVE-2018-14048.
* Updates `sqlite` dependency to 3.28.0 to not be affected by CVE-2018-20506,
CVE-2018-20346, and CVE-2018-20505.

1.12.2

Bug Fixes and Other Changes

* Fixes a potential security vulnerability where carefully crafted GIF images
can produce a null pointer dereference during decoding.

1.12.0

Major Features and Improvements

* Keras models can now be directly exported to the SavedModel format
(`tf.contrib.saved_model.save_keras_model()`) and used with TensorFlow
Serving (see the sketch after this list).
* Keras models now support evaluating with a `tf.data.Dataset`.
* TensorFlow binaries are built with XLA support linked in by default.
* Ignite Dataset added to contrib/ignite, allowing work with Apache
Ignite.
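
A minimal sketch of the new Keras export path; the model architecture and
export directory are illustrative:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer='sgd', loss='mse')

# Writes a SavedModel that TensorFlow Serving can load.
export_dir = tf.contrib.saved_model.save_keras_model(model, '/tmp/keras_export')
```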

Bug Fixes and Other Changes

* `tf.data`:
* `tf.data` users can now represent, get, and set options of TensorFlow
input pipelines using `tf.data.Options()`, `tf.data.Dataset.options()`,
and `tf.data.Dataset.with_options()` respectively.
* New `tf.data.Dataset.reduce()` API allows users to reduce a finite
dataset to a single element using a user-provided reduce function.
* New `tf.data.Dataset.window()` API allows users to create finite windows
of the input dataset; when combined with the `tf.data.Dataset.reduce()`
API, this allows users to implement customized batching (see the sketch
after this list).
* All C++ code moves to the `tensorflow::data` namespace.
* Add support for `num_parallel_calls` to `tf.data.Dataset.interleave`.
* `tf.contrib`:
* Remove `tf.contrib.linalg`. `tf.linalg` should be used instead.
* Replace any calls to `tf.contrib.get_signature_def_by_key(metagraph_def,
signature_def_key)` with
`meta_graph_def.signature_def[signature_def_key]`. Catching a ValueError
exception thrown by `tf.contrib.get_signature_def_by_key` should be
replaced by catching a KeyError exception.
* `tf.contrib.data`
* Deprecated; replaced by `tf.data.experimental`.
* Other:
* Instead of jemalloc, revert back to using system malloc since it
simplifies the build and has comparable performance.
* Remove integer types from `tf.nn.softplus` and `tf.nn.softsign` OpDefs.
This is a bugfix; these ops were never meant to support integers.
* Allow subslicing Tensors with a single dimension.
* Add option to calculate string length in Unicode characters.
* Add functionality to SubSlice a tensor.
* Add searchsorted (i.e. lower/upper_bound) op.
* Add model explainability to Boosted Trees.
* Support negative positions for `tf.substr`.
* There was previously a bug in `bijector_impl` where
`_reduce_jacobian_det_over_event` did not handle scalar ILDJ
implementations properly; this has been fixed.
* In TF eager execution, allow re-entering a `GradientTape` context.
* Add `tf_api_version` flag. If the `--define=tf_api_version=2` flag is
passed in, then bazel will build the TensorFlow API version 2.0. Note
that TensorFlow 2.0 is under active development and has no guarantees at
this point.
* Add additional compression options to TfRecordWriter.
* Performance improvements for regex full match operations.
* Replace `tf.GraphKeys.VARIABLES` with `tf.GraphKeys.GLOBAL_VARIABLES`.
* Remove unused dynamic learning rate support.
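
A short sketch of the new `reduce()` and `window()` APIs; the values are
illustrative:

```python
import tensorflow as tf

ds = tf.data.Dataset.range(10)

# Fold a finite dataset down to a single element.
total = ds.reduce(tf.constant(0, tf.int64), lambda state, x: state + x)

# Finite windows over the input; combined with reduce(), this supports
# customized batching.
windows = ds.window(4)
```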

Thanks to our Contributors

This release contains contributions from many people at Google, as well as:

(David) Siu-Kei Muk, Ag Ramesh, Anton Dmitriev, Artem Sobolev, Avijit-Nervana,
Bairen Yi, Bruno Goncalves, By Shen, candy.dc, Cheng Chen, Clayne Robison,
coder3101, Dao Zhang, Elms, Fei Hu, feiquan, Geoffrey Irving, Guozhong Zhuang,
hellcom, Hoeseong Kim, imsheridan, Jason Furmanek, Jason Zaman, Jenny Sahng,
jiefangxuanyan, Johannes Bannhofer, Jonathan Homer, Koan-Sin Tan, kouml, Loo
Rong Jie, Lukas Geiger, manipopopo, Ming Li, Moritz KröGer, Naurril, Niranjan
Hasabnis, Pan Daoxin, Peng Yu, pengwa, rasmi, Roger Xin, Roland Fernandez, Sami
Kama, Samuel Matzek, Sangjung Woo, Sergei Lebedev, Sergii Khomenko, shaohua,
Shaohua Zhang, Shujian2015, Sunitha Kambhampati, tomguluson92, ViníCius Camargo,
wangsiyu, weidankong, Wen-Heng (Jack) Chung, William D. Irons, Xin Jin, Yan
Facai (颜发才), Yanbo Liang, Yash Katariya, Yong Tang, 在原佐为
