TensorFlow


1.6.0

Not secure
Breaking Changes

* Prebuilt binaries are now built against CUDA 9.0 and cuDNN 7.
* Prebuilt binaries will use AVX instructions. This may break TF on older
CPUs.

Major Features And Improvements

* New Optimizer internal API for non-slot variables. Descendants of
AdamOptimizer that access `_beta[12]_power` will need to be updated.
* `tf.estimator.{FinalExporter,LatestExporter}` now export stripped
SavedModels. This improves forward compatibility of the SavedModel.
* FFT support added to XLA CPU/GPU.

Bug Fixes and Other Changes

* Documentation updates:
* Added a second version of Getting Started, which is aimed at ML
newcomers.
* Clarified documentation on `resize_images.align_corners` parameter.
* Additional documentation for TPUs.
* Google Cloud Storage (GCS):
* Add client-side throttle.
* Add a `FlushCaches()` method to the FileSystem interface, with an
implementation for GcsFileSystem.
* Other:
* Add `tf.contrib.distributions.Kumaraswamy`.
* `RetryingFileSystem::FlushCaches()` calls the base FileSystem's
`FlushCaches()`.
* Add `auto_correlation` to distributions.
* Add `tf.contrib.distributions.Autoregressive`.
* Add SeparableConv1D layer.
* Add convolutional Flipout layers.
* When both inputs of `tf.matmul` are bfloat16, it now returns bfloat16
instead of float32 (see the sketch after this list).
* Added `tf.contrib.image.connected_components`.
* Add `tf.contrib.framework.CriticalSection` that allows atomic variable
access.
* Output variance over tree predictions for classification tasks.
* For `pt` and `eval` commands, allow writing tensor values to filesystem
as numpy files.
* gRPC: Propagate truncated errors (instead of returning gRPC internal
error).
* Augment `parallel_interleave` to support 2 kinds of prefetching.
* Improved XLA support for C64-related ops: log, pow, atan2, tanh.
* Add probabilistic convolutional layers.
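
A minimal sketch of the bfloat16 `tf.matmul` item above, assuming the TF 1.6 graph API (the operand values are illustrative only):

```python
import tensorflow as tf

# Cast both operands to bfloat16; as of 1.6 the product stays bfloat16
# (previously the result was promoted to float32).
a = tf.cast(tf.constant([[1.0, 2.0]]), tf.bfloat16)
b = tf.cast(tf.constant([[3.0], [4.0]]), tf.bfloat16)
product = tf.matmul(a, b)

print(product.dtype)  # expected: <dtype: 'bfloat16'>
```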

API Changes

* Introduce a `prepare_variance` boolean with a default of `False` for
backward compatibility.
* Move `layers_dense_variational_impl.py` to `layers_dense_variational.py`.

Known Bugs

* Using XLA:GPU with CUDA 9 and CUDA 9.1 results in garbage results and/or
`CUDA_ILLEGAL_ADDRESS` failures.

Google discovered in mid-December 2017 that the PTX-to-SASS compiler in CUDA
9 and CUDA 9.1 sometimes does not properly compute the carry bit when
decomposing 64-bit address calculations with large offsets (e.g. `load [x +
large_constant]`) into 32-bit arithmetic in SASS.

As a result, these versions of `ptxas` miscompile most XLA programs which
use more than 4GB of temp memory. This results in garbage results and/or
`CUDA_ERROR_ILLEGAL_ADDRESS` failures.

A fix in CUDA 9.1.121 is expected in late February 2018. We do not expect a
fix for CUDA 9.0.x. Until the fix is available, the only workaround is to
[downgrade](https://developer.nvidia.com/cuda-toolkit-archive) to CUDA 8.0.x
or disable XLA:GPU.

TensorFlow will print a warning if you use XLA:GPU with a known-bad version
of CUDA; see e00ba24c4038e7644da417ddc639169b6ea59122.
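
The notes above name two workarounds: downgrading CUDA or disabling XLA:GPU. Since XLA JIT compilation is opt-in in this release line, one way to keep XLA:GPU out of the picture is to pin the global JIT level to `OFF` in the session config. A minimal sketch, assuming the TF 1.x `ConfigProto`/`Session` API:

```python
import tensorflow as tf

# Keep XLA JIT compilation disabled so XLA:GPU never runs against a
# known-bad ptxas (CUDA 9.0 / 9.1).
config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = (
    tf.OptimizerOptions.OFF)

with tf.Session(config=config) as sess:
    x = tf.constant([1.0, 2.0, 3.0])
    print(sess.run(tf.reduce_sum(x)))
```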

Thanks to our Contributors

This release contains contributions from many people at Google, as well as:

4d55397500, Ag Ramesh, Aiden Scandella, Akimasa Kimura, Alex Rothberg, Allen
Goodman, amilioto, Andrei Costinescu, Andrei Nigmatulin, Anjum Sayed, Anthony
Platanios, Anush Elangovan, Armando Fandango, Ashish Kumar Ram, Ashwini Shukla,
Ben, Bhavani Subramanian, Brett Koonce, Carl Thomé, cclauss, Cesc, Changming
Sun, Christoph Boeddeker, Clayne Robison, Clemens Schulz, Clint (Woonhyuk Baek),
codrut3, Cole Gerdemann, Colin Raffel, Daniel Trebbien, Daniel Ylitalo, Daniel
Zhang, Daniyar, Darjan Salaj, Dave Maclachlan, David Norman, Dong--Jian,
dongsamb, dssgsra, Edward H, eladweiss, elilienstein, Eric Lilienstein, error.d,
Eunji Jeong, fanlu, Florian Courtial, fo40225, Fred, Gregg Helt, Guozhong
Zhuang, Hanchen Li, hsm207, hyunyoung2, ImSheridan, Ishant Mrinal Haloi, Jacky
Ko, Jay Young, Jean Flaherty, Jerome, JerrikEph, Jesse Kinkead, jfaath, Jian
Lin, jinghuangintel, Jiongyan Zhang, Joel Hestness, Joel Shor, Johnny Chan,
Julian Niedermeier, Julian Wolff, JxKing, K-W-W, Karl Lessard, Kasper Marstal,
Keiji Ariyama, Koan-Sin Tan, Loki Der Quaeler, Loo Rong Jie, Luke Schaefer, Lynn
Jackson, ManHyuk, Matt Basta, Matt Smith, Matthew Schulkind, Michael,
michaelkhan3, Miguel Piedrafita, Mikalai Drabovich, Mike Knapp, mjwen, mktozk,
Mohamed Aly, Mohammad Ashraf Bhuiyan, Myungjoo Ham, Naman Bhalla, Namrata-Ibm,
Nathan Luehr, nathansilberman, Netzeband, Niranjan Hasabnis, Omar Aflak, Ozge
Yalcinkaya, Parth P Panchal, patrickzzy, Patryk Chrabaszcz, Paul Van Eck, Paweł
Kapica, Peng Yu, Philip Yang, Pierre Blondeau, Po-Hsien Chu, powderluv, Puyu
Wang, Rajendra Arora, Rasmus, Renat Idrisov, resec, Robin Richtsfeld, Ronald
Eddy Jr, Sahil Singh, Sam Matzek, Sami Kama, sandipmgiri, Santiago Castro, Sayed
Hadi Hashemi, Scott Tseng, Sergii Khomenko, Shahid, Shengpeng Liu, Shreyash
Sharma, Shrinidhi Kl, Simone Cirillo, simsicon, Stanislav Levental,
starsblinking, Stephen Lumenta, Steven Hickson, Su Tang, Taehoon Lee, Takuya
Wakisaka, Ted Chang, Ted Ying, Tijmen Verhulsdonck, Timofey Kondrashov, vade,
vaibhav, Valentin Khrulkov, vchigrin, Victor Costan, Viraj Navkal, Vivek Rane,
wagonhelm, Yan Facai (颜发才), Yanbo Liang, Yaroslav Bulatov, yegord, Yong Tang,
Yoni Tsafir, yordun, Yuan (Terry) Tang, Yuxin Wu, zhengdi, Zhengsheng Wei, 田传武

1.5.0

Not secure
Breaking Changes

* Prebuilt binaries are now built against CUDA 9.0 and cuDNN 7.
* Starting from the 1.6 release, our prebuilt binaries will use AVX
instructions. This may break TF on older CPUs.

Major Features And Improvements

* [Eager execution](https://github.com/tensorflow/tensorflow/tree/r1.5/tensorflow/contrib/eager)
preview version is now available.
* [TensorFlow Lite](https://github.com/tensorflow/tensorflow/tree/r1.5/tensorflow/lite)
dev preview is now available.
* CUDA 9.0 and cuDNN 7 support.
* Accelerated Linear Algebra (XLA):
* Add `complex64` support to XLA compiler.
* `bfloat` support is now added to XLA infrastructure.
* Make `ClusterSpec` propagation work with XLA devices.
* Use a deterministic executor to generate XLA graph.
* `tf.contrib`:
* `tf.contrib.distributions`:
* Add `tf.contrib.distributions.Autoregressive`.
* Make `tf.contrib.distributions` QuadratureCompound classes support batch.
* Infer `tf.contrib.distributions.RelaxedOneHotCategorical` `dtype` from
arguments.
* Make `tf.contrib.distributions` quadrature family parameterized by
`quadrature_grid_and_prob` vs `quadrature_degree`.
* `auto_correlation` added to `tf.contrib.distributions`
* Add `tf.contrib.bayesflow.layers`, a collection of probabilistic
(neural) layers.
* Add `tf.contrib.bayesflow.halton_sequence`.
* Add `tf.contrib.data.make_saveable_from_iterator.`
* Add `tf.contrib.data.shuffle_and_repeat`.
* Add new custom transformation: `tf.contrib.data.scan()`.
* `tf.contrib.distributions.bijectors`:
* Add `tf.contrib.distributions.bijectors.MaskedAutoregressiveFlow`.
* Add `tf.contrib.distributions.bijectors.Permute`.
* Add `tf.contrib.distributions.bijectors.Gumbel`.
* Add `tf.contrib.distributions.bijectors.Reshape`.
* Support shape inference (i.e., shapes containing -1) in the Reshape
bijector.
* Add `streaming_precision_recall_at_equal_thresholds`, a method for computing
streaming precision and recall with `O(num_thresholds + size of
predictions)` time and space complexity.
* Change `RunConfig` default behavior to not set a random seed, making random
behavior independently random on distributed workers. We expect this to
generally improve training performance. Models that do rely on determinism
should set a random seed explicitly.
* Replaced the implementation of `tf.flags` with `absl.flags`.
* Add support for `CUBLAS_TENSOR_OP_MATH` in fp16 GEMM
* Add support for CUDA on NVIDIA Tegra devices

Bug Fixes and Other Changes

* Documentation updates:
* Clarified that you can only install TensorFlow on 64-bit machines.
* Added a short doc explaining how `Estimator`s save checkpoints.
* Add documentation for ops supported by the `tf2xla` bridge.
* Fix minor typos in the doc of `SpaceToDepth` and `DepthToSpace`.
* Updated documentation comments in `mfcc_mel_filterbank.h` and `mfcc.h`
to clarify that the input domain is squared magnitude spectra and the
weighting is done on linear magnitude spectra (sqrt of inputs).
* Change `tf.contrib.distributions` docstring examples to use `tfd` alias
rather than `ds`, `bs`.
* Fix docstring typos in `tf.distributions.bijectors.Bijector`.
* `tf.assert_equal` no longer raises `ValueError`. It now raises
`InvalidArgumentError`, as documented.
* Update Getting Started docs and API intro.
* Google Cloud Storage (GCS):
* Add userspace DNS caching for the GCS client.
* Customize request timeouts for the GCS filesystem.
* Improve GCS filesystem caching.
* Bug Fixes:
* Fix bug where partitioned integer variables got their wrong shapes. Before
this change, all partitions of an integer variable were initialized with the
shape of the unpartitioned variable; after this change they are initialized
correctly.
* Fix correctness bug in CPU and GPU implementations of Adadelta.
* Fix a bug in `import_meta_graph`'s handling of partitioned variables
when importing into a scope. WARNING: This may break loading checkpoints
of graphs with partitioned variables saved after using
`import_meta_graph` with a non-empty `import_scope` argument.
* Fix bug in offline debugger which prevented viewing events.
* Added the `WorkerService.DeleteWorkerSession` method to the gRPC
interface, to fix a memory leak. Ensure that your master and worker
servers are running the same version of TensorFlow to avoid
compatibility issues.
* Fix bug in peephole implementation of BlockLSTM cell.
* Fix bug by casting dtype of `log_det_jacobian` to match `log_prob` in
`TransformedDistribution`.
* Ensure `tf.distributions.Multinomial` doesn't underflow in `log_prob`.
* Other:
* Add necessary shape util support for bfloat16.
* Add a way to run ops using a step function to MonitoredSession.
* Add `DenseFlipout` probabilistic layer.
* A new flag `ignore_live_threads` is available on train. If set to
`True`, it will ignore threads that remain running when tearing down
infrastructure after successfully completing training, instead of
throwing a RuntimeError.
* Restandardize `DenseVariational` as simpler template for other
probabilistic layers.
* `tf.data` now supports `tf.SparseTensor` components in dataset elements.
* It is now possible to iterate over `Tensor`s.
* Allow `SparseSegmentReduction` ops to have missing segment IDs.
* Modify custom export strategy to account for multidimensional sparse
float splits.
* `Conv2D`, `Conv2DBackpropInput`, and `Conv2DBackpropFilter` now support
arbitrary dilations with GPU and cuDNNv6 support.
* `Estimator` now supports `Dataset`: `input_fn` can return a `Dataset`
instead of `Tensor`s (see the sketch after this list).
* Add `RevBlock`, a memory-efficient implementation of reversible residual
layers.
* Reduce BFCAllocator internal fragmentation.
* Add `cross_entropy` and `kl_divergence` to
`tf.distributions.Distribution`.
* Add `tf.nn.softmax_cross_entropy_with_logits_v2` which enables backprop
w.r.t. the labels.
* GPU back-end now uses `ptxas` to compile generated PTX.
* `BufferAssignment`'s protocol buffer dump is now deterministic.
* Change embedding op to use parallel version of `DynamicStitch`.
* Add support for sparse multidimensional feature columns.
* Speed up the case for sparse float columns that have only 1 value.
* Allow sparse float splits to support multivalent feature columns.
* Add `quantile` to `tf.distributions.TransformedDistribution`.
* Add `NCHW_VECT_C` support for `tf.depth_to_space` on GPU.
* Add `NCHW_VECT_C` support for `tf.space_to_depth` on GPU.
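
A minimal sketch of the `Estimator`-with-`Dataset` item above, assuming the TF 1.5 `tf.estimator` and `tf.data` APIs (the feature column, toy data, and step count are illustrative only):

```python
import tensorflow as tf

def input_fn():
    # input_fn can now return a tf.data.Dataset directly instead of tensors.
    features = {"x": [[1.0], [2.0], [3.0], [4.0]]}
    labels = [0, 1, 0, 1]
    dataset = tf.data.Dataset.from_tensor_slices((features, labels))
    return dataset.shuffle(4).repeat().batch(2)

estimator = tf.estimator.LinearClassifier(
    feature_columns=[tf.feature_column.numeric_column("x")])
estimator.train(input_fn=input_fn, steps=10)
```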

API Changes

* Rename `SqueezeDims` attribute to `Axis` in C++ API for Squeeze op.
* `Stream::BlockHostUntilDone` now returns Status rather than bool.
* Minor refactor: move stats files from `stochastic` to `common` and remove
`stochastic`.

Known Bugs

* Using XLA:GPU with CUDA 9 and CUDA 9.1 results in garbage results and/or
`CUDA_ILLEGAL_ADDRESS` failures.

Google discovered in mid-December 2017 that the PTX-to-SASS compiler in CUDA
9 and CUDA 9.1 sometimes does not properly compute the carry bit when
decomposing 64-bit address calculations with large offsets (e.g. `load [x +
large_constant]`) into 32-bit arithmetic in SASS.

As a result, these versions of `ptxas` miscompile most XLA programs which
use more than 4GB of temp memory. This results in garbage results and/or
`CUDA_ERROR_ILLEGAL_ADDRESS` failures.

A fix in CUDA 9.1.121 is expected in late February 2018. We do not expect a
fix for CUDA 9.0.x. Until the fix is available, the only workaround is to
[downgrade](https://developer.nvidia.com/cuda-toolkit-archive) to CUDA 8.0.x
or disable XLA:GPU.

TensorFlow will print a warning if you use XLA:GPU with a known-bad version
of CUDA; see e00ba24c4038e7644da417ddc639169b6ea59122.

Thanks to our Contributors

This release contains contributions from many people at Google, as well as:

Adam Zahran, Ag Ramesh, Alan Lee, Alan Yee, Alex Sergeev, Alexander, Amir H.
Jadidinejad, Amy, Anastasios Doumoulakis, Andrei Costinescu, Andrei Nigmatulin,
Anthony Platanios, Anush Elangovan, arixlin, Armen Donigian, ArtëM Sobolev,
Atlas7, Ben Barsdell, Bill Prin, Bo Wang, Brett Koonce, Cameron Thomas, Carl
Thomé, Cem Eteke, cglewis, Changming Sun, Charles Shenton, Chi-Hung, Chris
Donahue, Chris Filo Gorgolewski, Chris Hoyean Song, Chris Tava, Christian Grail,
Christoph Boeddeker, cinqS, Clayne Robison, codrut3, concerttttt, CQY, Dan
Becker, Dan Jarvis, Daniel Zhang, David Norman, dmaclach, Dmitry Trifonov,
Donggeon Lim, dongpilYu, Dr. Kashif Rasul, Edd Wilder-James, Eric Lv, fcharras,
Felix Abecassis, FirefoxMetzger, formath, FredZhang, Gaojin Cao, Gary Deer,
Guenther Schmuelling, Hanchen Li, Hanmin Qin, hannesa2, hyunyoung2, Ilya
Edrenkin, Jackson Kontny, Jan, Javier Luraschi, Jay Young, Jayaram Bobba, Jeff,
Jeff Carpenter, Jeremy Sharpe, Jeroen BéDorf, Jimmy Jia, Jinze Bai, Jiongyan
Zhang, Joe Castagneri, Johan Ju, Josh Varty, Julian Niedermeier, JxKing, Karl
Lessard, Kb Sriram, Keven Wang, Koan-Sin Tan, Kyle Mills, lanhin, LevineHuang,
Loki Der Quaeler, Loo Rong Jie, Luke Iwanski, LáSzló Csomor, Mahdi Abavisani,
Mahmoud Abuzaina, ManHyuk, Marek ŠUppa, MathSquared, Mats Linander, Matt Wytock,
Matthew Daley, Maximilian Bachl, mdymczyk, melvyniandrag, Michael Case, Mike
Traynor, miqlas, Namrata-Ibm, Nathan Luehr, Nathan Van Doorn, Noa Ezra, Nolan
Liu, Oleg Zabluda, opensourcemattress, Ouwen Huang, Paul Van Eck, peisong, Peng
Yu, PinkySan, pks, powderluv, Qiao Hai-Jun, Qiao Longfei, Rajendra Arora, Ralph
Tang, resec, Robin Richtsfeld, Rohan Varma, Ryohei Kuroki, SaintNazaire, Samuel
He, Sandeep Dcunha, sandipmgiri, Sang Han, scott, Scott Mudge, Se-Won Kim, Simon
Perkins, Simone Cirillo, Steffen Schmitz, Suvojit Manna, Sylvus, Taehoon Lee,
Ted Chang, Thomas Deegan, Till Hoffmann, Tim, Toni Kunic, Toon Verstraelen,
Tristan Rice, Urs KöSter, Utkarsh Upadhyay, Vish (Ishaya) Abrams, Winnie Tsang,
Yan Chen, Yan Facai (颜发才), Yi Yang, Yong Tang, Youssef Hesham, Yuan (Terry)
Tang, Zhengsheng Wei, zxcqwe4906, 张志豪, 田传武

We are also grateful to all who filed issues or helped resolve them, asked and
answered questions, and were part of inspiring discussions.

1.4.1

Not secure
Bug Fixes and Other Changes

* `LinearClassifier` fix.

1.4.0

Not secure
Major Features And Improvements

* `tf.keras` is now part of the core TensorFlow API.
* [`tf.data`](http://tensorflow.org/guide/data) is now part of the core
TensorFlow API.
* The API is now subject to backwards compatibility guarantees.
* For a guide to migrating from the `tf.contrib.data` API, see the
[README](https://github.com/tensorflow/tensorflow/blob/r1.4/tensorflow/contrib/data/README.md).
* Major new features include `Dataset.from_generator()` (for building an
input pipeline from a Python generator), and the `Dataset.apply()`
method for applying custom transformation functions.
* Several custom transformation functions have been added, including
`tf.contrib.data.batch_and_drop_remainder()` and
`tf.contrib.data.sloppy_interleave()`.
* Add `train_and_evaluate` for simple distributed `Estimator` training.
* Add `tf.spectral.dct` for computing the DCT-II.
* Add Mel-Frequency Cepstral Coefficient support to `tf.contrib.signal` (with
GPU and gradient support).
* Add a self-check on `import tensorflow` for Windows DLL issues.
* Add NCHW support to `tf.depth_to_space` on GPU.
* TensorFlow Debugger (tfdbg):
* Add `eval` command to allow evaluation of arbitrary Python/numpy
expressions in tfdbg command-line interface. See
[Debugging TensorFlow Programs](https://www.tensorflow.org/guide/debugger)
for more details.
* Usability improvement: The frequently used tensor filter
`has_inf_or_nan` is now added to `Session` wrappers and hooks by
default. So there is no need for clients to call
`.add_tensor_filter(tf_debug.has_inf_or_nan)` anymore.
* SinhArcsinh (scalar) distribution added to `contrib.distributions`.
* Make `GANEstimator` open source.
* `Estimator.export_savedmodel()` now includes all valid serving signatures
that can be constructed from the Serving Input Receiver and all available
ExportOutputs. For instance, a classifier may provide regression- and
prediction-flavored outputs, in addition to the classification-flavored one.
Building signatures from these allows TF Serving to honor requests using the
different APIs (Classify, Regress, and Predict). Furthermore,
`serving_input_receiver_fn()` may now specify alternative subsets of nodes
that may act as inputs. This allows, for instance, producing a prediction
signature for a classifier that accepts raw `Tensor`s instead of a
serialized `tf.Example` (see the sketch after this list).
* Add `tf.contrib.bayesflow.hmc`.
* Add `tf.contrib.distributions.MixtureSameFamily`.
* Make `Dataset.shuffle()` always reshuffle after each iteration by default.
* Add `tf.contrib.bayesflow.metropolis_hastings`.
* Add `log_rate` parameter to `tf.contrib.distributions.Poisson`.
* Extend `tf.contrib.distributions.bijector` API to handle some non-injective
transforms.
* Java:
* Generics (e.g., `Tensor<Integer>`) for improved type-safety (courtesy
andrewcmyers).
* Support for multi-dimensional string tensors.
* Support loading of custom operations (e.g. many in `tf.contrib`) on
Linux and OS X
* All our prebuilt binaries have been built with CUDA 8 and cuDNN 6. We
anticipate releasing TensorFlow 1.5 with CUDA 9 and cuDNN 7.
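
A minimal sketch of the `Estimator.export_savedmodel()` flow described above, assuming the TF 1.4 `tf.estimator.export` API (the feature spec, toy training data, and export path are illustrative only):

```python
import numpy as np
import tensorflow as tf

feature_columns = [tf.feature_column.numeric_column("x", shape=[1])]
estimator = tf.estimator.LinearClassifier(feature_columns=feature_columns)

# Train briefly so a checkpoint exists to export.
train_input = tf.estimator.inputs.numpy_input_fn(
    x={"x": np.array([[0.0], [1.0]], dtype=np.float32)},
    y=np.array([0, 1]), batch_size=2, num_epochs=None, shuffle=True)
estimator.train(train_input, steps=5)

def serving_input_receiver_fn():
    # Serving clients send serialized tf.Example protos; parse them into
    # the feature dict the model expects.
    serialized = tf.placeholder(tf.string, shape=[None], name="input_example")
    feature_spec = {"x": tf.FixedLenFeature([1], tf.float32)}
    features = tf.parse_example(serialized, feature_spec)
    return tf.estimator.export.ServingInputReceiver(
        features, {"examples": serialized})

estimator.export_savedmodel("/tmp/exported_model", serving_input_receiver_fn)
```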

Bug Fixes and Other Changes

* `tf.nn.rnn_cell.DropoutWrapper` is now more careful about dropping out LSTM
states. Specifically, it no longer ever drops the `c` (memory) state of an
`LSTMStateTuple`. The new behavior leads to proper dropout behavior for
LSTMs and stacked LSTMs. This bug fix follows recommendations from published
literature, but is a behavioral change. State dropout behavior may be
customized via the new `dropout_state_filter_visitor` argument.
* Removed `tf.contrib.training.python_input`. The same behavior, in a more
flexible and reproducible package, is available via the new
`tf.contrib.data.Dataset.from_generator` method!
* Fix `tf.contrib.distributions.Affine` incorrectly computing
log-det-jacobian.
* Fix `tf.random_gamma` incorrectly handling non-batch, scalar draws.
* Resolved a race condition in TensorForest TreePredictionsV4Op.
* Google Cloud Storage file system, Amazon S3 file system, and Hadoop file
system support are now default build options.
* Custom op libraries must link against libtensorflow_framework.so (installed
at `tf.sysconfig.get_lib()`).
* Change `RunConfig` default behavior to not set a random seed, making random
behavior independently random on distributed workers. We expect this to
generally improve training performance. Models that do rely on determinism
should set a random seed explicitly.
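
Since `RunConfig` no longer sets a seed, models that need deterministic behavior should pass one explicitly. A minimal sketch, assuming the TF 1.4 `tf.estimator` API (the seed value, feature column, and layer sizes are illustrative only):

```python
import tensorflow as tf

# Explicitly fix the graph-level random seed; by default RunConfig now
# leaves it unset so distributed workers are independently random.
config = tf.estimator.RunConfig(tf_random_seed=42)

estimator = tf.estimator.DNNClassifier(
    feature_columns=[tf.feature_column.numeric_column("x")],
    hidden_units=[16],
    config=config)
```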

Breaking Changes to the API

* The signature of the `tf.contrib.data.rejection_resample()` function has
been changed. It now returns a function that can be used as an argument to
`Dataset.apply()`.
* Remove `tf.contrib.data.Iterator.from_dataset()` method. Use
`Dataset.make_initializable_iterator()` instead.
* Remove seldom used and unnecessary `tf.contrib.data.Iterator.dispose_op()`.
* Reorder some TF-GAN loss functions in a non-backwards compatible way.

Known Issues

* In Python 3, `Dataset.from_generator()` does not support Unicode strings.
You must convert any strings to bytes objects before yielding them from the
generator.
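
A minimal sketch of the workaround, assuming the TF 1.4 `tf.data` API (the example strings are illustrative only):

```python
import tensorflow as tf

def gen():
    for s in [u"hello", u"world"]:
        # Python 3 workaround: yield bytes objects, not unicode strings.
        yield s.encode("utf-8")

dataset = tf.data.Dataset.from_generator(gen, output_types=tf.string)
```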

Thanks to our Contributors

This release contains contributions from many people at Google, as well as:

4d55397500, Abdullah Alrasheed, abenmao, Adam Salvail, Aditya Dhulipala, Ag
Ramesh, Akimasa Kimura, Alan Du, Alan Yee, Alexander, Amit Kushwaha, Amy, Andrei
Costinescu, Andrei Nigmatulin, Andrew Erlichson, Andrew Myers, Andrew Stepanov,
Androbin, AngryPowman, Anish Shah, Anton Daitche, Artsiom Chapialiou, asdf2014,
Aseem Raj Baranwal, Ash Hall, Bart Kiers, Batchu Venkat Vishal, ben, Ben
Barsdell, Bill Piel, Carl Thomé, Catalin Voss, Changming Sun, Chengzhi Chen, Chi
Zeng, Chris Antaki, Chris Donahue, Chris Oelmueller, Chris Tava, Clayne Robison,
Codrut, Courtial Florian, Dalmo Cirne, Dan J, Darren Garvey, David
Kristoffersson, David Norman, David RöThlisberger, DavidNorman, Dhruv, DimanNe,
Dorokhov, Duncan Mac-Vicar P, EdwardDixon, EMCP, error.d, FAIJUL, Fan Xia,
Francois Xavier, Fred Reiss, Freedom" Koan-Sin Tan, Fritz Obermeyer, Gao, Xiang,
Guenther Schmuelling, Guo Yejun (郭叶军), Hans Gaiser, HectorSVC, Hyungsuk Yoon,
James Pruegsanusak, Jay Young, Jean Wanka, Jeff Carpenter, Jeremy Rutman, Jeroen
BéDorf, Jett Jones, Jimmy Jia, jinghuangintel, jinze1994, JKurland, Joel
Hestness, joetoth, John B Nelson, John Impallomeni, John Lawson, Jonas, Jonathan
Dekhtiar, joshkyh, Jun Luan, Jun Mei, Kai Sasaki, Karl Lessard, karlkubx.ca, Kb
Sriram, Kenichi Ueno, Kevin Slagle, Kongsea, Lakshay Garg, lhlmgr, Lin Min,
liu.guangcong, Loki Der Quaeler, Louie Helm, lucasmoura, Luke Iwanski, Lyndon
White, Mahmoud Abuzaina, Marcel Puyat, Mark Aaron Shirley, Michele Colombo,
MtDersvan, Namrata-Ibm, Nathan Luehr, Naurril, Nayana Thorat, Nicolas Lopez,
Niranjan Hasabnis, Nolan Liu, Nouce, Oliver Hennigh, osdamv, Patrik Erdes,
Patryk Chrabaszcz, Pavel Christof, Penghao Cen, postBG, Qingqing Cao, Qingying
Chen, qjivy, Raphael, Rasmi, raymondxyang, Renze Yu, resec, Roffel, Ruben
Vereecken, Ryohei Kuroki, sandipmgiri, Santiago Castro, Scott Kirkland, Sean
Vig, Sebastian Raschka, Sebastian Weiss, Sergey Kolesnikov, Sergii Khomenko,
Shahid, Shivam Kotwalia, Stuart Berg, Sumit Gouthaman, superzerg, Sven Mayer,
tetris, Ti Zhou, Tiago Freitas Pereira, Tian Jin, Tomoaki Oiki, Vaibhav Sood,
vfdev, Vivek Rane, Vladimir Moskva, wangqr, Weber Xie, Will Frey, Yan Facai
(颜发才), yanivbl6, Yaroslav Bulatov, Yixing Lao, Yong Tang, youkaichao, Yuan
(Terry) Tang, Yue Zhang, Yuxin Wu, Ziming Dong, ZxYuan, 黄璞

We are also grateful to all who filed issues or helped resolve them, asked and
answered questions, and were part of inspiring discussions.

1.3.0

Not secure
See also

1.2.1

Not secure
Bug Fixes and Other Changes

* Update the required markdown version to >= 2.6.8.
* Support tensors as dropout rates again, by removing the min(max(..))
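
A minimal sketch of passing a tensor-valued dropout rate, assuming the TF 1.2 `tf.nn.dropout` API (the shapes and placeholder name are illustrative only):

```python
import tensorflow as tf

x = tf.ones([4, 8])
# keep_prob may again be a tensor (e.g. a placeholder fed at run time)
# rather than a Python float.
keep_prob = tf.placeholder(tf.float32, shape=[], name="keep_prob")
dropped = tf.nn.dropout(x, keep_prob)

with tf.Session() as sess:
    print(sess.run(dropped, feed_dict={keep_prob: 0.5}))
```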
