TensorFlow

Latest version: v2.18.0


1.5.0

Not secure
Breaking Changes

* Prebuilt binaries are now built against CUDA 9.0 and cuDNN 7.
* Starting with the 1.6 release, our prebuilt binaries will use AVX instructions.
This may break TF on older CPUs.

Major Features And Improvements

* [Eager execution](https://github.com/tensorflow/tensorflow/tree/r1.5/tensorflow/contrib/eager)
preview version is now available.
* [TensorFlow Lite](https://github.com/tensorflow/tensorflow/tree/r1.5/tensorflow/lite)
dev preview is now available.
* CUDA 9.0 and cuDNN 7 support.
* Accelerated Linear Algebra (XLA):
* Add `complex64` support to XLA compiler.
* `bfloat` support is now added to XLA infrastructure.
* Make `ClusterSpec` propagation work with XLA devices.
* Use a deterministic executor to generate XLA graph.
* `tf.contrib`:
* `tf.contrib.distributions`:
* Add `tf.contrib.distributions.Autoregressive`.
* Make `tf.contrib.distributions` QuadratureCompound classes support batching.
* Infer `tf.contrib.distributions.RelaxedOneHotCategorical` `dtype` from
arguments.
* Make `tf.contrib.distributions` quadrature family parameterized by
`quadrature_grid_and_prob` vs `quadrature_degree`.
* `auto_correlation` added to `tf.contrib.distributions`
* Add `tf.contrib.bayesflow.layers`, a collection of probabilistic
(neural) layers.
* Add `tf.contrib.bayesflow.halton_sequence`.
* Add `tf.contrib.data.make_saveable_from_iterator`.
* Add `tf.contrib.data.shuffle_and_repeat`.
* Add new custom transformation: `tf.contrib.data.scan()`.
* `tf.contrib.distributions.bijectors`:
* Add `tf.contrib.distributions.bijectors.MaskedAutoregressiveFlow`.
* Add `tf.contrib.distributions.bijectors.Permute`.
* Add `tf.contrib.distributions.bijectors.Gumbel`.
* Add `tf.contrib.distributions.bijectors.Reshape`.
* Support shape inference (i.e., shapes containing -1) in the Reshape
bijector.
* Add `streaming_precision_recall_at_equal_thresholds`, a method for computing
streaming precision and recall with `O(num_thresholds + size of
predictions)` time and space complexity.
* Change `RunConfig` default behavior to not set a random seed, making random
behavior independently random on distributed workers. We expect this to
generally improve training performance. Models that do rely on determinism
should set a random seed explicitly (see the sketch after this list).
* Replaced the implementation of `tf.flags` with `absl.flags`.
* Add support for `CUBLAS_TENSOR_OP_MATH` in fp16 GEMM.
* Add support for CUDA on NVIDIA Tegra devices.
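
To illustrate the `RunConfig` seeding change above, here is a minimal sketch. It assumes the TF 1.x `tf.estimator` API; the `DNNClassifier` model and feature column are illustrative, not part of the release notes.

```python
import tensorflow as tf

# RunConfig no longer sets a seed by default; pass one explicitly if the model
# relies on deterministic behavior.
config = tf.estimator.RunConfig(tf_random_seed=42)
estimator = tf.estimator.DNNClassifier(
    feature_columns=[tf.feature_column.numeric_column("x", shape=[4])],
    hidden_units=[16, 8],
    n_classes=3,
    config=config,
)
```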

Bug Fixes and Other Changes

* Documentation updates:
* Clarified that you can only install TensorFlow on 64-bit machines.
* Added a short doc explaining how `Estimator`s save checkpoints.
* Add documentation for ops supported by the `tf2xla` bridge.
* Fix minor typos in the doc of `SpaceToDepth` and `DepthToSpace`.
* Updated documentation comments in `mfcc_mel_filterbank.h` and `mfcc.h`
to clarify that the input domain is squared magnitude spectra and the
weighting is done on linear magnitude spectra (sqrt of inputs).
* Change `tf.contrib.distributions` docstring examples to use `tfd` alias
rather than `ds`, `bs`.
* Fix docstring typos in `tf.distributions.bijectors.Bijector`.
* `tf.assert_equal` no longer raises `ValueError`. It now raises
`InvalidArgumentError`, as documented.
* Update Getting Started docs and API intro.
* Google Cloud Storage (GCS):
* Add userspace DNS caching for the GCS client.
* Customize request timeouts for the GCS filesystem.
* Improve GCS filesystem caching.
* Bug Fixes:
* Fix bug where partitioned integer variables got the wrong shapes. Before
this change, all partitions of an integer variable were initialized with the
shape of the unpartitioned variable; after this change they are initialized
correctly.
* Fix correctness bug in CPU and GPU implementations of Adadelta.
* Fix a bug in `import_meta_graph`'s handling of partitioned variables
when importing into a scope. WARNING: This may break loading checkpoints
of graphs with partitioned variables saved after using
`import_meta_graph` with a non-empty `import_scope` argument.
* Fix bug in offline debugger which prevented viewing events.
* Added the `WorkerService.DeleteWorkerSession` method to the gRPC
interface, to fix a memory leak. Ensure that your master and worker
servers are running the same version of TensorFlow to avoid
compatibility issues.
* Fix bug in peephole implementation of BlockLSTM cell.
* Fix bug by casting dtype of `log_det_jacobian` to match `log_prob` in
`TransformedDistribution`.
* Ensure `tf.distributions.Multinomial` doesn't underflow in `log_prob`.
* Other:
* Add necessary shape util support for bfloat16.
* Add a way to run ops using a step function to MonitoredSession.
* Add `DenseFlipout` probabilistic layer.
* A new flag `ignore_live_threads` is available on train. If set to
`True`, it will ignore threads that remain running when tearing down
infrastructure after successfully completing training, instead of
raising a `RuntimeError`.
* Restandardize `DenseVariational` as a simpler template for other
probabilistic layers.
* `tf.data` now supports `tf.SparseTensor` components in dataset elements.
* It is now possible to iterate over `Tensor`s.
* Allow `SparseSegmentReduction` ops to have missing segment IDs.
* Modify custom export strategy to account for multidimensional sparse
float splits.
* `Conv2D`, `Conv2DBackpropInput`, and `Conv2DBackpropFilter` now support
arbitrary dilations with GPU and cuDNN v6 support.
* `Estimator` now supports `Dataset`: `input_fn` can return a `Dataset`
instead of `Tensor`s (see the first sketch after this list).
* Add `RevBlock`, a memory-efficient implementation of reversible residual
layers.
* Reduce BFCAllocator internal fragmentation.
* Add `cross_entropy` and `kl_divergence` to
`tf.distributions.Distribution`.
* Add `tf.nn.softmax_cross_entropy_with_logits_v2`, which enables backprop
w.r.t. the labels (see the second sketch after this list).
* GPU back-end now uses `ptxas` to compile generated PTX.
* `BufferAssignment`'s protocol buffer dump is now deterministic.
* Change embedding op to use parallel version of `DynamicStitch`.
* Add support for sparse multidimensional feature columns.
* Speed up the case for sparse float columns that have only 1 value.
* Allow sparse float splits to support multivalent feature columns.
* Add `quantile` to `tf.distributions.TransformedDistribution`.
* Add `NCHW_VECT_C` support for `tf.depth_to_space` on GPU.
* Add `NCHW_VECT_C` support for `tf.space_to_depth` on GPU.
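
As noted in the `Estimator`/`Dataset` item above, `input_fn` may now return a `Dataset` directly. A minimal sketch; the toy data, feature column, and model are assumptions for illustration.

```python
import numpy as np
import tensorflow as tf

def train_input_fn():
    # Return the Dataset itself; the Estimator unpacks (features, labels).
    features = {"x": np.random.rand(100, 4).astype(np.float32)}
    labels = np.random.randint(0, 3, size=100).astype(np.int64)
    dataset = tf.data.Dataset.from_tensor_slices((features, labels))
    return dataset.shuffle(100).repeat().batch(16)

estimator = tf.estimator.DNNClassifier(
    feature_columns=[tf.feature_column.numeric_column("x", shape=[4])],
    hidden_units=[16, 8],
    n_classes=3,
)
estimator.train(train_input_fn, steps=100)
```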
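
And a sketch of `tf.nn.softmax_cross_entropy_with_logits_v2`; the toy logits and soft labels are assumed for illustration. Unlike the original op, gradients also flow into the labels.

```python
import tensorflow as tf

logits = tf.constant([[2.0, 0.5, -1.0]])
soft_labels = tf.constant([[0.7, 0.2, 0.1]])  # e.g. soft targets from another model

loss = tf.nn.softmax_cross_entropy_with_logits_v2(labels=soft_labels,
                                                  logits=logits)
# With the _v2 op this gradient is defined; the original op stopped gradients
# at the labels.
label_grad = tf.gradients(loss, [soft_labels])[0]
```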

API Changes

* Rename `SqueezeDims` attribute to `Axis` in C++ API for Squeeze op.
* `Stream::BlockHostUntilDone` now returns Status rather than bool.
* Minor refactor: move stats files from `stochastic` to `common` and remove
`stochastic`.

Known Bugs

* Using XLA:GPU with CUDA 9 and CUDA 9.1 results in garbage results and/or
`CUDA_ILLEGAL_ADDRESS` failures.

Google discovered in mid-December 2017 that the PTX-to-SASS compiler in CUDA
9 and CUDA 9.1 sometimes does not properly compute the carry bit when
decomposing 64-bit address calculations with large offsets (e.g. `load [x +
large_constant]`) into 32-bit arithmetic in SASS.

As a result, these versions of `ptxas` miscompile most XLA programs which
use more than 4GB of temp memory. This results in garbage results and/or
`CUDA_ERROR_ILLEGAL_ADDRESS` failures.

A fix in CUDA 9.1.121 is expected in late February 2018. We do not expect a
fix for CUDA 9.0.x. Until the fix is available, the only workaround is to
[downgrade](https://developer.nvidia.com/cuda-toolkit-archive) to CUDA 8.0.x
or disable XLA:GPU.

TensorFlow will print a warning if you use XLA:GPU with a known-bad version
of CUDA; see e00ba24c4038e7644da417ddc639169b6ea59122.

Thanks to our Contributors

This release contains contributions from many people at Google, as well as:

Adam Zahran, Ag Ramesh, Alan Lee, Alan Yee, Alex Sergeev, Alexander, Amir H.
Jadidinejad, Amy, Anastasios Doumoulakis, Andrei Costinescu, Andrei Nigmatulin,
Anthony Platanios, Anush Elangovan, arixlin, Armen Donigian, ArtëM Sobolev,
Atlas7, Ben Barsdell, Bill Prin, Bo Wang, Brett Koonce, Cameron Thomas, Carl
Thomé, Cem Eteke, cglewis, Changming Sun, Charles Shenton, Chi-Hung, Chris
Donahue, Chris Filo Gorgolewski, Chris Hoyean Song, Chris Tava, Christian Grail,
Christoph Boeddeker, cinqS, Clayne Robison, codrut3, concerttttt, CQY, Dan
Becker, Dan Jarvis, Daniel Zhang, David Norman, dmaclach, Dmitry Trifonov,
Donggeon Lim, dongpilYu, Dr. Kashif Rasul, Edd Wilder-James, Eric Lv, fcharras,
Felix Abecassis, FirefoxMetzger, formath, FredZhang, Gaojin Cao, Gary Deer,
Guenther Schmuelling, Hanchen Li, Hanmin Qin, hannesa2, hyunyoung2, Ilya
Edrenkin, Jackson Kontny, Jan, Javier Luraschi, Jay Young, Jayaram Bobba, Jeff,
Jeff Carpenter, Jeremy Sharpe, Jeroen BéDorf, Jimmy Jia, Jinze Bai, Jiongyan
Zhang, Joe Castagneri, Johan Ju, Josh Varty, Julian Niedermeier, JxKing, Karl
Lessard, Kb Sriram, Keven Wang, Koan-Sin Tan, Kyle Mills, lanhin, LevineHuang,
Loki Der Quaeler, Loo Rong Jie, Luke Iwanski, LáSzló Csomor, Mahdi Abavisani,
Mahmoud Abuzaina, ManHyuk, Marek ŠUppa, MathSquared, Mats Linander, Matt Wytock,
Matthew Daley, Maximilian Bachl, mdymczyk, melvyniandrag, Michael Case, Mike
Traynor, miqlas, Namrata-Ibm, Nathan Luehr, Nathan Van Doorn, Noa Ezra, Nolan
Liu, Oleg Zabluda, opensourcemattress, Ouwen Huang, Paul Van Eck, peisong, Peng
Yu, PinkySan, pks, powderluv, Qiao Hai-Jun, Qiao Longfei, Rajendra Arora, Ralph
Tang, resec, Robin Richtsfeld, Rohan Varma, Ryohei Kuroki, SaintNazaire, Samuel
He, Sandeep Dcunha, sandipmgiri, Sang Han, scott, Scott Mudge, Se-Won Kim, Simon
Perkins, Simone Cirillo, Steffen Schmitz, Suvojit Manna, Sylvus, Taehoon Lee,
Ted Chang, Thomas Deegan, Till Hoffmann, Tim, Toni Kunic, Toon Verstraelen,
Tristan Rice, Urs KöSter, Utkarsh Upadhyay, Vish (Ishaya) Abrams, Winnie Tsang,
Yan Chen, Yan Facai (颜发才), Yi Yang, Yong Tang, Youssef Hesham, Yuan (Terry)
Tang, Zhengsheng Wei, zxcqwe4906, 张志豪, 田传武

We are also grateful to all who filed issues or helped resolve them, asked and
answered questions, and were part of inspiring discussions.

1.4.1

Not secure
Bug Fixes and Other Changes

* `LinearClassifier` fix.

1.4.0

Not secure
Major Features And Improvements

* `tf.keras` is now part of the core TensorFlow API.
* [`tf.data`](http://tensorflow.org/guide/data) is now part of the core
TensorFlow API.
* The API is now subject to backwards compatibility guarantees.
* For a guide to migrating from the `tf.contrib.data` API, see the
[README](https://github.com/tensorflow/tensorflow/blob/r1.4/tensorflow/contrib/data/README.md).
* Major new features include `Dataset.from_generator()` (for building an
input pipeline from a Python generator), and the `Dataset.apply()`
method for applying custom transformation functions (see the first sketch
after this list).
* Several custom transformation functions have been added, including
`tf.contrib.data.batch_and_drop_remainder()` and
`tf.contrib.data.sloppy_interleave()`.
* Add `train_and_evaluate` for simple distributed `Estimator` training.
* Add `tf.spectral.dct` for computing the DCT-II.
* Add Mel-Frequency Cepstral Coefficient support to `tf.contrib.signal` (with
GPU and gradient support).
* Add a self-check on `import tensorflow` for Windows DLL issues.
* Add NCHW support to `tf.depth_to_space` on GPU.
* TensorFlow Debugger (tfdbg):
* Add `eval` command to allow evaluation of arbitrary Python/numpy
expressions in tfdbg command-line interface. See
[Debugging TensorFlow Programs](https://www.tensorflow.org/guide/debugger)
for more details.
* Usability improvement: The frequently used tensor filter
`has_inf_or_nan` is now added to `Session` wrappers and hooks by
default. So there is no need for clients to call
`.add_tensor_filter(tf_debug.has_inf_or_nan)` anymore.
* SinhArcsinh (scalar) distribution added to `contrib.distributions`.
* Make `GANEstimator` open source.
* `Estimator.export_savedmodel()` now includes all valid serving signatures
that can be constructed from the Serving Input Receiver and all available
ExportOutputs. For instance, a classifier may provide regression- and
prediction-flavored outputs, in addition to the classification-flavored one.
Building signatures from these allows TF Serving to honor requests using the
different APIs (Classify, Regress, and Predict). Furthermore,
`serving_input_receiver_fn()` may now specify alternative subsets of nodes
that may act as inputs. This allows, for instance, producing a prediction
signature for a classifier that accepts raw `Tensors` instead of a
serialized `tf.Example` (see the second sketch after this list).
* Add `tf.contrib.bayesflow.hmc`.
* Add `tf.contrib.distributions.MixtureSameFamily`.
* Make `Dataset.shuffle()` always reshuffle after each iteration by default.
* Add `tf.contrib.bayesflow.metropolis_hastings`.
* Add `log_rate` parameter to `tf.contrib.distributions.Poisson`.
* Extend `tf.contrib.distributions.bijector` API to handle some non-injective
transforms.
* Java:
* Generics (e.g., `Tensor<Integer>`) for improved type-safety (courtesy
andrewcmyers).
* Support for multi-dimensional string tensors.
* Support loading of custom operations (e.g. many in `tf.contrib`) on
Linux and OS X
* All our prebuilt binaries have been built with CUDA 8 and cuDNN 6. We
anticipate releasing TensorFlow 1.5 with CUDA 9 and cuDNN 7.
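
A minimal sketch of the new `tf.data` entry points mentioned above, `Dataset.from_generator()` and `Dataset.apply()`; the generator contents and batch size are illustrative assumptions.

```python
import tensorflow as tf

def gen():
    for i in range(1000):
        yield i, i * 2

dataset = tf.data.Dataset.from_generator(
    gen, output_types=(tf.int64, tf.int64), output_shapes=([], []))
# Custom transformations such as batch_and_drop_remainder() are passed to
# Dataset.apply() rather than called on the Dataset directly.
dataset = dataset.apply(tf.contrib.data.batch_and_drop_remainder(32))

iterator = dataset.make_one_shot_iterator()
next_batch = iterator.get_next()
```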
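
And a sketch of a `serving_input_receiver_fn()` that exposes a raw `Tensor` input instead of serialized `tf.Example` protos, per the `export_savedmodel()` item above; the placeholder name, shape, and export path are assumptions.

```python
import tensorflow as tf

def serving_input_receiver_fn():
    # A raw float placeholder serves as the receiver tensor; features are fed
    # from it directly, so clients can send plain Tensors at serving time.
    x = tf.placeholder(tf.float32, shape=[None, 4], name="x")
    return tf.estimator.export.ServingInputReceiver(
        features={"x": x}, receiver_tensors={"x": x})

# estimator.export_savedmodel("/tmp/export", serving_input_receiver_fn)
```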

Bug Fixes and Other Changes

* `tf.nn.rnn_cell.DropoutWrapper` is now more careful about dropping out LSTM
states. Specifically, it no longer ever drops the `c` (memory) state of an
`LSTMStateTuple`. The new behavior leads to proper dropout behavior for
LSTMs and stacked LSTMs. This bug fix follows recommendations from published
literature, but is a behavioral change. State dropout behavior may be
customized via the new `dropout_state_filter_visitor` argument (see the
sketch after this list).
* Removed `tf.contrib.training.python_input`. The same behavior, in a more
flexible and reproducible package, is available via the new
`tf.contrib.data.Dataset.from_generator` method!
* Fix `tf.contrib.distributions.Affine` incorrectly computing
log-det-jacobian.
* Fix `tf.random_gamma` incorrectly handling non-batch, scalar draws.
* Resolved a race condition in TensorForest TreePredictionsV4Op.
* Google Cloud Storage file system, Amazon S3 file system, and Hadoop file
system support are now default build options.
* Custom op libraries must link against libtensorflow_framework.so (installed
at `tf.sysconfig.get_lib()`).
* Change `RunConfig` default behavior to not set a random seed, making random
behavior independently random on distributed workers. We expect this to
generally improve training performance. Models that do rely on determinism
should set a random seed explicitly.
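
A sketch of the customization hook mentioned in the `DropoutWrapper` item above. The keep probabilities and the filter are illustrative assumptions; the filter shown simply mirrors the new default of never dropping the `c` state.

```python
import tensorflow as tf

def keep_h_only(state):
    # Apply state dropout to the hidden state h but never to the memory cell c.
    if isinstance(state, tf.nn.rnn_cell.LSTMStateTuple):
        return tf.nn.rnn_cell.LSTMStateTuple(c=False, h=True)
    return True

cell = tf.nn.rnn_cell.LSTMCell(128)
wrapped = tf.nn.rnn_cell.DropoutWrapper(
    cell,
    output_keep_prob=0.9,
    state_keep_prob=0.9,
    dropout_state_filter_visitor=keep_h_only,
)
```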

Breaking Changes to the API

* The signature of the `tf.contrib.data.rejection_resample()` function has
been changed. It now returns a function that can be used as an argument to
`Dataset.apply()`.
* Remove `tf.contrib.data.Iterator.from_dataset()` method. Use
`Dataset.make_initializable_iterator()` instead.
* Remove seldom used and unnecessary `tf.contrib.data.Iterator.dispose_op()`.
* Reorder some TF-GAN loss functions in a non-backwards compatible way.

Known Issues

* In Python 3, `Dataset.from_generator()` does not support Unicode strings.
You must convert any strings to bytes objects before yielding them from the
generator.
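
A minimal sketch of the workaround (the generator contents are assumed for illustration): encode strings to bytes before yielding them.

```python
import tensorflow as tf

def gen():
    for word in ["alpha", "beta", "gamma"]:
        yield word.encode("utf-8")  # Python 3: yield bytes, not str

dataset = tf.data.Dataset.from_generator(gen, output_types=tf.string)
```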

Thanks to our Contributors

This release contains contributions from many people at Google, as well as:

4d55397500, Abdullah Alrasheed, abenmao, Adam Salvail, Aditya Dhulipala, Ag
Ramesh, Akimasa Kimura, Alan Du, Alan Yee, Alexander, Amit Kushwaha, Amy, Andrei
Costinescu, Andrei Nigmatulin, Andrew Erlichson, Andrew Myers, Andrew Stepanov,
Androbin, AngryPowman, Anish Shah, Anton Daitche, Artsiom Chapialiou, asdf2014,
Aseem Raj Baranwal, Ash Hall, Bart Kiers, Batchu Venkat Vishal, ben, Ben
Barsdell, Bill Piel, Carl Thomé, Catalin Voss, Changming Sun, Chengzhi Chen, Chi
Zeng, Chris Antaki, Chris Donahue, Chris Oelmueller, Chris Tava, Clayne Robison,
Codrut, Courtial Florian, Dalmo Cirne, Dan J, Darren Garvey, David
Kristoffersson, David Norman, David RöThlisberger, DavidNorman, Dhruv, DimanNe,
Dorokhov, Duncan Mac-Vicar P, EdwardDixon, EMCP, error.d, FAIJUL, Fan Xia,
Francois Xavier, Fred Reiss, Freedom" Koan-Sin Tan, Fritz Obermeyer, Gao, Xiang,
Guenther Schmuelling, Guo Yejun (郭叶军), Hans Gaiser, HectorSVC, Hyungsuk Yoon,
James Pruegsanusak, Jay Young, Jean Wanka, Jeff Carpenter, Jeremy Rutman, Jeroen
BéDorf, Jett Jones, Jimmy Jia, jinghuangintel, jinze1994, JKurland, Joel
Hestness, joetoth, John B Nelson, John Impallomeni, John Lawson, Jonas, Jonathan
Dekhtiar, joshkyh, Jun Luan, Jun Mei, Kai Sasaki, Karl Lessard, karlkubx.ca, Kb
Sriram, Kenichi Ueno, Kevin Slagle, Kongsea, Lakshay Garg, lhlmgr, Lin Min,
liu.guangcong, Loki Der Quaeler, Louie Helm, lucasmoura, Luke Iwanski, Lyndon
White, Mahmoud Abuzaina, Marcel Puyat, Mark Aaron Shirley, Michele Colombo,
MtDersvan, Namrata-Ibm, Nathan Luehr, Naurril, Nayana Thorat, Nicolas Lopez,
Niranjan Hasabnis, Nolan Liu, Nouce, Oliver Hennigh, osdamv, Patrik Erdes,
Patryk Chrabaszcz, Pavel Christof, Penghao Cen, postBG, Qingqing Cao, Qingying
Chen, qjivy, Raphael, Rasmi, raymondxyang, Renze Yu, resec, Roffel, Ruben
Vereecken, Ryohei Kuroki, sandipmgiri, Santiago Castro, Scott Kirkland, Sean
Vig, Sebastian Raschka, Sebastian Weiss, Sergey Kolesnikov, Sergii Khomenko,
Shahid, Shivam Kotwalia, Stuart Berg, Sumit Gouthaman, superzerg, Sven Mayer,
tetris, Ti Zhou, Tiago Freitas Pereira, Tian Jin, Tomoaki Oiki, Vaibhav Sood,
vfdev, Vivek Rane, Vladimir Moskva, wangqr, Weber Xie, Will Frey, Yan Facai
(颜发才), yanivbl6, Yaroslav Bulatov, Yixing Lao, Yong Tang, youkaichao, Yuan
(Terry) Tang, Yue Zhang, Yuxin Wu, Ziming Dong, ZxYuan, 黄璞

We are also grateful to all who filed issues or helped resolve them, asked and
answered questions, and were part of inspiring discussions.

1.3.0

Not secure
See also

1.2.1

Not secure
Bug Fixes and Other Changes

* Updated the required markdown version to >= 2.6.8.
* Support tensors as dropout rates again, by removing the min(max(..))

1.2

is still available, just not as `tf.RewriterConfig`. Instead add an explicit
import.
* Breaking change to `tf.contrib.data.Dataset` APIs that expect a nested
structure. Lists are now converted to `tf.Tensor` implicitly. You may need
to change uses of lists to tuples in existing code. In addition, dicts are
now supported as a nested structure.

Changes to contrib APIs

* Adds `tf.contrib.nn.rank_sampled_softmax_loss`, a sampled-softmax variant that
can improve rank loss.
* `tf.contrib.metrics.{streaming_covariance,streaming_pearson_correlation}`
modified to return NaN when they have seen less than or equal to 1 unit of
weight.
* Adds time series models to contrib. See contrib/timeseries/README.md for
details.
* Adds FULLY_CONNECTED Op to tensorflow/lite/schema.fbs

Known Issues

* `tensorflow_gpu` compilation fails with Bazel 0.5.3.

Bug Fixes and Other Changes

* Fixes `strides` and `begin` dtype mismatch when slicing using int64 Tensor
index in python.
* Improved convolution padding documentation.
* Add a tag constant, gpu, to present graph with GPU support.
* `saved_model.utils` now supports `SparseTensor`s transparently.
* A more efficient implementation of non-max suppression.
* Add support for the shrinkage-type L2 to FtrlOptimizer in addition to the
online L2 it already supports.
* Fix negative variance in moments calculation.
* Expand UniqueOp Benchmark Tests to cover more collision cases.
* Improves stability of GCS filesystem on Mac.
* Add time estimation to HloCostAnalysis.
* Fixed a bug in `Estimator` where `params` in the constructor was not a deep
copy of the user-provided one. This bug inadvertently allowed users to mutate
`params` after the creation of the `Estimator`, leading to potentially
undefined behavior.
* Added None check for save_path in `saver.restore`.
* Register devices under their legacy names in device_mgr to ease the
transition to clusterspec-propagated configurations.
* VectorExponential added to distributions.
* Add a bitwise module with `bitwise_and`, `bitwise_or`, `bitwise_xor`, and
`invert` functions (see the first sketch after this list).
* Add fixed-grid ODE integration routines.
* Allow passing bounds to ScipyOptimizerInterface.
* Correctness fixes for the `fft_length` parameter to `tf.spectral.rfft` and
`tf.spectral.irfft`.
* Exported model signatures using the 'predict' method will no longer have
their input and output keys silently ignored and rewritten to 'inputs' and
'outputs'. If a model was exported with different names before 1.2, and is
now served with tensorflow/serving, it will accept requests using 'inputs'
and 'outputs'. Starting at 1.2, such a model will accept the keys specified
during export. Therefore, inference requests using 'inputs' and 'outputs'
may start to fail. To fix this, either update any inference clients to send
requests with the actual input and output keys used by the trainer code, or
conversely, update the trainer code to name the input and output Tensors
'inputs' and 'outputs', respectively. Signatures using the 'classify' and
'regress' methods are not affected by this change; they will continue to
standardize their input and output keys as before.
* Add in-memory caching to the Dataset API.
* Set default end_of_sequence variable in datasets iterators to false.
* [Performance] Increase performance of `tf.layers.conv2d` when setting
`use_bias=True` by 2x by using `nn.bias_add`.
* Update iOS examples to use CocoaPods, and moved to tensorflow/examples/ios.
* Adds a `family=` attribute in `tf.summary` ops to allow controlling the tab
name used in TensorBoard for organizing summaries (see the second sketch
after this list).
* When GPU is configured, do not require `--config=cuda`; instead, automatically
build for GPU if this is requested in the configure script.
* Fix incorrect sampling of small probabilities in CPU/GPU multinomial.
* Add a `list_devices()` API on sessions to list devices within a cluster.
Additionally, this change augments the ListDevices master API to support
specifying a session.
* Allow uses of over-parameterized separable convolution.
* TensorForest multi-regression bug fix.
* Framework now supports armv7, cocoapods.org now displays correct page.
* Script to create iOS framework for CocoaPods.
* Android releases of TensorFlow are now pushed to jcenter for easier
integration into apps. See
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/android/inference_interface/README.md
for more details.
* TensorFlow Debugger (tfdbg):
* Fixed a bug that prevented tfdbg from functioning with multi-GPU setups.
* Fixed a bug that prevented tfdbg from working with
`tf.Session.make_callable`.
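
A sketch of the bitwise module noted above. The operand values are assumptions, and the module is assumed here to be exposed as `tf.bitwise`; the function names are the ones listed in the release note.

```python
import tensorflow as tf

a = tf.constant([0b1100, 0b1010], dtype=tf.int32)
b = tf.constant([0b1010, 0b0110], dtype=tf.int32)

and_result = tf.bitwise.bitwise_and(a, b)  # elementwise AND
or_result = tf.bitwise.bitwise_or(a, b)    # elementwise OR
xor_result = tf.bitwise.bitwise_xor(a, b)  # elementwise XOR
not_result = tf.bitwise.invert(a)          # elementwise bitwise NOT
```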
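
And a sketch of the `family=` attribute on summary ops; the tensor and names are assumed for illustration.

```python
import tensorflow as tf

loss = tf.constant(0.25)
# The family prefixes the summary tag, so this scalar is grouped under a
# "training" tab in TensorBoard.
tf.summary.scalar("loss", loss, family="training")
merged = tf.summary.merge_all()
```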

Thanks to our Contributors

This release contains contributions from many people at Google, as well as:

4F2E4A2E, Adriano Carmezim, Adrià Arrufat, Alan Yee, Alex Lattas, Alex Rothberg,
Alexandr Baranezky, Ali Siddiqui, Andreas Solleder, Andrei Costinescu, Andrew
Hundt, Androbin, Andy Kernahan, Anish Shah, Anthony Platanios, Arvinds-Ds, b1rd,
Baptiste Arnaud, Ben Mabey, Benedikt Linse, Beomsu Kim, Bo Wang, Boyuan Deng,
Brett Koonce, Bruno Rosa, Carl Thomé, Changming Sun, Chase Roberts, Chirag
Bhatia, Chris Antaki, Chris Hoyean Song, Chris Tava, Christos Nikolaou, Croath
Liu, cxx, Czxck001, Daniel Ylitalo, Danny Goodman, Darren Garvey, David
Brailovsky, David Norman, DavidNorman, davidpham87, ddurham2, Dhruv, DimanNe,
Drew Hintz, Dustin Tran, Earthson Lu, ethiraj, Fabian Winnen, Fei Sun, Freedom"
Koan-Sin Tan, Fritz Obermeyer, Gao, Xiang, Gautam, Guenther Schmuelling, Gyu-Ho
Lee, Hauke Brammer, horance, Humanity123, J Alammar, Jayeol Chun, Jeroen BéDorf,
Jianfei Wang, jiefangxuanyan, Jing Jun Yin, Joan Puigcerver, Joel Hestness,
Johannes Mayer, John Lawson, Johnson145, Jon Malmaud, Jonathan
Alvarez-Gutierrez, Juang, Yi-Lin, Julian Viereck, Kaarthik Sivashanmugam, Karl
Lessard, karlkubx.ca, Kevin Carbone, Kevin Van Der Burgt, Kongsea, ksellesk,
lanhin, Lef Ioannidis, Liangliang He, Louis Tiao, Luke Iwanski, LáSzló Csomor,
magixsno, Mahmoud Abuzaina, Marcel Hlopko, Mark Neumann, Maxwell Paul Brickner,
mdfaijul, MichaëL Defferrard, Michał JastrzęBski, Michele Colombo, Mike Brodie,
Mosnoi Ion, mouradmourafiq, myPrecious, Nayana Thorat, Neeraj Kashyap, Nelson
Liu, Niranjan Hasabnis, Olivier Moindrot, orome, Pankaj Gupta, Paul Van Eck,
peeyush18, Peng Yu, Pierre, preciousdp11, qjivy, Raingo, raoqiyu, ribx, Richard
S. Imaoka, Rishabh Patel, Robert Walecki, Rockford Wei, Ryan Kung, Sahil Dua,
Sandip Giri, Sayed Hadi Hashemi, sgt101, Shitian Ni, Shuolongbj, Siim PõDer,
Simon Perkins, sj6077, SOLARIS, Spotlight0xff, Steffen Eberbach, Stephen Fox,
superryanguo, Sven Mayer, Tapan Prakash, Tiago Morais Morgado, Till Hoffmann, Tj
Rana, Vadim Markovtsev, vhasanov, Wei Wu, windead, Yan (Asta) Li, Yan Chen, Yann
Henon, Yi Wang, Yong Tang, yorkie, Yuan (Terry) Tang, Yuxin Wu, zhengjiajin,
zhongzyd, 黄璞

We are also grateful to all who filed issues or helped resolve them, asked and
answered questions, and were part of inspiring discussions.
