TensorFlow Probability

Latest version: v0.24.0


0.8.0

Release notes

This is the 0.8 release of TensorFlow Probability. It is tested and stable against TensorFlow versions 2.0.0 and 1.15.0rc1.

Change notes

- GPU-friendly "unrolled" NUTS: [`tfp.mcmc.NoUTurnSampler`](https://github.com/tensorflow/probability/blob/master/discussion/technical_note_on_unrolled_nuts.md) (see the sketch after this list)
  - Open-source the unrolled implementation of the No U-Turn Sampler.
  - Switch back to the original U-turn criterion from Hoffman & Gelman (2014).
  - Fix a bug in unrolled NUTS so that shape is not lost when `event_shape=1`.
  - Fix the U-turn check in unrolled NUTS at the tree extension.
  - Refactor the U-turn check in unrolled NUTS.
  - Fix a dynamic-shape bug in unrolled NUTS.
  - Move unrolled NUTS into `mcmc`, with additional cleanup.
  - Ensure the unrolled NUTS sampler handles scalar `target_log_prob`s correctly.
  - Implement the U-turn check with a `tf.while_loop` in unrolled NUTS.
  - Implement multinomial sampling across the tree (instead of slice sampling) in unrolled NUTS.
  - Expose additional diagnostics in `previous_kernel_results` in unrolled NUTS so that it works with `*_step_size_adaptation`.
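
For orientation, here is a minimal sketch of driving the new kernel with `tfp.mcmc.sample_chain`; the target distribution and tuning values are illustrative, not from the release notes.

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# Illustrative target: a standard normal (a scalar target_log_prob).
target = tfd.Normal(loc=0., scale=1.)

kernel = tfp.mcmc.NoUTurnSampler(
    target_log_prob_fn=target.log_prob,
    step_size=0.1)  # fixed step size; *_step_size_adaptation can wrap this

samples = tfp.mcmc.sample_chain(
    num_results=500,
    num_burnin_steps=100,
    current_state=tf.zeros([]),
    kernel=kernel,
    trace_fn=None)
```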

- MCMC
  - Modify shape handling in `DualAveragingStepSizeAdaptation` so that it works with non-scalar `event_shape`.
  - Support structured samples in `tfp.monte_carlo.expectation` (basic usage sketched below).
  - Minor fix for the docstring example in `leapfrog_integrator`.
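
As a quick illustration of basic `tfp.monte_carlo.expectation` usage (the distribution and function here are ours, not the notes'):

```python
import tensorflow_probability as tfp

tfd = tfp.distributions

dist = tfd.Normal(loc=0., scale=1.)
samples = dist.sample(10000)

# Monte Carlo estimate of E[x^2] under the standard normal (exact value: 1).
second_moment = tfp.monte_carlo.expectation(
    f=lambda x: x**2,
    samples=samples)
```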

- VI
  - Add utilities for fitting variational distributions.
  - Improve Csiszar-divergence support for joint variational distributions.
  - Ensure that joint distributions are correctly recognized as reparameterizable by `monte_carlo_csiszar_f_divergence`.
  - Rename `monte_carlo_csiszar_f_divergence` to `monte_carlo_variational_loss` (see the sketch after this list).
  - Refactor `tfp.vi.csiszar_vimco_helper` to expose useful leave-one-out statistical tools.
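
A minimal sketch of the renamed loss; the target density and trainable surrogate below are illustrative.

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# Unnormalized target density: a standard normal, up to a constant.
target_log_prob_fn = lambda x: -0.5 * x**2

# Surrogate posterior with a trainable location.
surrogate_posterior = tfd.Normal(loc=tf.Variable(1.), scale=1.)

loss = tfp.vi.monte_carlo_variational_loss(
    target_log_prob_fn=target_log_prob_fn,
    surrogate_posterior=surrogate_posterior,
    sample_size=100)
```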

- Distributions
  - Added `tfp.distributions.GeneralizedPareto`.
  - `Multinomial` and `DirichletMultinomial` samplers are now reproducible.
  - HMM samples are now reproducible.
  - Clean up an unneeded tensor conversion in `quantile()`.
  - Added support for dynamic `num_steps` in `HiddenMarkovModel`.
  - Added an implementation of `quantile()` for exponential distributions.
  - Fix the entropy of the `Categorical` distribution when `logits` contains `-inf`.
  - Annotate float-valued `Deterministic` distributions as reparameterized.
  - Establish patterns which ensure that TFP objects are "GradientTape safe."
  - "GradientTape-safe" distributions: `FiniteDiscrete`, `VonMises`, `Binomial`, `Dirichlet`, `Multinomial`, `DirichletMultinomial`, `Categorical`, `Deterministic`.
  - Add `tfp.util.DeferredTensor` to delay Tensor operations on `tf.Variable`s (also works for `tf.Tensor`s).
  - Add `probs_parameter` and `logits_parameter` member functions to Categorical-like distributions. Users should prefer these new functions to the `probs`/`logits` properties, since a property may be `None` if the distribution was not parameterized with it (see the sketch after this list).
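
The new accessors in a nutshell (a sketch with illustrative logits):

```python
import tensorflow_probability as tfp

tfd = tfp.distributions

d = tfd.Categorical(logits=[-1., 0., 1.])
d.logits_parameter()  # the logits, as a Tensor
d.probs_parameter()   # probabilities computed from the logits
# d.probs is None here, because the distribution was parameterized by logits.
```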

- Bijectors
  - Add a `log_scale` parameter to the `AffineScalar` bijector.
  - Added `tfp.bijectors.RationalQuadraticSpline` (see the sketch after this list).
  - Add the `SoftFloor` bijector. (Note: a fix for a known inverse bug is in progress.)
  - Allow using an arbitrary bijector for the coupling in `RealNVP`.
  - Allow using an arbitrary bijector for the coupling in `MaskedAutoregressiveFlow`.
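
A minimal sketch of the new spline bijector; the bin widths/heights and slopes below are illustrative (three bins on [-1, 2], with nbins - 1 interior knot slopes). Outside its range the bijector acts as the identity.

```python
import tensorflow_probability as tfp

tfb = tfp.bijectors

spline = tfb.RationalQuadraticSpline(
    bin_widths=[1., 1., 1.],     # sums to the width of the range
    bin_heights=[0.5, 1.5, 1.],  # sums to the height of the range
    knot_slopes=[0.8, 1.2],      # positive slopes at the interior knots
    range_min=-1.)

y = spline.forward([0.5])
x = spline.inverse(y)
```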

- Experimental auto-batching system: [`tfp.experimental.auto_batching`](https://github.com/tensorflow/probability/blob/master/tensorflow_probability/python/experimental/auto_batching/README.md)
  - Open-source the program-counter-based auto-batching system.
  - Added `tfp.experimental.auto_batching`, an experimental system to recover batch parallelism across recursive function invocations.
  - Auto-batched NUTS supports batching across consecutive trajectories.
  - Add support for field references to auto-batching.
  - Increase the amount of Python syntax that "just works" in auto-batched functions.
  - Add a pop-push fusion optimization to the auto-batching system (tail-call optimization also landed recently, without a release note).
  - Open-source the auto-batched implementation of the No U-Turn Sampler.

- STS
  - Support TF2/eager-mode fitting of STS models, and deprecate `build_factored_variational_loss` (see the sketch after this list).
  - Use dual-averaging step-size adaptation for STS HMC fitting.
  - Add support for imputing missing values in structural time series models.
  - Standardize parameter scales during STS inference.
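
A sketch of the TF2-style fitting path, assuming `tfp.sts.build_factored_surrogate_posterior` plus `tfp.vi.fit_surrogate_posterior` as the replacement for the deprecated loss; the model and data are illustrative.

```python
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp

# Illustrative observed series.
observed_series = np.random.randn(100).astype(np.float32)

model = tfp.sts.LocalLinearTrend(observed_time_series=observed_series)
surrogate = tfp.sts.build_factored_surrogate_posterior(model=model)

# Fit the surrogate to the STS model's joint log density.
losses = tfp.vi.fit_surrogate_posterior(
    target_log_prob_fn=model.joint_log_prob(observed_series),
    surrogate_posterior=surrogate,
    optimizer=tf.optimizers.Adam(learning_rate=0.1),
    num_steps=200)
```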

- Layers
  - Add a `WeightNorm` layer wrapper.
  - Fix gradients flowing through variables in the old-style variational layers.
  - `tf.keras.models.save_model` and `model.save` now default to saving a TensorFlow SavedModel.

- Stats/Math
  - Add calibration metrics to `tfp.stats`.
  - Add an `output_gradients` argument to `value_and_gradient` (see the sketch after this list).
  - Add the Geyer initial positive sequence truncation criterion to `tfp.mcmc.effective_sample_size`.
  - Resolve shape inconsistencies in the PSDKernels API.
  - Support dynamic-shaped results in `tfp.math.minimize`.
  - ODE: Implement the adjoint method for gradients with respect to the initial state.
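
For context, `tfp.math.value_and_gradient` computes a function value and its gradient in one call (the new `output_gradients` argument lets callers supply upstream cotangents). A basic sketch:

```python
import tensorflow as tf
import tensorflow_probability as tfp

x = tf.constant([1., 2., 3.])
# y = sum(x^2), so dy/dx = 2x.
y, dy_dx = tfp.math.value_and_gradient(
    lambda x: tf.reduce_sum(x**2), x)
```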


Huge thanks to all the contributors to this release!

- Alexey Radul
- Anudhyan Boral
- Arthur Lui
- Brian Patton
- Christopher Suter
- Colin Carroll
- Dan Moldovan
- Dave Moore
- Edward Loper
- Emily Fertig
- Gaurav Jain
- Ian Langmore
- Igor Ganichev
- Jacob Burnim
- Jeff Pollock
- Joshua V. Dillon
- Junpeng Lao
- Katherine Wu
- Mark Daoust
- Matthieu Coquet
- Parsiad Azimzadeh
- Pavel Sountsov
- Pavithra Vijay
- PJ Trainor
- prabhu prakash kagitha
- prakashkagitha
- Reed Wanderman-Milne
- refraction-ray
- Rif A. Saurous
- RJ Skerry-Ryan
- Saurabh Saxena
- Sharad Vikram
- Sigrid Keydana
- skeydan
- Srinivas Vasudevan
- Yash Katariya
- Zachary Nado

0.8.0rc0

This is the RC0 release candidate of the TensorFlow Probability 0.8 release.

0.7

Release notes

This is the 0.7 release of TensorFlow Probability. It is tested and stable against TensorFlow version 1.14.0.

Change notes

- Internal optimizations to HMC leapfrog integrator.
- Add `FeatureTransformed`, `FeatureScaled`, and `KumaraswamyTransformed` PSD kernels.
- Added tfp.debugging.benchmarking.benchmark_tf_function.
- Added optional masking of observations for `hidden_markov_model` methods `posterior_marginals` and `posterior_mode`.
- Fixed the evaluation order of distributions within `JointDistributionNamed`.
- Rename `tfb.AutoregressiveLayer` to `tfb.AutoregressiveNetwork`.
- Support kernel and bias constraints/regularizers/initializers in `tfb.AutoregressiveLayer`.
- Created Backward Difference Formula (BDF) solver for stiff ODEs.
- Update Cumsum bijector.
- Add distribution layer for masked autoregressive flow in Keras.
- Shorten `repr`, `str` Distribution strings by using `"?"` instead of `"<unknown>"` to represent `None`.
- Implement the FiniteDiscrete distribution.
- Add Cumsum bijector.
- Make seasonal STS models more flexible, handling a non-constant `num_steps_per_season` across seasons.
- In `tfb.BatchNormalization`, use the Keras layer instead of the `compat.v1` layer.
- Forward kwargs in MaskedAutoregressiveFlow.
- Added tfp.math.pivoted_cholesky for low rank preconditioning.
- Add `tfp.distributions.JointDistributionCoroutine` for specifying simple directed graphical models via Python generators.
- Complete the example notebook demonstrating multilevel modeling using TFP.
- Remove default `None` initializations for Beta and LogNormal parameters.
- Fix a bug in the `__init__` method of the RationalQuadratic kernel.
- Add Binomial.sample method.
- Add SparseLinearRegression structural time series component.
- Remove TFP support for KL divergence calculation with `tf.compat.v1.distributions`, which have been deprecated for 6 months.
- Added `tfp.math.cholesky_concat` (adds columns to a Cholesky decomposition).
- Introduce the SchurComplement PSD kernel.
- Add EllipticalSliceSampler as an experimental MCMC kernel.
- Remove intercepting/reuse of variables created within DistributionLambda.
- Support missing observations in structural time series models.
- Add Keras layer for masked autoregressive flows.
- Add code block to show recommended style of using JointDistribution.
- Added example notebook demonstrating multilevel modeling.
- Correctly decorate the training block in the VI part of the JointDistribution example notebook.
- Add `tfp.distributions.Sample` for specifying plates in tfd.JointDistribution*.
- Enable save/load of Keras models with DistributionLambda layers.
- Add an example notebook showing how to use JointDistributionSequential for small-to-medium Bayesian graphical models.
- Add NaN propagation to tfp.stats.percentile.
- Add `tfp.distributions.JointDistributionSequential` for specifying simple directed graphical models (see the sketch after this list).
- Enable save/load of models with IndependentX or MixtureX layers.
- Extend monte_carlo_csiszar_f_divergence so it also works with JointDistribution.
- Fix typo in `value_and_gradient` docstring.
- Add `SimpleStepSizeAdaptation`, deprecate `step_size_adaptation_fn`.
- Added `batch_interp_regular_nd_grid` to `tfp.math`.
- Adds the IteratedSigmoidCentered bijector to unconstrain the unit simplex.
- Add option to constrain seasonal effects to zero-sum in STS models, and enable by default.
- Add two-sample multivariate equality in distribution.
- Fix broadcasting errors when forecasting STS models with batch shape.
- Adds batch slicing support to most distributions in tfp.distributions.
- Add tfp.layers.VariationalGaussianProcess.
- Added `posterior_mode` to `HiddenMarkovModel`.
- Add VariationalGaussianProcess distribution.
- Adds slicing of distributions batch axes as `dist[..., :2, tf.newaxis, 3]`
- Add tfp.layers.VariableLayer for making a Keras model which ignores inputs.
- Added `tfp.math.matrix_rank`.
- Add KL divergence between two blockwise distributions.
- Decorate `tfp.bijectors` with `tf.function`.
- Add `Blockwise` distribution for concatenating different distribution families.
- Add and begin using a utility for varying random seeds in tests when desired.
- Add two-sample calibrated statistical test for equality of CDFs, incl. support for duplicate samples.
- Deprecate the obsolete `moving_mean_variance`; use `assign_moving_mean_variance` and manage the variables explicitly.
- Migrate the Variational SGD optimizer to TF 2.0.
- Migrate the SGLD optimizer to TF 2.0.
- TF2 migration:
  - Make all MCMC tests TF2-compatible.
  - Expose HMC parameters via kernel results.
  - Implement a new version of `sample_chain` with optional tracing.
  - Make MCMC diagnostic tests eager/TF2-compatible.
- Implement a Categorical-to-discrete-values bijector, which maps an integer `x` (`0 <= x < K`) to `values[x]`, where `values` is a predefined 1D tensor of size `K`.
- Run dense, conv variational layer tests in eager mode.
- Add Empirical distribution to Edward2 (already exists as a TFP distribution).
- Ensure Gumbel distribution does not produce `inf` samples.
- Hide tensor shapes from operators in HMM tests.
- Added the `Empirical` distribution.
- Add the `Blockwise` bijector.
- Add `MixtureNormal` and `MixtureLogistic` distribution layers.
- Add experimental support for implicit reparameterization gradients in `MixtureSameFamily`.
- Fix parameter broadcasting in `DirichletMultinomial`.
- Add `tfp.math.clip_by_value_preserve_gradient`.
- Rename InverseGamma `rate` parameter to `scale`, to match its semantics.
- Added an `input_output_cholesky` option to the LKJ distribution.
- Add a semi-local linear trend STS model component.
- Added Proximal Hessian Sparse Optimizer (a variant of Newton-Raphson).
- Added `find_bins(x, edges, ...)` to `tfp.stats`.
- Disable explicit caching in masked_autoregressive in eager mode.
- Add a local level STS model component.
- Docfix: Fix constraint on valid range of reinterpreted_batch_dims for Independent.
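
As referenced above, a minimal `JointDistributionSequential` sketch (the model is illustrative): each callable receives the values sampled before it, so this encodes z ~ Normal(0, 1), x | z ~ Normal(z, 1).

```python
import tensorflow_probability as tfp

tfd = tfp.distributions

model = tfd.JointDistributionSequential([
    tfd.Normal(loc=0., scale=1.),            # z
    lambda z: tfd.Normal(loc=z, scale=1.),   # x | z
])

z, x = model.sample()
joint_log_prob = model.log_prob([z, x])
```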

Huge thanks to all the contributors to this release!

- Alexey Radul
- Anudhyan Boral
- axch
- Brian Patton
- cclauss
- Chikanaga Tomoyuki
- Christopher Suter
- Clive Chan
- Dave Moore
- Gaurav Jain
- harrismirza
- Harris Mirza
- Ian Langmore
- Jacob Burnim
- Janosh Riebesell
- Jeff Pollock
- Jiri Simsa
- joeyhaohao
- johndebugger
- Joshua V. Dillon
- Juan A. Navarro Pérez
- Junpeng Lao
- Matej Rizman
- Matthew O'Kelly
- MG92
- Nicola De Cao
- Parsiad Azimzadeh
- Pavel Sountsov
- Philip Pham
- PJ Trainor
- Rif A. Saurous
- Sergei Lebedev
- Sigrid Keydana
- Sophia Gu
- Srinivas Vasudevan
- ykkawana

0.7.0rc0

This is the 0.7.0-rc0 release of TensorFlow Probability. It is tested and stable against TensorFlow versions 1.14-rc0 and 2.0.0-alpha.

0.6.0

Release notes

This is the 0.6 release of TensorFlow Probability. It is
tested and stable against TensorFlow version 1.13.1.

Change notes

- Adds `tfp.positive_semidefinite_kernels.RationalQuadratic`.
- Support float64 in tfpl.MultivariateNormalTriL.
- Add IndependentLogistic and IndependentPoisson distribution layers.
- Add `make_value_setter` interceptor to set values of Edward2 random variables.
- Implement a Kalman smoother, as a member function of `LinearGaussianStateSpaceModel`.
- Bijector caching is enabled only in one direction when executing in eager mode. May cause some performance regression in eager mode if repeatedly computing `forward(x)` or `inverse(y)` with the same `x` or `y` value.
- Handle rank-0/empty event_shape in tfpl.Independent{Bernoulli,Normal}.
- Run additional tests in eager mode.
- quantiles(x, n, ...) added to tfp.stats.
- Makes tensorflow_probability compatible with TensorFlow 2.0 `TensorShape` indexing.
- Use scipy.special functions when testing KL divergence for Chi, Chi2.
- Add methods to create forecasts from STS models.
- Add a MixtureSameFamily distribution layer.
- Add Chi distribution.
- Fix doc typo `tfp.Distribution` -> `tfd.Distribution`.
- Add Gumbel-Gumbel KL divergence.
- Add HalfNormal-HalfNormal KL divergence.
- Add Chi2-Chi2 KL divergence unit tests.
- Add Exponential-Exponential KL divergence unit tests.
- Add sampling test for Normal-Normal KL divergence.
- Add an IndependentNormal distribution layer.
- Added `posterior_marginals` to `HiddenMarkovModel`
- Add Pareto-Pareto KL divergence.
- Add LinearRegression component for structural time series models.
- Add dataset ops to the graph (or create kernels in eager execution) during Python `Dataset` object creation, instead of at `Iterator` creation time.
- Add a text-messages HMC benchmark.
- Add example notebook encoding a switching Poisson process as an HMM for multiple changepoint detection.
- Require `num_adaptation_steps` argument to `make_simple_step_size_update_policy`.
- s/eight_hmc_schools/eight_schools_hmc/ in printed benchmark string.
- Add `tfp.layers.DistributionLambda` to enable plumbing `tfd.Distribution` instances through Keras models (see the sketch after this list).
- Add `tfp.math.batch_interp_regular_1d_grid`.
- Update description of fill_triangular to include an in-depth example.
- Enable bijector/distribution composition, e.g., `tfb.Exp()(tfd.Normal(0, 1))`.
- Added linear and midpoint interpolation to `tfp.stats.percentile`.
- Make distributions include only the bijectors they use.
- Added `tfp.math.interp_regular_1d_grid`.
- Added `tfp.stats.correlation` (Pearson correlation).
- Update list of edward2 RVs to include recently added Distributions.
- Density of continuous Uniform distribution includes the upper endpoint.
- Add support for batched inputs in tfp.glm.fit_sparse.
- Added HiddenMarkovModel distribution.
- Add Student's T Process.
- Optimize LinearGaussianStateSpaceModel by avoiding matrix ops when the observations are statically known to be scalar.
- Added `stddev` and `cholesky` to `tfp.stats`.
- Add methods to fit structural time series models to data with variational inference and HMC.
- Add the Expm1 bijector (Y = Exp(X) - 1).
- New `stats` namespace: added `covariance` and `variance` to `tfp.stats`.
- Make all available MCMC kernels compatible with TransformedTransitionKernel.
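
As referenced above, a minimal `DistributionLambda` sketch (the network and loss are illustrative): the last layer emits a `tfd.Distribution`, so the loss can be the negative log-likelihood.

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

model = tf.keras.Sequential([
    tf.keras.layers.Dense(1),
    tfp.layers.DistributionLambda(lambda t: tfd.Normal(loc=t, scale=1.)),
])

# Train against the distribution's log-likelihood.
negloglik = lambda y, p_y: -p_y.log_prob(y)
model.compile(optimizer='adam', loss=negloglik)
```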

Huge thanks to all the contributors to this release!

- Adam Wood
- Alexey Radul
- Anudhyan Boral
- Ashish Saxena
- Billy Lamberta
- Brian Patton
- Christopher Suter
- Cyril Chimisov
- Dave Moore
- Eugene Zhulenev
- Griffin Tabor
- Ian Langmore
- Jacob Burnim
- Jakub Arnold
- Jiahao Yao
- Jihun
- Jiming Ye
- Joshua V. Dillon
- Juan A. Navarro Pérez
- Julius Kunze
- Julius Plenz
- Kristian Hartikainen
- Kyle Beauchamp
- Matej Rizman
- Pavel Sountsov
- Peter Roelants
- Rif A. Saurous
- Rohan Jain
- Roman Ring
- Rui Zhao
- Sergio Guadarrama
- Shuhei Iitsuka
- Shuming Hu
- Srinivas Vasudevan
- Tabor473
- ValentinMouret
- Youngwook Kim
- Yuki Nagae

0.6.0rc1

This is the 0.6.0-rc1 release candidate of TensorFlow Probability. It is tested against TensorFlow 1.13.0-rc2.
