Release notes
This is the 0.7 release of TensorFlow Probability. It is tested and stable against TensorFlow version 1.14.0.
Change notes
- Internal optimizations to HMC leapfrog integrator.
- Add FeatureTransformed, FeatureScaled, and KumaraswamyTransformed PSD kernels.
- Added tfp.debugging.benchmarking.benchmark_tf_function.
- Added optional masking of observations for `hidden_markov_model` methods `posterior_marginals` and `posterior_mode`.
- Fixed evaluation order of distributions within `JointDistributionNamed`.
- Rename tfb.AutoregressiveLayer to tfb.AutoregressiveNetwork.
- Support kernel and bias constraints/regularizers/initializers in tfb.AutoregressiveLayer.
- Created Backward Difference Formula (BDF) solver for stiff ODEs.
- Update Cumsum bijector.
- Add distribution layer for masked autoregressive flow in Keras.
- Shorten `repr`, `str` Distribution strings by using `"?"` instead of `"<unknown>"` to represent `None`.
- Implement FiniteDiscrete distribution.
- Add Cumsum bijector.
- Make Seasonal STS more flexible by allowing a non-constant `num_steps_per_season` for each season.
- In tfb.BatchNormalization, use the Keras layer instead of the `compat.v1` layer.
- Forward kwargs in MaskedAutoregressiveFlow.
- Added tfp.math.pivoted_cholesky for low rank preconditioning.
- Add `tfp.distributions.JointDistributionCoroutine` for specifying simple directed graphical models via Python generators.
- Complete the example notebook demonstrating multilevel modeling using TFP.
- Remove default `None` initializations for Beta and LogNormal parameters.
- Fix bug in the `__init__` method of the RationalQuadratic kernel.
- Add Binomial.sample method.
- Add SparseLinearRegression structural time series component.
- Remove TFP support for KL divergence computations involving `tf.compat.v1.distributions`, which have been deprecated for 6 months.
- Added `tfp.math.cholesky_concat` (adds columns to a Cholesky decomposition).
- Introduce the SchurComplement PSD kernel.
- Add EllipticalSliceSampler as an experimental MCMC kernel.
- Remove intercepting/reuse of variables created within DistributionLambda.
- Support missing observations in structural time series models.
- Add Keras layer for masked autoregressive flows.
- Add code block to show recommended style of using JointDistribution.
- Added example notebook demonstrating multilevel modeling.
- Correctly decorate the training block in the VI part of the JointDistribution example notebook.
- Add `tfp.distributions.Sample` for specifying plates in tfd.JointDistribution*.
- Enable save/load of Keras models with DistributionLambda layers.
- Add example notebook showing how to use `JointDistributionSequential` for small-to-medium Bayesian graphical models.
- Add NaN propagation to tfp.stats.percentile.
- Add `tfp.distributions.JointDistributionSequential` for specifying simple directed graphical models.
- Enable save/load of models with IndependentX or MixtureX layers.
- Extend `monte_carlo_csiszar_f_divergence` so it also works with `JointDistribution`.
- Fix typo in `value_and_gradient` docstring.
- Add `SimpleStepSizeAdaptation`, deprecate `step_size_adaptation_fn`.
- Add `batch_interp_regular_nd_grid` to `tfp.math`.
- Add the IteratedSigmoidCentered bijector to unconstrain the unit simplex.
- Add option to constrain seasonal effects to zero-sum in STS models, and enable by default.
- Add a two-sample test for multivariate equality in distribution.
- Fix broadcasting errors when forecasting STS models with batch shape.
- Adds batch slicing support to most distributions in tfp.distributions.
- Add tfp.layers.VariationalGaussianProcess.
- Added `posterior_mode` to `HiddenMarkovModel`.
- Add VariationalGaussianProcess distribution.
- Adds slicing of distribution batch axes, e.g. `dist[..., :2, tf.newaxis, 3]`.
- Add tfp.layers.VariableLayer for making a Keras model which ignores inputs.
- Add `tfp.math.matrix_rank`.
- Add KL divergence between two blockwise distributions.
- Apply `tf.function` decoration to `tfp.bijectors`.
- Add `Blockwise` distribution for concatenating different distribution families.
- Add and begin using a utility for varying random seeds in tests when desired.
- Add two-sample calibrated statistical test for equality of CDFs, incl. support for duplicate samples.
- Deprecate obsolete `moving_mean_variance`; use `assign_moving_mean_variance` and manage the variables explicitly.
- Migrate Variational SGD optimizer to TF 2.0.
- Migrate SGLD optimizer to TF 2.0.
- General TF2 migration.
- Make all MCMC tests TF2-compatible.
- Expose HMC parameters via kernel results.
- Implement a new version of sample_chain with optional tracing.
- Make MCMC diagnostic tests Eager/TF2 compatible.
- Implement the categorical-to-discrete-values bijector, which maps an integer `x` (`0 <= x < K`) to `values[x]`, where `values` is a predefined 1-D tensor of size `K`.
- Run dense, conv variational layer tests in eager mode.
- Add Empirical distribution to Edward2 (already exists as a TFP distribution).
- Ensure Gumbel distribution does not produce `inf` samples.
- Hid tensor shapes from operators in HMM tests.
- Added `Empirical` distribution.
- Add the `Blockwise` bijector.
- Add `MixtureNormal` and `MixtureLogistic` distribution layers.
- Add experimental support for implicit reparameterization gradients in `MixtureSameFamily`.
- Fix parameter broadcasting in `DirichletMultinomial`.
- Add `tfp.math.clip_by_value_preserve_gradient`.
- Rename InverseGamma `rate` parameter to `scale`, to match its semantics.
- Added `input_output_cholesky` option to the LKJ distribution.
- Add a semi-local linear trend STS model component.
- Added Proximal Hessian Sparse Optimizer (a variant of Newton-Raphson).
- Add `find_bins(x, edges, ...)` to `tfp.stats`.
- Disable explicit caching in masked_autoregressive in eager mode.
- Add a local level STS model component.
- Docfix: fix constraint on the valid range of `reinterpreted_batch_ndims` for `Independent`.
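To illustrate the `tfp.math.cholesky_concat` entry above, here is a minimal NumPy sketch of the underlying block formula (not TFP's implementation; the function signature and array layout here are assumptions for illustration):

```python
import numpy as np

def cholesky_concat(chol, cols):
    """Given chol = cholesky(A) and cols = [[B], [C]] stacked as an
    (n + m, m) array, return cholesky([[A, B], [B.T, C]]) without
    refactoring A. Illustrative sketch only, not the TFP API."""
    n = chol.shape[0]
    b, c = cols[:n], cols[n:]
    # Forward-substitute: solve chol @ s = b for the off-diagonal block.
    s = np.linalg.solve(chol, b)
    # The Cholesky factor of the Schur complement C - s.T @ s is the
    # new lower-right block.
    schur_chol = np.linalg.cholesky(c - s.T @ s)
    top = np.hstack([chol, np.zeros((n, c.shape[0]))])
    bottom = np.hstack([s.T, schur_chol])
    return np.vstack([top, bottom])
```

By uniqueness of the Cholesky factorization, the result matches a from-scratch factorization of the extended matrix, while only the new columns are processed.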
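The low-rank idea behind the `tfp.math.pivoted_cholesky` entry above can also be sketched in NumPy; this greedy diagonal-pivoting version is illustrative only, and its signature is an assumption, not TFP's API:

```python
import numpy as np

def pivoted_cholesky(matrix, max_rank):
    """Greedy rank-`max_rank` pivoted Cholesky of a PSD `matrix`.
    Returns `lk` such that matrix ~= lk @ lk.T. Illustrative sketch."""
    n = matrix.shape[0]
    residual_diag = np.diag(matrix).astype(float).copy()
    lk = np.zeros((n, max_rank))
    for k in range(max_rank):
        # Pivot on the largest remaining diagonal residual.
        p = int(np.argmax(residual_diag))
        pivot = np.sqrt(residual_diag[p])
        # New column: residual of column p, scaled by the pivot.
        col = (matrix[:, p] - lk @ lk[p]) / pivot
        lk[:, k] = col
        residual_diag -= col ** 2
        residual_diag[p] = 0.0  # guard against roundoff
    return lk
```

Such a factor can serve as a low-rank preconditioner: the approximation error concentrates on whatever diagonal mass the greedy pivoting has not yet captured.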
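The categorical-to-discrete-values bijector entry above amounts to a gather in the forward direction and a search in the inverse. A hypothetical NumPy sketch (the helper names are made up for illustration and are not the TFP API; `values` is assumed sorted):

```python
import numpy as np

def forward(x, values):
    """Map integer indices x (0 <= x < K) to values[x] by gathering."""
    return values[x]

def inverse(y, values):
    """Recover indices from outputs by searching the sorted `values`."""
    idx = np.searchsorted(values, y)
    assert np.allclose(values[idx], y), "every y must appear in values"
    return idx
```

Because the map is a lookup into a fixed 1-D tensor, it is bijective on its domain whenever the entries of `values` are distinct.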
Huge thanks to all the contributors to this release!
- Alexey Radul
- Anudhyan Boral
- axch
- Brian Patton
- cclauss
- Chikanaga Tomoyuki
- Christopher Suter
- Clive Chan
- Dave Moore
- Gaurav Jain
- harrismirza
- Harris Mirza
- Ian Langmore
- Jacob Burnim
- Janosh Riebesell
- Jeff Pollock
- Jiri Simsa
- joeyhaohao
- johndebugger
- Joshua V. Dillon
- Juan A. Navarro Pérez
- Junpeng Lao
- Matej Rizman
- Matthew O'Kelly
- MG92
- Nicola De Cao
- Parsiad Azimzadeh
- Pavel Sountsov
- Philip Pham
- PJ Trainor
- Rif A. Saurous
- Sergei Lebedev
- Sigrid Keydana
- Sophia Gu
- Srinivas Vasudevan
- ykkawana