tensorflow-federated

Latest version: v0.87.0

0.87.0

Added

* Added an implementation of AdamW to `tff.learning.optimizers`.
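
The defining feature of AdamW is decoupled weight decay: the decay term is applied directly to the weights instead of being folded into the gradient. A pure-Python sketch of that update rule for a single scalar weight (illustrative only, not TFF's implementation; the name `adamw_step` and its defaults are hypothetical):

```python
import math

def adamw_step(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8, wd=0.01):
    """One AdamW update for a single scalar weight (illustrative sketch)."""
    m = b1 * m + (1 - b1) * g        # first-moment (momentum) estimate
    v = b2 * v + (1 - b2) * g * g    # second-moment estimate
    m_hat = m / (1 - b1 ** t)        # bias correction for step t (t >= 1)
    v_hat = v / (1 - b2 ** t)
    # Decoupled weight decay: wd * w is added to the update directly,
    # rather than mixed into the gradient as in L2-regularized Adam.
    w = w - lr * (m_hat / (math.sqrt(v_hat) + eps) + wd * w)
    return w, m, v
```

Note that even with a zero gradient the weight shrinks slightly, which is exactly the decoupled-decay behavior.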

Changed

* Support `None` gradients in `tff.learning.optimizers`. This mimics the
behavior of `tf.keras.optimizers`: gradients that are `None` are skipped,
and their corresponding optimizer outputs (e.g. momentum and weights) are
not updated.
* The behavior of `DPGroupingFederatedSum::Clamp`: it now sets negative
values to 0. Associated test code has been updated. Reason: the sensitivity
calculation for DP noise was calibrated for non-negative values.
* Change tutorials to use `tff.learning.optimizers` in conjunction with
`tff.learning` computations.
* `tff.simulation.datasets.TestClientData` only accepts dictionaries whose
leaf nodes are not `tf.Tensor`s.
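
The `None`-gradient behavior above can be sketched in pure Python (illustrative only; TFF's real optimizers operate on TensorFlow structures, and `sgdm_next` is a hypothetical name):

```python
def sgdm_next(momenta, weights, grads, lr=0.1, momentum=0.9):
    """SGD-with-momentum step that skips None gradients (illustrative)."""
    new_momenta, new_weights = [], []
    for s, w, g in zip(momenta, weights, grads):
        if g is None:
            # Mimic tf.keras.optimizers: leave both the weight and its
            # optimizer state (the momentum slot) untouched.
            new_momenta.append(s)
            new_weights.append(w)
        else:
            s = momentum * s + g
            new_momenta.append(s)
            new_weights.append(w - lr * s)
    return new_momenta, new_weights
```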

Fixed

* A bug where `tff.learning.optimizers.build_adafactor` would update its step
counter twice upon every invocation of `.next()`.
* A bug where tensor learning rates for `tff.learning.optimizers.build_sgdm`
would fail with mixed dtype gradients.
* A bug where different optimizers had different behavior on empty weights
structures. TFF optimizers now consistently accept and function as no-ops on
empty weight structures.
* A bug where `tff.simulation.datasets.TestClientData.dataset_computation`
yielded datasets of indeterminate shape.

Removed

* `tff.jax_computation`; use `tff.jax.computation` instead.
* `tff.profiler`; this API is not used.
* Removed various stale tutorials.
* Removed `structure` from `tff.program.SavedModelFileReleaseManager`'s
`get_value` method parameters.
* Removed support for `tf.keras.optimizers` in `tff.learning`.

0.86.0

Added

* `tff.tensorflow.transform_args` and `tff.tensorflow.transform_result`; these
functions are intended to be used when instantiating an execution context
in a TensorFlow environment.

Changed

* Replaced the `tensor` on the `Value` protobuf with an `array` field and
updated the serialization logic to use this new field.
* `tff.program.FileProgramStateManager` to be able to keep program states at a
specified interval (every k states).
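
An "every k states" retention policy can be sketched as follows (a hypothetical illustration of the idea, not the `FileProgramStateManager` implementation; `retained_versions` is an invented name):

```python
def retained_versions(latest_version, k):
    """Which saved program-state versions an 'every k states' policy keeps:
    every k-th version, plus the most recent one (illustrative sketch)."""
    kept = [v for v in range(latest_version + 1) if v % k == 0]
    if latest_version not in kept:
        kept.append(latest_version)  # always keep the latest state
    return kept
```

For example, with `k=3` and versions 0 through 7 on disk, only versions 0, 3, 6, and 7 are retained.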

0.85.0

Added

* The `dp_noise_mechanisms` header and source files: these contain functions
that generate a `differential_privacy::LaplaceMechanism` or
`differential_privacy::GaussianMechanism`, based upon privacy parameters and
norm bounds. Each of these functions returns a `DPHistogramBundle` struct,
which contains the mechanism, the threshold needed for DP open-domain
histograms, and a boolean indicating whether Laplace noise was used.
* Added some TFF executor classes to the public API (`CPPExecutorFactory`,
`ResourceManagingExecutorFactory`, `RemoteExecutor`, `RemoteExecutorGrpcStub`).
* Added support for `bfloat16` dtypes from the `ml_dtypes` package.

Fixed

* A bug where `tf.string` was mistakenly allowed as a dtype to
`tff.types.TensorType`. This now must be `np.str_`.

Changed

* `tff.Computation` and `tff.framework.ConcreteComputation` to be able to
transform the arguments to the computation and result of the computation.
* `DPClosedDomainHistogram::Report` and `DPOpenDomainHistogram::Report`: they
both use the `DPHistogramBundles` produced by the `CreateDPHistogramBundle`
function in `dp_noise_mechanisms`.
* `DPGroupByFactory::CreateInternal`: when `delta` is not provided, check
whether the right norm bounds are provided to compute L1 sensitivity (for
the Laplace mechanism).
* `CreateRemoteExecutorStack` now allows the composing executor to be
specified and assigns client values to leaf executors such that all leaf
executors receive the same number of clients, except for potentially the
last leaf executor, which may receive fewer clients.
* Allow `tff.learning.programs.train_model` to accept a `should_discard_round`
function to decide whether a round should be discarded and retried.
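
The client-assignment rule described for `CreateRemoteExecutorStack` amounts to ceiling division (a pure-Python sketch of the rule; `assign_clients` is a hypothetical name):

```python
import math

def assign_clients(num_clients, num_leaf_executors):
    """Split clients across leaf executors so every executor receives the
    same number, except possibly the last, which may receive fewer (sketch)."""
    per_leaf = math.ceil(num_clients / num_leaf_executors)
    counts, remaining = [], num_clients
    for _ in range(num_leaf_executors):
        take = min(per_leaf, remaining)  # last executor absorbs the remainder
        counts.append(take)
        remaining -= take
    return counts
```

For example, 10 clients over 4 leaf executors yields `[3, 3, 3, 1]`.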

Removed

* `tff.structure.to_container_recursive`; this should not be used externally.

0.84.0

Added

* TFF executor classes to the public API (`ComposingExecutor`,
`ExecutorTestBase`, `MockExecutor`, `ThreadPool`).
* Compiler transformation helper functions to the public API
(`replace_intrinsics_with_bodies`, `unique_name_generator`,
`transform_preorder`, `to_call_dominant`).
* A method on the `CheckpointAggregator` API to get the number of aggregated
checkpoints.
* The function `DPClosedDomainHistogram::IncrementDomainIndices`. It allows
calling code to iterate through the domain of composite keys (in a do-while
loop).
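
The do-while iteration that `IncrementDomainIndices` enables can be sketched odometer-style in Python (a sketch of the pattern only, not the C++ API):

```python
def increment_domain_indices(indices, domain_sizes):
    """Advance per-key indices to the next composite key, odometer-style.
    Returns False once every combination has been visited (sketch)."""
    for i in range(len(indices) - 1, -1, -1):
        indices[i] += 1
        if indices[i] < domain_sizes[i]:
            return True
        indices[i] = 0  # carry into the next position
    return False

# Do-while style traversal of a 2 x 3 domain of composite keys.
indices = [0, 0]
visited = [tuple(indices)]
while increment_domain_indices(indices, [2, 3]):
    visited.append(tuple(indices))
```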

Changed

* Renamed the boolean `use_experimental_simulation_loop` parameter to
`loop_implementation`, which accepts a `tff.learning.LoopImplementation`
enum, for all `tff.learning.algorithms` methods.
* Modified `tff.learning.programs.train_model` to release model output every
10 rounds and on the final round.
* Loosened the `kEpsilonThreshold` constant and updated the tests of
`DPOpenDomainHistogram` accordingly.
* The behavior of `DPClosedDomainHistogram::Report()`: it now produces an
aggregate for each possible combination of keys. Composite keys that
`GroupByAggregator` did not already assign an aggregate to are assigned 0.
A future CL will add noise.
* Modified `tff.learning.algorithms.build_weighted_fed_avg` to generate
different training graphs when `use_experimental_simulation_loop=True` and
`model_fn` is of type `tff.learning.models.FunctionalModel`.
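
The zero-filling behavior of `DPClosedDomainHistogram::Report()` can be sketched as a cross product over the key domains (illustrative Python with noise omitted; `closed_domain_report` is a hypothetical name):

```python
from itertools import product

def closed_domain_report(aggregates, key_domains):
    """One aggregate per composite key in the full cross product of the key
    domains; keys the aggregator never saw are assigned 0 (sketch, no noise)."""
    return {key: aggregates.get(key, 0) for key in product(*key_domains)}
```

For example, an observed aggregate for `('a', 1)` over domains `['a', 'b']` and `[1, 2]` is reported alongside zeros for the three unseen composite keys.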

Fixed

* `tff.learning.programs.EvaluationManager` raised an error when the version
IDs of two state-saving operations were the same.
* `tff.jax.computation` raised an error when the computation had unused
arguments.
* The `tff.backends.xla` execution stack raised an error when single-element
structures were returned from `tff.jax.computation`-wrapped methods.

0.83.0

Changed

* The `tff.learning.programs.train_model` program logic to save a deep copy of
the data source iterator within the program state.
* The file-backed native program components to not flatten and unflatten
values.

Removed

* Unused functions from `tensorflow_utils`.
* Serializing raw `tf.Tensor` values to the `Value` protobuf.
* Partial support for `dataclasses`.

0.82.0

Added

* A serialized raw array content field to the Array proto.
* A function to `DPCompositeKeyCombiner` that allows retrieval of an ordinal.
Intended for use by the closed-domain DP histogram aggregation core.
* Constants for invalid ordinals and default `l0_bound_`.
* New `DPClosedDomainHistogram` class. Sibling of `DPOpenDomainHistogram` that
is constructed from DP parameters plus domain information. No noising yet.
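
An ordinal for a composite key can be computed by mixed-radix positional encoding over the key domains (a pure-Python sketch of the idea; the actual `DPCompositeKeyCombiner` logic and its invalid-ordinal constant may differ):

```python
def composite_key_ordinal(indices, domain_sizes):
    """Map per-key domain indices to a single ordinal, mixed-radix style.
    Returns -1 (a sketch of an 'invalid ordinal' sentinel) when any index
    falls outside its domain."""
    ordinal = 0
    for index, size in zip(indices, domain_sizes):
        if not 0 <= index < size:
            return -1
        ordinal = ordinal * size + index
    return ordinal
```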

Changed

* How `DPCompositeKeyCombiner` handles invalid `l0_bound_` values.
* The default `l0_bound_` value in `DPCompositeKeyCombiner` to a new constant.
* Organization of the DP histogram code. Previously, the open-domain
histogram class and its factory class lived side by side in
`dp_group_by_aggregator.h/cc`. They are now split into
`dp_open_domain_histogram.h/cc` and `dp_group_by_factory.h/cc`, which will
ease the future addition of code for the closed-domain histogram.
* Moved `tff.federated_secure_modular_sum` to the mapreduce backend; use
`tff.backends.mapreduce.federated_secure_modular_sum` instead.
* `DPGroupByAggregator` changes how it checks the intrinsic based on the
number of domain tensors in the parameter field.
* `DPGroupByFactory` is now responsible for checking the number and type of
the parameters in the `DPGroupingFederatedSum` intrinsic, since the factory
is now accessing those parameters.
* Type of `domain_tensors` in `DPCompositeKeyCombiner::GetOrdinal` is now
`TensorSpan` (alias of `absl::Span<const Tensor>`). This will make it
possible to retrieve the slice of `intrinsic.parameters` that contains the
domain information and pass it to `DPClosedDomainHistogram`.
* Switched type of `indices` in `GetOrdinal` from `FixedArray<size_t>` to
`FixedArray<int64_t>`, to better align with internal standards.
