Thinc

8.1.1

✨ New features and improvements

* Use [confection](https://github.com/explosion/confection) for configurations (#745).
* Add the [Dish activation](https://thinc.ai/docs/api-backends#dish) function and [layer](https://thinc.ai/docs/api-layers#dish) (#719).
* Add the [`with_signpost_interval`](https://thinc.ai/docs/api-layers#with_signpost_interval) layer to support layer profiling with macOS Instruments (#711).
* Add the [`remap_ids.v2`](https://thinc.ai/docs/api-layers#remap_ids) layer, which allows more types of inputs (#726).
* Extend BLIS support to version 0.9.x (#736).
* Improve performance when gradient scaling is used (#746).
* Improve [Maxout](https://thinc.ai/docs/api-layers#maxout) performance by unrolling `argmax` in `maxout` (#702).
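
The Dish activation is a Swish/GELU-shaped function; see the linked API page for the exact definition Thinc uses. A minimal NumPy sketch of one common formulation (illustrative only, not Thinc's implementation):

```python
import numpy as np

def dish(x: np.ndarray) -> np.ndarray:
    # Smooth, non-monotonic activation: approximately the identity for
    # large positive inputs and approximately zero for large negative
    # inputs, with a soft dip just below zero.
    return 0.5 * x * (x / np.sqrt(x * x + 1.0) + 1.0)
```

Unlike GELU, this shape needs no `erf` or `exp` calls, which is what makes a fast kernel practical.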

🔴 Bug fixes

* Fix issue 720: Improve type inference by replacing `FloatsType` in `Ops` by a `TypeVar`.
* Fix issue 739: Fix typing of `Ops.asarrayDf` methods.
* Fix issue 757: Improve compatibility with supported Tensorflow versions.

👥 Contributors

adrianeboyd, cclauss, danieldk, honnibal, ines, kadarakos, polm, rmitsch, shadeMe

8.1.0

✨ New features and improvements

- Added support for mypy 0.950 and pydantic v1.9.0, added bound types throughout layers and ops (#599).
- Made all `NumpyOps` CPU kernels generic (#627).
- Made all custom CUDA kernels generic (#603).
- Added bounds checks for `NumpyOps` (#618).
- Fixed out-of-bounds writes in `NumpyOps` and `CupyOps` (#664).
- Reduced unnecessary zero-init allocations (#632).
- Fixed reductions when applied to zero-length sequences (#637).
- Added `NumpyOps.cblas` to get a table of C BLAS functions (#643, #700).
- Improved type-casting in `NumpyOps.asarray` (#656).
- Simplified `CupyOps.asarray` (#661).
- Fixed `Model.copy()` for layers used more than once (#659).
- Fixed a potential race in `Shim` (#677).
- Converted numpy arrays using dlpack in `xp2tensorflow` and `xp2torch` when possible (#686).
- Improved the speed of `HashEmbed` by avoiding large temporary arrays (#696).
- Added `Ops.reduce_last` and `Ops.reduce_first` (#710).
- Numerous test suite improvements.
- **Experimental**: Added support for Metal Performance Shaders with PyTorch nightlies (#685).
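
`Ops.reduce_first` and `Ops.reduce_last` pool variable-length sequences stored concatenated in a single array. A minimal NumPy sketch of the forward idea (the real methods operate on Thinc's backend arrays and also support the backward pass; these names are illustrative):

```python
import numpy as np

def reduce_first(X: np.ndarray, lengths: np.ndarray) -> np.ndarray:
    # Take the first row of each sequence in a concatenated
    # (sum(lengths), width) batch.
    starts = np.cumsum(lengths) - lengths
    return X[starts]

def reduce_last(X: np.ndarray, lengths: np.ndarray) -> np.ndarray:
    # Take the last row of each sequence.
    ends = np.cumsum(lengths) - 1
    return X[ends]
```

For example, with `lengths = [2, 3]` over five rows, `reduce_first` picks rows 0 and 2 and `reduce_last` picks rows 1 and 4.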

🔴 Bug fixes

- Fix issue 707: Fix label smoothing threshold for `to_categorical`.
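
For context on the fix: label smoothing rewrites one-hot targets so the true class receives probability below 1.0, and the smoothing value has an upper bound beyond which the targets become degenerate. A generic sketch of the standard transform (not necessarily Thinc's exact scheme; the helper name is illustrative):

```python
import numpy as np

def smooth_one_hot(labels: np.ndarray, n_classes: int, smoothing: float) -> np.ndarray:
    # The true class gets 1 - smoothing; the remaining mass is spread
    # evenly over the other classes.  The threshold matters: smoothing
    # must stay below (n_classes - 1) / n_classes, or the off-classes
    # would end up with more probability than the true class.
    assert 0.0 <= smoothing < (n_classes - 1) / n_classes
    off_value = smoothing / (n_classes - 1)
    out = np.full((labels.shape[0], n_classes), off_value)
    out[np.arange(labels.shape[0]), labels] = 1.0 - smoothing
    return out
```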

⚠️ Backwards incompatibilities

* In most cases, the typing updates allow many casts and ignores to be removed; however, types may also need minor modifications to be compatible with the updated mypy and pydantic versions.
* `get_array_module` now returns `None` for non-numpy/cupy array input rather than returning `numpy` by default.
* The `prefer_gpu` and `require_gpu` functions no longer set the default PyTorch `torch.Tensor` type to `torch.cuda.FloatTensor`. This means that wrapped PyTorch models cannot assume that Tensors are allocated on a CUDA GPU after calling these functions. For example:


```python
# Before Thinc v8.1.0, this Tensor would be allocated on the GPU after
# {prefer,require}_gpu. Now it will be allocated as a CPU tensor by default.
token_mask = torch.arange(max_seq_len)

# To ensure correct allocation, specify the device where the Tensor should
# be allocated. `input` refers to the input of the model.
token_mask = torch.arange(max_seq_len, device=input.device)
```


This change brings Thinc's behavior in line with how device memory allocation is normally handled in PyTorch.

👥 Contributors

adrianeboyd, danieldk, honnibal, ines, kadarakos, koaning, richardpaulhudson, shadeMe, svlandeg

8.0.17

✨ New features and improvements

- Extend support for `typing_extensions` up to v4.1.x (for Python 3.7 and earlier).
- Various fixes in the test suite.

👥 Contributors

adrianeboyd, danieldk, honnibal, ines, shadeMe

8.0.16

✨ New features and improvements

- Make [`Ops.asarray`](https://thinc.ai/docs/api-backends#asarray) implementations more robust.

🔴 Bug fixes

- Fix issue 624: Support CPU inference for models trained with gradient scaling.
- Fix issue 633: Fix invalid indexing in `Beam` when no states have valid transitions.
- Fix issue 639: Improve PyTorch `Tensor` handling in `CupyOps.asarray`.
- Fix issue 649: Clamp inputs in `Ops.sigmoid` to prevent overflow.
- Fix issue 651: Fix type safety issue with model ID assignment.
- Fix issue 653: Correctly handle Tensorflow GPU tensors in tests.
- Fix issue 660: Make `is_torch_array` work without PyTorch installed.
- Fix issue 664: Fix out-of-bounds writes in `CupyOps.adam` and `NumpyOps.adam`.
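
The `Ops.sigmoid` fix clamps inputs before exponentiation so that extreme values cannot overflow. A minimal NumPy sketch of the technique (the clamp bound here is illustrative, not Thinc's exact constant):

```python
import numpy as np

def clipped_sigmoid(x: np.ndarray, bound: float = 20.0) -> np.ndarray:
    # Clamp before exp(): exp() overflows float64 around x = 709 and
    # float32 far earlier.  Since sigmoid saturates well before |x| = 20,
    # clipping changes the result only below floating-point precision.
    x = np.clip(x, -bound, bound)
    return 1.0 / (1.0 + np.exp(-x))
```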

⚠️ Backwards incompatibilities

- The `init` implementations for layers no longer return [`Model`](https://thinc.ai/docs/api-model#model).

📖 Documentation and examples

- Add [notebook demonstrating Bloom embeddings](https://github.com/explosion/thinc/blob/master/examples/bloom_embeddings.ipynb).
- Fix LSTM benchmark example.
- Update installation instructions.

👥 Contributors

adrianeboyd, danieldk, honnibal, ines, kadarakos, koaning, notplus, richardpaulhudson, shadeMe

8.0.15

🔴 Bug fixes

- Fix issue 610: Improve compatibility with PyTorch versions before v1.9.0.

👥 Contributors

adrianeboyd, danieldk

8.0.14

✨ New features and improvements

- Add new activation functions: [`ClippedLinear.v1`](https://thinc.ai/docs/api-layers#clippedlinear), [`Gelu.v1`](https://thinc.ai/docs/api-layers#gelu), [`HardSigmoid.v1`](https://thinc.ai/docs/api-layers#hardsigmoid), [`HardSwish.v1`](https://thinc.ai/docs/api-layers#hardswish), [`HardSwishMobilenet.v1`](https://thinc.ai/docs/api-layers#hardswishmobilenet), [`HardTanh.v1`](https://thinc.ai/docs/api-layers#hardtanh), [`ReluK.v1`](https://thinc.ai/docs/api-layers#reluk), and [`Swish.v1`](https://thinc.ai/docs/api-layers#swish).
- Automatically set the GPU allocator to PyTorch when PyTorch models are loaded through `PyTorchWrapper` on GPU to avoid [memory contention between CuPy and PyTorch](https://thinc.ai/docs/usage-frameworks#memory-contention).
- Support big endian platforms through [`thinc-bigendian-ops`](https://github.com/andrewsi-z/thinc-bigendian-ops) and consistently serialize model data with little endian byte order.
- Add [`Softmax.v2`](https://thinc.ai/docs/api-layers#softmax) with support for softmax with temperature and optional normalization.
- Add [`CategoricalCrossentropy.v3`](https://thinc.ai/docs/api-loss#categorical_crossentropy) and [`SequenceCategoricalCrossentropy.v3`](https://thinc.ai/docs/api-loss#sequence_categorical_crossentropy) with support for label smoothing.
- Speed up [`CupyOps.maxout`](https://thinc.ai/docs/api-backends#maxout) by [exploiting GPU parallelism](https://github.com/explosion/thinc/pull/579#issue-1113932021) better.
- Support sequence lengths in the `NumpyOps.seq2col` and `CupyOps.seq2col` implementations of [`Ops.seq2col`](https://thinc.ai/docs/api-backends#seq2col) to determine padding.
- [Improve performance](https://github.com/explosion/thinc/pull/585#issue-1124058029) of [`Ragged`](https://thinc.ai/docs/api-types#ragged).
- Support [`Ragged`](https://thinc.ai/docs/api-types#ragged) arrays in [`expand_window.v1`](https://thinc.ai/docs/api-layers#expand_window).
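
`Softmax.v2`'s temperature parameter flattens or sharpens the output distribution. A minimal NumPy sketch of temperature-scaled softmax (the layer itself is configured through its constructor; this only shows the underlying math):

```python
import numpy as np

def softmax(x: np.ndarray, temperature: float = 1.0) -> np.ndarray:
    # Divide the logits by the temperature: T > 1 flattens the
    # distribution, T < 1 sharpens it.  Subtracting the row maximum
    # keeps exp() from overflowing without changing the result.
    x = x / temperature
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)
```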

🔴 Bug fixes

- Fix issue 552: Do not backpropagate `Inf`/`NaN` out of PyTorch layers when using mixed-precision training.
- Fix issue 578: Correctly cast the threshold argument of `CupyOps.mish` and correct an equation in `Ops.backprop_mish`.
- Fix issue 587: Correct invariant checks in `CategoricalCrossentropy.get_grad`.
- Fix issue 592: Update the `murmurhash` requirement.
- Fix issue 594: Do not sort positional arguments in `Config`.

⚠️ Backwards incompatibilities

- The `out` keyword argument of `Ops.mish` and `Ops.backprop_mish` is replaced by `inplace` for consistency with other activations.

📖 Documentation and examples

- Update [example Jupyter notebooks](https://github.com/explosion/thinc/#-selected-examples-and-notebooks) for the current Thinc version.

👥 Contributors

adrianeboyd, andrewsi-z, danieldk, honnibal, ines, Jette16, kadarakos, kianmeng, polm, svlandeg, thatbudakguy
