Thinc


6.11.1

✨ New features and improvements

* Thinc now vendorizes OpenBLAS's `cblas_sgemm` function, and delegates matrix multiplications to it by default. The provided function is single-threaded, making it easy to call Thinc from multiple processes. The default sgemm function can be overridden using the `THINC_BLAS` environment variable --- see below.
* `thinc.neural.util.get_ops` now understands device integers, e.g. `0` for GPU 0, as well as strings like `"cpu"` and `"cupy"`.
* Update `StaticVectors` model to make use of spaCy v2.0's [`Vectors`](https://spacy.io/api/vectors) class.
* New `.gemm()` method on NumpyOps and CupyOps classes, allowing matrix and vector multiplication to be handled with a simple function. Example usage:
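A minimal sketch of the usage, assuming the 6.11-era import paths (`thinc.neural.ops`, `thinc.neural.util`) and the `trans1`/`trans2` keywords known from later Thinc versions; shapes are illustrative:

```python
import numpy
from thinc.neural.ops import NumpyOps
from thinc.neural.util import get_ops

ops = NumpyOps()
X = numpy.random.uniform(-1.0, 1.0, (128, 300)).astype("float32")
W = numpy.random.uniform(-1.0, 1.0, (300, 64)).astype("float32")

# Matrix-matrix product: (128, 300) @ (300, 64) -> (128, 64)
Y = ops.gemm(X, W)
# Same result, letting the kernel transpose a (64, 300) weights array:
W_t = numpy.ascontiguousarray(W.T)
Y2 = ops.gemm(X, W_t, trans2=True)

# get_ops now accepts device integers as well as strings:
cpu_ops = get_ops("cpu")
# gpu_ops = get_ops(0)  # ops object for GPU 0; requires CuPy
```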

**Customizing the matrix multiplication backend**

Previous versions of Thinc relied on numpy for matrix multiplications. When numpy is installed via wheel using pip (the default), it is usually linked against a suboptimal matrix multiplication kernel, which made it difficult to ensure that Thinc was well optimized for the target machine.

To fix this, Thinc now provides its own matrix multiplications, by bundling the source code for OpenBLAS's sgemm kernel within the library. To change the default BLAS library, you can specify an environment variable, giving the location of the shared library you want to link against:

```bash
THINC_BLAS=/opt/openblas/lib/libopenblas.so pip install thinc --no-cache-dir --no-binary :all:
export LD_LIBRARY_PATH=/opt/openblas/lib
```

On OSX:

```bash
export DYLD_LIBRARY_PATH=/opt/openblas/lib
```


If you want to link against the Intel MKL instead of OpenBLAS, the easiest way is to install Miniconda. For instance, if you installed Miniconda to `/opt/miniconda`, the command to install Thinc linked against MKL would be:

```bash
THINC_BLAS=/opt/miniconda/numpy-mkl/lib/libmkl_rt.so pip install thinc --no-cache-dir --no-binary :all:
export LD_LIBRARY_PATH=/opt/miniconda/numpy-mkl/lib
```

On OSX:

```bash
export DYLD_LIBRARY_PATH=/opt/miniconda/numpy-mkl/lib
```


If the library file ends in a `.a` extension, it is linked statically; if it ends in `.so`, it is linked dynamically. If you use dynamic linking, make sure the directory is on your `LD_LIBRARY_PATH` at runtime.
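For a quick sanity check of the dynamic-linking setup, something like the following should work (paths are illustrative; importing the compiled ops module forces the extension and its BLAS library to load):

```bash
export LD_LIBRARY_PATH=/opt/openblas/lib:$LD_LIBRARY_PATH
python -c "import thinc.neural.ops"
```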

🔴 Bug fixes

* Fix pickle support for `FeatureExtracter` class.
* Fix unicode error in Quora dataset loader.
* Fix batch normalization bugs. Now supports batch "renormalization" correctly.
* Models now reliably distinguish predict vs. train modes, using the convention `drop=None`. Previously, layers such as `BatchNorm` relied on having their `predict()` method called, which didn't work when they were called by layers that didn't implement a `predict()` method. We now set `drop=None` to make this more reliable (see the sketch after this list).
* Fix bug that caused incorrect data types to be produced by `FeatureExtracter`.
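A minimal sketch of the convention, assuming the Thinc 6.x `begin_update` API (the `Affine` layer and shapes are just for illustration):

```python
import numpy
from thinc.v2v import Affine

model = Affine(4, 8)                      # 8 inputs -> 4 outputs
X = numpy.zeros((2, 8), dtype="float32")

# Training: a float dropout rate puts layers in train mode.
Y, backprop = model.begin_update(X, drop=0.2)

# Prediction: drop=None signals predict mode through the whole stack,
# even when an outer layer doesn't implement predict().
Y = model.begin_update(X, drop=None)[0]
```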

👥 Contributors

Thanks to dvsrepo, justindujardin, alephmelo and darkdreamingdan for the pull requests and contributions.

6.10.3

✨ New features and improvements

* Update `cytoolz` version pin to make Thinc compatible with Python 3.7.
* Only install old `pathlib` backport on Python 2 (see issue #69).
* Use `msgpack` instead of `msgpack-python`.
* Drop `termcolor` dependency.

6.10.2

✨ New features and improvements

* Improve GPU utilisation for attention layer.
* Improve efficiency of Maxout layer on CPU.

🔴 Bug fixes

* Bug fix to `foreach` combinator, useful for hierarchical models.
* Bug fix to batch normalization.

📖 Documentation and examples

* Update `imdb_cnn` text classification example.

6.10.1

🔴 Bug fixes

* Fix installation with CUDA 9.
* Fix minor memory leak in beam search.
* Fix dataset readers.

6.10.0

* Improve efficiency of `NumpyOps.scatter_add` when the indices only have a single dimension (see the sketch after this list). This function was previously a bottleneck for spaCy.
* Remove redundant copies in backpropagation of the maxout non-linearity.
* Call floating-point versions of `sqrt`, `exp` and `tanh` functions.
* Remove calls to `tensordot`, instead reshaping to make 2d `dot` calls.
* Improve efficiency of Adam optimizer on CPU.
* Eliminate redundant code in `thinc.optimizers`. There's now a single `Optimizer` class. For backwards compatibility, the `SGD` and `Adam` functions create an `Optimizer` configured with the vanilla SGD or Adam recipe, respectively.
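The `scatter_add` fast path accumulates rows of `values` into `table` at the given one-dimensional indices, like NumPy's `add.at`. A minimal sketch, assuming the 6.x `NumpyOps` API and in-place accumulation:

```python
import numpy
from thinc.neural.ops import NumpyOps

ops = NumpyOps()
table = numpy.zeros((10, 4), dtype="float32")
ids = numpy.array([0, 2, 2, 5], dtype="int32")  # 1d indices: the fast path
values = numpy.ones((4, 4), dtype="float32")

ops.scatter_add(table, ids, values)
# Row 2 was indexed twice, so its updates accumulate:
assert table[2].sum() == 8.0
```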

👥 Contributors

Thanks to RaananHadar for the pull request!

6.9.0

✨ Major features and improvements

* Add new namespace modules `thinc.v2v`, `thinc.i2v`, `thinc.t2t` and `thinc.t2v` that group layer implementations by input and output type: `v` indicates vector, `i` indicates integer ID, `t` indicates tensor. The input type refers to the logical unit, i.e. what constitutes a sample. Some illustrative imports are sketched below.
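A few imports under the new namespaces (the specific layer names are assumptions based on the 6.9-era API):

```python
from thinc.v2v import Affine, ReLu, Softmax  # vector -> vector layers
from thinc.i2v import HashEmbed              # integer IDs -> vectors
from thinc.t2t import ExtractWindow          # tensor -> tensor
from thinc.t2v import Pooling                # tensor -> vector
```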

🔴 Bug fixes

* Fix bug in layer normalization. The bug fix means that models trained with Thinc 6.8 are incompatible with Thinc 6.9. For convenience, a backwards compatibility flag has been added, which can be set with `thinc.neural._classes.layernorm.set_compat_six_eight`. This flag is off by default.
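A minimal usage sketch; that `set_compat_six_eight` takes a single boolean is an assumption, as the notes don't show its signature:

```python
from thinc.neural._classes.layernorm import set_compat_six_eight

# Restore the 6.8-compatible behaviour when loading models trained on 6.8:
set_compat_six_eight(True)
```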
