Keras-nlp

Latest version: v0.19.3


0.15.1

Summary
Bug fix patch release.

* Always run tf preprocessing on CPU.
* Fix running preprocessing outside the main python thread.
* Fix loading classifiers saved under the "old name" `XXClassifier` as the new `XXTextClassifier`.
* Restore support for bytestring inputs to tokenizers and other preprocessing layers.

What's Changed
* Version bump for pre-release by mattdangerw in https://github.com/keras-team/keras-hub/pull/1842
* V0.15.1.dev1 by mattdangerw in https://github.com/keras-team/keras-hub/pull/1844
* Version bump for 0.15.1 release by mattdangerw in https://github.com/keras-team/keras-hub/pull/1845


**Full Changelog**: https://github.com/keras-team/keras-hub/compare/v0.15.0...v0.15.1

0.15.0

Summary

📢 KerasNLP is becoming KerasHub 📢, read more about it [here](https://github.com/keras-team/keras-nlp/issues/1831).

This release contains a number of feature improvements:

* Added int8 quantization support.
  * Use the `quantize()` method to quantize any model.
  * Llama 2 and Llama 3 pre-quantized presets are available.
* `PaliGemmaCausalLM` will automatically resize input images during preprocessing.
* Added more converters for huggingface/transformers checkpoints.
  * Gemma 2, PaliGemma, GPT2, Bert, Albert, DistilBert, Bart.
* Class detection for huggingface/transformers checkpoints.
  * Call `from_preset()` on a base class, and we will find the correct subclass to create.
* Added Vicuna presets.
* Aliased `Classifier` as `TextClassifier`, and `BertClassifier` as `BertTextClassifier`.
* Added `tokenizer.special_tokens` and `tokenizer.special_token_ids` as convenient properties to view all special tokens of a pretrained tokenizer.

```python
# Quantize an unquantized model.
lm = keras_nlp.models.CausalLM.from_preset(
    "gemma2_instruct_2b_en",
    dtype="bfloat16",
)
lm.quantize("int8")

# Load a pre-quantized model.
lm = keras_nlp.models.CausalLM.from_preset(
    "llama3_instruct_8b_en_int8",
    dtype="bfloat16",
)

# Convert a bert model in the huggingface/transformers format.
classifier = keras_nlp.models.TextClassifier.from_preset(
    "hf://google-bert/bert-base-uncased",
    num_classes=2,
)

# View all special tokens.
print(classifier.preprocessor.tokenizer.special_tokens)
print(classifier.preprocessor.tokenizer.special_token_ids)
```


Breaking changes

* On all backends, string and ragged outputs will be returned as Python strings and Python lists, respectively.
* This includes preprocessing methods like `tokenize()` and `detokenize()`.
* This may break code that depended on `tf.Tensor` output on the `tensorflow` backend, but it leads to consistent output on all backends, which we believe is an overall improvement.
* Preprocessing layers can still always be included in a `tf.data` preprocessing pipeline, on any backend.
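As a rough illustration of the new contract (using a toy stand-in class, not the real keras_nlp tokenizer API): eager preprocessing calls now hand back plain Python types on every backend, so downstream code can drop backend-specific `tf.Tensor` handling.

```python
# Toy tokenizer illustrating the output contract described above:
# tokenize() -> plain Python list of ids, detokenize() -> plain Python str.
class ToyTokenizer:
    def __init__(self, vocab):
        self.id_for = {tok: i for i, tok in enumerate(vocab)}
        self.tok_for = {i: tok for tok, i in self.id_for.items()}

    def tokenize(self, text):
        # Returns a Python list, never a backend tensor.
        return [self.id_for[t] for t in text.split()]

    def detokenize(self, ids):
        # Returns a Python string, never a backend tensor.
        return " ".join(self.tok_for[i] for i in ids)

tok = ToyTokenizer(["hello", "world"])
ids = tok.tokenize("hello world")
assert ids == [0, 1] and isinstance(ids, list)
assert tok.detokenize(ids) == "hello world"
```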

What's Changed
* Version bump to 0.14.0.dev0 by grasskin in https://github.com/keras-team/keras-nlp/pull/1675
* Revert "Version bump to 0.14.0.dev0" by grasskin in https://github.com/keras-team/keras-nlp/pull/1676
* Remove Keras pin, fix tests by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1681
* Add quantization support for `Gemma`, `Gemma2` and `PaliGemma` by james77777778 in https://github.com/keras-team/keras-nlp/pull/1670
* add vicuna preset by sineeli in https://github.com/keras-team/keras-nlp/pull/1672
* Porting Gemma 2 transformers checkpoint by ariG23498 in https://github.com/keras-team/keras-nlp/pull/1678
* Improve CI speed and resolve issues of `run_quantization_check` by james77777778 in https://github.com/keras-team/keras-nlp/pull/1682
* Remove build_from_signature from MHA layers by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1687
* Refactoring: in CachedMultiHeadAttention call MHA methods instead of recoding the attention calculation by apehex in https://github.com/keras-team/keras-nlp/pull/1684
* Porting PaliGemma transformers checkpoint by ariG23498 in https://github.com/keras-team/keras-nlp/pull/1686
* Allow importing keras_nlp without tensorflow by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1660
* Add flag to gemma conversion script to specify local orbax by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1688
* Fix compatibility for earlier versions of Keras by james77777778 in https://github.com/keras-team/keras-nlp/pull/1690
* Add a test against keras-nightly by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1693
* Fix dtype bugs in `ReversibleEmbedding` and `LayerNorm` by james77777778 in https://github.com/keras-team/keras-nlp/pull/1692
* Partially revert 1687 by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1695
* Fix quantization test for `XLNet` by james77777778 in https://github.com/keras-team/keras-nlp/pull/1699
* Add a HF BERT converter, improve safetensor loading by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1694
* Add a subtle fix for gemma 2 conversions by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1701
* One more small Gemma conversion fix by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1702
* Slightly more defensive handling of type for backbone by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1703
* Add support for converting Gemma 2 checkpoints by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1700
* Make it clearer what is running in the github action UI by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1707
* Try upgrading tensorflow pin by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1706
* Bump version to fix query norm in Gemma 2 9b by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1709
* Gemma: Add logit soft-capping to score function. by RyanMullins in https://github.com/keras-team/keras-nlp/pull/1712
* Version bump HEAD to 0.15 by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1713
* Port gpt2 transformers checkpoint by cosmo3769 in https://github.com/keras-team/keras-nlp/pull/1704
* Add soft capping to reversible embedding layer by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1718
* Add presets for gemma 2 2b by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1721
* Utilize `to_numpy=True` in `quantize` if available by james77777778 in https://github.com/keras-team/keras-nlp/pull/1725
* Dynamic int8 quantization for Llama2 and Llama3 by james77777778 in https://github.com/keras-team/keras-nlp/pull/1720
* Bump the python group with 2 updates by dependabot in https://github.com/keras-team/keras-nlp/pull/1726
* Shield gemma shortnames by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1731
* Sliding window fixes by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1738
* Add int8 models to Llama2 and Llama3 by james77777778 in https://github.com/keras-team/keras-nlp/pull/1734
* Port distilbert transformer checkpoint by cosmo3769 in https://github.com/keras-team/keras-nlp/pull/1736
* Add support of `kwargs` to `Backbone.from_preset` and fix the dtype forwarding in `Task.from_preset` by james77777778 in https://github.com/keras-team/keras-nlp/pull/1742
* Remove src init file contents by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1743
* Remove ROADMAP.md by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1773
* Fix nested list in args on keras.io by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1772
* Remove stale tf only examples by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1771
* Limit the default sequence length to 1024 for all models by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1770
* Consistent preprocessing output on all backends by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1777
* Port albert transformer checkpoint by cosmo3769 in https://github.com/keras-team/keras-nlp/pull/1767
* Lower the default learning rate for albert by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1786
* Port bart transformer checkpoint by cosmo3769 in https://github.com/keras-team/keras-nlp/pull/1783
* Add an option to disable default compilation by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1787
* Port mistral transformer checkpoint by cosmo3769 in https://github.com/keras-team/keras-nlp/pull/1768
* [Bart]Fix missing weight port by cosmo3769 in https://github.com/keras-team/keras-nlp/pull/1789
* Remove python 3.8 version in setup.py by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1792
* Class detection works for huggingface checkpoints by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1800
* Rename KerasNLP symbols for a multi-modal future by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1803
* Move preprocessing to base classes by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1807
* Add `add_bos=False, add_eos=False` to SentencePieceTokenizer.__init__() by briango28 in https://github.com/keras-team/keras-nlp/pull/1811
* Only load a full task config when `load_task_extras` is passed by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1812
* Add image and audio converter classes by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1813
* Simplify registering "built-in" presets by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1818
* Support image and audio information in task summaries by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1819
* Take two of 1812, simpler classifier head loading by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1823
* Remove preprocessing layers we no longer use by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1824
* Version bump for dev release by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1825
* Version bump for dev release by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1830
* Version bump for 0.15.0 release by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1832

New Contributors
* apehex made their first contribution in https://github.com/keras-team/keras-nlp/pull/1684
* cosmo3769 made their first contribution in https://github.com/keras-team/keras-nlp/pull/1704

**Full Changelog**: https://github.com/keras-team/keras-nlp/compare/v0.14.4...v0.15.0

0.14.4

Summary
* Fix issues with Gemma 2 sliding window.
* Fix TensorFlow backend Gemma 2 generation.
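For background on what a sliding-window fix touches: Gemma 2 alternates global attention with sliding-window attention, where position `i` may only attend to the most recent `window` positions. A toy sketch of such a mask (not the library's implementation):

```python
def sliding_window_mask(seq_len, window):
    # mask[i][j] is True iff position i may attend to position j:
    # causal (j <= i) and within the window (i - j < window).
    return [
        [j <= i and i - j < window for j in range(seq_len)]
        for i in range(seq_len)
    ]

mask = sliding_window_mask(5, 3)
assert mask[4] == [False, False, True, True, True]
assert mask[0] == [True, False, False, False, False]
```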

What's Changed
* Sliding window fixes by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1738
* version bump by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1740
* version bump by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1741


**Full Changelog**: https://github.com/keras-team/keras-nlp/compare/v0.14.3...v0.14.4

0.14.3

Summary
* Short names for shield gemma checkpoints.
```python
keras_nlp.models.GemmaCausalLM.from_preset("shieldgemma_2b_en")
```


What's Changed
* Version bump dev release by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1732
* Version bump for release by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1733


**Full Changelog**: https://github.com/keras-team/keras-nlp/compare/v0.14.2...v0.14.3

0.14.2

Summary

* Add Gemma 2 2b.
* Fixes for logit softcapping.
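For context, logit soft-capping (as used in Gemma 2) smoothly bounds logits to the open interval `(-cap, cap)` with a scaled `tanh`. A minimal sketch:

```python
import math

def soft_cap(logit, cap):
    # Small logits pass through almost unchanged;
    # large ones saturate just below the cap.
    return cap * math.tanh(logit / cap)

assert abs(soft_cap(1.0, 50.0) - 1.0) < 0.01
assert 48.0 < soft_cap(100.0, 50.0) < 50.0
```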

What's Changed
* Version bump 0.14.2.dev0 by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1719
* Bump pypi action version by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1722
* version bump by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1723
* Version bump 0.14.2 by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1724


**Full Changelog**: https://github.com/keras-team/keras-nlp/compare/v0.14.1...v0.14.2

0.14.1

Summary

* Update Gemma 2 9b to fix minor config error.

What's Changed
* Bump version to fix query norm in Gemma 2 9b by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1709
* Version bump 0.14.1.dev0 by mattdangerw in https://github.com/keras-team/keras-nlp/pull/1714

**Full Changelog**: https://github.com/keras-team/keras-nlp/compare/v0.14.0...v0.14.1
