- [83](https://github.com/noamgat/lm-format-enforcer/pull/83) Support for added model tokens
- Improved support for out-of-token-vocabulary characters
0.9.2
- [80](https://github.com/noamgat/lm-format-enforcer/issues/80) Fixed a bug where a comma could start a JSON list
- [34](https://github.com/noamgat/lm-format-enforcer/issues/34) Fixed the low `max_tokens` default of llama-cpp-python in the sample
0.9.1
- Fixed build errors in certain edge cases
0.9.0
- [68](https://github.com/noamgat/lm-format-enforcer/pull/68) Added NVIDIA TensorRT-LLM support, contributed by NVIDIA's [Ahmet Erdem](https://github.com/aerdem4). Thanks!
- Much faster `TokenizerData` initialization, with a new JSON freetext token caching algorithm.
- More robust error reporting.
0.8.3
- [67](https://github.com/noamgat/lm-format-enforcer/issues/67) Updated the vLLM integration to support vLLM v0.3.0
- [63](https://github.com/noamgat/lm-format-enforcer/issues/63) `JsonSchemaParser`: fixed an issue where an empty list could not be closed after a newline
0.8.2
Several `JsonSchemaParser` improvements:
- [32](https://github.com/noamgat/lm-format-enforcer/issues/32) Added limited support for regex-constrained strings in JSON via the `pattern` field. See `test_phone_number_in_string()` and the sketch below.
- [54](https://github.com/noamgat/lm-format-enforcer/issues/54) Fixed a regression caused by limited-length JSON string caching.
- [53](https://github.com/noamgat/lm-format-enforcer/issues/53) Fixed problems with arrays of union types.
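
A minimal sketch of how a `pattern`-constrained string can reach `JsonSchemaParser`, assuming Pydantic v2; the `Contact` model and the phone-number regex are illustrative and not taken from `test_phone_number_in_string()` itself:

```python
from pydantic import BaseModel, Field
from lmformatenforcer import JsonSchemaParser

class Contact(BaseModel):
    # Hypothetical model for illustration; the regex is an example
    # phone-number pattern, not the one used in the test.
    phone: str = Field(pattern=r"\(\d{3}\)-\d{3}-\d{4}")

# The parser enforces the schema, including the regex constraint on `phone`,
# when plugged into one of the framework integrations (transformers, vLLM, etc.).
parser = JsonSchemaParser(Contact.model_json_schema())
```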