✨ New features and improvements
- **NEW**: More accurate [Chain-of-Thought (CoT) NER](https://spacy.io/api/large-language-models#ner-v3) prompting with `spacy.NER.v3` (180), sketched below
- **NEW**: Task-specific [component factories](https://spacy.io/api/large-language-models#config) for `llm_ner`, `llm_spancat`, `llm_rel`, `llm_textcat`, `llm_sentiment`, `llm_summarization` (243, 283)
- **NEW**: `add_label` functionality, making it easier to work with an `llm` component directly in Python code rather than (only) through the config system (277); see the sketch after this list
- New `v2` model versions for the OpenAI models that set [reasonable defaults](https://spacy.io/api/large-language-models#models-rest) for `temperature` and `max_tokens` (236)
- Functionality to ignore occasional blocking errors from Cohere (233)
- Support for Pydantic v1 and v2 (261, 275)
- Internal refactoring, including renaming of v1 Jinja templates (242)
- Empty the `torch.cuda` cache between calls (242)
- Various improvements to the test suite and CI
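
To illustrate the new task-specific factories and `add_label`, here is a minimal Python sketch. It assumes `spacy-llm` is installed, that an OpenAI key is available via the `OPENAI_API_KEY` environment variable, and uses an illustrative label set and example text:

```python
import spacy

nlp = spacy.blank("en")

# The new task-specific factory wires up an llm component with an NER task,
# so no [components.llm.task] block is needed in a config file.
llm_ner = nlp.add_pipe("llm_ner")

# The new add_label functionality lets you define labels directly in Python
# instead of (only) through the config system.
llm_ner.add_label("PERSON")
llm_ner.add_label("LOCATION")

doc = nlp("Jack and Jill went up the hill in Kathmandu.")
print([(ent.text, ent.label_) for ent in doc.ents])
```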
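
Similarly, a sketch of combining the new `spacy.NER.v3` task with a `v2` OpenAI model on a generic `llm` component. The label set and the `temperature` override below are illustrative; the `v2` model versions already ship with reasonable defaults, so the override only shows where such settings go:

```python
import spacy

nlp = spacy.blank("en")

nlp.add_pipe(
    "llm",
    config={
        # Chain-of-Thought NER prompting (spacy.NER.v3) with an example label set.
        "task": {
            "@llm_tasks": "spacy.NER.v3",
            "labels": ["PERSON", "ORGANISATION", "LOCATION"],
        },
        # A v2 OpenAI model; the temperature/max_tokens defaults can be overridden here.
        "model": {
            "@llm_models": "spacy.GPT-3-5.v2",
            "config": {"temperature": 0.3},
        },
    },
)

doc = nlp("Jack and Jill went up the hill in Kathmandu.")
print([(ent.text, ent.label_) for ent in doc.ents])
```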
🔴 Bug fixes
- Fix Anthropic chat endpoints (230)
⚠️ Backwards incompatibilities
- While internal modules have been significantly refactored, this release should not introduce any backwards incompatibilities in user-facing functionality.
- Check our [migration guide](https://github.com/explosion/spacy-llm/blob/main/migration_guide.md#04x-to-05x) if you want to update the SpanCat or NER task from `v1` or `v2` to `v3`.
📖 Documentation and examples
- Updated [usage](https://spacy.io/usage/large-language-models) documentation
- Updated [API](https://spacy.io/api/large-language-models) documentation
- New Chain-of-Thought [example](https://github.com/explosion/spacy-llm/tree/main/usage_examples/ner_v3_openai) with GPT-3.5
- New `textcat` [example](https://github.com/explosion/spacy-llm/tree/main/usage_examples/textcat_dolly) with Dolly
👥 Contributors
adrianeboyd, honnibal, ines, kabirkhan, ljvmiranda921, rmitsch, svlandeg, victorialslocum, vinbo8