new:
- Use OnPrem.LLM with OpenAI-compatible REST APIs (61); see the sketch after this list
- Information extraction pipeline (64)
- Experimental support for Azure OpenAI (63)
- Docker support
- Few-Shot classification pipeline (66); see the second sketch after this list
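
A minimal sketch of the REST API support (61): the `openai://` scheme for `model_url` and the `OPENAI_API_KEY` environment variable are assumptions based on the project docs, so check the current documentation for the exact syntax.

```python
import os
from onprem import LLM

# The OpenAI-compatible backend is assumed to read the standard
# environment variable for credentials.
os.environ['OPENAI_API_KEY'] = 'sk-...'

# An openai:// model_url is assumed to route prompts to the REST API
# instead of a local llama-cpp-python model.
llm = LLM(model_url='openai://gpt-3.5-turbo')
result = llm.prompt('List three advantages of running LLMs on-premises.')
```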
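A hedged sketch of the Few-Shot classification pipeline (66): the `FewShotClassifier` import path and its `train`/`predict` method names follow the project docs as best recalled and should be treated as assumptions.

```python
from onprem.pipelines import FewShotClassifier

clf = FewShotClassifier()

# A handful of labeled examples per class is the entire training set.
X_train = ['great product', 'terrible support', 'works as advertised', 'never again']
y_train = ['positive', 'negative', 'positive', 'negative']

clf.train(X_train, y_train)                   # assumed method name
print(clf.predict(['absolutely love it']))    # assumed method name
```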
changed:
- Changed default model to Mistral (65)
- Allow installation of onprem without llama-cpp-python for easier use with LLMs served through REST APIs (62)
- Added `ignore_fn` argument to `LLM.ingest` to allow more control over ignoring certain files (58); see the sketch after this list
- Added `Ingester.get_ingested_files` to show files ingested into the vector database (59)
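
A minimal sketch of the two new ingestion hooks (58, 59): the `ignore_fn` callback is assumed to receive a file path and return True for files to skip, and `load_ingester()` as the way to obtain the `Ingester` is also an assumption.

```python
from onprem import LLM

llm = LLM()

# Skip any file whose path matches some criterion; here, anything under
# a "drafts" folder. The path-in/bool-out contract is an assumption.
def ignore_fn(fpath):
    return 'drafts' in fpath

llm.ingest('/path/to/documents', ignore_fn=ignore_fn)

# Inspect what actually landed in the vector database.
ingester = llm.load_ingester()  # assumed accessor
print(ingester.get_ingested_files())
```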
fixed:
- Skip and continue instead of halting when a loading error occurs while processing a file (60)
- Added check for partially downloaded files (49)