Neural-cherche

Latest version: v1.4.0

1.4.0

Update dependencies; make BM25 retriever normalisation compatible with previous versions of scikit-learn.

1.3.1

Update the LeNLP dependency version in order to run without errors on older Ubuntu versions.

1.3.0

Version 1.3.0 introduces:

- A new BM25 retriever powered by the LeNLP vectorizer, written in Rust. SOTA (a usage sketch follows this list).
- An updated TfIdf retriever that now uses the LeNLP TfidfVectorizer by default, also written in Rust; not SOTA, but very fast.
- Breaking change in the evaluation code: the evaluation module is much simpler and now handles duplicate queries.
- Updated documentation.
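
For orientation, here is a minimal sketch of indexing and querying with the new BM25 retriever. The `retrieve.BM25` constructor parameters and the `encode_documents` / `add` / `encode_queries` calls mirror the pattern used by the other retrievers and should be treated as assumptions; check the documentation of your installed version.

```python
from neural_cherche import retrieve

# Toy corpus; every document is identified by its "id" field.
documents = [
    {"id": 0, "document": "Paris is the capital of France."},
    {"id": 1, "document": "Berlin is the capital of Germany."},
]

# Assumed API: a BM25 retriever indexing the "document" field, keyed by "id".
retriever = retrieve.BM25(key="id", on=["document"])

# Encode and index the corpus.
documents_embeddings = retriever.encode_documents(documents=documents)
retriever = retriever.add(documents_embeddings=documents_embeddings)

# Encode the queries and retrieve the top-k documents.
queries_embeddings = retriever.encode_queries(queries=["capital of France"])
scores = retriever(queries_embeddings=queries_embeddings, k=2)
print(scores)
```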

1.1.0

- ColBERT Retriever is now available (complementary to the ColBERT ranker).
- Improved default settings for every model.
- Attention mask added to models.
- ColBERT and SparseEmbed pre-trained checkpoints on HuggingFace: raphaelsty/neural-cherche-colbert and raphaelsty/neural-cherche-sparse-embed.
- Improved ranking loss.
- Addition of benchmarks.

Overall, this version makes it easier to fine-tune ColBERT, SparseEmbed and Splade and to achieve excellent results with the default parameters. A short ranking sketch using the released ColBERT checkpoint follows.
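
The snippet below is a sketch of loading the pre-trained checkpoint mentioned above and using it to rank a handful of candidate documents. The `models.ColBERT` and `rank.ColBERT` names and their call signatures follow the documented pattern but are assumptions; verify them against the documentation of your installed version.

```python
import torch
from neural_cherche import models, rank

# Load the pre-trained ColBERT checkpoint released with this version.
model = models.ColBERT(
    model_name_or_path="raphaelsty/neural-cherche-colbert",
    device="cuda" if torch.cuda.is_available() else "cpu",
)

# Assumed ranking setup: score candidate documents for each query.
ranker = rank.ColBERT(key="id", on=["document"], model=model)

documents = [
    {"id": 0, "document": "Paris is the capital of France."},
    {"id": 1, "document": "Berlin is the capital of Germany."},
]
queries = ["What is the capital of France?"]

documents_embeddings = ranker.encode_documents(documents=documents)
queries_embeddings = ranker.encode_queries(queries=queries)

# One list of candidate documents per query.
scores = ranker(
    documents=[documents],
    queries_embeddings=queries_embeddings,
    documents_embeddings=documents_embeddings,
    k=2,
)
print(scores)
```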

1.0.0

**Introducing Neural-Cherche 1.0.0: The Evolution of Sparsembed**

I'm thrilled to announce the launch of Neural-Cherche 1.0.0, a significant upgrade from Sparsembed, packed with innovative features and enhancements:

- **ColBERT Fine-Tuning & Ranking:** Enhance your search capabilities with a fine-tuned ColBERT model for more precise and efficient ranking (a minimal fine-tuning sketch follows at the end of this entry).

- **Revamped Retrievers with Enhanced API:** Experience our newly optimized retrievers. They now come with an improved API that enables users to comprehensively capture and analyze all model outputs.

- **Optimized Training with Refined Hyperparameters:** Benefit from our enhanced training procedure, featuring good default hyperparameters for better performance.

- **Efficiency Boost with Splade and SparseEmbed:** These components now use more efficient sparse matrices, boosting overall efficiency.

- **Intelligent Embedding Management:** Once computed, embeddings are now transferred to the CPU, remaining there until needed again. This approach enables extensive, large-scale offline neural searching without overwhelming GPU resources.

- **Comprehensive Documentation:** Get up to speed quickly with the documentation.

- **Improved Evaluation API**

- **A Fresh, New Look with a cool Logo**

Embrace the future of neural search with Neural-Cherche 1.0.0 – a giant leap forward from Sparsembed!
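
To make the fine-tuning highlight above concrete, here is a minimal training-loop sketch. It assumes the library exposes the documented `models.ColBERT`, `utils.iter` and `train.train_colbert` helpers with the parameters shown; the base checkpoint, learning rate and training triples are placeholders, so check the documentation of your installed version before relying on the exact signatures.

```python
import torch
from neural_cherche import models, train, utils

device = "cuda" if torch.cuda.is_available() else "cpu"

# Base transformer to fine-tune into a ColBERT model (placeholder checkpoint).
model = models.ColBERT(
    model_name_or_path="distilbert-base-uncased",
    device=device,
)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-6)

# (anchor, positive, negative) training triples; replace with real data.
X = [
    ("capital of France", "Paris is the capital of France.", "Berlin is the capital of Germany."),
    ("capital of Germany", "Berlin is the capital of Germany.", "Paris is the capital of France."),
]

# Iterate over shuffled batches of triples and update the model.
for anchor, positive, negative in utils.iter(X, epochs=1, batch_size=2, shuffle=True):
    loss = train.train_colbert(
        model=model,
        optimizer=optimizer,
        anchor=anchor,
        positive=positive,
        negative=negative,
    )
```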

0.1.1

Avoid intersection errors with Sparsembed
