Refact


1.8.0

Refact.ai Self-hosted:
- **CUDA and cuDNN Version Update:** 11.8.0 -> 12.4.1
- **New models:** llama3.1, llama3.2, and qwen2.5/coder model families (see the example after this list).
- **New provider support:** 3rd-party APIs for Groq and Cerebras.
- **Support for multiline code completion models.**
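
A quick way to try one of the newly added models is through the server's OpenAI-compatible chat endpoint. The sketch below is illustrative only: the base URL, port, and model id are assumptions, so substitute the values from your own deployment.

```python
# Minimal sketch: querying a self-hosted Refact server through an
# OpenAI-compatible chat endpoint with one of the newly added models.
# ASSUMPTIONS: the base URL/port and the model id below are placeholders;
# take the real values from your own deployment's model list.
import requests

BASE_URL = "http://localhost:8008"  # assumed self-hosted address

response = requests.post(
    f"{BASE_URL}/v1/chat/completions",
    json={
        "model": "qwen2.5/coder/1.5b",  # hypothetical id from the new qwen2.5/coder family
        "messages": [
            {"role": "user", "content": "Write a function that reverses a string."},
        ],
        "max_tokens": 256,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```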

1.7.0

Refact.ai Self-hosted:
- **New models:** the latest OpenAI models are now available in Docker.
- **Tool usage support for 3rd-party models:** enable 3rd-party APIs to use the latest Refact features (see the sketch after this list).
- **Removed deprecated models.**
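
Tool usage follows the standard OpenAI function-calling format, so a configured 3rd-party model can be exercised with the regular `openai` client. This is a minimal sketch under assumptions: the base URL, API key, and the `search_codebase` tool are hypothetical placeholders, not part of Refact itself.

```python
# Minimal sketch of OpenAI-style tool calling routed through a 3rd-party
# model configured in Refact. ASSUMPTIONS: the base_url, api_key, and the
# search_codebase tool are hypothetical placeholders for illustration.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8008/v1",  # assumed self-hosted endpoint
    api_key="your-api-key",               # placeholder
)

tools = [{
    "type": "function",
    "function": {
        "name": "search_codebase",  # hypothetical tool, not part of Refact
        "description": "Search the project for a symbol or string.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

resp = client.chat.completions.create(
    model="gpt-4o",  # any tool-capable 3rd-party model you have enabled
    messages=[{"role": "user", "content": "Where is parse_config defined?"}],
    tools=tools,
)
print(resp.choices[0].message.tool_calls)
```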

1.6.4

Refact.ai Self-hosted:
- **Claude-3.5 Sonnet Support:** The new model from Anthropic is now available in Docker.
Refact.ai Enterprise:
- **Llama3 vLLM Support:** Added vLLM version of Llama-3-8B-Instruct for better performance.

1.6.3

Refact.ai Self-hosted:
- **Llama3 8k Context:** `llama3` models now support 8k context.
- **Credentials Management:** We added information about tokens and keys.
- **Deprecated Models:** The models `starcoder`, `wizardlm`, and `llama2` are deprecated and will be removed in the next release.
Refact.ai Enterprise:
- **Refact Model 4k Context:** `refact` model now supports 4k context.

1.6.2

Refact.ai Self-hosted:
- **Models Support:** We've introduced support for gated models and the new llama3 model.
- **Even More Models:** GPT4o and GPT4-turbo models are now available.

Refact.ai Enterprise:
- **vLLM Speed Improvement:** You can now experience faster processing times with our optimized vLLM.
- **vLLM LoRA-less Mode:** When LoRA is not set up, vLLM now operates 20% faster thanks to the new LoRA-less mode.
- **Empty Prompt and OOM Handling:** We've fixed issues in vLLM that caused broken generations.

1.6.1

Context Switching Mechanism
We've implemented a context-switching mechanism, available in the latest version of the VS Code plugin. You can now change the max context value for a model to suit your needs: a small context for lower memory usage and faster operation, or a large context for deeper insights.
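
To make the trade-off concrete, here is a rough, illustrative sketch of what a max-context budget implies: older chat turns are dropped until the estimated prompt size fits. The 4-characters-per-token estimate and the `trim_to_context` helper are illustrative stand-ins, not the plugin's actual implementation.

```python
# Rough illustration of what a max-context budget implies: older chat
# turns are dropped until the estimated prompt size fits the budget.
# ASSUMPTION: real plugins use the model's tokenizer; the 4-chars-per-token
# estimate and trim_to_context helper here are illustrative stand-ins.
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)  # crude heuristic, not a real tokenizer

def trim_to_context(messages: list, max_context_tokens: int) -> list:
    """Keep the most recent messages that fit within the token budget."""
    kept, total = [], 0
    for msg in reversed(messages):
        cost = estimate_tokens(msg["content"])
        if total + cost > max_context_tokens:
            break  # budget exhausted: drop this and all older turns
        kept.append(msg)
        total += cost
    return list(reversed(kept))

history = [
    {"role": "user", "content": "a long earlier question " * 100},
    {"role": "assistant", "content": "a long earlier answer " * 100},
    {"role": "user", "content": "What does this function do?"},
]
# A small budget keeps only the latest turn; a large one keeps everything.
print(len(trim_to_context(history, max_context_tokens=256)))   # -> 1
print(len(trim_to_context(history, max_context_tokens=4096)))  # -> 3
```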

Model Deprecation
Our UI updates now flag models slated for removal. This ensures you're always working with the latest and most efficient models.

Factory Reset Fix
We've resolved issues with the factory reset process for when you need a fresh start.
