Refact

Latest version: v0.10.4


1.5.0

- **Fine-tune Process Enhancements**: New default settings make fine-tuning StarCoder models both faster and higher quality
- **Fine-tune UI**: The fine-tune setup has been moved to the Model Hosting tab for easier access
- **Plugin Fine-tune Switching**: VS Code and JetBrains plugins now support switching between fine-tuned models
- **Chat Tab Redesign**: The Chat tab has been temporarily hidden for a redesign and will be back in the next release

Compatibility Issues
- **Plugin Support**: Older versions of plugins will fall back to using the base model as they do not support the new fine-tuning capabilities. Make sure to update your plugin

1.4.0

What's New

- **WebGUI Chat**: We now ship a chat UI with our Docker image!
- **Embeddings**: The Docker image now starts the embeddings model by default; this is required for VecDB support.
- **Shared Memory Issue Resolved**: A critical performance issue related to shared memory has been fixed. For more details, check out the GitHub [issue](https://github.com/smallcloudai/refact/issues/262).
- **Anthropic Integration**: You can now add API keys to use third-party models such as Anthropic's!
- **stable-code-3b**: The list of available models is growing! This time, we added [stabilityai/stable-code-3b](https://huggingface.co/stabilityai/stable-code-3b)!
- **Optional API Key for OSS**: The Refact.ai self-hosted version can now be secured with an optional API key, useful for cloud deployments.
- **Build Information**: In the settings, you can now find the About page, which includes information about packages that are used, versions, and commit hashes.
- **LoRA Switch Fix**: Switching between LoRAs no longer fails to show information in the logs!
- **vLLM Out-of-Memory (OOM) Fix**: We've fixed an out-of-memory issue with vLLM in Refact.ai Enterprise!

1.3.1

**Open-Source Updates:**

- **Memory Consumption Fix** for local Cassandra.
- **Unified Volume:** One volume for all data, including the database.
- **Encodings Fix** for the fine-tuning process.
- **Minor Fixes** addressing various small issues.

**Enterprise Updates:**

- **Tag Upgrade:** Transition from `beta` to `latest` in `docker-compose.yml`. Be sure to update your compose file.
- **Runpod Support:**
  - Local database integration.
  - One storage solution for all data.
- **Minor UI Fixes:** Improvements and bug fixes.
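
The tag upgrade above amounts to one line in your compose file. A minimal sketch of the change, assuming a service named `refact` (your service name and other settings may differ):

```yaml
# docker-compose.yml (fragment)
services:
  refact:
    # before: smallcloud/refact_self_hosting:beta
    image: smallcloud/refact_self_hosting:latest
```

After editing, re-pull the image and restart the stack (e.g. `docker compose pull && docker compose up -d`).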

1.3.0

**New Models**
Expanding the list of available models with:
- **Mistral**
  - 7B
- **Mixtral**
  - 8x7B
- **Deepseek**
  - 6.7B
  - 33B
- **Magicoder**
  - 6.7B

**Statistics**
We're introducing a new feature: user statistics.
Check out the new page with informative charts to see the impact of having Refact!
![screenshotr_2023-12-6T15-31-4 (1) (1)](https://github.com/smallcloudai/refact/assets/62517920/4a5f866c-ba4a-44a7-b316-5da1c64d76c1)


**Better Docker Flow**

We've simplified our Docker image usage! Now, it's just one command:

```bash
docker run --rm --gpus all -p 8008:8008 \
  -v perm-storage:/perm_storage \
  -v refact-database:/var/lib/cassandra \
  smallcloud/refact_self_hosting:latest
```


**UI Enhancements**

- **Improved Modal Window**: The list of models now has a more structured and organized interface.
- **User Seats Information**: For the Refact Enterprise, the access control page now includes information about user seats – a feature exclusive to the enterprise plan.

**General Improvements and Chat Handlers**

1.2.0

**Deepseek-Coder Models**:
We've added support for the deepseek-coder family of models; you can use them for completion and fine-tuning.

**Codellama/7b**:
Starting today, the `codellama/7b` model is available for fine-tuning.

**Faster Fine-Tuning**:
Fine-tuning is now faster for GPUs with CUDA capabilities `8.0` and higher.
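
To check whether your GPU clears the `8.0` threshold, compare its compute capability (on recent drivers, `nvidia-smi --query-gpu=compute_cap --format=csv,noheader` reports it) against the minimum. A minimal sketch, with the capability hard-coded as an example value:

```shell
# "8.6" is a placeholder (e.g. an RTX 30-series card); substitute the
# value your nvidia-smi reports.
cap="8.6"
min="8.0"

# sort -V orders version-like strings, so if $min sorts first,
# $cap is >= 8.0 and the faster fine-tuning path applies.
if [ "$(printf '%s\n' "$min" "$cap" | sort -V | head -n1)" = "$min" ]; then
  echo "compute capability $cap: fast fine-tuning path available"
else
  echo "compute capability $cap: below 8.0, slower path"
fi
```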

**UI & Performance**:
General UI and performance improvements.

1.1.0

Model Updates

**StarCoder 1b, 3b, and 7b models** are now available for completion and fine-tuning.

Features

Upload LoRA

- **LoRA Upload**: You can now upload a LoRA, either via a direct link or by uploading the file.

Download Run

- The **"Download Run"** feature allows you to download the best checkpoint of a run.
- The **"Download Checkpoint"** feature allows you to download only the selected checkpoint.

UI & UX Improvements

- **Model Hosting Tab**: The selected finetune checkpoint associated with the model is now visible.
- **Finetune Tab**: The model selected for completion is now visible on the Finetuning page, so you can easily see which model you've selected the checkpoint for.
- **Checkpoint Selection**: You can now set the best checkpoint for a run.

Bug Fixes & Performance Improvements

- We've squashed some bugs and optimized Refact for even better performance.
