Hello! In this update, we are excited to introduce collaborative features, the ability to skip benchmark polling, and output node selection. Check out the details below to learn more about these new features.
**1. Collaborative Features**
***
<figure><img src="https://cdn.owlite.ai/release240523/project_share.png" alt=""><figcaption></figcaption></figure>
Work together with your colleagues more efficiently. With our collaborative features, you can now share the compression process and achieve the best results faster. When creating a project or editing settings, you can invite users within the same workgroup by entering their username.
* **Invite Colleagues:** Enter a colleague's username when setting up a project to invite users within your workgroup.
* **Collaborative Optimization:** Work together on the same project and compare optimization results in one place.
* **Note:** You cannot edit experiments owned by others, but you can duplicate them and start a new experiment once the collaborator finishes their work.
* **Availability:** This feature is available for users on the Lite plan and above.
**2. Skip Benchmark Polling**
***
Finding the best optimization requires repeated trial and error. Multiple tests and benchmarks need to be performed to achieve the desired results, but waiting for each benchmark to finish can be time-consuming. To solve this, you can pass an argument to the benchmark function that lets it run on the server while you move on to other tasks.
```python
owl.benchmark(download_engine=False)
```
* **Feature Addition:** Pass `download_engine=False` to the benchmark function to run the benchmark on the server without waiting for it to complete.
* **Note:** To receive the TensorRT engine, remove the `download_engine=False` argument and rerun the code.
* **Usage Scenario:** Benchmark multiple configurations back to back, then rerun only the optimal experiment to receive its TensorRT engine.
**3. Output Node Selection**
***
<figure><img src="https://cdn.owlite.ai/release240523/output_quantization.png" alt=""><figcaption></figcaption></figure>
You can now quantize the output node of your AI model when needed. While the recommended settings are sufficient for most use cases, we offer a range of quantization options to help you find the best optimization for your model, reflecting our philosophy of providing tailored solutions.
* **Feature Description:** Quantize the output node of your AI model if needed.
We hope these new features enhance your workflow and make your tasks more efficient and convenient. If you have any additional needs or suggestions, please feel free to let us know. Thank you!