- Fully revamped the training utils. Check out the [Training 101 guide](https://refine.rs/guides/training_101/) for a gentle start.
- Added [SDXL-Lightning](https://arxiv.org/abs/2402.13929). #305
- Added [Latent Consistency Models](https://arxiv.org/abs/2310.04378) and [LCM-LoRA](https://arxiv.org/abs/2311.05556) for Stable Diffusion XL. #297
- Added the [Style Aligned adapter](https://arxiv.org/abs/2312.02133) for Stable Diffusion models. #289
- Added the [ControlLoRA (v2) adapter](https://github.com/HighCWu/control-lora-v2) for Stable Diffusion XL. #285
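Several of the adapters above (LCM-LoRA, ControlLoRA) build on the LoRA idea of patching a frozen weight with a trainable low-rank residual. As a conceptual sketch only — in plain Python lists, not Refiners' actual adapter API — the adapted forward pass computes `W @ x + scale * B @ (A @ x)`, where `A` (rank × in_dim) and `B` (out_dim × rank) are the small trained matrices:

```python
# LoRA concept sketch (illustrative, not Refiners' API): the base weight W is
# frozen, and only the low-rank factors A and B are trained.

def matvec(m, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(w * x for w, x in zip(row, v)) for row in m]

def lora_forward(W, A, B, scale, x):
    """y = W @ x + scale * (B @ (A @ x)) — base output plus low-rank residual."""
    base = matvec(W, x)
    low_rank = matvec(B, matvec(A, x))
    return [b + scale * r for b, r in zip(base, low_rank)]

# 2x2 identity base weight with a rank-1 adapter.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 1.0]]    # rank x in_dim = 1 x 2
B = [[0.5], [0.5]]  # out_dim x rank = 2 x 1
print(lora_forward(W, A, B, scale=1.0, x=[1.0, 2.0]))  # -> [2.5, 3.5]
```

Setting `scale=0.0` recovers the base model's output exactly, which is why LoRAs can be toggled or blended at inference time.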
Full Changelog: [v0.3.1...v0.4.0](https://github.com/finegrain-ai/refiners/compare/v0.3.1...v0.4.0)
0.3.1
What's Changed
- Improved typing for `Module` and `Chain` (see #257 for details)
- Added missing PyPI classifiers (e.g. supported Python versions) to `pyproject.toml`
Full Changelog: [v0.3.0...v0.3.1](https://github.com/finegrain-ai/refiners/compare/v0.3.0...v0.3.1)
0.3.0
What's Changed
- Launched a full-fledged documentation website, available at [https://refine.rs](https://refine.rs)
- Added a new `SDLoraManager` to easily load one or multiple community LoRAs (e.g. from CivitAI)
- Entirely revamped the LoRA adapter and added support for `Conv2dLora`
- Fixed various minor issues in solvers (formerly known as schedulers)
- Added [Euler's method](https://arxiv.org/abs/2206.00364) to the solvers
- Expanded IP-Adapter to support multiple image prompts
- Refactored IP-Adapter for better composability, e.g. with LoRAs
- Added dense mask prompt support to Segment Anything
- Added [DINOv2](https://github.com/facebookresearch/dinov2) to the foundation models for high-performance visual features
- Added [FreeU](https://github.com/ChenyangSi/FreeU) for improved quality at no extra cost
- Converted the test weights conversion script from Bash to Python
- Replaced [Poetry](https://python-poetry.org/) with [Rye](https://rye-up.com/) for Python dependency management
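Euler's method is the simplest diffusion solver: at each step it estimates the probability-flow ODE derivative from the denoiser's prediction and moves the sample linearly from the current noise level to the next. A minimal scalar sketch in the style of Karras et al. (arXiv:2206.00364) — not Refiners' actual `Solver` API:

```python
# One Euler step of a diffusion sampler (conceptual sketch, not Refiners' API).
# d = (x - denoised) / sigma estimates the ODE derivative; the sample then
# moves linearly from the current sigma to the next one.

def euler_step(x, denoised, sigma, sigma_next):
    d = (x - denoised) / sigma           # derivative estimate at sigma
    return x + (sigma_next - sigma) * d  # linear step toward sigma_next

# With an idealized denoiser that always predicts 0, the sample
# shrinks in proportion to the noise schedule:
x = 10.0
for sigma, sigma_next in [(10.0, 5.0), (5.0, 1.0), (1.0, 0.0)]:
    x = euler_step(x, denoised=0.0, sigma=sigma, sigma_next=sigma_next)
print(x)  # -> 0.0
```

In a real sampler, `denoised` comes from the UNet at each step and `x` is a latent tensor rather than a scalar, but the update rule is the same.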
Full Changelog: [v0.2.0...v0.3.0](https://github.com/finegrain-ai/refiners/compare/v0.2.0...v0.3.0)
0.2.0
What's Changed
- Added [Restart Sampling](https://github.com/Newbeeer/diffusion_restart_sampling) for improved image generation ([example](https://github.com/Newbeeer/diffusion_restart_sampling/issues/4))
- Added [Self-Attention Guidance](https://github.com/KU-CVLAB/Self-Attention-Guidance/) to avoid e.g. overly smooth images ([example](https://github.com/SusungHong/Self-Attention-Guidance/issues/4))
- Added [T2I-Adapter](https://github.com/TencentARC/T2I-Adapter) for extra guidance ([example](https://github.com/TencentARC/T2I-Adapter/discussions/93))
- Added [MultiDiffusion](https://github.com/omerbt/MultiDiffusion) for e.g. panorama images
- Added [IP-Adapter](https://github.com/tencent-ailab/IP-Adapter), aka image prompt ([example](https://github.com/tencent-ailab/IP-Adapter/issues/92))
- Added [Segment Anything](https://github.com/facebookresearch/segment-anything) to the foundation models
- Added [Stable Diffusion XL 1.0](https://github.com/Stability-AI/generative-models) to the foundation models
- Made it possible to add new concepts to the CLIP text encoder, e.g. via [Textual Inversion](https://arxiv.org/abs/2208.01618)
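MultiDiffusion produces images larger than the model's native resolution by denoising overlapping windows and averaging their predictions per position. The fusion step can be sketched in one dimension as follows (an illustrative sketch only, not Refiners' implementation):

```python
# MultiDiffusion-style fusion (conceptual sketch): accumulate per-window
# predictions over a 1-D "canvas" and average wherever windows overlap,
# which keeps the seams between windows smooth.

def fuse_windows(length, windows):
    """windows: list of (start, values) predictions over a 1-D canvas."""
    acc = [0.0] * length
    count = [0] * length
    for start, values in windows:
        for i, v in enumerate(values):
            acc[start + i] += v
            count[start + i] += 1
    return [a / c for a, c in zip(acc, count)]

# Two length-4 windows overlapping on positions 2-3: the overlap is averaged.
print(fuse_windows(6, [(0, [1.0] * 4), (2, [3.0] * 4)]))
# -> [1.0, 1.0, 2.0, 2.0, 3.0, 3.0]
```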
Full Changelog: [v0.1.0...v0.2.0](https://github.com/finegrain-ai/refiners/compare/v0.1.0...v0.2.0)
0.1.0
Initial release:
- Initiated core APIs (aka Fluxion), in particular: `Chain`, `Context` and `Adapter`
- Foundation model: Stable Diffusion 1.5
- Adapters: ControlNet, LoRA, Reference-Only Control
- Optional training utils
- Scripts for converting model weights into the Refiners format
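The core idea behind Fluxion's `Chain` can be illustrated with a toy stand-in — a sequential container that is itself callable and can nest other containers. This is a plain-Python sketch of the pattern, not Refiners' actual `Chain`/`Context` machinery:

```python
# Toy Chain-like container (illustrative only): calling the chain pipes the
# input through each layer in order, and chains can nest inside chains.

class Chain:
    def __init__(self, *layers):
        self.layers = list(layers)

    def __call__(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

double = lambda x: x * 2
inc = lambda x: x + 1

# Nesting: a Chain can hold sub-chains, which is what makes swapping or
# adapting a targeted sub-chain (the Adapter pattern) convenient.
model = Chain(Chain(double, inc), double)
print(model(3))  # -> 14
```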