MFLUX

Latest version: v0.6.1


MFLUX v.0.6.0 Release Notes

Major New Features

🌐 Third-Party HuggingFace Model Support
- Comprehensive ModelConfig refactor to support compatible HuggingFace dev/schnell models
- Added ability to use models like `Freepik/flux.1-lite-8B-alpha` and `shuttleai/shuttle-3-diffusion`
- New `--base-model` parameter to specify which base architecture (dev or schnell) a third-party model is derived from
- Maintains backward compatibility while opening up the ecosystem to community-created models
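As a sketch, a dev-derived community model could be loaded like this. Only `--base-model` is new in this release; the `--model`, `--prompt`, `--steps`, and `--seed` flags are assumed from the standard `mflux-generate` interface:

```shell
# Use a community checkpoint derived from the dev architecture.
# --base-model tells MFLUX which base weights/config to assume.
mflux-generate \
  --model Freepik/flux.1-lite-8B-alpha \
  --base-model dev \
  --prompt "a lighthouse on a rocky coast at dusk" \
  --steps 20 \
  --seed 42
```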

🎭 In-Context LoRA
- Added support for In-Context LoRA, a powerful technique that allows you to generate images in a specific style based on a reference image without requiring model fine-tuning
- Introduced a new command-line tool: `mflux-generate-in-context`
- Includes 10 pre-defined styles from the Hugging Face ali-vilab/In-Context-LoRA repository
- Detailed documentation on how to use this feature effectively with prompting tips and best practices
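A hypothetical invocation might look like the following; the flag names for the reference image are assumptions, not confirmed by these notes, so check `mflux-generate-in-context --help` for the actual interface:

```shell
# Generate an image in the style of a reference image, without fine-tuning.
# Flag names below are illustrative only.
mflux-generate-in-context \
  --prompt "a product photo of a ceramic mug" \
  --image-path reference.png \
  --steps 20
```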

🔌 Automatic LoRA Downloads
- Added ability to automatically download LoRAs from Hugging Face when specified by repository ID
- Simplifies workflow by eliminating the need to manually download LoRA files before use
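Assuming the existing `--lora-paths` flag is what now accepts Hugging Face repository IDs (an assumption based on the description above), usage might look like:

```shell
# Pass a Hugging Face repo ID instead of a local .safetensors path;
# MFLUX downloads the LoRA automatically on first use.
mflux-generate \
  --model dev \
  --prompt "a watercolor fox" \
  --steps 20 \
  --lora-paths "ali-vilab/In-Context-LoRA"
```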

🧠 Memory Optimizations
- Added `--low-ram` option to reduce GPU memory usage by constraining the MLX cache size and releasing text encoders and transformer components after use
- Implemented memory saver for ControlNet to reduce RAM requirements
- General memory usage optimizations throughout the codebase
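A minimal sketch of the new flag in use (the surrounding flags are assumed from the standard `mflux-generate` interface):

```shell
# Constrain the MLX cache size and release text encoders and
# transformer components after use, trading speed for lower RAM.
mflux-generate \
  --model schnell \
  --prompt "a misty forest at dawn" \
  --steps 2 \
  --low-ram
```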

🔢 Enhanced Quantization Options
- Added support for 3-bit and 6-bit quantization (requires mlx > v0.21.0)
- Expanded quantization options now include 3, 4, 6, and 8-bit precision
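Assuming the quantization flag is `--quantize` as in earlier releases (an assumption; these notes do not name the flag), the new precisions slot in directly:

```shell
# 3-bit and 6-bit now join 4-bit and 8-bit; requires mlx > v0.21.0.
mflux-generate \
  --model schnell \
  --prompt "a paper crane on a desk" \
  --steps 2 \
  --quantize 6
```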

Interface Improvements

🔧 Modified Parameters

- The previous `--init-image-path` parameter is now `--image-path`
- The previous `--init-image-strength` parameter is now `--image-strength`
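With the rename, an img2img call would now read as follows (the model, prompt, and steps flags are assumed from the standard interface):

```shell
# Formerly --init-image-path / --init-image-strength.
mflux-generate \
  --model dev \
  --prompt "the same scene as an oil painting" \
  --steps 25 \
  --image-path original.png \
  --image-strength 0.4
```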

🖼️ Image Generation Enhancements
- Added `--auto-seeds` option to generate multiple images with random seeds in a single command
- Added option to overwrite previously saved test images
- Added `--controlnet-save-canny` option to save the Canny edge detection reference image used by ControlNet
- Improved handling of edge cases for img2img generation
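A sketch combining the new flags; `--controlnet-image-path` is an assumed name for the ControlNet reference input, and the other base flags are assumed from the standard interface:

```shell
# Generate four images with random seeds in one command, and keep
# the Canny edge map that ControlNet conditioned on.
mflux-generate \
  --model dev \
  --prompt "a brutalist building at golden hour" \
  --steps 25 \
  --auto-seeds 4 \
  --controlnet-image-path sketch.png \
  --controlnet-save-canny
```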

🔄 Callback System
- Implemented a general callback mechanism for more flexible image generation pipelines
- Added support for before-loop callbacks to accept latents
- Enhanced StepwiseHandler to include initial latent

Architecture Improvements

🏗️ Code Refactoring
- Removed 'init' prefix for a more general interface
- Removed `ConfigControlnet` - the `controlnet_strength` attribute is now on `Config`
- Refactored model configuration system
- Refactored transformer blocks for better maintainability
- Unified attention mechanism in single and joint attention blocks
- Added support for variable numbers of transformer blocks
- Optimized with fast SDPA (Scaled Dot-Product Attention)
- Added PromptCache for small optimization when generating with repeated prompts

🧰 Developer Tools
- Added Batch Image Renamer tool as an isolated uv run script
- Added descriptive comments for attention computations

Compatibility Updates
- Updated to support the latest mlx version
- Fixed compatibility issues with HuggingFace dev/schnell models

Bug Fixes
- Fixed handling of edge cases for img2img generation
- Various small fixes and improvements throughout the codebase


Contributors

- anthonywu
- ssakar
- azrahello
- DanaCase

v.0.5.1
**Bug fix**: Fixed a bug that caused locally saved quantized models to fail to set LoRA weights. With this fix, users can keep a quantized model on local disk and load it with an external LoRA adapter.

v.0.5.0
Features
- **DreamBooth fine-tuning support**: V1 of fine-tuning support in MFLUX.

Developer Experience Improvements
- **Better weight handling**: Completely rewritten LoRA weight handling
- **Better test coverage**: Includes more tests to cover new and existing features (such as multi-LoRA and local model saving)
- **New dependencies**:
  - Adds matplotlib as a dependency for visualizing training loss
  - Adds the toml library as a dependency for better handling of MFLUX version metadata



v.0.4.1
Fix img2img for non-square image resolutions

v.0.4.0
Features
- **Img2Img Support**: Introduced the ability to generate images based on an initial reference image.
- **Image Generation from Metadata**: Added support to generate images directly from provided metadata files.
- **Progressive Step Output**: Optionally output each step of the image generation process, allowing for real-time monitoring.

Developer Experience Improvements
- **Enhanced Command-Line Argument Handling**: Improved parsing and validation for command-line arguments.
- **Automated Testing**: Added automatic tests for image generation and command-line argument handling.
- **Pre-Commit Hooks**: Integrated pre-commit hooks with `ruff`, `isort`, and typo checks for better code consistency.


v.0.3.0
- ControlNet Canny support
- Enhanced dev experience with uv, ruff, makefile, pre-commit, and more.
- Ability to export quantized model with LoRA weights baked in.
- Official MIT license is in place.

v.0.2.1
Better LoRA support

v.0.2.0

- Official PyPI release: `pip install mflux` -- Big thanks to deto for letting us have this name!
- New commands:
  - `mflux-generate` for generating an image
  - `mflux-save` for saving a quantized model to disk
- Support for quantized models (4 bit and 8 bit)
- Support for loading trained LoRA weights
- Automatically saves metadata when saving an image
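The two commands above can be combined: save a quantized copy once, then generate from the local path. The `--path` flag name for the save location is an assumption:

```shell
# Save an 8-bit quantized copy of the model, then generate from it.
mflux-save --model schnell --quantize 8 --path ./schnell-8bit
mflux-generate --model ./schnell-8bit --prompt "a red bicycle" --steps 2
```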
