Imaginairy

Latest version: v14.3.0

14.0.0

- 🎉 video generation using [Stable Video Diffusion](https://github.com/Stability-AI/generative-models)
  - add `--videogen` to any image generation to create a short video from the generated image
  - or use `aimg videogen` to generate a video from an image (see the first sketch after this list)
- 🎉 SDXL (Stable Diffusion Extra Large) models are now supported.
  - try `--model opendalle` or `--model sdxl` (see the model sketch after this list)
  - inpainting and controlnets are not yet supported for SDXL
- 🎉 imaginairy is now backed by the [refiners library](https://github.com/finegrain-ai/refiners)
  - This was a huge rewrite, which is why some features are not yet supported. On the plus side, refiners supports cutting-edge features (SDXL, image prompts, etc.) that will be added to imaginairy soon.
  - [self-attention guidance](https://github.com/SusungHong/Self-Attention-Guidance), which makes image details more accurate
- 🎉 feature: larger image generations now work MUCH better and stay faithful to the composition produced at a smaller size. For example, `--size 720p --seed 1` and `--size 1080p --seed 1` will produce the same image for SD15 (see the size sketch after this list).
- 🎉 feature: loading diffusers-based models is now supported. Example: `--model https://huggingface.co/ainz/diseny-pixar --model-architecture sd15`
- 🎉 feature: qrcode controlnet!
- feature: generate word images automatically. Great for use with the qrcode controlnet: `imagine "flowers" --gif --size hd --control-mode qrcode --control-image "textimg='JOY' font_color=white background_color=gray" -r 10`
- feature: opendalle 1.1 added. `--model opendalle` to use it
- feature: added `--size` parameter for more intuitive sizing (e.g. 512, 256x256, 4k, uhd, FHD, VGA, etc.)
- feature: detect if the wrong torch version is installed and provide instructions on how to install the proper version
- feature: better logging output: color, error handling
- feature: support for pytorch 2.0
- feature: command line output significantly cleaned up and easier to read
- feature: adds `--composition-strength` parameter to the cli (416)
- performance: lower memory usage for upscaling
- performance: lower memory usage at startup
- performance: add sliced attention to several models (lowers memory use)
- fix: simpler memory management that avoids some of the previous bugs
- deprecated: support for python 3.8, 3.9
- deprecated: support for torch 1.13
- deprecated: support for Stable Diffusion versions 1.4, 2.0, and 2.1
- deprecated: image training
- broken: samplers other than ddim
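
A minimal, hedged sketch of the new video workflow. Only the `--videogen` flag and the `aimg videogen` command come from the notes above; the prompt is illustrative, and the standalone command's argument names are not listed here, so check its built-in help.

```bash
# Generate an image and then turn it into a short video (Stable Video Diffusion):
imagine "an old sailing ship in a storm" --videogen

# Or animate an existing image with the standalone command; its input-image
# argument is not named in these notes, so consult the help output:
aimg videogen --help
```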
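
A sketch of the new model-selection flags, built only from options named in the notes above; the prompts are placeholders.

```bash
# SDXL and OpenDalle (inpainting and controlnets are not yet supported for SDXL):
imagine "a watercolor of a lighthouse" --model sdxl
imagine "a watercolor of a lighthouse" --model opendalle

# Load a diffusers-format checkpoint directly from Hugging Face:
imagine "pixar style portrait of a corgi" \
  --model https://huggingface.co/ainz/diseny-pixar --model-architecture sd15
```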
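
A sketch of the new `--size` handling and the composition-consistency behavior; the flag values are the ones listed above, and the prompt is a placeholder.

```bash
# Same seed at two resolutions; with SD15 the compositions should match:
imagine "a foggy mountain village at dawn" --size 720p --seed 1
imagine "a foggy mountain village at dawn" --size 1080p --seed 1

# --size also accepts pixel values and named presets (512, 256x256, 4k, uhd, FHD, VGA, ...):
imagine "a foggy mountain village at dawn" --size 256x256
```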

13.2.1

- fix: pydantic models for the http server are working now. Fixes 380
- fix: install triton so the annoying message is gone

13.2.0

- fix: allow tile_mode to be set to True or False for backward compatibility
- fix: various pydantic issues have been resolved
- feature: switch to pydantic 2.3 (faster but was a pain to migrate)

13.1.0

- feature: *api server now has feature parity with the python API*. View the docs at http://127.0.0.1:8000/docs after running `aimg server` (see the sketch after this list)
  - `ImaginePrompt` is now a pydantic model and can thus be sent over the REST API
  - images are expected in base64 string format
- fix: pin pydantic to 2.0 for now
- build: better python 3.11 incompatibility messaging (fixes 342)
- build: add minimum versions to requirements to improve dependency resolution
- docs: add a discord link
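
A minimal sketch of exercising the REST API. Only `aimg server` and the /docs URL are taken from these notes; any specific generation route should be read off the interactive docs rather than assumed.

```bash
# Start the HTTP server; it now exposes the same features as the python API:
aimg server

# Then open the interactive docs in a browser to see the available routes and the
# JSON shape of `ImaginePrompt` (images are sent and returned as base64 strings):
#   http://127.0.0.1:8000/docs
```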

13.0.1

- feature: show full stack trace when there is an api error
- fix: make lack of support for python 3.11 explicit
- fix: add some routes to match StableStudio routes

13.0.0

- 🎉 feature: multi-controlnet support. pass in multiple `--control-mode`, `--control-image`, and `--control-image-raw` arguments (see the sketch after this list).
- 🎉 feature: add colorization controlnet. improve `aimg colorize` command
- 🎉🧪 feature: Graphical Web Interface [StableStudio](https://github.com/Stability-AI/StableStudio). run `aimg server` and visit http://127.0.0.1:8000/
- 🎉🧪 feature: API server `aimg server` command. Runs an http webserver (not finished). After running, visit http://127.0.0.1:8000/docs for the api.
- 🎉🧪 feature: API support for [Stability AI's new open-source Generative AI interface, StableStudio](https://github.com/Stability-AI/StableStudio).
- 🎉🧪 feature: "better" memory management. If the GPU is full, the least-recently-used model is moved to RAM. I'm not confident this works well.
- feature: [disabled] inpainting controlnet can be used instead of the finetuned inpainting model
  - The inpainting controlnet doesn't work as well as the finetuned model
- feature: python interface allows configuration of controlnet strength
- feature: show full stack trace on error in cli
- fix: hide the "triton" error messages
- fix: package will not try to install xformers on `aarch64` machines. While this will allow the dockerfile to build on
MacOS M1, [torch will not be able to use the M1 when generating images.](https://github.com/pytorch/pytorch/issues/81224#issuecomment-1499741152)
- build: specify proper Pillow minimum version (fixes 325)
- build: check for torch version at runtime (fixes 329)
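
A hedged sketch of the multi-controlnet syntax: the repeated `--control-mode`/`--control-image` pairing comes from the entry above, but the mode names `canny` and `depth` and the image filenames are assumptions, so check `imagine --help` for the modes your install actually supports.

```bash
# Pair each --control-mode with its own --control-image (or --control-image-raw):
imagine "a futuristic city street at night" \
  --control-mode canny --control-image outline.png \
  --control-mode depth --control-image depth_map.png
```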
