IOPaint


1.5

A Hugging Face access token is no longer required for the first model download.

1.3.3

[BrushNet](https://www.iopaint.com/models/diffusion/brushnet) and [PowerPaintV2](https://www.iopaint.com/models/diffusion/powerpaint_v2) can turn any normal SD1.5 model into an inpainting model.
When using any SD1.5 base model (e.g. `runwayml/stable-diffusion-v1-5`), the BrushNet/PowerPaintV2 options appear in the sidebar, and the model is downloaded automatically the first time it is used (see the example below).

For BrushNet, there are two models to choose from: `brushnet_segmentation_mask` and `brushnet_random_mask`.
Using `brushnet_segmentation_mask` means that the final inpainting result will maintain consistency with the mask shape,
while `brushnet_random_mask` provides a more general ckpt for random mask shapes.

For PowerPaintV2, just like PowerPaintV1, it was trained with "learnable task prompts" to guide the model in achieving specific tasks more effectively. These tasks include `text-guided`, `shape-guided`, `object-remove`, and `outpainting`.
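
As a minimal launch sketch (assuming the standard `iopaint start` entrypoint and its `--model`/`--device`/`--port` flags; adjust to your setup):

```bash
# Start IOPaint with a plain SD1.5 base model; the BrushNet / PowerPaintV2
# options should then appear in the web UI sidebar and download on first use.
iopaint start --model runwayml/stable-diffusion-v1-5 --device cuda --port 8080
```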



<img width="279" alt="image" src="https://github.com/Sanster/IOPaint/assets/3998421/9c1bd719-c4d4-42f9-92fb-146bc6b75b1f">

1.2.2

- Press and hold `Alt`/`Option` and use the mouse wheel to adjust the brush size
- Fix a minor bug in the Extender (outpainting)

1.2.0

Better ControlNet support in Stable Diffusion

https://lama-cleaner-docs.vercel.app/models/controlnet

- `--sd-controlnet`: enable ControlNet in Stable Diffusion
- `--sd-controlnet-method`: set the ControlNet method to use; the method can also be changed in the web UI (see the example command below)
  - control_v11p_sd15_canny
  - control_v11p_sd15_openpose
  - control_v11p_sd15_inpaint
  - control_v11f1p_sd15_depth
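
A rough sketch of how these flags combine (assuming the `lama-cleaner` entrypoint and its `--model`/`--device`/`--port` flags; check `lama-cleaner --help` for the exact names):

```bash
# Launch the SD1.5 inpainting model with ControlNet enabled, using the canny method.
lama-cleaner --model sd1.5 \
  --sd-controlnet --sd-controlnet-method control_v11p_sd15_canny \
  --device cuda --port 8080
```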

New plugin
[Anime Segmentation](https://github.com/SkyTNT/anime-segmentation): `--enable-anime-seg`

![image](https://github.com/Sanster/lama-cleaner/assets/3998421/a9de2805-1a32-4117-897b-f6cdd089097c)

New icon/logo
<img height=256 src="https://github.com/Sanster/lama-cleaner/assets/3998421/097ff906-6685-4394-baef-a38316cd06a2"/>


Other improvements
- Fix EXIF issue: https://github.com/Sanster/lama-cleaner/issues/299
- Remove scikit-image to make installation easier on Python 3.11
- Use new font: [Inter](https://github.com/rsms/inter)
- Show Stable Diffusion inpainting progress:

https://github.com/Sanster/lama-cleaner/assets/3998421/5c8c109a-5bca-4666-8fc6-c6ef07056536

- Show prev mask

https://github.com/Sanster/lama-cleaner/assets/3998421/cd96f0c1-4f90-4091-9003-e5c1944f551a

1.1.1

Use [Segment Anything](https://github.com/facebookresearch/segment-anything) model to do interactive segmentation. See demo here: https://twitter.com/sfjccz/status/1643992289294057472?s=20

```bash
--enable-interactive-seg --interactive-seg-model=vit_l --interactive-seg-device=cuda
```


- Available models:
  - vit_b: small
  - vit_l: medium (recommended)
  - vit_h: large
- Available devices:
  - cuda
  - cpu
  - mps
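
Putting it together, a minimal full launch sketch (assuming the `lama-cleaner` entrypoint and its standard `--model`/`--device` flags):

```bash
# Run the LaMa eraser model with interactive segmentation enabled (SAM vit_l on GPU).
lama-cleaner --model lama --device cuda \
  --enable-interactive-seg --interactive-seg-model=vit_l --interactive-seg-device=cuda
```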

1.0.0

This version contains a lot of features, so I set the version number to 1.0. I hope these updates will help you in your work.

Plugins


https://user-images.githubusercontent.com/3998421/229261236-c4d59f5c-d293-42ac-9d04-dd73f494f07f.mov



In post-processing after image cleaning, algorithms such as face restoration or super-resolution are often used in addition to erasing. Now you can use them directly in Lama Cleaner. See the [Plugins Doc](https://lama-cleaner-docs.vercel.app/plugins) for how to enable them (a launch sketch follows the list below).
- [RemoveBG](https://github.com/danielgatis/rembg): Remove image background
- [RealESRGAN](https://github.com/xinntao/Real-ESRGAN): Super Resolution
- [GFPGAN](https://github.com/TencentARC/GFPGAN): Face Restoration
- [RestoreFormer](https://github.com/wzhouxiff/RestoreFormer): Face Restoration
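
A rough sketch, assuming each plugin is enabled with an `--enable-*` flag as described in the Plugins Doc (exact flag names may differ; check `lama-cleaner --help`):

```bash
# Launch with the RemoveBG, RealESRGAN, GFPGAN and RestoreFormer plugins enabled.
lama-cleaner --model lama --device cuda --port 8080 \
  --enable-remove-bg --enable-realesrgan --enable-gfpgan --enable-restoreformer
```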

Other Features
- Stable Diffusion ControlNet Inpainting: thanks to https://github.com/mikonvergence/ControlNetInpaint, you can now use ControlNet inpainting with the SD1.5 model, which can make your inpainting results more consistent with the original structure. Run lama-cleaner with `--sd-controlnet` to enable it.
- Load a Stable Diffusion 1.5 model (ckpt/safetensors) from a local path: run lama-cleaner with `--model sd1.5 --sd-local-model-path /path/to/your/local/inpainting_model.ckpt` to enable it (see the example command below). You can learn how to create an inpainting model in AUTO1111's webui [here](https://www.reddit.com/r/StableDiffusion/comments/zyi24j/how_to_turn_any_model_into_an_inpainting_model/)
- MAT model VRAM usage improvement: it now defaults to fp16, which uses less VRAM and runs faster.
- Better FileManager: implements some of the improvement suggestions mentioned [here](https://github.com/Sanster/lama-cleaner/issues/241)
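
For the local-model option, a hedged example of the full command (using the flags above and your own checkpoint path):

```bash
# Launch Stable Diffusion 1.5 inpainting from a local checkpoint file.
lama-cleaner --model sd1.5 \
  --sd-local-model-path /path/to/your/local/inpainting_model.ckpt \
  --device cuda --port 8080
```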
