
jordenyt/stable_diffusion_sketch


Stable Diffusion Sketch

Do more, more simply, with your A1111 SD-webui on your Android device: inpainting, txt2img, and img2img on your sketches and photos with just a few clicks.

Download APK

Notes

  • A1111 SD-webui 1.8.0 and earlier do not support a separate sampler and scheduler. When you update your SD-webui to 1.9.x, please also update the custom mode JSON and the default sampler in the app.
  • A1111 SD-webui 1.7.0 and earlier do not support SDXL Inpainting models. Please update to the latest release.
  • There are several SDXL Inpainting models on Civitai. For instance, JuggerXL_inpaint and RealVisXL V3.0 may be good choices.

Screenshots

Supported Features

  • Support ControlNet
  • Support SDXL
  • Support SDXL Turbo / Lightning
  • Support SDXL Inpainting
  • Autocomplete LORA tag in prompt
  • Autocomplete Phrase setup
  • Autocomplete for Custom Mode
  • Select Style for prompt
  • Sketch with color
  • Create new paint from:
    • Blank Canvas
    • Capture from camera
    • Output of Stable Diffusion txt2img
    • Shared image from other apps
  • Enhance your sketch with Stable Diffusion
    • Preset Modes:
      • img2img(sketch) + Scribble(sketch)
      • txt2img + Canny(sketch)
      • txt2img + Scribble(sketch)
      • SDXL (turbo) txt2img
      • SDXL img2img
      • Inpainting (background)
      • Inpainting (sketch)
      • Partial Inpainting (background)
      • Partial Inpainting (sketch)
    • Special Modes:
      • Outpainting
      • Fill with Reference
      • Merge with Reference
    • 12 Custom Modes
  • Painting Tools:
    • Palette
    • Paintbrush
    • Eyedropper
    • Eraser
    • Undo/redo
    • Zooming / Panning
  • Preset values for your prompt
    • Prompt Prefix
    • Prompt Postfix
    • Negative Prompt
  • 4 canvas aspect ratios: wide landscape, landscape, portrait and square
  • Upscaler
  • Long press image on Main Screen to delete
  • Group related sketches
  • Support multiple ControlNet
  • Keep EXIF of shared content in your SD output
  • Batch size

Custom Modes

Custom mode can be defined in JSON format.

Examples

  1. Partial inpaint with POSE
    {"type":"inpaint","denoise":0.75, "baseImage":"background", "inpaintFill":1, "inpaintPartial":1, "cn":[{"cnInputImage":"background", "cnModelKey":"cnPoseModel", "cnModule":"openpose_full", "cnWeight":1.0, "cnControlMode":0}], "sdSize":768}
  2. Color fix
    {"type":"inpaint","denoise":0.5, "baseImage":"background", "inpaintFill":1, "inpaintPartial":1, "cn":[{"cnInputImage":"background", "cnModelKey":"cnSoftedgeModel", "cnModule":"softedge_pidinet", "cnWeight":1.0, "cnControlMode":0}], "sdSize":1024}
  3. Mild Enhance
    {"type":"inpaint","denoise":0.15, "baseImage":"background", "inpaintFill":1, "inpaintPartial":1, "cn":[{"cnInputImage":"background", "cnModelKey":"cnTileModel", "cnModule":"tile_resample", "cnModuleParamA":1, "cnWeight":1.0, "cnControlMode":0}], "sdSize":1024}
  4. Heavy Enhance
    {"type":"inpaint","denoise":0.4, "baseImage":"background", "inpaintFill":1, "inpaintPartial":1, "cn":[{"cnInputImage":"background", "cnModelKey":"cnTileModel", "cnModule":"tile_colorfix+sharp", "cnModuleParamA":5, "cnModuleParamB":0.2, "cnWeight":1.0, "cnControlMode":0}], "sdSize":1024}
  5. Partial Redraw
    {"type":"inpaint", "denoise":0.7, "baseImage":"background", "inpaintFill":1, "inpaintPartial":1, "sdSize":1024}
  6. Get similar image
    {"type":"txt2img", "cn":[{"cnInputImage":"background", "cnModelKey":"cnNoneModel", "cnModule":"reference_only", "cnWeight":1.0, "cnControlMode":2}]}
  7. Tiles Refiner
    {"type":"img2img", "denoise":0.4, "baseImage":"background", "cn":[{ "cnInputImage":"background", "cnModelKey":"cnTileModel", "cnModule":"tile_colorfix+sharp", "cnModuleParamA":4, "cnModuleParamB":0.1, "cnWeight":1.0}], "sdSize":1024}
  8. Partial Inpaint with LORA
    {"type":"inpaint", "denoise":0.9, "cfgScale":7, "inpaintFill":1, "baseImage":"background", "sdSize":1024, "model":"v1Model"}

Parameters for the mode definition JSON:

| Variable | txt2img | img2img | inpainting | Value |
|----------|---------|---------|------------|-------|
| name | O | O | O | Name of this custom mode. |
| prompt | O | O | O | Postfix appended to the prompt for this mode. |
| negPrompt | O | O | O | Postfix appended to the negative prompt for this mode. |
| type | M | M | M | txt2img - Text to Image<br>img2img - Image to Image<br>inpaint - Inpainting |
| steps | O | O | O | Integer from 1 to 120; default is 40. |
| cfgScale | O | O | O | Decimal from 0 to 30; default is 7.0. |
| model | O | O | O | v1Model - default for type=txt2img and type=img2img<br>v1Inpaint - default for type=inpaint<br>sdxlBase - default for SDXL txt2img mode<br>sdxlInpaint<br>sdxlTurbo - default for SDXL Turbo txt2img mode |
| sampler | O | O | O | Any sampler available in your A1111 webui. |
| scheduler | O | O | O | Automatic - default<br>Possible values: Uniform, Exponential, Karras, Polyexponential and SGM Uniform |
| denoise | - | M | M | Decimal from 0 to 1. |
| baseImage | - | M | M | background - background image under your drawing<br>sketch - your drawing on the background image |
| inpaintFill | - | - | O | 0 - fill (default)<br>1 - original<br>2 - latent noise<br>3 - latent nothing |
| inpaintPartial | - | - | O | 0 - inpaint the whole image (default)<br>1 - inpaint the "painted" area and paste it onto the original image |
| sdSize | O | O | O | Output resolution of SD; default is configured in settings. Suggested values: 512 / 768 / 1024 / 1280 |
| clipSkip | O | O | O | Clip skip for v1.5 models; default is configured in settings. Suggested values: 1-2 |
| cn | O | O | O | JSON array of ControlNet objects. |

(M - Mandatory; O - Optional)
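As a quick sanity check before pasting a custom mode into the app, the mandatory-field rules above can be sketched in a few lines of Python. The `validate_mode` helper is hypothetical (not part of the app); it only encodes the M/O table above.

```python
import json

# Mandatory fields per mode type, per the parameter table above.
MANDATORY = {
    "txt2img": {"type"},
    "img2img": {"type", "denoise", "baseImage"},
    "inpaint": {"type", "denoise", "baseImage"},
}

def validate_mode(raw: str) -> list:
    """Return the sorted list of missing mandatory keys for a custom-mode JSON string."""
    mode = json.loads(raw)
    required = MANDATORY[mode["type"]]
    return sorted(required - mode.keys())

# Example 5 above ("Partial Redraw"):
raw = ('{"type":"inpaint", "denoise":0.7, "baseImage":"background", '
       '"inpaintFill":1, "inpaintPartial":1, "sdSize":1024}')
print(validate_mode(raw))  # → [] (nothing missing)
```

Running the helper on a truncated mode such as `{"type":"img2img"}` reports the missing `baseImage` and `denoise` keys.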

Parameters for ControlNet Object:

| Variable | Value |
|----------|-------|
| cnInputImage | background - background image under your drawing<br>sketch - your drawing and the background image<br>reference - reference image |
| cnModelKey | cnTileModel - CN Tile Model<br>cnPoseModel - CN Pose Model<br>cnCannyModel - CN Canny Model<br>cnScribbleModel - CN Scribble Model<br>cnDepthModel - CN Depth Model<br>cnNormalModel - CN Normal Model<br>cnMlsdModel - CN MLSD Model<br>cnLineartModel - CN Line Art Model<br>cnSoftedgeModel - CN Soft Edge Model<br>cnSegModel - CN Seg Model<br>cnIPAdapterModel - CN IP-Adapter Model<br>cnxlIPAdapterModel - CN IP-Adapter XL Model<br>cnOther1Model - Other CN Model 1<br>cnOther2Model - Other CN Model 2<br>cnOther3Model - Other CN Model 3 |
| cnModel | Alternative to cnModelKey; any valid CN model name with hash code. |
| cnModule | CN module provided by ControlNet. Typical values: tile_resample / reference_only / openpose_full / canny / depth_midas / scribble_hed. For the full list, please refer to the Automatic1111 web UI. |
| cnControlMode | 0 - Balanced (default)<br>1 - My prompt is more important<br>2 - ControlNet is more important |
| cnWeight | Decimal from 0 to 1. |
| cnResizeMode | 0 - Just Resize<br>1 - Crop and Resize<br>2 - Resize and Fill (default) |
| cnModuleParamA | First parameter for the ControlNet module. |
| cnModuleParamB | Second parameter for the ControlNet module. |
| cnStart | Starting control step (default 0.0). |
| cnEnd | Ending control step (default 1.0). |
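Because the app supports multiple ControlNet units, the cn array can hold more than one object. A minimal sketch of a two-unit inpaint mode, built in Python and dumped to the compact JSON the app expects (the specific modules, weights, and step range are illustrative, not a recommendation):

```python
import json

# Hypothetical example: inpainting guided by both a pose unit and a
# lighter-weight depth unit that stops at 80% of the steps (cnEnd).
mode = {
    "type": "inpaint",
    "denoise": 0.75,
    "baseImage": "background",
    "cn": [
        {"cnInputImage": "background", "cnModelKey": "cnPoseModel",
         "cnModule": "openpose_full", "cnWeight": 1.0, "cnControlMode": 0},
        {"cnInputImage": "background", "cnModelKey": "cnDepthModel",
         "cnModule": "depth_midas", "cnWeight": 0.5, "cnControlMode": 0,
         "cnStart": 0.0, "cnEnd": 0.8},
    ],
}
print(json.dumps(mode))  # single-line JSON, ready to paste as a custom mode
```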

Demo Video (of an outdated version)

sdSketchDemo.mp4

Preset Modes Demo

| Mode | Config |
|------|--------|
| img2img(sketch) + Scribble(sketch) | {"baseImage":"sketch", "cn":[{"cnInputImage":"sketch", "cnModelKey":"cnScribbleModel", "cnModule":"none", "cnWeight":0.7}], "denoise":0.8, "type":"img2img"} |
| txt2img + Canny(sketch) | {"cn":[{"cnInputImage":"sketch", "cnModelKey":"cnCannyModel", "cnModule":"canny", "cnWeight":1.0}], "type":"txt2img"} |
| txt2img + Scribble(sketch) | {"cn":[{"cnInputImage":"sketch", "cnModelKey":"cnScribbleModel", "cnModule":"scribble_hed", "cnWeight":0.7}], "type":"txt2img"} |
| Inpainting (background) | {"baseImage":"background", "denoise":1.0, "inpaintFill":2, "type":"inpaint"} |
| Inpainting (sketch) | {"baseImage":"sketch", "denoise":0.8, "inpaintFill":1, "type":"inpaint"} |
| Partial Inpainting (background) | {"baseImage":"background", "denoise":1.0, "inpaintFill":2, "inpaintPartial":1, "type":"inpaint"} |
| Outpainting | {"baseImage":"background", "denoise":1.0, "inpaintFill":2, "type":"inpaint", "cfgScale":10.0} |
| Merge with Reference | {"baseImage":"background", "denoise":0.75, "inpaintFill":1, "type":"inpaint"} |

Prerequisites

Before using Stable Diffusion Sketch, you need to install and set up the following on your server:

  1. Stable Diffusion Web UI by AUTOMATIC1111
  2. Install sd-webui-controlnet extension on Stable Diffusion Web UI
  3. Enable the API and listen on all network interfaces by editing the launch script webui-user.bat: set COMMANDLINE_ARGS=--api --listen
  4. Put your preferred SD model under the stable-diffusion-webui/models/Stable-diffusion folder. You may select one from Civitai.
  5. Put your preferred ControlNet models under the stable-diffusion-webui/extensions/sd-webui-controlnet/models folder.
    • Scribble, Canny, Depth, Tile and Pose models are needed.
    • The default supported models can be downloaded from lllyasviel's ControlNet v1.1 Hugging Face card.
    • A ControlNet model must match your SD model to work, i.e. if your ControlNet models are built for SD1.5, your SD model must be SD1.5-based.

Usage

Here's how to use Stable Diffusion Sketch:

  1. Start the Stable Diffusion Web UI on your server.
  2. Download and install the Stable Diffusion Sketch APK on your Android device.
  3. Open the app and input the network address of your Stable Diffusion server in the "Stable Diffusion Server Address" field.
    • If your Android device and server are both on the same intranet, you can use the intranet IP, e.g. 192.168.xxx.xxx / 10.xxx.xxx.xxx. You can get this IP by running ipconfig /all on Windows or ifconfig -a on macOS/Linux.
    • If your Android device is on the public internet and your server is on an intranet, you need to configure NAT/port forwarding and DDNS on your router. In this case, use the public IP (or DDNS hostname) and the translated port number as the server address.
    • You can test the server address by opening it in your Android device's web browser. If it is valid, you will see AUTOMATIC1111's webui in the browser.
  4. In the app, select SD Model, Inpainting Model, Sampler, Upscaler and ControlNet model.
  5. Start sketching and let Stable Diffusion do the magic!
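Beyond the browser check in step 3, you can probe the webui API directly: with --api enabled, A1111 exposes REST endpoints under /sdapi/v1/, including /sdapi/v1/sd-models for listing installed checkpoints. A small sketch (the host 192.168.1.10 and port 7860 are placeholders for your own setup):

```python
import json
from urllib.request import urlopen

def models_url(host: str, port: int = 7860) -> str:
    """Build the URL of A1111's model-listing endpoint."""
    return f"http://{host}:{port}/sdapi/v1/sd-models"

url = models_url("192.168.1.10")  # placeholder address; use your server's IP
print(url)

# Uncomment to query a live server; each entry's "title" is a checkpoint name.
# with urlopen(url, timeout=5) as resp:
#     for m in json.load(resp):
#         print(m["title"])
```

If this request fails while the browser check succeeds, the server was likely started without the --api flag.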

License

Stable Diffusion Sketch is licensed under the GNU General Public License v3.0.