My own implementation of Stable Diffusion for me to generate reference art
Updated Mar 31, 2023
Easy-to-use diffusion in a mobile interface.
Using SageMaker and LoRA to fine-tune the Stable Diffusion model and generate fashion images
Generates images from input text.
Generate 3D assets with a text prompt or an image.
Generate images with a text prompt.
A web app that allows you to select a subject and then change its background, OR keep the background and change the subject.
Uses Stable Diffusion with Hugging Face diffusers: takes one image as input, adds elements to it with the diffusion process, and iterates three or four times, each time adding a different type of element specified by a text prompt.
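A minimal sketch of this iterative img2img loop. The helper is pipeline-agnostic; the commented wiring assumes diffusers' `StableDiffusionImg2ImgPipeline`, and the model id and prompts shown are illustrative assumptions, not taken from the repo.

```python
# Iteratively add elements to an image: each pass feeds the previous output
# back in as the new init image, so each text prompt contributes one element.
# Intended for diffusers' StableDiffusionImg2ImgPipeline (pip install diffusers torch).

def iterative_edit(pipe, image, prompts, strength=0.6):
    """Run img2img once per prompt, chaining outputs between passes."""
    for prompt in prompts:
        # pipe(...) returns an object whose .images list holds PIL images
        image = pipe(prompt=prompt, image=image, strength=strength).images[0]
    return image

# Example wiring (hypothetical model id and prompts):
# from diffusers import StableDiffusionImg2ImgPipeline
# pipe = StableDiffusionImg2ImgPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("cuda")
# out = iterative_edit(pipe, init_image, ["add a lantern", "add snow", "add birds"], strength=0.55)
```

A moderate `strength` keeps earlier additions intact while each new prompt edits the composite.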
An adaptation of a notebook provided as part of a class by Hugging Face.
Generating images with diffusion models on a mobile device, with an intranet GPU box as the backend.
A tool to create synthetic image data.
Experimental Stable Diffusion XL Webui
Configuration files for building E621-Rising v3 SDXL model and dataset
Easily create your own AI avatar images!
diffusion model for unconditional image generation of Bored Apes
A repo providing some Stable Diffusion experiments on the textual inversion and captioning tasks.
Implementation of Paint-with-Words with Stable Diffusion using the diffusers pipeline: a method from eDiff-I that lets you generate an image from a text-labeled segmentation map.
Toolchain for creating custom datasets and training Stable Diffusion (1.x, 2.x, XL) models and LoRAs
Collection of OSS models that are containerized into a serving container
Diffusers API in OCaml