My own implementation of Stable Diffusion for me to generate reference art
Updated Mar 31, 2023
Using SageMaker and LoRA to fine-tune the Stable Diffusion model and generate fashion images
An easy-to-use diffusion interface for mobile.
Generating images with diffusion models on a mobile device, with an intranet GPU box as backend
A tool to create synthetic image data
Diffusion model for unconditional image generation of Bored Apes
A web app that allows you to select a subject and then change its background, OR keep the background and change the subject.
Configuration files for building E621-Rising v3 SDXL model and dataset
Toolchain for creating custom datasets and training Stable Diffusion (1.x, 2.x, XL) models and LoRAs
Generates images from input text.
An adaptation of a notebook provided as part of a class by Hugging Face
Generate 3D assets with a text prompt or an image.
Generate images with a text prompt.
A repo providing Stable Diffusion experiments on the textual inversion and captioning tasks
Experimental Stable Diffusion XL Webui
Diffusers API in OCaml
Collection of OSS models that are containerized into a serving container
Uses Stable Diffusion with Hugging Face's diffusers library: takes one image as input, adds elements to it via text prompts using the diffusion algorithm, and iterates this process three to four times with different kinds of elements.
Implementation of Paint-with-Words with Stable Diffusion using the diffusers pipeline: a method from eDiff-I that lets you generate an image from a text-labeled segmentation map.
Easily create your own AI avatar images!