ComfyUI workflow examples (GitHub)


ComfyUI is a nodes/graph/flowchart interface for experimenting with and building complex Stable Diffusion workflows without needing to code anything. It fully supports SD1.x, SD2.x, SDXL, Stable Video Diffusion, Stable Cascade, SD3 and Stable Audio. Check ComfyUI here: https://github.com/comfyanonymous/ComfyUI. For some workflow examples, and to see what ComfyUI can do, you can check out the ComfyUI Examples repo (comfyanonymous/ComfyUI_examples); it contains examples of what is achievable with ComfyUI.

Installing ComfyUI: there is Flux.1 ComfyUI install guidance, a workflow and an example; that guide is about how to set up ComfyUI on your Windows computer to run Flux.1. ComfyUI Manager: a plugin for ComfyUI that helps detect and install missing plugins. ComfyUI ControlNet aux: a plugin with preprocessors for ControlNet, so you can generate images directly from ComfyUI. Always refresh your browser and click refresh in the ComfyUI window after adding models or custom_nodes. Once a workflow is loaded, go into the ComfyUI Manager and click Install Missing Custom Nodes; this should update and may ask you to click restart.

Img2Img Examples. These are examples demonstrating how to do img2img. Img2Img works by loading an image like this example image, converting it to latent space with the VAE, and then sampling on it with a denoise lower than 1.0; the denoise controls the amount of noise added to the image. The input image can be found here; it is the output image from the hypernetworks example.

Lora Examples. These are examples demonstrating how to use LoRAs; all LoRA flavours (Lycoris, loha, lokr, locon, etc.) are used this way. You can load this image in ComfyUI to get the workflow that shows how to use the LCM SDXL lora with the SDXL base model. The important parts are to use a low CFG, the "lcm" sampler, and the "sgm_uniform" or "simple" scheduler. (Note: this workflow uses LCM.)

SDXL Examples. The SDXL base checkpoint can be used like any regular checkpoint in ComfyUI. The only important thing is that, for optimal performance, the resolution should be set to 1024x1024 or another resolution with the same number of pixels but a different aspect ratio. Sytan SDXL ComfyUI: a very nice workflow showing how to connect the base model with the refiner and include an upscaler.

All the examples in SD 1.5 use SD 1.5 trained models from CivitAI or HuggingFace, as well as the gsdf/EasyNegative textual inversions (v1 and v2). You should install them if you want to reproduce the exact output from the samples (most examples use a fixed seed for this reason), but you are free to use any models. Different samplers and schedulers are supported. (I got the Chun-Li image from civitai.)

Example prompts used in these workflows include "knight on horseback, sharp teeth, ancient tree, ethereal, fantasy, knva, looking at viewer from below, japanese fantasy, fantasy art, gauntlets, male in armor standing in a battlefield, epic detailed, forest, realistic gigantic dragon, river, solo focus, no humans, medieval, swirling clouds, armor, swirling waves, retro artstyle, cloudy sky, stormy environment, glowing red eyes, blush" and "A vivid red book with a smooth, matte cover lies next to a glossy yellow vase. The vase, with a slightly curved silhouette, stands on a dark wood table with a noticeable grain pattern."

Upscale Model Examples. Here is an example of how to use upscale models like ESRGAN: put them in the models/upscale_models folder, then use the UpscaleModelLoader node to load them and the ImageUpscaleWithModel node to apply them, as in the sketch below.
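The same wiring can be expressed in ComfyUI's API-format prompt JSON. The following is only an illustrative sketch: the node class names and input fields are from the stock ComfyUI node set as I remember it and may differ across versions, and the filenames (`input.png`, `RealESRGAN_x4plus.pth`) are placeholders you would swap for your own files.

```python
import json

# Minimal API-format graph: LoadImage -> ImageUpscaleWithModel <- UpscaleModelLoader -> SaveImage.
# Keys are node ids; a link is written as ["<source node id>", <output index>].
upscale_graph = {
    "1": {"class_type": "LoadImage",
          "inputs": {"image": "input.png"}},                    # placeholder input image
    "2": {"class_type": "UpscaleModelLoader",
          "inputs": {"model_name": "RealESRGAN_x4plus.pth"}},   # a file from models/upscale_models
    "3": {"class_type": "ImageUpscaleWithModel",
          "inputs": {"upscale_model": ["2", 0], "image": ["1", 0]}},
    "4": {"class_type": "SaveImage",
          "inputs": {"images": ["3", 0], "filename_prefix": "upscaled"}},
}

# Write the graph out so it can be queued later (see the API example further down).
with open("upscale_api_workflow.json", "w") as f:
    json.dump(upscale_graph, f, indent=2)
```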
Some quick usage notes from the example workflows: press "Queue Prompt" once and start writing your prompt; I then recommend enabling Extra Options -> Auto Queue in the interface. You can use the Test Inputs to generate exactly the same results that I showed here. Let's get started!

The regular KSampler is incompatible with FLUX; instead, you can use the Impact/Inspire Pack's KSampler with Negative Cond Placeholder. (From the same discussion: "I used CFG but it made the image blurry; I used the regular KSampler node.") SD3, by contrast, performs very well with the negative conditioning zeroed out, as in the SD3 ControlNet example.

Flux Schnell: you can get the Flux Schnell checkpoint here and put it in your ComfyUI/models/checkpoints/ directory. Flux ControlNets: XLab and InstantX + Shakker Labs have released ControlNets for Flux. You can find the InstantX Canny model file here (rename it to instantx_flux_canny.safetensors for the example below), the Depth ControlNet here and the Union ControlNet here; you can then load or drag the following image in ComfyUI to get the workflow. Mixing ControlNets is also covered.

For the Stable Cascade examples I have renamed the files by adding stable_cascade_ in front of the filename, for example stable_cascade_canny.safetensors and stable_cascade_inpainting.safetensors. Additionally, if you want to use the H264 codec you need to download OpenH264 1.8.0 and place it in the root of ComfyUI (example: C:\ComfyUI_windows_portable). FFV1 will complain about an invalid container; you can ignore this, the resulting MKV file is readable.

Sep 2, 2024: after successfully installing the latest OpenCV Python library with torch 2.0+CUDA, you can uninstall torch, torchvision, torchaudio and xformers based on version 2.0 and then reinstall a higher version of torch, torchvision, torchaudio and xformers; here is an example of uninstallation and reinstallation.

Custom node packs and related repositories referenced by these examples:
- PhotoMaker for ComfyUI (shiimizu/ComfyUI-PhotoMaker-Plus).
- 🖌️ ComfyUI implementation of the ProPainter framework for video inpainting (daniabib/ComfyUI_ProPainter_Nodes).
- ComfyUI nodes for LivePortrait (kijai/ComfyUI-LivePortraitKJ).
- Dynamic prompt expansion, powered by GPT-2 locally on your device (Seedsa/ComfyUI-MagicPrompt).
- A collection of post-processing nodes for ComfyUI, which enable a variety of cool image effects (EllangoK/ComfyUI-post-processing-nodes).
- Kolors ComfyUI native sampler implementation (Kolors的ComfyUI原生采样器实现, MinusZoneAI/ComfyUI-Kolors-MZ).
- ReActor: the Face Masking feature is available now, just add the "ReActorMaskHelper" node to the workflow and connect it as shown in the example; the ReActorBuildFaceModel node got a "face_model" output to provide a blended face model directly to the main node (basic workflow 💾).
- Improved AnimateDiff integration for ComfyUI, as well as advanced sampling options dubbed Evolved Sampling, usable outside of AnimateDiff. Please read the AnimateDiff repo README and Wiki for more information about how it works at its core; AnimateDiff workflows will often make use of these helpful nodes.
- A fork with support for Document Visual Question Answering (DocVQA) using the Florence2 model: DocVQA allows you to ask questions about the content of document images, and the model will provide answers based on the visual and textual information in the document.

BizyAir news: [2024/07/16] 🌩️ the BizyAir ControlNet Union SDXL 1.0 node is released; [2024/07/23] 🌩️ the BizyAir ChatGLM3 Text Encode node is released; [2024/07/25] 🌩️ users can load BizyAir workflow examples directly by clicking the "☁️BizyAir Workflow Examples" button.

Please consider a GitHub Sponsorship or PayPal donation (Matteo "matt3o" Spinelli): the only way to keep the code open and free is by sponsoring its development, and the more sponsorships the more time I can dedicate to my open source projects. Check my ComfyUI Advanced Understanding videos on YouTube, for example part 1 and part 2.

👏 Welcome to my ComfyUI workflow collection! To give something back to everyone I have put together a rough platform; if you have feedback, suggestions, or would like me to implement a feature, you can open an issue or email me at theboylzh@163.com. Related collections: diffustar/comfyui-workflow-collection (ComfyUI workflow experiments and examples), common workflows and resources for generating AI images with ComfyUI, and a collection of simple but powerful ComfyUI workflows for Stable Diffusion with curated default settings.

All the images in these repos contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create the image; you can take many of the images you see in this documentation and drop them into ComfyUI to load the full node structure. If you want to check that metadata outside of ComfyUI, see the sketch below.
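To see what a generated image actually carries before dropping it into ComfyUI, the embedded workflow can be read from the PNG text chunks. This is a minimal sketch assuming Pillow is installed and that the image was saved by a stock ComfyUI SaveImage node, which typically stores the graph under the `prompt` and `workflow` keys; other tools may strip or rename this metadata, and the filename below is a placeholder.

```python
import json
from PIL import Image  # pip install pillow

def read_embedded_workflow(path: str) -> dict:
    """Return whatever workflow metadata a ComfyUI-generated PNG carries."""
    info = Image.open(path).info  # PNG text chunks end up in .info
    found = {}
    for key in ("workflow", "prompt"):  # keys ComfyUI typically uses
        if key in info:
            found[key] = json.loads(info[key])
    return found

meta = read_embedded_workflow("ComfyUI_00001_.png")  # placeholder filename
print(list(meta.keys()) or "no workflow metadata found")
```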
[Last update: 01/August/2024] Note: you need to put the Example Inputs Files & Folders under the ComfyUI root directory (ComfyUI\input) before you can run the example workflow. For use cases, please check out the Example Workflows (Aug 1, 2024).

Inside ComfyUI, you can save workflows as a JSON file. One of the guides covers the following topic: loading the .json workflow file from the C:\Downloads\ComfyUI\workflows folder.

Some users have reported problems loading workflows from images: "I've encountered an issue where, every time I try to drag PNG/JPG files that contain workflows into ComfyUI—including examples from new plugins and unfamiliar PNGs that I've never brought into ComfyUI before—I receive a notification stating that the workflow cannot be read. I have not figured out what this issue is about." A similar problem has been reported by more than one user, and another asks: "Hello, I'm wondering if the ability to read workflows embedded in images is connected to the workspace configuration."

There are also hosted and cloud options. This sample repository provides a seamless and cost-effective solution to deploy ComfyUI, a powerful AI-driven image generation tool, on AWS; it provides comprehensive infrastructure code and configuration, leveraging ECS, EC2, and other AWS services. ComfyICU provides a robust REST API that allows you to seamlessly integrate and execute your custom ComfyUI workflows in production environments; their API is designed to help developers focus on creating innovative AI experiences without the burden of managing GPU infrastructure. The any-comfyui-workflow model on Replicate is a shared public model, which means many users will be sending workflows to it that might be quite different from yours; the effect of this is that the internal ComfyUI server may need to swap models in and out of memory, which can slow down your prediction time.

This Truss is designed to run a ComfyUI workflow that is in the form of a JSON file. However, the regular JSON format that ComfyUI saves from the editor will not work here; the workflow needs to be exported in ComfyUI's API format instead. A minimal sketch of submitting such an API-format workflow to a locally running ComfyUI server follows below.
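As a concrete illustration of the API-format point above, here is a minimal sketch of queueing an exported workflow against a locally running ComfyUI instance over its built-in HTTP API. It assumes the server is on the default 127.0.0.1:8188 address and that `workflow_api.json` was saved with ComfyUI's "Save (API Format)" option; it only queues the prompt and does not wait for or download the results.

```python
import json
import uuid
import urllib.request

SERVER = "http://127.0.0.1:8188"  # default local ComfyUI address

with open("workflow_api.json") as f:   # API-format export, not the regular editor JSON
    prompt_graph = json.load(f)

payload = json.dumps({
    "prompt": prompt_graph,
    "client_id": str(uuid.uuid4()),    # lets you match progress messages to this request later
}).encode("utf-8")

req = urllib.request.Request(f"{SERVER}/prompt", data=payload,
                             headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read()))     # the response includes the queued prompt id
```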
As a reminder, you can save these image files and drag or load them into ComfyUI to get the workflow. Any image generated with ComfyUI has the whole workflow embedded into itself, and to review any workflow you can simply drop the JSON file onto your ComfyUI work area. The following images can be loaded in ComfyUI the same way: download an image, then load it or drag it onto ComfyUI to get the workflow.

CosXL Edit: a sample workflow for running CosXL Edit models, such as my RobMix CosXL Edit checkpoint (comfyui-workflows/cosxl_edit_example_workflow.json at main · roblaughter/comfyui-workflows). A CosXL Edit model takes a source image as input alongside a prompt, and interprets the prompt as an instruction for how to alter the image, similar to InstructPix2Pix.

An All-in-One FluxDev workflow in ComfyUI combines various techniques for generating images with the FluxDev model, including img-to-img and text-to-img. This workflow can use LoRAs and ControlNets, and enables negative prompting with KSampler, dynamic thresholding, inpainting, and more. Here is an example of how to use the Canny ControlNet, and here is an example of how to use the Inpaint ControlNet; the example input image can be found here.

This is the input image that will be used in this example. Here is an example using a first pass with AnythingV3 with the ControlNet and a second pass without the ControlNet with AOM3A3 (Abyss Orange Mix 3) and using their VAE.

Inpainting a cat with the v2 inpainting model; inpainting a woman with the v2 inpainting model. It also works with non-inpainting models. ComfyUI also has a mask editor that can be accessed by right-clicking an image in the LoadImage node and choosing "Open in MaskEditor".

Other notes: elevation and azimuth are in degrees and control the rotation of the object. starter-person.json is a workflow to generate pictures of people and optionally upscale them x4, with the default settings adjusted to obtain good results fast. If you prefer to inspect a workflow file without launching ComfyUI at all, a minimal sketch follows below.
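For reviewing a saved workflow outside of ComfyUI, the editor-format JSON can be read directly. This is a minimal sketch under the assumption that the file was saved from the ComfyUI editor, which stores a top-level `nodes` list and a `links` list; the exact schema can vary between ComfyUI versions, and the filename is just the example workflow referenced above.

```python
import json
from collections import Counter

with open("cosxl_edit_example_workflow.json") as f:  # any editor-format workflow file
    workflow = json.load(f)

nodes = workflow.get("nodes", [])
print(f"{len(nodes)} nodes, {len(workflow.get('links', []))} links")

# Count node types so missing custom nodes are easy to spot before loading the graph.
for node_type, count in Counter(n.get("type", "?") for n in nodes).most_common():
    print(f"{count:3d} x {node_type}")
```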