Welcome to the unofficial ComfyUI subreddit. Please share your tips, tricks, and workflows for using this software to create your AI art. A lot of people are just discovering this technology and want to show off what they created, so above all, be nice: belittling their efforts will get you banned. Please keep posted images SFW.

I've been reading about and playing with ADetailer for a few days. I just made the move from A1111 to ComfyUI a few days ago, so any tips are greatly appreciated. tl;dr: just check "enable ADetailer" and generate like usual; it'll work just fine with the default settings. I use ADetailer to find and enhance pre-defined features. A1111 is REALLY unstable compared to ComfyUI.

Apr 24, 2025: Hello, I've been using Stable Diffusion for a while now, and recently I've been trying to migrate to ComfyUI, but I'm struggling to get good results out of the ADetailer step.

Dec 15, 2024: I come from Forge UI, and the way it's done there is HiRes Fix -> ADetailer.

See, this is another big problem with IP-Adapter (and me): it's totally unclear what it's all for and what it should be used for.

Both of my images have the flow embedded, so you can simply drag and drop an image into ComfyUI and it should open up the flow, but I've also included the JSON in a zip file. It picked up the LoRAs, prompt, seed, etc. It did not pick up the ADetailer settings (expected, though there are nodes out there that can accomplish the same things). ADetailer is actually doing something now, however minor. I also had issues with this workflow with unusually sized images. I've also seen a similar look when ADetailer is used with Turbo models and certain samplers. This is the first time I've seen a face-and-hand ADetailer in a ComfyUI workflow.

Also, bypass the AnimateDiff Loader and connect the original model loader to the To Basic Pipe node, or it will give you noise on the face (the AnimateDiff loader doesn't work on a single image, it needs at least 4 frames, and FaceDetailer can handle only 1).

The thing that is insane is testing face fixing (I used SD 1.5 just to compare times): the initial image took 127.5 ms to generate, and 9 seconds total to refine it.

Is Stable Diffusion's ADetailer just better? Does it also upscale the mask? Sometimes in ComfyUI I even get worse results than the preview. The first pic is without ADetailer and the second is with it. I didn't use any ADetailer prompt.

Uncharacteristically, it's not as tidy as I'd like, mainly due to a challenge I have with passing the checkpoint/model name through reroute nodes.

ADetailer and the others are just more automated extensions for inpainting: you don't really need a separate model to place a mask on a face (you can do it yourself), and that's all that ADetailer and the other detailer extensions do.

Currently my "fix" for poor facial details at 1024x1024 resolution (SDXL) is two-cycle ksampling: ending the first sampler at 8/24 steps and…

Installation is complicated and annoying to set up; most people would have to watch YouTube tutorials just to get A1111 installed properly. And the new interface is also an improvement, as it's cleaner and tighter.

Hi there. To encode the image you need to use the "VAE Encode (for inpainting)" node, which is under latent -> inpaint. It works just like the regular VAE encoder, but you need to connect it to the mask output from the Load Image node.
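Since the point of that node is the image-plus-mask contract, here is a minimal sketch of the same pattern using the diffusers inpainting pipeline rather than the ComfyUI node (an illustration, not the node's implementation); the file names and prompt are placeholders:

```python
# Minimal inpainting sketch with diffusers (not the ComfyUI node itself):
# white areas of the mask get regenerated, black areas are preserved.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

image = Image.open("portrait.png").convert("RGB")  # placeholder input image
mask = Image.open("face_mask.png").convert("L")    # hand-drawn or detector-generated mask

result = pipe(prompt="detailed face", image=image, mask_image=mask).images[0]
result.save("detailed.png")
```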
I tend to like the mediapipe detectors because they're a bit less blunt than the square box selections of the YOLO ones. In other words, their detection maps conform better to faces, especially the mesh model, so it often avoids making changes to hair and background (in that noticeable way you can sometimes see when not using an inpainting model).

For video, put ImageBatchToImageList > FaceDetailer > ImageListToImageBatch > Video Combine.

Turn ADetailer on, try the default settings, no prompt. Just tried it again and it worked with an image I generated in A1111 earlier today.

ADetailer was the only real thing I was missing coming from SDNext, but thanks to mcmonkey and fiddling around a bit I got ADetailer-like functionality running without too much trouble. My guess, and it's purely a guess, is that ComfyUI wasn't using the best cross-attention optimization. It is pretty amazing, but man, the documentation could use some TLC, especially on the example front.

I set up a workflow for a first pass and a highres pass. This wasn't the case before updating to the newest version of A1111.

From ChatGPT: "Guide to Enhancing Illustration Details with Noise and Texture in Stable Diffusion" (based on 御月望未's tutorial). The guide explores a technique for significantly enhancing the detail and color in illustrations using noise and texture.

As the title suggests, I'm using ADetailer for Comfy (the Impact Pack) and it works well. The problem is that I'm using a LoRA to style the face after a specific person, and the FaceDetailer node makes the face clearly "better" but kind of destroys the similarity and facial traits. Any way to preserve the "LoRA effect" and still fix imperfect faces? I even tried ADetailer, but Roop always runs after ADetailer, so it didn't help either.

So when I tried it that way in ComfyUI, it comes out anywhere from a little weird (eyes too far apart, sharp lines, not consistent with the overall style) to really bad (extremely deformed, with ears and eyes not where there are supposed to be any).

How exactly do you use it to fix hands? When I use default inpainting to fix hands, the result is also not good, no matter the checkpoint and the denoise value.

Used the Eyes ADetailer from Civitai and sam_vit_l_0b3195.pth. Testing the same prompt keeps giving me the same result, except that this time the eye on the right is the one that came out good.

BTW, that pixelated image looks like it could be because the wrong VAE is being used.

Oct 29, 2023: Or is there a custom node that can take its place? ComfyUI doesn't have "ADetailer"; instead, it has "FaceDetailer". See "ADetailer for ComfyUI" on r/StableDiffusion (reddit.com); it's available through the extension manager.

The ADetailer model is for face/hand/person detection. The detection threshold controls how sensitive detection is (higher = stricter = fewer faces detected, so it will ignore a blurred face on a background character); the detected region is then masked. Most "ADetailer" model files I have found work when placed in the Ultralytics bbox folder. I'm new to all of this, and I have been looking online for bbox or seg models that are not on the models list in the ComfyUI Manager. My main source is Civitai.
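As a concrete illustration of that detection half (a model finds faces, a threshold filters them, the hits become a mask), here is a minimal sketch using the ultralytics package; the file names and the 0.3 threshold are example values:

```python
# Minimal sketch of the detection step behind ADetailer/FaceDetailer:
# run a YOLO face model, keep boxes above a confidence threshold, and
# rasterize them into a mask that the inpainting pass can consume.
from ultralytics import YOLO
from PIL import Image, ImageDraw

model = YOLO("face_yolov8n.pt")        # a detection model from the Ultralytics bbox folder
image = Image.open("gen.png")

results = model(image)
mask = Image.new("L", image.size, 0)   # black = keep, white = regenerate
draw = ImageDraw.Draw(mask)

for box in results[0].boxes:
    if float(box.conf) >= 0.3:         # "detection threshold": higher = stricter
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        draw.rectangle([x1, y1, x2, y2], fill=255)

mask.save("face_mask.png")
```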
Hi guys: ADetailer can easily fix and generate beautiful faces, but when I tried it on hands, it only made them even worse. If ADetailer is not capable of doing it, what's your suggestion? ADetailer works OK for faces, but SD still doesn't know how to draw hands well, so don't expect any miracles. If you want good hands without precise control over the pose, you add a LoRA, put "hands" in the negative prompt, and use ADetailer for the fine retouch if needed. Under the "ADetailer model" menu select "hand_yolov8n.pt" and give it a prompt like "hand"; it will attempt to automatically detect hands in the generated image and try to inpaint them with the given prompt.

Okay, so it's completely tested out, and the refiner is not used as img2img inside ComfyUI.

You can use the SEGS detailer in ComfyUI: if you create a mask around the eye, it will upscale the eye to a higher resolution of your choice, like 512x512, and downscale it back.

If you're using ComfyUI you can right-click on a Load Image node and select "Open in MaskEditor" to draw an inpainting mask. More flexible.
The default settings for ADetailer are making faces much worse. That said, I'm looking for a front-end face swap: something that will inject the face into the mix at the point of the KSampler, so if I prompt for something like freckles it won't get lost in the swap/upscale, but I've still got my likeness. Before switching to ComfyUI I used the FaceSwapLab extension in A1111; that was the reason why I preferred it over the ReActor extension in A1111. ComfyUI only has ReActor, so I was hoping the dev would add it too.

Use Ultralytics to get either a bbox or SEGS and feed that into one of the many Detailer nodes, and you can automate a step that works on the face up close.

Hopefully, some of the most important extensions, such as ADetailer, will be ported to ComfyUI. Most of them already are if you are using the dev branch, by the way. I want to install "adetailer" and "dddetailer"; the installation instructions say they go into the "extensions" folder, but there is none in ComfyUI. With ComfyUI you just download the portable zip file, unzip it, and get ComfyUI running instantly; even a kid can get ComfyUI installed.

I am using AnimateDiff + ADetailer + HiRes, but when using AnimateDiff + ADetailer in the webui the face appears unnatural (in the webui, ADetailer runs after the AnimateDiff generation, making the final video look unnatural). I am curious whether I can use AnimateDiff and ADetailer simultaneously in ComfyUI without any issues. I've never tried to generate a whole video with denoising at 1; maybe I will give it a try.

I am using ADetailer (max 0.4 denoise) after Roop and CodeFormer, and then Ultimate SD Upscale and a normal upscaler with UltraSharp. I have to push around 0.3 denoise in order to get rid of jaggies; unfortunately it diminishes the likeness during the Ultimate Upscale. Despite a relatively low 0.2 noise value, it changed quite a bit of the face. I guess with ADetailer denoising at 0.3 it is not that important. I tried to upscale a low-res image in img2img with ADetailer on; it still doesn't do much.

I'm actually using ADetailer recognition models in Auto1111, but they are limited and cannot be combined in the same pass.

Hi, is there a tutorial on how to do a workflow with face restoration in ComfyUI? I downloaded the Impact Pack, but I really don't know how to go from… I was waiting for this.

Change max_size in the FaceDetailer node to 1024 whenever you're using SDXL models, 512 for SD 1.5. It's getting oversaturated because FaceDetailer essentially just detects where the face is, crops that region along with a mask matching only the face, resizes the region to max_size, does an img2img at low denoise, and then resizes the regenerated face back to the original size and patches it into the image. It is what only-masked inpainting does automatically.
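A rough sketch of that crop-and-patch mechanic, with the diffusion step stubbed out as a hypothetical img2img() call (the real node also feathers the mask and handles batching; this only shows the geometry):

```python
# Sketch of the FaceDetailer mechanic described above; img2img() is a
# hypothetical stand-in for a low-denoise sampling pass on the face crop.
from PIL import Image

def detail_region(image, box, prompt="detailed face", max_size=1024, denoise=0.4):
    x1, y1, x2, y2 = box
    crop = image.crop(box)

    # Upscale the crop so the sampler has enough resolution to add detail.
    scale = max_size / max(crop.size)
    hires = crop.resize((round(crop.width * scale), round(crop.height * scale)),
                        Image.LANCZOS)

    refined = img2img(hires, prompt=prompt, denoise=denoise)  # hypothetical helper

    # Downscale the regenerated face and patch it back into the source image.
    patch = refined.resize(crop.size, Image.LANCZOS)
    result = image.copy()
    result.paste(patch, (x1, y1))
    return result
```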
This one took 35 seconds to generate in A1111 on a 3070 8GB, with a pass of ADetailer.

I observed that using ADetailer with SDXL models (both Turbo and non-Turbo variants) leads to an overly smooth skin texture in upscaled faces, devoid of the natural imperfections and pores. There are some distortions, and faces look more proportional but uncanny. The easiest solution to that is to specify a different sampler for ADetailer.

I always wanted to get into ComfyUI due to speed. Noticed that speed was almost the same as A1111 on my 3080, and forgot ComfyUI even existed. Then I bought a 4090 a couple of weeks ago (2, I think). Tried ComfyUI just to see: tweaked a bit and reduced basic SDXL generation to 6-14 seconds. Continued with extensions and got ADetailer, ControlNet, etc. with literally a click.

The video was pretty interesting, beyond the A1111 vs. Comfy speed comparison. We know A1111 was using xformers, but we weren't told, as far as I noticed, what ComfyUI was using.

A few days ago I installed it; the speed is amazing, but I can do almost nothing. I just want to be able to select a model, a VAE if necessary, and a LoRA, and that's it. I managed to find a simple SDXL workflow but nothing else. I'm beginning to ask myself if that's even possible in ComfyUI. I got a nice tutorial from here; it seems to work. The amount of control you can have is frigging amazing with Comfy. But the problem I have with ComfyUI is unfortunately not how long it takes to figure out; I just find it clunky. The creator has recently opted into posting YouTube examples which have zero audio, captions, or anything to explain to the user what exactly is happening in the workflows being generated.

Mar 23, 2024: Hey, this is my first ComfyUI workflow, hope you enjoy it! I've never shared a flow before, so if it has problems please let me know.

I tried with "detailed face, realistic eyes, etc." but the results were basically the same. Also, take out all the "realistic eyes" stuff in your positive/negative prompts; that voodoo does nothing for better eyes. Good eyes come from good resolution; to increase the face resolution during txt2img, you use ADetailer.

And the clever tricks discovered from using ComfyUI will be ported to the Automatic1111 WebUI. Just make sure you update if it's already installed. It's amazing the quality of images that you can get with simple prompts, even panoramic images.

How to install ComfyUI: https://youtu.be/ynfNJEtvUtQ
How to install the ComfyUI Manager: https://youtu.be/dyrhPVRsy9w
ComfyUI Impact Pack: https://github.com/ltdrdata/ComfyUI-Impact-Pack
Update: I went ahead and reinstalled SD.Next, since that one is apparently kept more up to date, and so far this has made a difference. Hell, it probably works better with mcmonkey's implementation now that I understand the ins and outs.

It seems I may have made a mistake in my setup, as the results for the faces after ADetailer are not turning out well. However, I get subpar results compared to ADetailer from the webui. When I do two passes the end result is better, although it still falls short of what I got on the webui with ADetailer, which is strange, as they work in the same way from what I understand. But it's reasonably clean to be used for learning.

I wanted to set up a chain of two FaceDetailer instances in my workflow: one for faces, the other for hands. Following this, I utilize FaceDetailer to enhance faces (similar to ADetailer for A1111). If there is only one face in the scene, there is no need for a node workflow.

It's no longer maintained; do you have any recommendation for a custom node that can be used in ComfyUI (with the same functionality as ADetailer in A1111) besides FaceDetailer? Someone pointed me toward the ComfyUI-Impact-Pack, but it's too much for me; I can't quite get it right, especially for SDXL. Hello guys, sorry to ask, but I searched for hours (documentation, the internet, even the Impact Pack source code) and found no way to add a new bbox_detector, or to add something similar to the "adetailer" plugin from Automatic1111.

Currently I don't think ComfyUI lets you output outside the output folder, but we could add options for choosing subfolders within it and template-based file names. Will add display of other image metadata, like models and seeds, soon; they're already loaded from the file, just not shown in the UI yet.

I'm looking for a way to inpaint everything except certain parts of the image.

I just released version 4.0 of my AP Workflow for ComfyUI. Help me make it better! I call it "The Ultimate ComfyUI Workflow": easily switch from txt2img to img2img, with a built-in refiner, LoRA selector, upscaler, and sharpener. If you want the ComfyUI workflow, let me know.

The original author of ADetailer was kind enough to merge my changes. That extension already had a tab with this feature, and it made a big difference in output. Maybe I will fork the ADetailer code and add it as an option. Here's the repo with the install instructions (you'll have to uninstall the wildcards extension you already have): sd-webui-wildcards-ad, and the ADetailer repo: sd-webui-adetailer.

Or just throw the image into img2img and run ADetailer alone (with "skip img2img" checked), then photoshop the results to get good hands and feet.

Hello cool Comfy people! Happy New Year. Anything wrong here with this workflow?

Thanks for the reply: I'm familiar with ADetailer, but I'm actually deliberately looking for something that does less. Giving me the mask and letting me handle the inpainting myself would give me more flexibility. The best use case is to just let it img2img on top of a generated image with an appropriate detection model; you can use the img2img tab and check "Skip img2img" for testing on a preexisting image like this one. While that's true, this is a different approach. ADetailer doesn't require an inpainting checkpoint or ControlNet, etc.; simpler is better.

For something similar, I generate images with a low number of steps and no ADetailer/upscaler/etc. Then, when I get one I like, I drag it back into the UI to recreate the exact workflow, up the step count, and enable the extra quality features that were in groups set to bypass.

Hello, I have been trying to find a solution for fixing multiple faces in a single photo, but I am unable to do so. In a scene such as a bar full of people, whether I use the A1111 ADetailer or the ComfyUI FaceDetailer, every time there is more than one person in a photo the face fixing just adds the same face to every single character. I wish there was some way to force ADetailer to look for its subjects only in a specific region; that could help alleviate some of this. Is there a way to have it only do the main (largest) face (or, better yet, an arbitrary number), like you can in ADetailer? Any time there's a crowd it'll try to do them all, and it ends up giving them all the expression of the main subject.

OP, you can greatly improve your results by generating and then using ADetailer on your upscale, and instead of using a single ADetailer prompt you can choose the option to prompt faces individually from left to right. That way you can address each one respectively, e.g. doing one face at a time with more control over the prompts.
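One way to implement "prompt faces individually from left to right" on top of the earlier sketches: sort the detected boxes by their left edge and pair each with its own prompt. Everything here (the boxes, the prompts, and detail_region()) is an assumption carried over from the hypothetical sketches above:

```python
# Hypothetical per-face pass, left to right, building on detail_region() above.
# `image` is the PIL image from the earlier sketch; boxes are example detections.
face_boxes = [(612, 88, 760, 270), (142, 96, 300, 280)]
prompts = ["woman with freckles", "bearded man"]   # one prompt per face

for box, prompt in zip(sorted(face_boxes, key=lambda b: b[0]), prompts):
    image = detail_region(image, box, prompt=prompt)
```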
To clarify: there is a script in Automatic1111 -> Scripts -> X/Y/Z plot that promises to let you test each ADetailer model, the same as you would a regular checkpoint, CFG scale, or number of steps. While I can select that script and plug in the different ADetailer models, it does not seem to have any effect. Now, unfortunately, I couldn't find anything helpful or even an answer via Google or YouTube, nor here with the sub's search function.

It can help you do similar things to what the ADetailer extension does in A1111. For example, the ADetailer extension automatically detects faces, masks them, creates new faces, and scales them to fit the masks.

Regarding the integration of ADetailer with ComfyUI, there are known limitations that might affect this process. Specifically, "img2img inpainting with skip img2img is not supported" due to bugs, which could be a potential issue for ComfyUI integration.

I'm using ComfyUI portable and had to install it into the embedded Python install. Going to python_embedded and running python -m pip install compel got the nodes working. I will have to play with it more to be sure it's working properly, but it looks like that may have been the issue.

I'm using face_yolov8n_v2, and that works fine. However, the latest update has a "YOLO World" model, and I realized I don't know how to use the yolov8x and related models, other than as pre-defined models as above.

Man, you're damn right! I would never be able to do this in A1111; I would be stuck in A1111's predetermined flow order. Hi. Now a world of possibilities has opened; next, I will try to use a segment node to separate the face, upscale it, add a LoRA or detailer to fine-tune the face details, rescale to the source image size, and paste it back.

One thing about human faces is that they are all unique. Is it true that we will forever be limited by the smaller-size model from the original author? Can someone shed some light on this, please? Thanks a lot.

Clicking and dragging to move around a large field of settings might make sense for large workflows or complicated setups, but the downside is, obviously, a loss of simple cohesion. I do a lot of plain generations; ComfyUI is…