Creating consistent AI characters is now easier than ever. I've just upgraded my free consistent-character workflows based on your feedback, making them easier to install and adding lots of cool new features. I'm calling this one the Consistent Character Creator, and here's how it works: you use Flux or SDXL to generate a new character, or you can use an image of an existing character, whether it's real or AI-generated. Just drag and drop that in, click generate, and the workflow will automatically create high-resolution views of your character from every direction, generate different emotions that you can easily customize using sliders, and put your character in different lighting conditions and environments that you can change with a simple prompt. And here's the best part: the new workflow will now automatically assemble all these images into two character sheets in different formats, so you can quickly share your characters or keep track of your iterations. In this video I'm going to show you how to use this workflow, walk you through the installation step by step, and share some advanced tips and tricks. By the end, you'll know how to train your own AI model
of your character, so you can create endless consistent AI characters for your AI movies, comics, or social media. This workflow combines several powerful AI tools and models to create these character sheets. At its heart is a tool called Multi-View Adapter, which uses SDXL to generate consistent 3D views of your character based on an input image. We're also incorporating AI upscaling for higher-quality results, Florence-2 for automatic prompt generation, Advanced Live Portrait to create different expressions, and IC-Light to place your characters in different lighting environments. I've combined and connected all these models inside a free tool called ComfyUI, a node-based interface for Stable Diffusion and other AI models. I'll show you how to set everything up step by step at the end of this video; however, this is a very advanced workflow, so I highly recommend you familiarize yourself with the tool before diving in. The workflow comes in three versions: Flux, SDXL, and SDXL Lightning, which is suitable for lower-VRAM environments. I recommend you start with the SDXL version, as it allows you to create a wide variety of styles, and it's also
a very good balance between speed and quality. Once you have everything set up, you can just drag and drop the workflow file into the ComfyUI interface and it will load up the workflow. I know this looks like a lot, and it kind of is, but most of it happens automatically. Let's say we already have an image of a character that we want to use. We can go up here, where we can activate or deactivate different parts of the workflow, and deactivate this first group, because we don't need to generate a character; we already have one. Then we can give our character a name. So we skip the first group, go to the second group, and upload a character image. I recommend using an image like this: a character with a neutral expression, in a T-pose, in front of a simple background. But any kind of pose will work; it doesn't have to be a T-pose. Next, make sure that for this node the input is set to two; if it's one, it would try to load the character generated in the first group, but since we already have a character, let's switch it to two. Now we can pretty much just click Queue Prompt and wait for the whole thing to finish, so I'm going to grab a coffee. Now that it's done, let me quickly walk you through all the steps, so you know exactly what happens and where you can change things. Let's first zoom in on the first group. In this group, you pretty much only need to load in
the image of your character, or, if you want to generate a new character with this workflow, set this node to one. Otherwise it will just take this image, create a prompt from it, and automatically generate the different views of your character based on that prompt and image. Make sure you're using the correct pipelines: under adapter name it should be this one, and under pipeline name it should be this one. The next thing this part does is extract the face of your character and place it in front of a gray background. As you can see, the quality is not that great, and even though the poses up here look pretty good, their quality isn't great either. That's what this next group is for. First, on the left, a prompt is automatically created for your character; if you want to add extra information, you can do that down here in the character description box. This then gets fed into the upscaling setup. I'm combining a tile
ControlNet with a typical sampler setup here. You can play around with the denoise value in the sampler here: higher values allow the AI to change your image more, while lower values keep it closer to the original. The same goes for the tile ControlNet strength: if you set it higher, the original image influences the result more and keeps it more consistent, but it also prevents the AI from adding the extra detail and quality that we want. So we need to find a balance here, but these values usually work pretty well. Up here you can compare the input image and the generated image. That worked really well, but the eye color changed a little, so in this case I would go back to the description box and add something like "brown eyes". Next to that, pretty much the same thing happens, but for the body poses. Here I'm using a tile ControlNet again (I also give you the option to use a Canny ControlNet, but the tile one usually works really well), and I'm loading in an additional IP-Adapter that helps keep the generated image closer to the original input. Again, you can play around with the ControlNet strength and the denoise strength up here. These values worked pretty well; for example, here it started to fix the face and the hands a little, but it's not perfect yet, so I might try increasing the denoise value to give it some more freedom. Honestly, though, this is pretty much enough, because this is only the first step.
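To build intuition for that denoise balance: in an img2img-style sampler, the denoise value roughly controls what fraction of the diffusion steps gets re-run on the input image. Here's a minimal Python sketch of that relationship (illustrative only, not the workflow's actual node code):

```python
def effective_steps(total_steps: int, denoise: float) -> int:
    """Approximate img2img behavior: only the last `denoise` fraction
    of the scheduler's steps is executed, so a low denoise value
    keeps the output close to the input image."""
    if not 0.0 < denoise <= 1.0:
        raise ValueError("denoise must be in (0, 1]")
    return max(1, round(total_steps * denoise))

# 0.35 denoise over 20 steps re-runs only 7 of them: enough freedom
# to add detail while staying close to the original character.
print(effective_steps(20, 0.35))  # -> 7
print(effective_steps(20, 1.0))   # -> 20 (a full re-generation)
```

This is why a low denoise preserves identity but can't fix much, while a high denoise fixes more but risks drifting from the character.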
Next comes the second upscaling group. The first part upscales the image to the value you specify here; I'm doubling the size, and with the denoise value you can again decide how much the AI is allowed to change the image. This usually works really well as a starting point, and you can see the quality increased a lot, but the faces still look kind of weird. That's what the second part is for: this face detailer looks only at the faces and adds some extra detail, and again you can play around with the denoise strength here. Once that's done, all the individual pose images are saved out, and now we can select which image to use for the rest of the workflow. Let's say we want to use the frontal face image here; you can see this is image number one, and that's what is selected up here. But we could also select the full-body pose, which is image number two, by changing this to two up here, and then the different emotions would be added to the full-body poses instead.
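That selector behaves like a simple one-based switch; conceptually (a sketch, not the node's actual implementation):

```python
def image_switch(select: int, *inputs):
    """Mimic a ComfyUI-style image switch: `select` is 1-based,
    e.g. 1 = frontal face crop, 2 = full-body pose."""
    if not 1 <= select <= len(inputs):
        raise IndexError(f"select must be between 1 and {len(inputs)}")
    return inputs[select - 1]

print(image_switch(1, "face_crop.png", "full_body.png"))  # -> face_crop.png
```

Everything downstream (emotions, relighting) simply receives whichever image the switch passes through.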
The workflow would also place the full-body view of your character into the different lighting environments. For now, though, I'll set it to one to use the high-quality frontal face image. In this group, the emotions for your character are created, and you can change them super easily by just playing around with the sliders: for example, let's make her wink here and maybe raise the eyebrows. Beautiful. Next to that we have the lighting group, and the way it works is that you go up here and create the different environments you want to put your character in. I filled in these varied example prompts: we have a desert, a club, a sunset, and a national park. The character is then composited on top of these backgrounds down here, and you can see this just works really, really well; the character stays absolutely consistent.
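Under the hood this amounts to pairing one character description with a list of background prompts. A rough sketch (the prompt strings are just examples, not the workflow's built-in defaults):

```python
character = "a young woman with brown eyes, neutral expression"
environments = [
    "a vast desert at golden hour",
    "a neon-lit night club",
    "a beach at sunset",
    "a national park with pine trees",
]

# One relighting prompt per environment; IC-Light then matches the
# character's lighting to each generated background.
prompts = [f"{character}, standing in {env}" for env in environments]
for p in prompts:
    print(p)
```

Swapping an environment string is all it takes to relight the character in a completely different setting.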
If you want, you can also change the input for these faces down here. Right now we have the neutral pose for all of these images, but maybe you want to use the different emotions from before instead; in that case, go down here and change this input switch to two. Below that, you can also decide whether to blur the background a little; I did that just so it looks a bit more like a real photograph. Click Queue Prompt again and now we have these varied poses. Finally, in this last group you don't have to do anything, since it just creates the character sheets for you.
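Conceptually, assembling a sheet is just tiling the images into a grid; here's a minimal sketch of the cell-placement math (the layout values are made up for illustration):

```python
def grid_positions(n_images: int, columns: int, cell_w: int, cell_h: int):
    """Return the (x, y) pixel offset for each image when pasting
    them row by row into a sheet that is `columns` cells wide."""
    return [((i % columns) * cell_w, (i // columns) * cell_h)
            for i in range(n_images)]

# Six 512x512 renders in three columns -> two rows of three.
print(grid_positions(6, 3, 512, 512))
```

Each offset is where one render gets pasted onto the sheet canvas, row by row.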
This final group takes all the images we just created and puts them into these two formats, and that's pretty much the whole workflow. If you don't want to use an image as your input, you simply go to the top, switch on character generation, and change this node to one so it selects the first input. Download the free OpenPose T-pose from my Patreon and load it in here; this makes the workflow create a character in a T-pose. Then you can write a prompt for your character here, so let's
quickly run that. You can see it created a cool character for us, and now we just wait for all the following steps to finish with this character. That looks really nice, the emotions are cool, and now we have the final character sheets for our newly generated character. Another cool thing we can do with the Consistent Character Creator is change the style of an input image. Let's say I want to create a children's book for my son here in the Pixar style. Just kidding, I don't have a son; I found this image on Pexels. But maybe you have a son and you want to create a children's book featuring him as the main protagonist. What you could do is download a LoRA for a specific style, for example from Civitai, and load it in here. Or, what I will do is go to this group and change the style from "photography" to "Pixar 3D animation", something like this. It created these different views from all the angles, extracted the face again, and then upscaled the image with a slightly more Pixar-like style. But
I want to make the style stronger, so what I can do is reduce the strength of the tile ControlNet, maybe to around 30%, and run it again; you can see he comes out looking much more like a Pixar character. For the body I also reduced the ControlNet strength, and now that it's done, you can see it just worked really, really well. You can also change the style before using this workflow; you can basically input any style that you like. For example, I could load in this character right here, which already has sort of a Disney style, click Queue Prompt, and it will create this character sheet for me. You don't even need to input an image of a character: you can also input a product or something, and the workflow will generate a character sheet for milk, with different perspectives of the milk. Of course it won't add any emotions, since there's no face in there, but it all works. If you want to get the maximum quality out of these workflows, consider supporting me on Patreon. I created advanced versions of all the
workflows, which have additional upscaling setups: all the different emotions are upscaled, as well as the lighting setups, and you can really see how much amazing detail you get in these images. As an additional thank-you, you also get access to my AI Discord community, as well as extra example files and LoRAs that I created while making this video. Now let's continue with the free Flux version of this workflow. You can see the Flux version looks pretty much exactly the same, because it's structured in exactly the same way, but at the beginning you need to make sure you have a Flux checkpoint loaded; you can download one via the ComfyUI model manager if you want. The second thing you need to change is the ControlNet: you need the InstantX ControlNet for Flux, which you can also install via the ComfyUI Manager. That's pretty much everything you need to change. Of course the sampler settings and so on are different, but I've already changed them for you, which means you can just import an image, click Queue Prompt, and it will just work. This will be a lot slower than the SDXL version, but in my experience Flux is much better with human anatomy and will fix the body parts far better than, for example, the SDXL version. With the Flux version we also don't have a functioning IP-Adapter, so I removed that part, which means you can't go as high with the denoise. However, in my experience it works really well at around 25%: this keeps the character consistent while still fixing a lot of issues. Now, for a lot
of you, the Flux version will probably be too slow. In that case I recommend the SDXL version; it's also just my favorite, and I use it all the time. In some cases even that might not work for you, because the Multi-View Adapter part is pretty VRAM-intensive. In that case you can use the low-VRAM versions of the workflows. They have the same structure, but with one key difference: the Multi-View Adapter part is missing, because it requires at least 12 GB of VRAM. I still want you to be able to use the workflow, so the way it works is that you import your character image right here, as you would with the other versions, but then you go to this link, which is the free demo of Multi-View Adapter that you can use without even creating an account. You import your character image, write a simple prompt (I'll just use "a woman standing with her arms outstretched"), and click run. This takes around 40 seconds, and it removes the background and creates all the different views for you. Next, download them all, create a new empty folder, and put all the images inside it. Copy the path to this folder, go back to your workflow, and paste the path here. When you now click Queue Prompt, it loads those images and finishes the character sheets for you.
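That "load all images from a folder path" step can be sketched in a few lines (the actual node differs, but the idea is the same):

```python
from pathlib import Path

def load_view_paths(folder: str) -> list[Path]:
    """Collect the downloaded multi-view images from a folder,
    sorted by filename so the views keep a stable order."""
    exts = {".png", ".jpg", ".jpeg", ".webp"}
    return sorted(p for p in Path(folder).iterdir()
                  if p.suffix.lower() in exts)
```

Point it at the folder you copied the demo outputs into; anything that isn't an image file is ignored.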
Now that you know how to use this workflow, you can use all the images it creates to train a LoRA for that specific character. A LoRA is basically a way to give an AI model more context for a specific object, style, or character. To show you how this works, I created this Pixar-style character using this input image right here, and these are the character sheets it produced. Now I create my dataset by selecting the best images: I definitely want images from the side and from the back, and this one looks great as well. These are the images I selected, and they're more than enough to train a LoRA. You can use a variety of online services to train your LoRA, where you just have
to pay a small fee, but you can also do it locally, and for this I'm again using FluxGym. The easiest way to install FluxGym is via Pinokio. Pinokio is a sort of AI browser that lets you install different AI tools with one click: just go to the Discover page and click install. Now we have to follow these three steps. First, we give our character a name; I'll call her Pixela. Next we need a trigger word for the LoRA, so that when we put it into a prompt, it generates this character. That trigger word should be something unique, not a concept that could already be in the training data, so I'm going to use the trigger word "pixela", but with a one in it. Next, select how much VRAM you have; I have 24 GB, so I'll choose that, but you can go as low as 12. Below that, you should probably use one of the online services instead. Next, I drag and drop in my dataset, and now we have to caption the images so Flux knows what's depicted in
these images; this just gives us more flexibility when working with the character. Do you remember how I used Florence-2 in my consistent character workflow to create the captions and prompts? We can do the same thing here with Florence-2: I just click this, and it automatically adds captions to all the images. Florence-2 did a pretty good job of captioning, but I'd like to be even more precise. So I'll name the framing (this one, for example, is a close-up), the emotions, and the view angle, and I also want to make sure Flux knows the pose that's depicted, so I'm adding "T-pose" and "standing with arms outstretched". These are my final captions, and now I can just click start training. The first run will probably take a long time, because it downloads the full Flux Dev model, but you only need to do that once; after that, training starts immediately.
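The captioning pattern above can be expressed as a tiny helper: trigger word first, then the descriptive tags. All tag values below are illustrative, and "p1xela" just stands in for whatever unique trigger word you picked:

```python
def build_caption(trigger: str, *tags: str) -> str:
    """Compose a LoRA training caption: the unique trigger word comes
    first so the model ties the character to it, followed by
    descriptive tags (framing, emotion, view angle, pose)."""
    return ", ".join([trigger, *[t for t in tags if t]])

print(build_caption("p1xela", "close-up", "neutral expression",
                    "front view", "standing with arms outstretched"))
```

Keeping the tag order consistent across the dataset makes prompting the finished LoRA more predictable.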
Once the training is completed, go into your Pinokio folder, then api, fluxgym, and open the outputs folder; there you'll find all your models. This one is Pixela's, and you can see that different versions are saved out during training, but the final one, the one we want, is this one right here. I copy it, go into my ComfyUI folder, then models, loras (I created an extra subfolder for FluxGym LoRAs), drop it in there, and then refresh or restart ComfyUI. Now I load my standard Flux image-generation workflow: we load the Flux Dev checkpoint here, then load our LoRA here, and write a prompt like this, making sure to include the trigger word we created. You can already see it's following our prompt perfectly. I also added the option to add extra detail using these Detail Daemon sampler settings: you could reduce the multiply sigma factor for some increased detail, or change the detail amount here. Once you've found an image that you like, you can use the setup on the right to upscale it to the value defined here. You could also use this image-generation workflow with your trained LoRA to create an even better and more varied dataset for the next iteration of the LoRA. So, now that you know how to use this workflow and what you can do with it, let me show you the full installation process. To help you follow along, I created this free guide.
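Before moving on to installation, one aside: the "upscale to the value defined here" step from a moment ago is simple arithmetic. A sketch that also snaps the result to a multiple of 8, which latent diffusion models usually expect (the snapping rule is my assumption, not taken from the workflow):

```python
def upscale_target(width: int, height: int, factor: float,
                   multiple: int = 8) -> tuple[int, int]:
    """Scale (width, height) by `factor`, rounding each side to the
    nearest multiple of `multiple`."""
    snap = lambda v: max(multiple, round(v * factor / multiple) * multiple)
    return snap(width), snap(height)

# Doubling a 832x1216 render:
print(upscale_target(832, 1216, 2.0))  # -> (1664, 2432)
```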
Let's start by downloading ComfyUI: go to the ComfyUI page, scroll down, and click the direct download link, which downloads the ComfyUI folder. Once that's done, you can put the folder anywhere you like and extract it; this extracted folder is now your ComfyUI directory. Next we need to install Git: just go to the website and install the standalone version, following the standard installation steps. Next we need the ComfyUI Manager: go to this page, scroll down to installation, right-click on this link, choose "Save link as", and save it in your newly created ComfyUI directory. Once it's downloaded, double-click it to run it, and it will install the ComfyUI Manager. When it's done, you can start ComfyUI by clicking run_nvidia_gpu. To use my workflows, just drag and drop the workflow JSON file into the ComfyUI interface. You can see it loads the workflow, but there are still a lot of missing custom nodes shown in red. To install them, go to Manager, then Install Missing Custom Nodes, select all of them, and click install. I had an issue where, with the newest version
of ComfyUI, some of the custom nodes weren't installing properly. This will probably be fixed in the future, but for now you have to use the workaround you can find in my free guide. When you scroll down to the bottom, you can see that there's an issue with huggingface_hub. All you need to do is go to your ComfyUI directory, type "cmd" into the address bar to open a command window, copy this command right here, paste it in, and hit Enter; this installs the newest version of huggingface_hub. Then restart ComfyUI, and once you do, it should look like this, with all the nodes present. Before using the workflow, let's get rid of these noodles and use straight lines instead: go to Settings, then Litegraph, and switch the link render mode to "Straight". It just looks a bit cleaner that way. If you want, you can also go to the bottom corner and toggle link visibility, so you don't see these links at
all, which looks even cleaner. Finally, we need to install some models, and you can find all the models you need to download listed to the left of the workflow, directly next to the corresponding node, along with where to get each model and where to put it in your ComfyUI folder structure. I'm using the SDXL version of this workflow, so I need an SDXL model; I'm using Juggernaut XL, but you can also use any other SDXL model, just be aware that you might need to adapt the sampler settings, so I recommend starting with this one. Right-click on the download button, choose "Save link as", and go to your ComfyUI folder, then ComfyUI/models; this one goes into checkpoints. Next we need the ControlNet. I'm using ControlNet Union SDXL, and you can install it via the Manager: go to Manager, Model Manager, search for "union", and download and install this one right here, the ProMax one. Next we need an upscale model; I used this one, ClearReality, and you can use this link to get it. Just click download, grab this one right here, and put it into your ComfyUI folder under ComfyUI/models/upscale_models. Next we need an SD 1.5 checkpoint; it's required for the relighting process with IC-Light, and it's not really important which one you use, so I recommend just downloading Photon. Right-click the download link, choose "Save link as", and also put it into your checkpoints folder.
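To avoid "model not found" errors, it can help to double-check the folder layout before launching. A small sketch (the filenames are placeholders — use whatever versions you actually downloaded):

```python
from pathlib import Path

# Example layout from the walkthrough; filenames are illustrative.
EXPECTED = {
    "checkpoints": ["juggernautXL.safetensors", "photon_v1.safetensors"],
    "upscale_models": ["4x-ClearReality.pth"],
}

def missing_models(comfy_root: str, expected: dict = EXPECTED) -> list[str]:
    """Return the expected model files that are not present under
    <comfy_root>/models/<subfolder>/."""
    root = Path(comfy_root) / "models"
    return [f"{sub}/{name}"
            for sub, names in expected.items()
            for name in names
            if not (root / sub / name).exists()]
```

Running `print(missing_models("ComfyUI"))` lists anything still left to download or misplaced.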
Next we need the IC-Light model: go to Manager, Model Manager, search for "FBC", and this is the one we need. Then we need this bbox model down here, which we can also get via the Model Manager: search for "bbox" and download the one at the top. For the SDXL version we also need the IP-Adapter models, which you can likewise get in the Manager: go to Model Manager, search for "IP adapter", and download all the ones with this description here. You don't need all of them right now, but it's good to have them for future workflows. Next we need the CLIP vision models for the IP-Adapter: search for "clip", scroll down until you see "CLIP Vision model needed for IP adapter", download these ones right here, and also install these models here. Once that's done, click to refresh ComfyUI, or restart it. With the newest version of ComfyUI you might get an error like this; in that case, go to the ComfyUI folder, type "cmd" to open the command window
again, and paste in the command that you can also find in the guide. I hope you enjoyed this video and that this workflow is useful for you. If you'd like to support my work and gain access to the advanced versions and exclusive example files, like all the datasets, character sheets, LoRAs, and prompts I created for this video, consider supporting me on Patreon. Your support makes this channel possible, so thank you very much, and see you next time.