Flowise n8n - The BEST No Code Local AI Agent Combo
49.6k views · 6,299 words
Cole Medin
The Local AI and n8n sections are LIVE in the oTTomator Think Tank - head on over and come be a par...
Video Transcript:
Before we get into the main video, I have two announcements that I'm very excited to share with you really quickly here. The first one: over in our oTTomator Think Tank community, I am opening up sections for both n8n and local AI, which obviously plays very well into this video and the things I'm going to keep focusing on a lot on my channel. The second announcement: I am doing a collab with Zuar and his YouTube channel AI Workshop, where he does a ton with n8n. More on that coming soon, but I'm very much looking forward to that collab. So with that, on to the main video.

The future of AI is ultimately going to be running everything locally: your LLMs, your RAG pipeline, your workflow automations, and the list goes on. A few months ago I covered the local AI starter kit developed by the n8n team, which provides an amazing proof of concept of running all of your AI needs locally, and that video completely blew up on my channel. It is still my most viewed to date by far, and I think the biggest reason for that is that, like me, you realize local AI is the way of the future, and this package is a very powerful demonstration that running everything locally, with all the power that you need on your computer, is not that far off from reality. The biggest blocker right now is just that open-source LLMs aren't quite as powerful as the closed-source ones, but that gap is diminishing very quickly. More recently I added Open WebUI into the local AI starter kit to make it possible to interact with your n8n agents in a beautiful locally hosted UI, and now I'm adding another platform into the mix: Flowise. Flowise is a low/no-code AI automation tool. It is completely free, open source, and built on top of LangChain, which I also love, and the best part is that it pairs really, really well with n8n. That's what we're going to cover in this video. n8n is still my favorite no-code AI automation platform, especially because of how well it integrates with hundreds of applications, but there is definitely a
steeper learning curve for n8n, especially when building your AI agents. Flowise, on the other hand, is as simple as it can come and has become my go-to platform to very quickly prototype AI agents for what I am building, and I integrate it with n8n because I use n8n workflows as my agent tools to interact with things like Slack and Google Drive. It is just the perfect combo to build AI agents super quickly that can really do anything. In this video we'll build an agent together using Flowise and n8n to demonstrate the power of this combo and really showcase why you want to use both platforms together and not just stick with one, and we're going to do all of this within the local AI starter kit. Let's dive right in.

Let's start with an overview of Flowise. The phase of prototyping is crucial and you want to do it fast, and that is what Flowise combined with n8n does very, very well. Flowise is backed by Y Combinator, a ton of people are using it, and it's super easy to install. They have their GitHub repo here so you can see all the code for Flowise, and they've got instructions on how to run it both with npm and with Docker; Docker, by the way, is what we're going to be using for the local AI starter kit. If we go into their docker folder here, they have instructions on how to run everything with Docker. So really the only prerequisites that you need for Flowise and the local AI starter kit are GitHub Desktop and Docker Desktop, and I'll have a link to both of those in the description of this video. Once you have those installed, you're going to be able to follow along and do everything that we are seeing right here. They even have instructions on how to run it, including setting up the .env file. There are a lot of environment variables that I'm not bringing into the local AI starter kit because I want to keep it simple, but you can add things like user authentication to Flowise as well, so definitely check this out if you want to look into a bit more of what is
available for environment variables. But yeah, this docker-compose file right here that defines the Flowise service is what I use to bring it into the local AI starter kit, and that's what I'm going to show you how to run right now. So let's dive into actually getting this running, all packaged together with this AI starter kit that I've extended with Flowise.

If we scroll down in this README here, we've got instructions on how to install this (I'll have this linked in the description as well), but it's super basic: you literally just need like three commands to get this entire thing up and running. First of all, you have to do a git clone of this repo; that's why Git is a requirement. I obviously already have a clone, which is why I get this error, but once you have that, you just need to cd into the repository right here, the Local AI package. Then, if I do `code .` and I have Visual Studio Code installed, this will open it up right away within my VS Code, and here's my docker-compose file. You can see right here that my recent addition is the Flowise service, with Open WebUI being the last thing that I added in beforehand. This is just running on port 3001. Flowise by default actually likes to run on port 3000, but that's where we have Open WebUI right now, so we're just running it on port 3001. Super easy addition.

Setting up your environment variables is also very, very easy: you just take the .env.example file in the repo and rename it to .env like I did here, and then you can change any Postgres settings that you want, or leave these as the default. For your n8n encryption stuff, you can just have this be a random alphanumeric character set; it doesn't really matter what you have here, this is just for n8n under the hood. Once you have this renamed to .env, you can go ahead right away and run the last command here, which uses docker compose to start this up. So I'll just paste this in here. Also, by the way, for Linux I believe it is docker-compose with a dash, so keep that in mind. I'm running docker compose with my NVIDIA GPU; there are also instructions for how to run this on Mac, and for running on just the CPU if you don't have a powerful GPU for your local LLMs. So I'll run this right here and it's going to spin it all up for me. It's going to pull all of the images if I don't have them already, and once this is all done we'll go ahead and open up Flowise and n8n. So let me pause and come back once this is installed.

All right, once you have everything installed with the docker compose command, you can go into your Docker Desktop and see your compose stack right here with all of the services running: we've got n8n, Qdrant, Postgres, Flowise. I'm not running Open WebUI right now because I'm not using it in this video. I'm also going to be using the Ollama hosted on my computer so I don't have to reinstall my LLMs, which is why I don't have that here either, but if you run that command I just gave you, you'll have all of that running here, ready for you to use in the browser. One thing I want to cover really quickly before I go into the browser: if you click on any of these containers here, you get to see the logs in real time, and you can go to the exec tab and execute any commands within the image here. There are just so many powerful tools for you right here to debug your containers in real time as you're building up
your local AI stack, so I just wanted to call that out really quickly here, because it's super important that you know how to get the visibility you need to really debug things as you are building. With that, we can go over into the browser here, and to access Flowise, all you have to do is go to localhost port 3001. You can reference the docker-compose file that I have in the repo to see what port you need to go to for each of the different services: it's 3001 for Flowise, for n8n it's localhost port 5678, for Open WebUI it's port 3000. You get the idea, just reference the docker-compose file.

Within Flowise right here I already have a chat flow that I've created, but for this demonstration, just to show you how easy it is to do everything, I'm going to make my Flowise agent completely from scratch, so let's go ahead and dive right into doing that. As I build out this agent I'll cover other parts of the platform, like tools, credentials, and variables, but I want to dive right into setting up the chat flow, so that's what we're going to do. The one thing that I'm not going to cover here is agent flows. This is a new beta feature within Flowise, a whole new level of agents that you can build within the platform, but I do want to start pretty simple here, so I'm going to go with the classic chat flow, where you can still build really powerful AI agents and integrate with n8n like we are going to do. The agent that we're going to build (completely from scratch, just so you can see how easy it is to do) will have the ability to search the web and interact with n8n workflows to create Google Docs, summarize Slack conversations, send Slack messages, all that good stuff. So let's go ahead and click on add new, and we have a blank slate here where we can now add in all these different components from LangChain. Right away you can see, looking at just all the options that we have here for agents and for things like document loaders,
there's a lot here that's a bit harder to set up in n8n. For example, I can just load right from the Brave Search API; you can't do that in n8n without a good amount of work. And I will say, this is super important to know: anything you can do in Flowise you can technically do in n8n, which is why I still prefer n8n, and like I said, I don't know if Flowise is the most production-ready application, but it's so easy to build things so fast, and that's why Flowise is beautiful.

So what we're going to do is start with our Tool Agent node. If I just search for "tool", we can see that we have this Tool Agent; I'm going to drag this in, and there we go. You can see it's like VectorShift here, where we just have to drag out to different nodes to connect all the things that we need for our agent. Really basic: we can start with our memory here, so if I collapse the agents and tools and clear my search, let's go down into the memory section. There are a lot of different options here. I'm going to start with something really basic, just buffer memory, but you could also use Postgres if you want to, because that's a part of the local AI starter kit. I'm just going to start simple and do this, but just know that you can use the other things that we have in the local AI starter kit, which is the beauty of it. So there's our buffer memory, and now we want to use Ollama for our chat model. Again, super easy: I just had to search for it, and that's it, there we go, we've got our ChatOllama model, boom. Then you can even set up a cache as well. Look how easy it is to set up all these things. I'm going to collapse this here and go down into the caching... oh, that's right up here, so let me go to cache, and there are a lot of options for this as well. So many things; setting up something like a Redis cache is just absolutely beautiful. I'm going to stay simple here, but Redis is probably what I'd recommend if you really want to build something robust. But yeah, I'll just
use an in-memory cache for now. Then we get to set all the parameters that you'd want to set for Ollama. First of all, we have our base URL here, and the base URL is a little tricky, because it depends on whether you want to use the Ollama running in the container for the local AI starter kit or the Ollama running on your own computer. Right now I actually want to use the Ollama running on my computer, and the reason for that is I don't want to reinstall all the models that I have for Ollama on my computer into my Ollama container in the starter kit. So the base URL, when you are within the Flowise container pointing out to your computer, is host.docker.internal. This is just standard Docker notation to reference the host machine, i.e. my computer that is currently running Ollama. If you want to instead reference the Ollama instance running in the container, you have to reference the container name, which you get from the docker-compose file: the Ollama service in the docker-compose file that we run is just called "ollama", so you would literally just replace localhost or host.docker.internal with "ollama", and then, boom, that references Ollama within the container. When I was covering the local AI starter kit in some other videos, people were confused by that, so to recap: host.docker.internal is how you reference your computer, and the container name is how you reference the service running in the stack. Super important, and I wanted to spend a minute just talking about that so that we're all on the same page there.

Then for the model name, this is where I can reference something like qwen2.5-coder:32b, and the way that you get this name is you just go into your terminal and run the `ollama list` command. This gives you all the models that you have installed with Ollama, and I'm just taking qwen2.5-coder:32b. By the way, this model has performed the best for me as an AI agent. It's kind of funny that it's a coder model that I'm using not for coding, but it actually does work the best, which is pretty interesting. I can adjust the temperature here to make it more deterministic by lowering it, and going into additional parameters, there's everything else that you'd want to set up within Ollama. One thing I'd recommend changing is the context window size, because, for some reason (you can read it right here), every single Ollama model defaults to 2048 tokens for the context window size. Anything past that is going to be cut off from the current context in the call to Ollama, which is pretty bad, because if you have a longer conversation history or tools like web search that return a lot of text, it's going to start to truncate what you send into the LLM. Not good. So you want to jack this number up; something like 32,000 is what I typically do. The specific number doesn't matter a ton, just make it bigger, otherwise your Ollama models will not perform very well: they'll miss tool calls and hallucinate stuff. So yeah, very important to keep that in mind. We spent a good amount of time on the ChatOllama node here, but that one
takes the longest because of everything we're setting up for our model here. So I'll drag the tool-calling chat model connector into ChatOllama here, and boom, we are connected. The last thing we have to set up before we test out our agent for the first time is a tool. Since it is a tool agent, you can see with the red asterisk here that the tools are required, so let's go ahead and add one of our tools. What I'm going to start out with here (we're not integrating with n8n yet, that's going to come in a second) is a web search tool, so I'm going to use the Brave Search API, which I absolutely love. We're going to connect the tool here to the Brave Search API, and then this is where we set up our first credential. In this case I already have credentials set up. You can set these up outside of the workflow builder as well; there's a dedicated tab for that which I showed earlier. But you can also click on the dropdown, click on create new here, give it a name, and then give it your Brave Search API key. This is going to be saved so you can use it within other workflows that you're building within Flowise as well, and again, there's a dedicated place to manage that too. So we now have the Brave Search API, everything is developed here, and we can test it out for the first time. I'm going to click on save here and just call it my YouTube agent, just a random name. All right, click save, and then if I click on the chat icon in the top right here (that's why I moved my face, by the way, so you can see the full chat widget on the right side), now I can test it out. So I'll say "search the web and tell me Elon Musk's net worth", the kind of thing that I won't be able to get without actually searching the web because of the training cutoff for these large language models. It's going to take a little bit here because it is running Qwen 2.
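Two of the ChatOllama settings covered above trip people up the most: which base URL to use from inside the Flowise container, and raising Ollama's context window. Here's a minimal shell sketch of both. The port (11434 is Ollama's default), the service name `ollama`, and the model tag `qwen2.5-coder:32b` are assumptions based on the walkthrough; check your own docker-compose file and `ollama list` output.

```shell
# Base URL for Ollama as seen from inside the Flowise container:
# - Ollama running on your host machine: Docker's host alias
# - Ollama running as a compose service: the service name ("ollama" here)
host_ollama="http://host.docker.internal:11434"
container_ollama="http://ollama:11434"

# Ollama defaults every model to a 2048-token context window, so raise
# num_ctx per request; the model tag comes from `ollama list`:
payload='{"model":"qwen2.5-coder:32b","prompt":"hi","options":{"num_ctx":32768}}'

echo "$host_ollama"
echo "$container_ollama"
echo "$payload"

# Against a running Ollama instance you could then test with, e.g.:
#   curl "$host_ollama/api/generate" -d "$payload"
```

In Flowise, the same values go into the ChatOllama node's Base URL field and its additional parameters (num_ctx), rather than a raw API call.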