Flowise AI Tutorial #9 - Using Local Models, LocalAI, GPT4ALL

Leon van Zyl
#flowise #langchain #openai In this video we will have a look at integrating local models, like GPT...
Video Transcript:
Let's have a look at using free and open-source models with Flowise. We have been using the OpenAI models throughout this series, but many of you have asked in the comments whether it is possible to use local models with Flowise, so in this video I will show you, step by step, how to run Flowise with a local model for free. Although it is very easy to integrate the OpenAI models with Flowise, there are several reasons why you might prefer a local model instead: the most obvious is that local models are free to use, and secondly, you do not need an internet connection to use them, which keeps your conversations local and private.

Let's look at the prerequisites first. To run these models locally we will install a tool called LocalAI, and to run LocalAI we also need to install Docker Desktop. Head over to docker.com, then download and install Docker for your operating system. If everything was set up correctly, you should be presented with the Docker Desktop dashboard, which confirms that Docker is running.
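As a quick sanity check, you can also confirm Docker is installed and running from a terminal using the standard Docker CLI:

```shell
# Print the installed Docker client version; fails if Docker is not installed
docker --version

# Query the Docker daemon; fails if Docker Desktop is not running
docker info
```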
Next, we need to download the LocalAI repository. Go to the LocalAI repo, click on Code, and copy the clone URL. Then create a new folder on your drive in which to install LocalAI and open a command prompt there; in Windows you can do that by clicking the folder's address bar, typing CMD, and pressing Enter. In the command prompt, enter git clone, paste in the URL of the LocalAI repo, and press Enter.
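The clone step looks like this in a terminal (assuming the repository is hosted at github.com/mudler/LocalAI, where the project lives at the time of writing):

```shell
# Clone the LocalAI repository into the current folder
git clone https://github.com/mudler/LocalAI.git

# Move into the cloned folder
cd LocalAI
```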
We can now open the folder that was downloaded. What we need to do next is download the models we'd like to use and copy them into the models folder, which is empty by default, so let's go ahead and download our first model. For this demo we will use a very popular model called GPT4All. You will find the link to this page in the description of this video; if you scroll down the page you will find a section called Model Explorer, and in that drop-down there are plenty of models you can use, each with a download button. For this demo, though, I will leave a link in the description to the specific GPT4All model that I'll be using, so copy that link, paste it into your browser's address bar, and press Enter to download the model. Once the model has downloaded, simply copy and paste it into the models folder.
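As a sketch, the download-and-copy step can also be done from the command line. The URL and filename below are hypothetical placeholders; use the specific model link from the video description:

```shell
# Hypothetical model URL -- replace with the link from the video description
wget https://example.com/path/to/ggml-gpt4all-j.bin

# Copy the downloaded model file into LocalAI's models folder
cp ggml-gpt4all-j.bin LocalAI/models/
```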
We can now go ahead and start up our LocalAI instance. To do this, cd into your LocalAI folder and run the command docker-compose up --pull always, then press Enter. The first time you execute this command it can take several minutes, as it downloads gigabytes' worth of data, but since I had already done this in preparation for this video, it only took a few seconds to start up. If I go back to Docker Desktop, I can see the LocalAI instance, see that it is currently running, and, if I expand it, see that it is running on port 8080.
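The startup step above, in full (run from inside the cloned LocalAI folder; the --pull always flag fetches the latest image before starting):

```shell
# From the folder the repository was cloned into
cd LocalAI

# Pull the latest LocalAI image and start the container.
# The first run downloads several gigabytes; later runs start in seconds.
docker-compose up --pull always
```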
We can now go ahead and test this. Technically, what LocalAI is doing is exposing API endpoints that we can call to interact with these models, so the easiest way to test that it is working is to open a tool like Postman. Create a new request and leave the method as GET, then use the following URL to get a list of all models available in LocalAI: http://localhost:8080/v1/models. Let's test this by clicking Send: we do get a list of models back, and because we only have this one model available, we see just that one model. The fact that we get a response at all means that LocalAI is working.

Now that we know LocalAI is up and running, let's create our Flowise chat flow. From the Flowise dashboard, click Add New, give the chat flow a name (I'll just call it LocalAI Demo), and save it.
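For reference, the /v1/models check above can also be run from the command line with curl instead of Postman, since LocalAI exposes an OpenAI-compatible API on port 8080:

```shell
# List the models LocalAI has loaded from the models folder
curl http://localhost:8080/v1/models
# The response is an OpenAI-style JSON list, roughly:
# {"object":"list","data":[{"id":"<your-model-filename>","object":"model"}]}
```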
First, let's add a chain to this flow: click on Nodes, go to Chains, and add the LLM Chain to your canvas. Next, let's add our local LLM: click on Nodes, then under Chat Models choose ChatLocalAI, add it to the canvas, and connect the model to the chain. For the Base Path, use the LocalAI URL and port, http://localhost:8080/v1; for the Model Name, simply copy the model name that we got back from the API and paste it into the field; and set the Temperature to something like 0.7.
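For context, the ChatLocalAI node sends standard OpenAI-style requests to the base path configured above, so you can reproduce what Flowise does with a direct call. This is only a sketch; the model name below is a hypothetical placeholder, so use the id returned by /v1/models:

```shell
# Send a chat completion request straight to LocalAI,
# mirroring the Base Path, Model Name, and Temperature set in Flowise
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "ggml-gpt4all-j",
    "temperature": 0.7,
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```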
Lastly, let's add our prompt template: click on Nodes, go to Prompts, add a Prompt Template, and connect it to the chain. For the Template value, enter something like "What is a good business name for a business that sells {product}?", where product is a variable that will carry the message coming from the user. Click on Format Prompt Values, edit the product value, and make it equal to the question coming in from the user. Great, let's save the chat flow.
Let's test this in the chat. Enter a product, something like "balloons", and we get a response back from our local model saying that a good business name for a company that sells balloons would ideally be something catchy, memorable, and related to the product or service offered; for example, you could consider naming the company "Balloon Empire" or simply "The Balloon Company". That's awesome! I hope you found this video informative. Please consider subscribing to my channel and liking and sharing this video. I'll see you in the next one. Bye-bye!