Flowise AI Tutorial #2 - Creating ChatFlows (LLM Chains, Chat Models & Agents)

37.8k views · 3215 words
Leon van Zyl
#flowise #langchain #autogpt #openai In this video we will create our first chatflows from scratch ...
Video Transcript:
Hello, and welcome back. Now that we have Flowise set up and have had a look at the user interface, we can create our very first chat flow.

In the Flowise dashboard we can create a new chat flow by clicking on Add New. You will now see a blank canvas. Using the plus button you can add nodes to the canvas. In this list we can see all the available nodes that can be added, of which there are quite a few. We will have a look at some of these nodes throughout this series, but if you want more information on what each of these nodes does, I suggest using the LangChain documentation as a reference. Flowise uses LangChain behind the scenes, and the LangChain documentation can be a very useful resource for making sense of the Flowise nodes.

Let's start by creating a very simple interaction with an AI model. In this example we will create an application whereby we ask the AI to generate a dad joke based on a subject that we pass to it. In order to get any kind of response from our application, we need to add either a chain node or an agent node; we will delve into the differences in this video. For a simple interaction with a model we can use chains. Chains are the most basic nodes for producing some sort of output from our application. You will notice that we have access to quite a few different chains, for instance the LLM Chain, which is a basic interaction with an AI model. We also have more advanced chains, like the Conversation Chain, which can be used for a back-and-forth conversation with the model, whereby the model needs to remember past messages. But because this is our first chat flow, let's use the LLM Chain. We can drag and drop this chain onto the canvas. This node tells us which other nodes are required for it to function: for an LLM Chain we need to provide a language model and a prompt template. Let's add a large language model to our project. We can do that by clicking on the Add button, then navigating to LLMs, and within LLMs
we have access to a few models, including the OpenAI, Hugging Face and Cohere models. OpenAI is really easy to integrate, and since we already generated our OpenAI API key in the previous video, we will continue to use OpenAI. Let's drag and drop the OpenAI node onto the canvas. We now need to add our OpenAI API key to this node, so we can paste our API key into this input box. Optionally, we can change the model that we'd like to use from this drop-down list, and we can also set the temperature of our model: 0 being factual and 1 being creative. I will use a number in between, like 0.7. Under Additional Parameters we can set other options, like the maximum number of tokens that we are willing to spend on each API call. We can now connect the OpenAI node to the chain by dragging a link from this node to the Language Model input on the chain.

Now we need to add a prompt template. Prompt templates are used to capture the input from the user and format that text in a way that will influence the AI's response. We can add a prompt template by clicking on Add, then under Prompts we can select the Prompt Template node and drag that onto our canvas. We can now also connect this Prompt Template node to the LLM Chain. In the prompt template we can provide a template string, for example "create a funny dad joke about weddings". We will leave the Format Prompt Values box empty for now. What the LLM Chain node will do is take the input from the prompt template and pass it to our OpenAI model. We can test this out by saving our chat flow; we can give it a name like "Dad Jokes" and save it. We can then test our application by clicking on the chat window. Within the chat we can type any text and press Enter, and within less than a second we got this response back from the AI: "What did the groom say when he saw his bride walking down the aisle? Wow, she really took the plunge." At the moment, this chain is simply grabbing
the text from the template and passing it to the model; it is ignoring the message that we are sending. So we need to get the user's message into this template somehow. Let's say we want to make the subject of the joke a variable input from the user. To make this a variable, we can remove this text and add an opening curly bracket followed by any name (I will call it subject) and close it off with another curly bracket. Let's save this and run the chat again; let's enter a different subject like "dogs" and press Enter. Now we can see a message being returned that takes the user input into account.

Let's now have a look at prompt values. Prompt values allow us to add additional variables to our template text. As an example, let's replace "funny" with a variable, and we'll call it joke_type. We can now set a value for joke_type in the Format Prompt Values box. We set our variables like the example shown in this box: we add an opening curly bracket and close it off with a closing curly bracket; in quotes we type the name of the variable, joke_type, followed by a colon and then the value of this variable, which was "funny", also in quotes. We can save this and run the chat again with the subject "cats", and as you can see this still works. We can now easily change the type of joke by replacing the value of this variable. Let's make it "sad" and try again; we'll pass in "cats" as the input, and this time we get a joke that's a bit more on the sad side.

You might be wondering how Flowise knows that the input from the user should be mapped to the subject variable. That's quite easy: Flowise will first look at the prompt values, and any variable in the template that is not defined in the prompt values list will be assumed to be the user input. Also, we can call this variable whatever we want;
it doesn't have to be "subject". We can change it to something like "text" and this will still work, as you can see. If we want, we can explicitly tell the application that "text" should be mapped to the user input by adding a comma at the end of this variable, then in quotes specifying the variable "text" and setting it equal to the user input. There is a specific format for referring to the user input, and we can see it by expanding this box: on the right-hand side we can see all the variables that are available in this session, and if we click on one, it will automatically add the user's input to the prompt values. Internally, Flowise refers to the user input as "question" in double curly brackets. I will just remove it from here and add it to the "text" variable. Also very important: this user input must also be wrapped in quotes. Let's save our chat flow and test it. I'll make the subject "clowns", and as you can see we are getting a response from the AI. These simple LLM chains are useful for scenarios where you need a simple once-off interaction with your model, for instance asking it to generate jokes, titles for blogs, articles, etc.

Let's move on to chat models. Let's go back to our dashboard and create a new chat flow. Chat models are more sophisticated than the standard LLM models, and they allow for additional features. Let's start by adding a chain to our project; to demonstrate the basic functionality of a chat model, we can use a simple LLM Chain for now and drag it onto the canvas. For the language model we will click on Add, but instead of using LLMs we will look at Chat Models. Within Chat Models we can grab the ChatOpenAI node. For this node we need to paste in the OpenAI API key, and under Model Name you will notice that we now have access to different GPT models, including GPT-3.5 Turbo, which powers ChatGPT, as well as the newer and more sophisticated GPT-4. I'll leave this model on GPT-3.5 Turbo, and for the temperature I will change it to 0.7. We can connect this model to our chain. For the prompt, we can click on Add Node and open Prompts, but this time, instead of using the standard Prompt Template, I will use the Chat Prompt Template, and we can connect it to our chain. The chat prompt template looks very similar to the normal prompt template, but we now have access to a system message as well as the human message, which is effectively the text being passed from the chat; as before, we can set prompt values as well. For the human message I will simply use "text" (you can call it anything you want). In the system message we can now prime our model. Priming is a way to tell the AI how it needs to behave and what its role is. For fun, let's do the following; we will say: "You are a salty old pirate looking for treasure." Let's save this chat flow (we can call it "Chat Model Demo") and run it by clicking on Chat. In the message I'll just type "hello", and in the response we can see that the AI is now responding as if it's a pirate. That is because of the system message that we used to prime the model. This can make for some very fun role-playing applications.

Let's now move on to another use case for chat models. I'm going to remove the chat prompt template as well as the LLM Chain. Let's click on Add Node, then within Chains, instead of using the LLM Chain, let's use the Conversation Chain. Let's spend a minute talking about what we're trying to do here. The benefit of using chat models is the ability of the model to remember past messages in the conversation; it will remember details of the conversation based on the chat history. To make this all work we will use a conversation chain. The conversation chain takes an LLM as input, but in order for the chain to remember the chat history and context, we also need to attach memory. To create a memory node we will click on Add Nodes, then open Memory, and from Memory we will grab Buffer Memory. You can leave the default values in these fields, as this is a bit technical and it's not necessary to change them; all we have to do is connect our Buffer Memory node to our chain node. Let's save these changes and open up our chat window. I'm going to clear the previous chat, and back in the chat window let's say "hello" just to test this; we are getting a response back. Now let's test the memory of this chatbot. First I will say "my name is Leon". We can then check whether the memory is working by asking the AI what our name is: "what is my name?" Because of the conversation chain, OpenAI was able to pick up on the previous messages and therefore remembered that my name is Leon. We now have a ChatGPT-like chatbot which we can ask questions, and it will remember context from our previous chat messages. As with ChatGPT, we can ask it coding questions as well, and our bot will respond with code snippets and more, just like ChatGPT.

Next, let's chat about agents. First I want to talk about the shortcomings of using LLMs and what agents do to solve these issues. Large language models can only respond using the data that they were trained on. If you ask OpenAI about information that is recent, for instance "what is Flowise AI", we get a response saying that it doesn't know; this is because the GPT models were trained on data up to 2021.
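Before we move on to agents, the conversation-chain-with-buffer-memory behaviour described above can be sketched in plain Python. This is a minimal illustration only: BufferMemory, ConversationChain and fake_llm are hypothetical names, not the real LangChain or Flowise API, and fake_llm stands in for an actual model call just to show that the chat history really does reach the prompt.

```python
# Sketch of a conversation chain with buffer memory. All names here
# are illustrative; the real implementation lives inside LangChain.

TEMPLATE = (
    "The following is a friendly conversation between a human and an AI.\n"
    "Current conversation:\n{history}\n"
    "Human: {input}\nAI:"
)

class BufferMemory:
    """Stores every past exchange and replays it into the prompt."""
    def __init__(self):
        self.turns = []

    def load(self):
        return "\n".join(f"Human: {q}\nAI: {a}" for q, a in self.turns)

    def save(self, user_input, ai_output):
        self.turns.append((user_input, ai_output))

class ConversationChain:
    def __init__(self, llm, memory):
        self.llm = llm
        self.memory = memory

    def run(self, user_input):
        # Inject the stored history into the template before calling the model.
        prompt = TEMPLATE.format(history=self.memory.load(), input=user_input)
        answer = self.llm(prompt)
        self.memory.save(user_input, answer)
        return answer

def fake_llm(prompt):
    # Stand-in "model": it can only answer the name question if the
    # history (containing the introduction) was injected into the prompt.
    text = prompt.lower()
    if "my name is leon" in text and "what is my name" in text:
        return "Your name is Leon."
    return "Hello! How can I help you?"

chain = ConversationChain(fake_llm, BufferMemory())
chain.run("Hi, my name is Leon")
print(chain.run("What is my name?"))  # prints: Your name is Leon.
```

Without the memory node, the second call would contain no history and the model could not recover the name; that is exactly the difference we observed in the chat window.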
Additionally, the GPT models are notoriously bad at doing math. We can solve these issues by introducing agents to our flows. Agents will first send the prompt to the model and then determine the accuracy of the result being returned. If the model does not know the answer, or has a very low confidence level in the answer it provided, agents can assign tools to the AI to get more accurate results.

Let's look at an example where I am the agent and you are the model. Imagine I ask you a question like "what is Flowise AI". You might have an idea, but for the most part you are uncertain of the answer, or you might even take a guess with a low confidence level. Being the agent, I can now provide you with a tool to assist you in answering the question; for instance, I could hand you a phone with internet access, and you would be able to find this information with no problem. The same goes for math: it might be difficult to solve certain equations by yourself, but if I gave you a calculator it becomes so much easier.

Enough talk, let's see this in action. Let's go back to the dashboard and create a new chat flow; let's save it and call it "Agent Demo". In our node list, instead of adding a chain, we will now add an agent, and more specifically the Conversational Agent. This node takes a language model as input, as well as memory (this is so that the model can remember our chat history), but it also takes a list of allowed tools, for instance internet access and a calculator. Let's start off by adding our language model: in Nodes, go to Chat Models and add the ChatOpenAI node, then paste in the API key. We will leave the model as GPT-3.5 Turbo, and for the temperature I will change it to 0.7; let's connect this node to the agent. Let's also add memory by clicking on Add Nodes, going down to Memory, dragging in the Buffer Memory node, and connecting it to our agent.

Now we can assign a list of tools that should be made available to our agent. We can do this by clicking on Add Node, then going down to Tools, where we can see all the tools supported by LangChain and Flowise. In our example we want access to the Calculator tool as well as SerpAPI; SerpAPI will give our model access to Google Search. Let's first add the SerpAPI node to our canvas, as well as the Calculator node; I'll just grab Calculator and drop it in as well. We can now connect these tools to our agent by dragging from each node to the Allowed Tools parameter. We now have our model set up, as well as the buffer memory for the chat history, and we've assigned two tools to our agent. So if we ask a question that OpenAI cannot answer accurately, the agent will assign a suitable tool to the model for getting more accurate results. For SerpAPI we do need to provide a SerpAPI API key; for the calculator we don't have to configure anything. To get a SerpAPI key, go to serpapi.com and create a new account. After signing in, go to your account, click on API Key, and on that page simply copy the key and paste it into the SerpAPI Key field.

We can now save this chat flow, click on Chat, and test it out. In the previous example we saw that OpenAI couldn't tell us what Flowise AI is, so let's try again by typing "what is Flowise AI" and sending it. This time we did get an accurate answer back from the model. Let's try another example, and this time it will make use of our buffer memory. Let's clear the chat, then type the following: "who is Olivia Wilde's boyfriend?" Our application used SerpAPI to find the name of Olivia Wilde's boyfriend. Let's ask a follow-up question: "how old is he?" In this message we are not specifying Jason's name; what we want to test is that OpenAI will assume we are asking about Jason, since this is the name in the chat history. It's struggling with this, so let's rephrase the question a bit: "what is his age?" This time we did get Jason's age. This would not have been possible without giving our model access to the internet. Let's also try out the calculator: what is his age to the power of 0.
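The agent behaviour demonstrated above, where a calculator or a web search is handed to the model when it cannot answer on its own, can be sketched as a toy dispatcher in Python. This is only an illustration: in the real Conversational Agent the LLM itself decides which tool to invoke, and the calculator and web_search functions below are stand-ins (no actual SerpAPI call is made; the keyword checks merely simulate the model's confidence judgment).

```python
# Toy sketch of agent tool routing. In Flowise the LLM decides which
# tool to call; here simple keyword checks stand in for that decision.

def calculator(expression: str) -> str:
    # Stand-in for the Calculator tool: evaluates a math expression.
    # eval() is used here purely for illustration, not for production use.
    return str(eval(expression, {"__builtins__": {}}, {}))

def web_search(query: str) -> str:
    # Stand-in for the SerpAPI tool (a real one would query Google Search).
    return f"Top search result for: {query!r}"

TOOLS = {"calculator": calculator, "search": web_search}

def agent(question: str) -> str:
    """Route to a tool when the 'model' lacks confidence, else answer directly."""
    # Looks like arithmetic? Hand the model a calculator.
    if any(ch.isdigit() for ch in question) and any(op in question for op in "+-*/"):
        return TOOLS["calculator"](question)
    # A topic newer than the training data? Hand the model a search tool.
    if "flowise" in question.lower():
        return TOOLS["search"](question)
    return "I can answer that from my training data."

print(agent("12 * 7"))              # prints: 84
print(agent("What is Flowise AI?")) # routed to the search stand-in
```

The design point is the same one the transcript makes: the agent does not make the model smarter, it just recognises when the question falls outside the model's reliable knowledge and delegates to the right tool.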