all right I think I'm live I'm just going to wait a second here to have it pop up on my screen on the right hand side here before I start uh so give me a moment and then uh we'll get started it's my second ever live stream so I'm still getting used to how this works okay cool cool so I see it um all right yeah so hey everyone welcome to my second ever official live stream on my channel I'm super super excited for this what we're going to be doing right now is building a
prototype for an awesome agent using n8n and Gemini 2.0 Flash as our model and so yeah I'm just going to have a blast with you for this it's a continuation of my miniseries that I started last week showing you my entire process for building AI agents also let me switch to um me I meant to do that to start um I'll show that road map in a second here um but yeah I'm just looking forward to this in my last video that I posted on this miniseries I showed a bird's-eye view of my
entire process and now we get to actually dive into it starting with building a prototype for our agent with n8n now the first ever official live stream that I did was this big announcement live stream for bolt.diy which was called oTToDev back then that's our open-source AI coding assistant and there were so many announcements to get through there and so many questions from all of you it ended up being a pretty high pressure live stream to say the least uh but this time around I wanted it to be quite the opposite I
mean I've literally got my n8n mug here with coffee that I'll be sipping on throughout the live stream I'm going to pause a lot to chat with you so really I want this to be super casual and laid-back while still giving you a lot of value because we are going to build an awesome n8n agent together as a prototype for what I'll be building for this entire miniseries here and so yeah I always start off by using a no/low-code tool to build a sort of proof of concept for the agent that I am
working on that's what I mean by a prototype and so that's what we're going to be doing right now building something quite awesome that I'll Showcase in a little bit here um the other thing I wanted to say is I hope you guys are having an awesome end of your year I know that I am I'm actually taking some time off pretty soon here pretty much right after this live stream actually um so the video that I would usually be publishing tomorrow night is going to be replaced by this live stream and then I have
a super awesome video scheduled for the first which is really going to help you get the start of your year right with AI so don't miss that super exciting video um so yeah I think we can actually dive into the meat of things here so first of all like I mentioned already I posted the first video of this miniseries to my channel already giving a bird's-eye view of my entire process for building AI agents and so now we get to actually dive into prototyping our agent that's what we get to do together so let
me actually switch to my my screen here and present this um because this is the kind of road map that I covered in my first video for my entire process and I want to cover a kind of couple of the first steps here just to get us on the same page and then we can dive into um building out our prototype here so first of all let me uh let me check the chat like I promised I'm going to be going to be sipping on my coffee let me do that and I'm going to be
reading through the chat make this as casual as I can I liked the last stream that I did for bolt.diy a lot um but yeah I definitely want it to be more casual going forward so yeah hey everybody I've seen a lot of comments in the chat already welcome from Nigeria and Brazil Thomas cool to have all you guys here hey welcome from Hungary it's cool to see everyone from around the world that's actually one of the big reasons that I did this stream 9:00 Central time in the morning my
time instead of the last one I did 7:00 at night my time because it just was a terrible time for people from around the world so yeah I'm glad that this is working for people around the world hey from Indonesia welcome to the live stream here um so yeah let's dive into the road map really quickly here because I want to cover the first couple of things that will get into our prototyping here so the first step that I'm not really going to cover too much right now is the planning of the agent and so
there's a lot of questions that I covered in my first video just to get yourself in the right frame of mind for what you actually want to build and how you want to go about building it and then the second step this is what we are really diving into right now prototyping your agent with a no/low-code tool and the goal is just to get started very very fast and you don't necessarily have to use a no/low-code tool I mean you can use AI IDEs like Windsurf and Cursor to get started
so fast with building your agents with code but even considering you can use AI to help code so much I still find tools like n8n and Voiceflow make that initial prototype build so much faster and so that's what we're going to be doing right now with uh n8n and so yeah another big thing here is I don't even focus on the front end or the database yet and I actually will get into this a bit at the end of the live stream but to start out what we're going to build initially we're going
to have no front end or database it's just going to very simply be a workflow within n8n that we chat with within that workflow builder um so yeah let me exit out of this and go over to the Live Agent Studio because the other thing that I promised for this stream here is I promised that this agent would be available for you to try right now as we are doing this live stream so I have this live on the Live Agent Studio I will even send a message here with the link studio.ottomator.ai you
can go over to this link right now oh I actually have to sign into chat oh no it worked okay I see it come through all right nice nice all right so yeah you can go to studio.ottomator.ai sign in you get 100 free tokens to try out all these agents right now and so you can try this GitHub assistant as we are building it together with n8n so this is the prototype version so I call that out specifically it's not like I built the full thing brought that in and then started the miniseries
I'm starting out with the prototype here on the studio as well but this is what we're going to build right now so you can literally give it a link like I have a couple of examples right here you can give it a link to a GitHub repository and ask a question like hey can you describe this project for me and then it'll go and actually do that so it'll read through the file structure um and you can even say like I can ask it a question like um what is being done for CI/CD in this repo
and so it'll actually go and get the file structure of the repository look at specific files to answer my questions and this is where we're going to be building which is super super cool because this is really useful if you're really really technical or if you're not cuz if you're not technical as you're starting to get into AI you're going to be looking at a lot of these repositories because you need to understand these tools that you're leveraging for yourself or your business but then if you are very technical and you can understand a lot
of the code it still helps to have an agent that can answer questions and look through the repository super quickly so yeah like I asked what's being done for CI/CD and it got the file structure of the repository and then it starts referencing all these files that we have and I can even say like uh look at each file here and give me a deep dive so I can have it do some pretty complex stuff where now it's going to have to go and look at every single file that it referenced here for the CI/CD
for this repository and describe each of them to me I'm not sure exactly what it's going to spit out but probably something pretty cool here so yeah let's give it a second it's going to take a bit longer here because this is going to be massive yeah look at this massive response here um so yeah um oh well it kind of gave me the wrong answer here because I told it to, yeah, it literally looked at every single file in the repo so anyway this isn't quite right, this is my fault because I prompted it pretty
poorly here I wanted to look at the CI/CD stuff specifically but yeah you can see that like it has access to the entire repository it can look at it make observations on the repo dive into specific files like I can say what um are the contents of the .env.example file and it'll go and actually analyze that file and then tell me what's in it which is super cool um so yeah and overall it's not actually that complicated to build this agent this is literally what we're going to prototype like what I have published right
here on the Live Agent Studio is exactly what we're going to build right now so yeah there we go this is our .env.example file so super cool all right so yeah with that I'm going to go back to the homepage of the Live Agent Studio um because what we're going to do in this live stream after we actually build up the agent is we're going to make it compatible with the Live Agent Studio and I have this whole developer guide um which actually by the way I should introduce the Live Agent Studio for
those of you who haven't seen it let me go back to the homepage here so getting ahead of myself here the live agent studio is a platform that I have developed as a part of automator which is my larger platform and the live agent studio is a community-driven platform filled with all these AI agents that are all open source so you can try them out like we just did with the GitHub assistant you can try it out and then you can even view the source code like I click on this right here and then boom
it brings me to the source code or I guess the workflow Json if it is an n8n workflow and so you can download the agent yourself run it yourself learn how to use it I've got extensive documentation for all of the agents here and so it's the super valuable place for you to go and find agents that are interesting to you try them learn how to implement them and it's all Community Driven which Speaking of Community Driven I am actually launching a hackathon competition so let me go to this right here super super exciting so
a hackathon competition is really just what you call a coding competition and so what we're doing for the Live Agent Studio here I launched this on the 25th as a kind of Christmas present to you it's a hackathon competition with a $5,000 prize pool and the only requirement for this competition is that you build an AI agent for the Live Agent Studio and so using that developer guide that I showed just right here you can learn exactly how to build an agent for the studio you build this and you submit it like really literally
just go to the submit agent form right here you submit it and then boom you've entered into the hackathon competition and you can win a ton of prizes um so yeah it's sponsored by Voiceflow and n8n two awesome no/low-code tools that I use all the time for building AI agents so they're sponsoring this they're contributing to the prize pool and my platform is contributing as well super super exciting way for you to be a part of a community building awesome agents win some prize money and showcase your AI mastery to the
world and the prizes are pretty exciting let me go to this right here so $5,000 total prize pool and if you build the best agent overall you're getting yourself a nice $1,500 which is super cool um so yeah you guys should all register for this right now in fact what I'm going to do here is I'm going to take the registration link copy it and I'm going to put it in the chat right here uh it is totally free to register like this is literally just a huge gift to all of you guys a big
Christmas gift um so yeah register build an agent submit and win some prizes it's super cool um so yeah one thing I'll cover really quickly for the timeline here and by the way this relates to what we're doing here prototyping an agent for n8n because n8n is one of the beautiful platforms that you can use to build an agent for the competition and I'm going to be showing in this live stream how to take what we build and make it compatible with the live agent studio so that it can be hosted here like we just
saw when I demoed it um so as far as the timeline goes let me go back into full screen here uh we got the registration that's open now until January 8th and then the competition will officially start and we'll be accepting submissions January 8th through the 22nd and then me and the automator team will be doing the initial round of judging but then we'll put up a bunch of Agents on the platform the live agent studio for you all to vote on so you get to be a part of choosing the winners for this competition
which is super super cool and we're going to be doing lot to make sure there isn't any kind of like unfair advantage to people that are bringing in their friends to vote and things like that so we're taking care of that but it's going to be a really exciting way to get everyone engaged in this live agent Studio because my goal really is to have this be a central Hub of open- source agents for us all to learn and grow with AI together and then finally I'm going to do be doing another live stream on
February 1st announcing the winners in real time because that's right when the cut off is for the community voting and so I get to find out along with all you guys in real time who actually wins all those super exciting prizes here so that is the hackathon competition so seriously go and register for this right now it takes like two seconds to register um and then yeah start planning and building your agent right away like I mean you don't even have to wait until the competition officially starts to start planning what your agent might be
or maybe you already have something that you built so you can just tweak it a bit to make it compatible with the live agent studio and then submit it and I also have a super helpful video walking through what it looks like to actually build an agent for the live agent studio um and then this entire documentation right here is super comprehensive because I make it so easy to understand everything that goes into making an agent for the studio okay so anyway enough of that that is the hackathon I wanted to spend some time on
that um because that is just something so exciting going on right now um so yeah rx3 said I already got some agents ready to go that is awesome yeah feel free to submit those I mean yeah you don't have to do too much to participate in the competition if you already have some awesome um agents so yeah actually let me read through some more of the comments here before we dive into actually prototyping an agent so I literally got the blank canvas here for n8n because we're going to build up an agent from scratch
here um let's see gambi said less than eight minutes into this live stream and two and a half minutes of ads okay I'm really sorry about that so a couple people mentioned in my last stream that there was a lot of ads so I specifically set the setting to have the most or the the minimal ads for the live stream so I don't really know why it's doing that much I'm really sorry about that I don't think I can edit it in real time let's see if I can delay ads okay I clicked the button
to delay ads for 10 minutes I guess I can do that so I'm going to keep doing that if I need to I'm sorry that there are too many ads sometimes I don't appreciate um YouTube has so many settings that are kind of hard to mess with so I'll I'll do my best especially going forward to so uh thank you for mentioning that and then uh Rasin said can we clone live agent repo without using a token yes you can so you have to use tokens to try the agents on the live agent Studio let
me go back to this so all these agents here you have to use a token to try it here but that's mostly just because I'm paying for the infrastructure and the llm costs and so I have to have some way to protect from people just using it way too much and charging me out the wazu um but you don't even have to use a token to view the source and download the agent for yourself so you're only uh using tokens which I give a 100 of them for free when you sign up but you're only
using these to actually try the agent and then you can download it for free so that's super important to keep in mind um yeah Nexus says excited to see Gemini 2 in action for custom agents and yes I am as well I mean it works so so well just the speed at which it can process my prompts is super important especially because with this GitHub agent a lot of times it has to process so many files when it's looking at the entire file structure for a repo um so yeah let's dive into actually building this
out and then I'll I'll do a little bit more time for Q&A um in a little bit here so uh I've got my comments on the left hand side here checking out all what you guys are saying in the live stream and then in my centered monitor here I have n8n which is where we're going to be building our workflow and then on the right hand side I have the fully built out workflow here for my my reference because as much as I want to build this from scratch with you I still want to not
stumble over my nodes as I'm setting everything up and so I have a reference point here just to make sure I don't trip up on anything because I want this to be a smooth process as I'm building this with you uh while still doing it from scratch so this is kind of the optimal setup that I've got here um so yeah with that we can actually get started here um so yeah for those of you who are unfamiliar with n8n I have a ton of videos on my channel already that I'd recommend checking out I
don't want to start from scratch explaining n8n and triggers and nodes and all of that um because I want to really provide a lot of value by actually giving myself the time to build out a full agent here um so we are starting from scratch n8n is a pretty straightforward and intuitive platform so it should come together pretty quickly for you even if you're not familiar um but yeah we are going to just dive right in here so the first thing is we have to set up a trigger for our workflow and
because this is a prototype of an agent I do not care about anything besides being able to chat with it right here in the workflow Builder and the way that you do that is you select the on chat message trigger and so what this does here is it gives me this chat button now so instead of that test workflow button that we had before um let me actually adjust let me do this quickly here I'm going to adjust my screen size hopefully this works in real time I'm going to adjust my screen size so you
can see the full screen here um yeah all right there we go so when chat message received it gives me this chat window here where I can now talk to my agent so I can say hi and then obviously this isn't going to do anything yet because we just have the trigger um but now I have this input, "hi", that's going to come into whatever I have for the rest of my workflow and the big core component of this workflow is going to be this node called the agent node the AI agent this is
one of my favorite parts of n8n the AI agent node because it bakes so much into this one tiny little node right here we've got the chat model that we can attach the memory the conversation history and then also all the tools that we'll be adding to actually do things with GitHub super super cool so for the chat model here I'm going to select the Google Gemini chat model because as promised we're going to be using Gemini 2.0 Flash and so for the credentials here I'll show how to set that up in a
second um and then for the model I want to use Gemini 2.0 Flash so I have to scroll through all the other models that we have here I'm not going to use the thinking one right now even though that'd be pretty cool this is Gemini's version of o1 so to speak where the model actually has chain of thought and reasons with itself before it gives a response which is super cool um but I'm just going to use the regular 2.0 Flash experimental right now and then for the credentials let me uh create a new credential just
to show you what it looks like all you have to do is leave the host the same and then for the API key you just get this from the Google Cloud console and so I'll actually show that quick right here um just because credentials are one of the trickier parts of n8n so I want to spend at least a little bit of time showing you how to get these credentials and so I have a link um right here that I will post actually I won't post it in the chat I'll just
use it here um oh that's the wrong format sorry about that all right let me copy this boom boom okay so if you go to your Google AI Studio and you go to your plan information for your API you have this project right here for the Gemini API if you click on this so you just go to aistudio.google.com slash plan information and then you go and click to open it in a new window this will bring you to the Google Cloud console where you can create a new API key and this is what you use to use Gemini
2.0 flash which by the way with Gemini 2.0 flash you get 1,500 requests per day completely for free it's just an amazing model overall and it's free to use it's just a huge win which is why I want to use it for this prototype here so you just click on API key and it'll make this brand new API key I'm going to close out of this well okay I guess it made the key so I'm going to uh quickly delete this because I'm literally showing my API key so let me click on this and delete
um but that's how easy it is to make an API key to use Flash so yep delete credential I don't want you guys to actually be able to use that because then you're going to blow through my 1,500 requests uh for Gemini so I deleted that but I showed you how easy it is to make it and then going back over to n8n here uh we have our chat model fully set up.
So the next thing that we want to do is set up our memory, and again for prototyping don't even focus on the database yet, so for the memory I'm actually just going to use a window buffer memory super super easy because this is going to use local storage on my machine hosting n8n to store all the conversation history so I don't even have to deal with a SQL database yet or anything like that super super easy um so yeah with that let me actually pause quick and just um answer any questions that are in the chat so let me scroll through this quick I got to keep up with my promises and uh answer all the questions and
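By the way, if you ever want to sanity check that Gemini API key outside of n8n, here's a minimal sketch using Google's Python SDK; the package and the exact model ID are my assumptions about the standard setup at the time, not something shown on screen:

```python
# Minimal sketch, assuming the google-generativeai package is installed
# and that "gemini-2.0-flash-exp" is the experimental 2.0 Flash model ID.
import google.generativeai as genai

genai.configure(api_key="YOUR_GEMINI_API_KEY")  # the key created in the Google Cloud console

model = genai.GenerativeModel("gemini-2.0-flash-exp")
response = model.generate_content("Say hello in one short sentence.")
print(response.text)
```

If that prints a response, the key is good and the same credential will work inside the n8n Gemini chat model node.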
make this casual so yeah let me scroll through a bit here uh Kevin asked what does n8n cost and then Rasin said if you self host it is free and I have a good tutorial on how to set it up so yes that is true n8n is open source you can literally download it and then host it yourself with Docker or with npm and really the only thing you have to pay for then is the machine that you are using to host it so if you use DigitalOcean or AWS or something like
that for that cloud instance to host n8n you're paying for that but everything else is completely free which is one of the big reasons that I love n8n over other platforms like zapier um or make.com so yeah super awesome um let's see prompt for system AI is it important or do we just put anything for that um yeah so the system prompt is actually very very important and I'm going to get into that a little bit for this right here so actually I can show you guys right now if I click into my AI agent
and I go to add option I can add a system message here and I actually am going to doing this in a second here because this is what gives your agent an idea of it's what it's supposed to do and the behavior that you expect from it and so this is where we'll tell it like you are a GitHub expert who analyzes repositories to answer questions over the files or something like that like that sets the stage for the agent to properly understand how to handle the prompts that are coming in from the user so
yeah the system prompt is very very important and we will get to that um so yeah let me keep scrolling down let's see Matt asks can you just make a quick example of what you uh what you can do without explaining it could you maybe um oh okay yeah yeah so at the start of the the live stream maybe you joined a little late I did actually give an example of of what it can do um cuz I let me go out of full screen here and go back over um right now you can literally
go to studio.ottomator.ai um like the link is right here studio.ottomator.ai and you can try out this agent right now you can put the agent to work after you sign up and get your free tokens and you can try it out right now and you can ask it questions about any GitHub repository you just have to give a link to the agent and then you can ask it any questions and it'll actually like look at the file structure for the repo analyze specific files it'll get you the answer that you're looking for
so that's what the agent does um and I showed that at the start of the live stream so I hope that helps um a rule asked what is the best framework for building agents as in the easiest so in my mind n8n is the easiest overall that's why I'm using it right now um and it's free to use open source super awesome uh Voiceflow is another really good option if you want a paid platform that does a lot more for you um also sorry I think the ad might have gone off so I'm going
to delay the ads again I guess I'm going to have to keep clicking that button but it's all good um yeah and then as far as like if you want to actually code your agent uh my recommendation would be using Pydantic AI it is a phenomenal Python framework for coding your AI agents very very easily they have super good documentation and they do a lot of abstraction so like making it easier to code AI agents while still giving you all of the customization and power that you would hope for when you are actually coding an
agent instead of using a no code tool so that'd be my recommendation um yeah so yeah next question I like the UI for n8n does that work for for mapping out multi-agent workflows as well uh yes you can certainly do multi-agent workflows within n8n um cuz you can take this agent node and you can have multiple of them so if you have something like a um orchestrator agent that decides which agent to give the work to based on which one's the expert at what like you can set all that up in n8n very easily um
kind of like you can do in Flowise, Flowise is another no/low-code agent builder that's built right on top of LangChain and they have all these functionalities for building like supervisor agents and orchestration agents and things like that and you can do all that in n8n as well for sure um so yeah uh yeah Matt said thanks man sorry I just joined yeah you're totally good I hope that quick explanation helped you understand what this agent can do um so yeah uh zerun asked what about Relevance AI you
know I've heard of relevance AI but I haven't actually used it before if I had the time to try out every single AI agent framework and Tool I def defitely would cuz there's so many amazing ones out there and I have heard good things about relevance I just haven't gotten a chance to try it um yeah rx3 said got to set up an automation to delay ads I guess I do I I think when I kick off the live stream I can completely turn off ads um I'll have to look into that I might just
do that honestly um so yeah all right one more question and then I will uh continue on here so last one if adding LangChain or LangGraph where would they be attached into um so typically when I use these frameworks like LangGraph I'll implement them once I'm actually custom coding my agent so in that road map that I showed I always start by prototyping my agent with something like n8n and then I'll get into custom coding my agent which by the way you don't always have to go and quote unquote graduate from no
code to coding um but when you do and you start coding your agent that is when you would use something like uh LangGraph um so yeah let me continue on here so a lot of questions but this is good this is what I wanted to do I wanted to um still build something here but also give a ton of time for questions so I think this is good here um so yeah we have our agent here and we could actually like talk to it already um so like I can um save this here and
go into the chat window here and I can say like hello how are you and we should actually get a response from the agent there we go boom so we have an agent that we can actually talk to it just can't really do anything for us at this point but it is using Gemini 2.0 Flash and we can have a conversation with it here the chat memory is working I can say uh what did I just say and it should repeat to me yep you just said hello how are you perfect okay so the
chat memory is working the model's working now we can continue adding on some of the tools to actually have it do cool things with GitHub and so the first thing that I actually want to do is I want to add in a system prompt as promised um so I'm going to go back into our tools agent node right here I already added the option for the system message and now I'm going to paste this in so I'm going to go into my editor here and I already have this copied from the
example workflow that I have on my right monitor here because I don't want to spend the five minutes to type this from scratch with you um but essentially what I do is I set the stage like I already kind of spoke about so you're a coding expert with access to GitHub you can assist users answer questions about repos and then I also give it some instructions so like make sure you actually have the link to the repository before you try to do anything just to avoid any hallucinations there and then the way that I make
sure that the agent can continuously remember which repository we're actually talking about is I have it start its answer with the repo link and then give the answer so that it's constantly available in the conversation memory what repo we're actually looking at and that's a little Jank but that's why we're prototyping here we don't really care about little details like this at first maybe later when we make our full production ready agent there would be a separate place to input the GitHub link and then that'd be used in the conversation but right now we're just
kind of baking it right in the conversation and we're totally good with that um that works for now as we're doing our prototype so now we have this system message that sets the stage for our agent to understand how to use all the tools that we're going to hook up now um so the first thing that we want to do is we want to add a tool to get the file structure of a repository so right now if I were to ask it let me actually get a link here so I have this link to
bolt.diy which is our open-source AI coding assistant and I'm going to ask it a question about that and right now it is going to completely fail obviously it doesn't have any information on how to actually get something from GitHub so I'll paste this in I'll say describe this repo and it's going to do nothing it doesn't have the right answer here um yeah so this repository appears to be related to a project called Bolt by StackBlitz Labs so at this point it's just grabbing whatever information it can uh from the URL not
giving me an answer that is actually useful so now it is our job to give it the tools so it can actually go out to GitHub and pull the file structure of the repo look at individual files and be able to answer that kind of question like maybe to describe the project we want it to look at the README that we have which is um this file right here that is at the start of the repo where we describe everything like this, this gives the agent everything it needs to quickly tell us what the project
is about and so let's go back and let's add a tool so what we want to do right here for our tool is we want to call an n8n workflow there are no GitHub tools here so we don't have anything where we can directly attach a node like we could with Gmail so we have to make a custom n8n workflow it's going to be kind of like a sub workflow that'll just live right down here that the agent will call to get the information that it needs from a GitHub repo so I'm going to add
this right here call n8n workflow and um I'm going to rename this here to uh call get repo structure tool so this is the name of our tool and then this name right here is actually given to the large language model so this is just the name of our node and then this this is the name that tells the agent hey here's this tool that you have it's called blah and this is what you can use to do this thing so we give it the name and I can just say get repo structure and then
the description this is where we tell it to call this tool to get the entire structure of a GitHub repo given the URL of the repo and so these two pieces of information this is what is given to the model for it to know when and how to use this tool and then for the workflow here I'm actually going to do a little bit of fancy syntax here to reference the current ID of the workflow that I am in so this workflow is going to be right within here it's not going to be a separate
n8n workflow that it has to call it's going to basically call itself and then we're going to use the trigger execute workflow right here to set up our tool and so when it decides decides to call this tool and it calls itself it calls its own ID for the workflow that is going to trigger this right here so we're going to kind of have two separate parts to our agent we're going to have the chat part that uses the agent and then the tools part which is going to be triggered using this trigger right here
which is called from this node right here so I hope that makes sense how that works you can also set up an entirely separate anit and workflow if you want to have your tool completely separate from your agent and then you just reference the ID of that workflow which by the way the way that you get that is just this text right here so n8n do your domain slw workflow slash and then this is the ID right here so that's how you get the ID you can see that matches exactly with what we see right
here with the preview of the workflow ID so there you go boom now we have that set up and it is time to define the parameters that we want for our tool so right now when we call the get repo structure we're not telling it what to actually give to this tool but we obviously have to give it the URL of the GitHub repository that we want to get the repo structure for and so the way that we do that is let me look at my example here so the field to return we'll get to
this in a second here but I'm just going to say structure and then for the extra workflow inputs I'm going to add a value and first of all we need the session ID U and that is going to be the session ID that we get from our chat node right here so we get the session ID and then okay so then specify input schema so I'm going to check this at the bottom this is where we Define what we want the model to give as parameters to our tool so you can if you're familiar with
coding up your agents in Python this is basically the parameters for the tool function that you want to specify right here um and so what I'm going to do for this is I'm going to copy the JSON that I have from my example it's just the repository URL this is the only piece of information that we need the agent to give to this function because with that it's now able to pull the entire structure of the repository right here and then we give a little description of what this value actually represents.
So again this entire node that we're setting up right here is just defining for the large language model when and how to use this function we say hey this is the name of the function this is when you call it and then here is the argument that you have to give it you have to specify the repository URL here and so now when we call this tool this trigger is going to get called and it's going to have the repository URL as the input parameter um so yeah I think with that let me pause and see if there are
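For reference, that input schema is roughly the following; I'm writing it as a Python dict for readability, and only the repository_url field comes from the stream, the exact wording of the description is illustrative:

```python
# Rough shape of the "specify input schema" JSON for the get repo structure tool.
tool_input_schema = {
    "type": "object",
    "properties": {
        "repository_url": {
            "type": "string",
            "description": "Full URL of the GitHub repository, e.g. https://github.com/stackblitz-labs/bolt.diy",
        }
    },
    "required": ["repository_url"],
}
```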
any questions that's kind of like honestly the most complicated part of setting up this entire agent is just getting to that point where we have an understanding of how this node is actually used to invoke a tool and how we initially set up up that tool and then the rest of this is just going to be a walk in the park because it's actually a pretty simple workflow overall to make this tool happen um so yeah let me pause and then scroll up and see if there are any questions here um your NN expert was
very cool by the way thank you very much I appreciate that um yeah let me keep scrolling does the agent node in n8n have MCP for multiple data sources so the Model Context Protocol from Claude is not implemented in the n8n agent node um but essentially these tools that you can add onto the agent do the same thing I mean the whole point of the Model Context Protocol is you can define these little functions that do something with a service like it'll uh send a message in Slack or pull a Gmail email and then you attach those
into Claude Desktop so that agent there your Claude agent has access to those tools and essentially that's what we're doing right here with the tools node is we are creating this agent like you would have Claude in Claude Desktop and then attaching all these tools here just like you would have those different servers for MCP um so I hope that makes sense how can memory be improved semantic and long-term memory uh that is a good question I am actually going to dive into that um later on in this series.
It's a bit longer to answer that question so I'll save that one but that is a good question I will definitely be diving into that um throughout the series uh next question LangGraph or Pydantic AI okay I'm actually really glad that you asked this question because my answer is both they actually work very very well together because Pydantic AI is a framework to build AI agents and then LangGraph is more of a framework to orchestrate AI agents because you can have these different parts of your workflow that call different agents you have
human in the loop so like before an agent takes a specific action you can have human approval all these different tools to manage agents in a workflow and so they actually work very very well together the original goal of LangGraph was actually to work with LangChain and LangChain is more similar to Pydantic AI because it is also a framework to build AI agents but um even the founder of LangChain himself uh Harrison Chase he'll tell you that LangChain is a little bit of a mess and so Pydantic AI is a
really good replacement while still leveraging LangGraph because LangGraph is incredible and kind of their next evolution for their whole platform uh so I hope that makes sense um Matt said have you looked at the leaked system prompts would you steal them I assume you're talking about leaked system prompts for AI coding assistants like I think the um prompt for v0 was leaked, v0 is an AI coding assistant from Vercel that helps you create front-end uh Next.js components and React components um I've looked at it a little bit there's definitely a lot to
learn from how people set up their system prompts cuz there's so much that goes into describing exactly how the llm should operate and giving it examples for what it should output and things like that there's a lot to learn from them also um the Apple intelligence prompt was leaked I don't know if you guys know that super fascinating because it's actually like a little silly the entire prompt like there's a part of the Apple intelligence prompt that literally says do not hallucinate like they just told the llm please don't hallucinate like they're just begging to
the LLM don't make stuff up it's kind of silly like it's a super professional system and their prompt just looks like any one of us made it which is super funny um so yeah uh Carl said thanks for your great content appreciate it Carl um Rita said will you show how to use Postgres for the AI agent memory we are going to get into that that's going to be a part of this live stream I'm going to show how to um use Postgres and specifically with Supabase so we will definitely get there uh
what else we got uh John said should we start trying to add the length of the system prompt plus the tool names and descriptions um plus the length of the user question I'm a little confused by the question because the tool names and descriptions are actually put into the system prompt um basically under the hood so when I Define right here like hey this is my tool to get the structure of a repository here's how to use it or when to use it and then here's the parameters that I need all of this is essentially
baked into the system prompt so we have what I wrote right here explicitly for the system prompt and then all that other stuff here is basically built on top of it to give the LLM the context that it needs to use these tools so I'm not quite sure I follow your question maybe you could expand on that um yeah so let's see do I have a video on installing n8n locally uh I don't actually however if you just Google installing n8n on DigitalOcean they have this insanely clear documentation for how to do that
DigitalOcean is my recommended platform for installing n8n so I would definitely check that out uh I didn't make a video on that specifically just because of how easy the documentation makes it, I didn't really feel it was worth it like I wouldn't be providing enough value in making that video just because the documentation is so clear there's also a lot of other YouTubers that have made that video um so you can check that out if you want like if you Google, or not Google but if you go into YouTube and search AI Workshop um
n8n local install he's got an awesome video that I'd recommend checking out um yeah let's see what is the best memory for AI agents Postgres, Redis, or is window buffer memory enough so window buffer memory what I'm using right here for my prototype is not enough, well, it is enough for a prototype, but when you actually build an agent for production you're going to want to use Postgres or Redis I typically just use Postgres with Supabase myself and I'll show that later on as well in the stream here but yeah definitely Redis is
a good option as well um because that's more of like a cache platform versus a um you know permanent database but it's super fast and it's good to use as well um for something that can be more ephemeral like chat history um for just like a temporary conversation so yeah that'd be my recommendation there.
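Just to sketch what that production chat memory looks like, here's a tiny Postgres example; this is illustrative, the connection string, table, and column names are my own, and it assumes a chat_messages table already exists in something like a Supabase Postgres database:

```python
# Minimal sketch of Postgres-backed chat memory; DSN, table, and columns are placeholders.
import psycopg2

conn = psycopg2.connect("postgresql://user:password@host:5432/postgres")

def save_message(session_id: str, role: str, content: str) -> None:
    """Append one message to the conversation history for a session."""
    with conn, conn.cursor() as cur:
        cur.execute(
            "INSERT INTO chat_messages (session_id, role, content) VALUES (%s, %s, %s)",
            (session_id, role, content),
        )

def load_history(session_id: str, limit: int = 20) -> list[tuple[str, str]]:
    """Fetch the most recent messages for a session, oldest first."""
    with conn, conn.cursor() as cur:
        cur.execute(
            "SELECT role, content FROM chat_messages"
            " WHERE session_id = %s ORDER BY created_at DESC LIMIT %s",
            (session_id, limit),
        )
        return list(reversed(cur.fetchall()))
```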
Uh hey Juan welcome from Spain super cool that you are here uh let's see how can the call get repo structure tool know which trigger to execute um yeah so that's going off of the workflow ID so when I specify the workflow ID of this current workflow this is kind of like that dynamic syntax to get the ID of the current workflow it knows based on that to call this trigger right here the execute workflow trigger and then if I had a separate n8n workflow all I'd have to do is change this ID to the ID of that n8n workflow and then it would call the execute workflow trigger over there so I hope that makes sense that's how these things are tied together um so yeah let me keep scrolling here can you
make the input to an agent workflow a picture yes you can I'm not going to cover this right now because this agent is going to be focused just on GitHub repositories but you can do image recognition within agents in n8n um yeah can we have multiple sub workflow triggers or limited to one so yes you can call uh well actually I I think I understand your question you're saying could I have multiple execute workflow triggers within just this workflow the answer is no you can only have one execute workflow trigger but as you about to
see when I set up my two tools here one to get the structure of repo and one to analyze a specific file I have a way to get around this where you have one execute workflow trigger but it can actually Branch off into the different tools that we have and that's going to be using an extra workflow input here where I'm basically going to specify which tool I actually want to go down the route of in this work this sub workflow here so we'll get to that in a second here um let's see really like
Pydantic AI but struggling to integrate it with n8n workflows any tips big follower and fan by the way well first of all thank you very much and that is a good question so when I use a coding framework like Pydantic AI or LangChain or LlamaIndex whatever and I'm integrating it with n8n the way that I always do it is I use n8n for my agent tools so I'll use something like Pydantic AI to set up all of the agent logic and the chat memory and the system prompt and all of that and then
I'll use n8n, basically what I'll do is I'll take my n8n workflows and I'll turn them into webhooks so if I search for the webhook trigger here um you'll see, I'll just click this here, the webhook trigger is different than these two triggers because it's a way to actually turn your workflow into an API endpoint so this is an endpoint that I can now call that's going to invoke the rest of my workflow and so that is how I set up my n8n tools to be API endpoints that I can
call as a tool within Pydantic AI so within Pydantic AI I basically set up this super simple function that takes in any parameters that you need like the repository URL for example and then that function's sole job is just to invoke the webhook for the n8n tool so all the logic is still over in the n8n workflow while I'm calling it from Pydantic AI and that's something that I think I'll definitely be showing throughout this miniseries as well because there's definitely a time and place to use n8n even when you have everything coded up in Pydantic AI and you're building out your production agent.
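As a rough sketch of that pattern (not the exact code from any of my projects; the webhook URL and the model string are placeholders):

```python
# Minimal sketch of the "n8n webhook as a Pydantic AI tool" pattern; illustrative only.
import requests
from pydantic_ai import Agent

N8N_WEBHOOK_URL = "https://n8n.yourdomain.com/webhook/get-repo-structure"  # hypothetical endpoint

agent = Agent(
    "openai:gpt-4o",  # any model supported by Pydantic AI would work here
    system_prompt="You are a coding expert with access to GitHub tools.",
)

@agent.tool_plain
def get_repo_structure(repository_url: str) -> str:
    """Get the entire file structure of a GitHub repo given the URL of the repo."""
    # The tool's only job is to invoke the n8n workflow via its webhook;
    # all of the real logic stays inside n8n.
    response = requests.post(
        N8N_WEBHOOK_URL, json={"repository_url": repository_url}, timeout=60
    )
    response.raise_for_status()
    return response.text

result = agent.run_sync("Describe https://github.com/stackblitz-labs/bolt.diy")
print(result.data)
```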
Um so I hope that makes sense I always use webhook n8n workflows for tools for my Pydantic AI agents um and I even have a video on my channel where I do that not with Pydantic AI but with LangChain because Pydantic AI wasn't even released when I made that video but I show an example um I don't know if I want to go and try to find that video right now but I have a video on my channel where I
use LangChain to build out my agent and then I use n8n for the tools um so yeah I hope that helps let's see here any other questions yeah rule said I tried n8n a while back not finding it the best fit for my project yeah I mean it's all up to personal preference like if you're a very technical person and you understand very easily how to build your agents with code maybe n8n isn't always going to be the best um but yeah for me I still find it useful especially for agent tools
like I was just saying and for prototyping but I totally get you that like if you have a more complex use case like you're saying with working in your code bases yeah maybe it isn't going to be the most helpful but it still could be just yeah kind of comes up to personal preference here um will recording of This live be available yes it will be definitely will be um what else do we got here what are we covering today this is the set up in screen or is there coming more so what we're building
right now is a prototype for an AI agent that can consume an entire GitHub repository and it'll be able to answer questions over specific files and the general structure of the repo overall so that's what we're building right now it's a prototype with n8n as a part of a miniseries that I started where I am building an entire AI agent from scratch going through my entire process so that's what's going on right here um Kamar asks will this work for private Git repos as well this will not this is limited to only public repositories which
I mean in my mind is okay because the goal here is to you find some open source repo that you want to understand and learn how to implement for yourself that is going to be public so you can just paste that URL into this agent and it will help you understand that repo and you can ask questions about that repo so just public repos um okay so hungry list thanks for the live session you're welcome really interesting how do you make your agents call for other agents is it in the system prompt or in the
tools uh so very good question when you have a primary agent that you want to invoke sub-agents usually you'll make those sub-agents as a tool and so you'd have another workflow here that would do something specific with another agent like maybe you have some agent that is an expert at um some specific part of understanding GitHub repositories and so you'd have that agent called in the sub-workflow here and then your primary agent would have a tool attached to call that agent and you would describe in the description right here what this sub-
agent is good at doing and when you would call it so yeah I generally set those up as tools there uh okay blue Google said Valkey a Redis fork is greater than Redis um yeah I could get behind that, Redis is kind of unfortunate, okay so for those of you who don't know Redis is an in-memory cache platform, so it's a way, I mean caching is just super important with software development overall, it's this wonderful platform that started out as open source and then they went commercial so they basically forked their own repo and made
it a private version that they continued to develop out and then they just left the open-source Redis hanging forever which is kind of sad but then Valkey is a project that took Redis and continued to develop it focusing still on open source which is super cool and you all know that I'm a huge fan of open source with bolt.diy and all the agents that I have open sourced in the Live Agent Studio and so I definitely respect Valkey for that um yeah so timers asked I made my own AI how can I
import that into this maybe you can expand on your question a bit like did you actually build your own generative AI model that'd be super cool but yeah let me know a bit more um John said that answers my question perfectly okay awesome I'm glad that that did I'm mostly using local models which with much smaller cont context lengths so the context length of the model needs to be longer than all of the tools added that is correct yep yep so if you have a model that can accept 4,000 tokens for its uh window but
then your system prompt is also 4,000 tokens then you wouldn't be able to add any tools because that also needs to be included in the context window and so that would then go over the limit of 4,000 so yes you have that completely right is there a way to get an agent to log in using an Auth0 popup um that's a good question I think what you're getting at is the idea of authorization so if you build an agent that like does something with someone's Gmail for example that agent needs to have
some way to actually access that user's Gmail and typically the way that you would do that is you would send the user an OAuth authorization link when the agent needs to use their Gmail then the user would have to click on that, authorize the agent, and then the agent could continue with the action uh that it was going to do in Gmail I think that answers your question maybe you could expand on that as well um but yeah that is a good question because that's something super important in general with building
agents is the whole idea of authorization um and that's something that I'm actually still working on for the live agent Studio here so um as you can see all of these agents that I have right now none of them actually require user authorization like it's not using someone's slack or using someone's Gmail because that actually takes the agent to a whole new level where it has to manage credentials for these user Services as it's doing these actions for them and that that is something I'm actually looking at implementing um but uh it's it's a bit
challenging and so yeah that's kind of like to come soon but that's super important for a lot of agents, when you have multiple users using it, it has to be able to handle that kind of authorization so yeah definitely a good question here um so yeah let me scroll down also let me fix my lighting a little bit here there we go all right so let me scroll down a bit for local vector storage do you recommend Supabase or is Qdrant also a good way of achieving good RAG outputs so both are good I typically
go with Supabase though it's not better than Qdrant maybe equivalent um but it's just really nice because with Supabase when you use pgvector for RAG you also have your SQL database you can do everything else with Supabase like your chat memory as well that we'll get to while still using pgvector for RAG so you just keep everything on one platform which is why I typically recommend using Supabase instead of Qdrant even though yeah like I said Qdrant is great also let me take a quick coffee break here a lot of questions coming
in I really appreciate it guys I might have to move on to the rest of this workflow in a little bit here though um Rita said hi Cole for local vector storage do you recommend, oh no that's the same question, I scrolled back up a little bit um all right harsh said hi Cole love your content thank you very very much what is your view on production use of locally hosted LLMs I always advocate for self-hosted due to privacy issues of SaaS solutions so very good question and the answer is it just depends a
lot on the use case I mean there are a ton of use cases even for big Enterprises in production where it's okay to go with a cloud large language model and you don't have issues of data privacy uh most of the time if you have a front-facing agent like a customer support agent for example where its knowledge base is your e-commerce store let's say it's an e-commerce site like every all the data there is public and so um oops uh sorry I clicked on the resume I had to click on the resume ad break again
yeah so where was I, yeah, so like in that case if it's like a customer facing bot where all of the knowledge is not private a lot of times it actually makes sense to use cloud LLMs because you don't have an issue of data privacy but if it's some internal application that is like an AI agent that helps you with revenue reports or something like that, at that point you have a ton of sensitive data you definitely want to use local LLMs and even if that's a SaaS where you help other businesses
with these agents for revenue reporting yeah like they're not going to want their data going out to the cloud and so you would want to have local LLMs for that as well so I hope that makes sense it really comes down to just thinking about like does the data that my agent has for its knowledge base or what it goes and fetches with tools does that have to be private or not um wow okay there are so many questions okay what I'm going to do right now I appreciate all of
your questions I'm going to continue um for now building out this agent and then I'll get back to more questions CU I want to make sure that I'm still chatting with all you but then also still delivering value with this agent so let me let me continue with that here so next up what we want to do we have this tool that is going to be triggered now but it's not going to actually do anything in fact I can even test this here I'm going to restart the conversation well actually let me copy this here
I'm going to restart the conversation I'm going to paste this here oh wow that did not work I don't know why it pasted the wrong thing let me copy this link to the repo and go back here all right so I'm going to say describe this repo um and at this point boom there we go so it executed the tool it just didn't get anything back because this doesn't do anything yet but you can see that it processed the response and called this so now we can set up the rest of our workflow here to actually
um do something with GitHub and so what I'm going to do here first of all is I'm going to do a little bit of coding so I promise that n8n is no/low code but the low code part is important, there is a bit of a code component to this agent right here so I'm going to run some JavaScript code and you'll see the purpose of that in a second here so let me zoom out a little bit and add in the code actually hold on why is that not working I literally clicked on the
plus and then clicked on JavaScript and it's not adding it in let me try refreshing here JS come on add it in there we go okay I have no idea why that didn't work that was kind of scary it literally wouldn't let me add in a node um all right there we go so I added in this node and then here is my function right here so I'm not going to explain this in detail here especially because I've already gone so long with questions um which is awesome by the way I'm just saying I
want to go through this a little quickly, but essentially I get this repository URL as the parameter for the tool call, and I'm passing this repository URL (this is the syntax to get the URL) into this function right here that is going to use a regular expression, which is just a fancy way to extract specific text from a string. And what I want to extract is the organization and the repo from the GitHub URL that I'm giving it. So if I go back to this repository link right here, you can see that the organization is stackblitz-labs and the repository name is bolt.diy, and I want to separate those things out because that's what I'm going to use with the GitHub API to make the calls to actually get the file structure and read specific files from the repository. So that's what this fancy JavaScript does. Yeah, so I could explain this a little bit more, but the regex, I mean, I just got this from Claude, I'm going to be totally honest, I didn't come up with this regular expression myself. AI is actually very good at making these super complicated
regular expressions, and so this entire function I actually got from Claude. So that's a huge tip for you: if you're not super familiar with coding but it becomes obvious that you need a little bit of a code snippet in your n8n workflows, just use Claude or o1, whatever model, and have it write that code for you. That's what I did for this, which was super helpful. So now I have the organization and I have the repo name as well.
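Just to make that concrete, here's a minimal sketch of what a code node like this can look like. The actual regex in my workflow came from Claude, so treat this as an illustrative equivalent rather than the literal function, and the field names (repository_url in, organization and repo out) are just the ones I've been describing, not guaranteed spellings.

```javascript
// Illustrative n8n Code node: pull the organization and repo name out of a GitHub URL.
// The real function came from Claude; this is a hand-written equivalent.
const repositoryUrl = $input.first().json.repository_url; // e.g. https://github.com/stackblitz-labs/bolt.diy

// Match github.com/<organization>/<repo>, ignoring an optional trailing ".git" or "/"
const match = repositoryUrl.match(/github\.com\/([^\/]+)\/([^\/#?]+?)(?:\.git)?\/?$/);

if (!match) {
  throw new Error(`Could not parse GitHub URL: ${repositoryUrl}`);
}

return [{
  json: {
    organization: match[1], // "stackblitz-labs"
    repo: match[2],         // "bolt.diy"
  },
}];
```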
Now I can actually add in the HTTP request here, so if I type in HTTP I can make a request over the internet, and I'm going to use the GitHub API to do that. So I'm going to enter an expression here, and what I'm doing is crafting a URL, maybe I can zoom in a little bit just so you can see this. I'm crafting a URL using the GitHub API, and the syntax for this is /repos/ then the organization that I just pulled out of the URL, slash the repo which I also just pulled out of the URL, and then /git/trees/main. So I'm going off of the main branch here, I'm assuming that the repository has a main branch, not master or anything else, so it's a little bit of an assumption, but again it's just a prototype so that's totally okay. So I'm getting that, and now what I want to do is actually add authentication here, so I'm going to pick a predefined credential type and I can select GitHub for my credential, the GitHub API, and all I have to do for my credentials
here is give my user and my access token so this your username and then your access token and the way that you get your GitHub access token by the way for all credentials with n8n they always have very good documentation for how to get that specific credential so you just go to the n8n documentation page here and it'll tell you exactly how to get your API access token so you just click on this link right here and then boom you have your credentials it's so so easy I can even show that really quickly here if
I go to my settings and I go down to, oh where was it, developer settings, and click on personal access tokens. I'm not going to go into it right here because you'll see all my tokens, but you'll just go to the classic tokens, or you can do fine-grained if you want so you're only allowing the token to do specific things in GitHub, but you can create your token here, super easy, and then you just paste that into here. So that's how you create your GitHub authentication, super simple, and then
for this get request there's actually no parameters like it's this basic because this is the API that's going to get all of the files within the main branch here super easy um and so now what I can do is add another code block here to actually format everything um because what the response that I get from this request and I'll show this I'll show an example run in a second here the response that that I get from this request is actually a bunch of Json like it's super complicated I want to format it in a
nice way for the large language model so that it can actually understand what's given to it. And I mean, LLMs can understand JSON pretty well, but since it's a natural language model it still helps to structure it as natural language, I find it always understands that better than JSON, even though large language models are very impressive with how much they can take in a bunch of jumbled JSON and understand that as well. So I'm going to go back into this code block here, and the JavaScript, again I just used Claude for
this function um but it's super basic overall so I I take in everything that I got from from my request and then there's this tree attribute of the Json response from the GitHub API and this tree this gives me all of the files and folders and so I just create this structure with a simple filter and map so I exclude specific folders which shouldn't be in a GitHub repo anyway way like if you commit your node modules you're definitely making a mistake um but it excludes these just in case and then it maps all of
the items, so each one is either going to be a folder or a file. I'll show what this output looks like in a little bit, but this is just the way to do that, and then we're pretty good here. The other thing that I'll say is that the value it returns is called structure, so this is the key name for the output of this JavaScript node, and that matches with this value right here, so I said I'm just going to set this to structure and I'll come back to it. Now I'm coming back to it: this is the field to return, so I am telling the large language model that the response it gets from calling the tool is going to be within the field called structure, and sure enough that matches with what I have right here, structure, so I hope that makes sense. And so now we actually have this full tool, so I'm going to save it, and I'm going to copy the URL again and go back into full screen.
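And here's a rough sketch of what that formatting node boils down to, assuming the HTTP node returns GitHub's standard git/trees response (a tree array of { path, type } entries, where type is 'tree' for folders and 'blob' for files). The excluded folder list and field names are illustrative, not the exact ones in my workflow. One aside: without ?recursive=1 the trees endpoint only returns the top level of the repo, which lines up with what we see in the test in a second here.

```javascript
// Illustrative version of the formatting Code node: take the GitHub git/trees response
// and turn it into a readable listing for the LLM, returned under the key "structure".
const response = $input.first().json;                 // output of the HTTP Request node
const excluded = ['node_modules', '.git', 'dist'];    // folders that shouldn't matter anyway

const lines = (response.tree || [])
  .filter(item => !excluded.some(folder => item.path.split('/').includes(folder)))
  .map(item => (item.type === 'tree' ? `[folder] ${item.path}` : `[file] ${item.path}`));

return [{
  json: {
    structure: lines.join('\n'), // "structure" matches the tool's "field to return"
  },
}];
```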
I should be going into full screen as much as I can because it probably just makes it easier to see. And I'm going to paste this in, and I'll just say what are some of the files in this repo. I hope it doesn't spit out literally everything to me, but yeah, let's make sure this works, and it might fail the first time, maybe I messed up a little bit... oh no it didn't, it actually worked perfectly. All right, first try, we got it, like even though I'm using a template
on my right monitor here I still thought that I'd mess up something but it actually worked perfectly um and so yeah you can see that using the GitHub URL like I said it starts its messages with that but here are the files in the repo and it didn't give me all the files it looks like it just gave me the ones that are at the root of the repo but that's totally okay because we can see that this is definitely working and if I go to the executions here I can take a look at my
tool execution. So if I go into the editor, it's kind of interesting, I wish n8n did this a little bit differently, but when I do my example run you can see all the green around the boxes here, like it's showing me the execution that I just did for the agent, this is what it called the tool with, the repository URL, and then this is the response that it got. And I know that it doesn't render the newlines properly, it actually just has the literal \n characters, so it looks like a mess, but if you render out the newlines here, this is a very nice view of all of the folders and files within the repo. So that's the response that we got, but it doesn't show me the output from the tool itself, I have to actually go look separately because it counts as a separate execution, so we have this execution for the agent and then this execution for the tool. But I can take a look at this and see, for my query, this is the repository URL, this is the session ID
which we're not using yet but we will in a bit. And then I go on to the code, and you can see this is where we extract the organization and the repository, and then I have my HTTP request, and this is where I use the organization and repo to call the GitHub API, and this is where you can see that it's just a huge mess of JSON, like this is what we got back from the call, because this is the JSON that represents all of the files and folders in this repository. And so now I
have to take that huge mess and turn it into this nice little JSON right here, that is, the response with everything formatted nicely in natural language for the LLM. So there we go, we now have an LLM that can understand the structure of a repository, which is super neat. But the other thing that we want to do is make it so that it can actually read specific files as well, so I'm going to implement that second tool right now, so that it can understand file structures of the repo
and also view individual files and then we will be done with our initial prototype and then I'll go back to answering some more questions so let's get through this right now so the first thing that I want to do is I want to actually distinguish between this tool and the next tool that I'm going to add and so I'm going to add another extra workflow input here so this is going to be another value that is passed into my tool I'm going to call this action and the action is going to be the exact same
as the name of the tool itself. So this is getting around the issue that I spoke to maybe 10 minutes ago with a really good question that someone put in the chat, the issue where you can only have one execute workflow trigger in an n8n workflow. So if I have multiple tools I can't actually have multiple execute workflow triggers, but what I can do is manually specify which action I'm taking with this tool call, and then within here I can actually branch out based on what that value is, so you'll see
exactly what I mean in just a little bit here, this is how we're going to make it so that we can have multiple tools that are all handled within the same sub-workflow of our n8n agent workflow. So now I'm going to add another tool here, it's going to be another Call n8n Workflow tool, and for the name of this one I'm going to just copy this from my example, I want to move a little bit quicker here just because I want to get back to questions for you all. So it's going to be called the get repo file content tool, the name of this one is going to be get file content, and then for the description, use this tool to get the contents of a specific file in the GitHub repository. And then again for the workflow we just want to call that same sub-workflow, and the field to return is just going to be data. So like structure was the name of the key returned by the other tool, this tool is going to return something called data because it's
just going to be the data for the file essentially and then we're going to add the similar uh extra workflow input so first of all we have the session ID which I guess I don't really need to use this in this prototype but once we integrate with the live agent Studio this is going to be really important um so we'll get to that later and then for the action instead of get repo structure it is going to be get file content and then for the input schema here again I'm just going to copy this to
move a little bit faster here especially because we already set up a tool um for the repository URL this is the exact same as the other parameter but now we have a second input to this tool so the first tool only cares about the repository URL but this one also needs the path to the file that it's going to analyze because it's going to combine the repository URL with the file path to get that full path to the raw file to get the contents of to use that as the response back to the user to
answer a question about the file or whatever and I'm also giving it some examples here because I want to make it clear to the large language model that this is a relative path based on the the root of the repo like for let me just Give an example here the read me for example that we have right here this is at the root of the repository as in it is not in any one of these folders here and so the path to it is literally just the name of the file readme.md but then if I
want to go into a folder, like if I want to look at the contents of this file right here, the path is going to be something like .github/workflows/ and then the file name, and that's kind of one of the examples I give here, like the icon.png where you have to go into the public folder. So that relative path is going to be combined with the repository URL to get the full URL for the HTTP request for that file, so I hope that makes sense. So we have our tool set up now, super neat.
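Just to make the two tool signatures concrete before we wire up the routing, here's roughly what the model ends up sending into the shared sub-workflow for each tool. The exact field spellings depend on how you named the workflow inputs, so treat these as illustrative.

```javascript
// Hypothetical examples of the inputs each tool call passes into the shared sub-workflow.
const getRepoStructureCall = {
  action: 'get_repo_structure',
  repository_url: 'https://github.com/stackblitz-labs/bolt.diy',
  session_id: 'session-789', // passed through now, used later for the Live Agent Studio
};

const getFileContentCall = {
  action: 'get_file_content',
  repository_url: 'https://github.com/stackblitz-labs/bolt.diy',
  session_id: 'session-789',
  file_path: 'readme.md', // relative to the repo root, e.g. a path into the public folder for nested files
};
```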
Now we need to edit the sub-workflow here, because we have to branch based on whether we are using the get repo structure tool or the get repo file content tool, so a key distinction there, and we're actually going to make that split right here. No matter which tool we're using, we want to extract the organization and repo, because whether we're getting the structure of a repo or the contents of a specific file we need those two pieces of information, so no matter the tool we do this, but then this is where we use a Switch to route based on which tool we're actually using. And so for the value here we're going to use the action that is given in the execute workflow trigger, so this value right here, the action, that is what I am referencing right here in this Switch statement. And so if the value is equal to get repo structure then we'll go to that first route, and then we'll have another route right here if that same value is equal to get file content. And again I'm switching this to an expression, so if it's fixed that means it's just raw text, if it's an expression that means it's actually using JavaScript to resolve the value, and that's what will actually get me the action from the workflow trigger, and if that is get file content then we go to the second route. And that is everything that we have to set up here, so now we have these two branches: I'm going to have my original get repo structure tool go down this first path right here.
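If it helps to see that routing as code, this is the logic the Switch node implements, written out as a plain sketch rather than what's actually in the workflow (the workflow just uses the Switch node's value expression), and it assumes the action field we defined on the tools comes through on the trigger item.

```javascript
// Equivalent of the Switch node, written out as code for clarity only.
const { action } = $input.first().json;

switch (action) {
  case 'get_repo_structure':
    // route 1: call /repos/{org}/{repo}/git/trees/main and format the tree
    break;
  case 'get_file_content':
    // route 2: fetch https://raw.githubusercontent.com/{org}/{repo}/main/{file_path}
    break;
  default:
    throw new Error(`Unknown action: ${action}`);
}
```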
And then this new tool is going to go down the second path right here, because if the action is get file content, that's when we need to actually get the file content. This tool is actually simpler, thank goodness, we don't even have to have a code node at the end here, all we're going to need is an HTTP request, and the cool thing is this one is also just using the GitHub API in a very similar way. So the URL for this is kind of the most complicated part, so let me go into the expression editor here and paste this in, and then I'll explain it really quickly. The URL is raw.githubusercontent.com, and then you have the organization and the repo just like we have with that other request, and then we're just looking at the main branch, again we're assuming that main is the primary branch of the repo, that's a fine assumption to make as we're just building a prototype, and then we have the file path right here. So whatever is passed into our execute workflow trigger as the file path, that is what we use at the end of this URL. And I'll even show you that this looks very familiar: if you click on the Raw button right here within a file in GitHub, you'll see that, oh wait a second, that looks exactly like what I formatted in n8n, we have raw.githubusercontent.com slash the organization slash the repo name, and then we have main slash and then the path to the file, so that matches exactly what we see right here.
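Here's the same route as a small standalone sketch in plain JavaScript (Node 18+ where fetch is global), just to show the URL construction and the data key the tool expects. In the workflow itself this is only an HTTP Request node with a URL expression, and the regex and field names are the same assumed ones as before.

```javascript
// Sketch of the get file content route: build the raw.githubusercontent.com URL
// and return the file body under "data", the tool's configured field to return.
async function getFileContent(repositoryUrl, filePath) {
  const match = repositoryUrl.match(/github\.com\/([^\/]+)\/([^\/#?]+?)(?:\.git)?\/?$/);
  if (!match) throw new Error(`Could not parse GitHub URL: ${repositoryUrl}`);
  const [, organization, repo] = match;

  // Prototype assumption: the primary branch is "main".
  const rawUrl = `https://raw.githubusercontent.com/${organization}/${repo}/main/${filePath}`;

  const response = await fetch(rawUrl);
  if (!response.ok) throw new Error(`GitHub returned ${response.status} for ${rawUrl}`);

  return { data: await response.text() };
}

// e.g. getFileContent('https://github.com/stackblitz-labs/bolt.diy', '.env.example')
```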
So there you go, that's how I figured out initially how to format this URL, just a little bit of insight into the research that I did to make this possible. So let me make this look all nice, and then let's see, what else do I need to do, oh yeah, I have to add in my GitHub credential again, so again predefined credential type, and I already set up GitHub and showed you how to do that, so I'm going to select this here and there we go. And then the output field for this HTTP
request is going to be a key called data and again that is what I said I'm expecting in this tool right here the field to return is data and so now I'm going to save this and we can actually test it out and if this works first try like the other tool did then we already have the prototype for our agent built out there's a lot of things that are missing like it might be nice to actually Implement a rag pipeline to process all of the repositories that we're asking questions about but that is going
to come later in the series right now for a simple prototype this will do like we can ask questions basically any question that we want over a repository and as long as it can understand which file to look at to answer your question we don't even necessarily need rag we don't have to have that kind of vector lookup if it understands the full file structure and the file names are intuitive enough and we'll definitely add that later but yeah this will work pretty well so I'm going to go into the chat here and I'm going
to say what are the contents of .env.example, and so now it will actually call this tool... looks like it failed, so I messed up somewhere, but it did call this tool to get the contents of the file. So now all I have to do to fix this is again go to the executions, look at the execution of my tool, and see what the heck I did wrong. Let me see here, "the resource you are requesting could not be found", okay interesting, so it looks like the LLM hallucinated, because it for some
reason uh gave me the org and repo of streamlit which is super weird um wow cuz the repository URL it gave streamlit not really sure why it did that it it might be because I was using the same conversation within different um oh yeah it's because I restarted the conversation without realizing it okay so that's my bad but also that's that's kind of an issue like we'll probably need to somehow in the system message say something like if you don't know the URL to the GitHub repository do not just assume it like you have to
know it, that's the kind of thing you can add to fix that sort of issue, so I'm not concerned by that, I'll take care of it later though, because again this is just a prototype, we're giving ourselves permission to mess around here and not fix all the super tiny things right away, because we just want to get something up and running that works well. So, giving it the repo again, and I'll say what are the contents of the .env.example file, this time it shouldn't totally botch that... there we
go okay so it called the tool and this time it was a success and it's taking its sweet time here to give me a response not really sure why but there we go yeah so here is the content of the file I guess it's taking it sweet time because this is actually a larger file U but yeah this looks phenomenal it's got all the different uh API keys with instructions on how to get them and this matches exactly with what we have right here in the repo so looks like it got it perfect so there
we go, we now have this agent that is able to, when we give it a GitHub repository, analyze the file structure of it and actually answer questions about specific files. And if we ask it a question where it needs to dig into the files, hopefully, given the structure and names of the files, it can distinguish which files it should look at to get an answer. Like if I ask it a question about the workbench in bolt.diy or how it handles conversation history, it should be able to look through, you know, like app and then
go into the components and be like oh yeah okay here we go I'll go into chat and this is how I can understand the chat how it handles chat history that kind of thing and then later on we can Implement rag so that it can actually um do a vector search to find answers to questions as well that' kind of be like a third tool that we'd have here and that's definitely a more complicated piece um to this whole thing which is why I'm not doing it as a prototype here but later on in the
series I think that'll actually be a very important addition. And a little teaser here, I have actually been working on that already, let me show you this right here, I've already started to work on the Pydantic AI version of this agent, so a little bit of a sneak peek of what's coming in the series. None of this code is super bulletproof at this point, I just started working on it, but a lot of what we're building in n8n right now as the prototype is implemented through these agent tools right here, like getting the contents of a file in GitHub. And I even have some extra logic here, like check the main branch, and then if the main branch doesn't exist, check the master branch, so again going from a prototype into something that's actually a bit more production ready, implementing that kind of logic here as well. And then within the Pydantic AI agent version, that is also when I'm going to implement some sort of RAG system as well, to have that lookup along with what we are building right here in n8n.
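For what it's worth, that main-then-master fallback is easy to sketch. The Pydantic AI version is Python, but to keep all the examples here in one language this is the same idea in plain JavaScript, with illustrative names only.

```javascript
// Sketch of the branch fallback idea: try "main" first, then fall back to "master".
async function fetchRepoTree(organization, repo) {
  for (const branch of ['main', 'master']) {
    const response = await fetch(
      `https://api.github.com/repos/${organization}/${repo}/git/trees/${branch}`
    );
    if (response.ok) {
      return { branch, tree: (await response.json()).tree };
    }
  }
  throw new Error(`No main or master branch found for ${organization}/${repo}`);
}
```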
So a lot more cool things to come for the series as we're building up this agent, but we already have something that's pretty neat, like this is pretty sweet, the kinds of questions that we can ask already. And again, I have this agent already available on the Live Agent Studio for you to try right now, basically what we built right here is what is available within that agent, so this exact prototype is available for you to try right now on the Live Agent Studio. And with that I'm going to take another sip of coffee here, this is definitely a longer live stream than I thought it was going to be, but I appreciate that because this is a blast. So I'm going to take a break and answer some questions here, and then the next thing that I want to get to is making it compatible with the Live Agent Studio, that's going to be kind of the last part of this stream, and then we'll close it off, and I hope you guys have a fantastic weekend. So let me scroll through
some more questions here uh there was a lot after I finished answering questions I'm I might miss a couple I really apologize for that cuz my chat was flooding before uh oh I had to delay the ads again shoot okay I clicked on delay ads again so sorry guys if you uh were seeing a bunch of ads I'm really I really wish it wasn't like that yeah let me scroll up and try to find where there was a last okay here we go all right okay so Kamar said I'm trying this for a local repo
and trying Langflow, should I shift to this approach. You know, Langflow is pretty good, I haven't used it too much myself besides just a tiny bit of prototyping, but I like this, and obviously if you want to follow along with this whole series it would make sense to shift to this approach, because I'm going to make it very straightforward throughout the whole process, and again all of this stuff is going to be open source and available for you to download, so yeah, I mean it might be easier to switch
I hope that helps. Jake asked any recommendations on where to host n8n for a relatively low cost. My recommendation is DigitalOcean, this is where I host my n8n instance, it's a really awesome and affordable cloud platform, that would be my recommendation. You can literally get an instance, like a regular CPU instance, not a GPU droplet but a regular CPU instance, for like $7 a month that can definitely host n8n, especially for any personal use. I mean you obviously have to scale it once you have a ton of users on your
platform but to start literally $7 a month um let's see what else we got is this the first part of the rag miniseries you were talking about good question so no th this series is focused specifically on building an AI agent from scratch and so rag is going to be a part of it because I think it's going to be really powerful to have that be a part of this whole workflow as another way for the agent to get to the correct answer um but there's definitely more dedicated content that I'm looking to do for
rag in the future so kind of another series that I want to do um within the next few months here on my channel going into the new year um let's see let's see I tried your Google Drive rag AI agent but when I tried uploading multiple files at once to Google Drive the execution always came back as an error uh yeah so unfortunately that's something that I wish I I I wanted to start simple so I created the workflow to only handle one upload at a time I kind of wish that I had taken that
extra step to make it handle multiple, because even though it makes the workflow more complicated, it would have made sense. But yeah, there is more that you have to do in the setup for that RAG agent to handle multiple files being uploaded to Google Drive at the same time, because when multiple files are uploaded at the same time they're all added to that single trigger. So the workflow will trigger, because it polls Google Drive every minute for new files, and there will actually be multiple files coming in at the same time, so
you have to add a loop to loop through each file and then do what I'm already doing in the workflow, where you extract the text, vectorize it, and then add it into the Supabase vector database. So yeah, it doesn't take much work to add in that looping, but you do have to do it, and yeah, I might make more content on expanding that, but also I might just do that as a part of this miniseries as well, to do a more advanced RAG within n8n. So yeah. Does Pydantic AI work in
Flowise? That is a good question, Jason. So Flowise and Pydantic AI, I don't think they would necessarily play well together unless you're using Flowise as your tools for Pydantic AI, kind of like you could do with n8n. I don't think that super makes sense, so I would maybe say no to that, like I wouldn't try to combine Pydantic AI with Flowise, because they're kind of doing the same thing, Flowise is just a no-code way to make your AI agents, and Pydantic AI is obviously a very code-centric way to do that. So if anything I would prototype in Flowise and then build your agent with Pydantic AI. Nar Mas said great work Cole, thank you very much, in the November live session you mentioned a live session with 25 agents working simultaneously to get the final output, when is that. I think you're referring to when I said that I was going to test bolt.diy with a bunch of local LLMs, and I still want to do that, I just think that my content has kind of strayed more towards AI agents, at least
for now, so I don't really want to make a whole video on using local LLMs with bolt.diy, not yet, especially because most local LLMs just aren't that powerful at this point, they don't do very well as AI coding assistants. So I got a little ahead of myself saying I was going to try 25 of them, I still want to do that, but it's going to be pretty underwhelming for a lot of them, especially the smaller models, because they just can't handle large prompts like you have with Bolt. But yeah, it
would be cool to do that at some point I just don't have a set date and time for that um perfect project thank you Spicer a needed utility leveraging agents can we make it autonomous I'm not really sure why you'd want to make a GitHub assistant autonomous because it's meant to be a conversational bot for you to ask questions and then it pull from GitHub in real time um but if there's some way that you can think of modifying it where it'd be useful to be an autonomous agent you definitely can because you can have
n8n workflows run on a schedule. Actually, let me show you all this quick: if I search for cron, a schedule trigger, so there's another type of trigger in n8n called a schedule trigger, and what this guy will do is just trigger the workflow automatically every day, or you can do every minute, every five minutes. That's how you can set up an agent to run autonomously, if you need it to do something every day or every hour or whatever, and so you can use that kind of schedule trigger to run an agent every so often to do things autonomously. Like maybe you want to set up a workflow that processes a GitHub repository every day, because repos change every day, and then reindexes it into a vector database, you could set that up as a schedule trigger in n8n, and that way you have something that's running autonomously, maybe that's a good example for what you're asking. Scuff Ron asks what is the best free AI coding tool. So, I mean, bolt.diy is an awesome open source project that me and the community are working on, that's a good option as
well. I would say, I think Cursor has a free tier, I forget if I paid for it or not, but Cursor is awesome as well. Windsurf was free at one point and then they made it paid, which is super unfortunate, but it makes sense though, I mean they were blowing through so much money when they were in beta giving it away for free, so unfortunately that's not free anymore, but it's worth the money, I'll say that at least. Yeah, let's see, how can
one pre-made agent on oTTomator invoke another agent, for example the GitHub agent handing off to the n8n helper agent. So I haven't implemented anything like this where the agents on the Live Agent Studio can actually communicate with each other, but that would be super cool, I just haven't thought of a good use for that yet. But it would be very easy to do, because all of my agents here... I'll even show this really quick, let's see, I'm going to do this off camera, I'm not going
to show all my workflows on camera, but let me go to an example of a workflow that I have, like here's one, so this is the Reddit small business researcher agent that I've got in n8n, this is one of the agents that you can try right now on the Live Agent Studio, and it's a webhook trigger, like I said, to make it compatible with the Live Agent Studio, and to use n8n workflows as tools, all of that is using the webhook trigger to the workflow. And so because I'm turning this agent into an API endpoint right here, I can call it within another n8n workflow literally just using the HTTP Request node. So let me close out of this, and like where I'm making an HTTP request right here, I could make a request to the agent instead, and then boom, I'm now talking to that agent. So it actually is very easy, because all the agents are API endpoints, to have them communicate with each other.
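Since every agent is just a webhook endpoint, calling one agent from anywhere else is an ordinary HTTP request, something like the sketch below. The URL is a placeholder, the bearer token is an assumption about how you protect the webhook, and the payload fields are the ones the Live Agent Studio guide describes.

```javascript
// Hypothetical sketch: calling another agent's webhook endpoint from any workflow or script.
async function askAgent(query, sessionId) {
  const response = await fetch('https://example.com/webhook/github-assistant', { // placeholder URL
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: 'Bearer <token>', // assumption: the webhook is protected with a bearer token
    },
    body: JSON.stringify({
      query,                           // the user's message
      user_id: 'demo-user',            // illustrative values for the Studio's input fields
      request_id: String(Date.now()),  // any unique id works here
      session_id: sessionId,           // ties the call to one conversation
    }),
  });
  return response.json();
}
```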
And so if I were to ever find a really good use for that, which I'm sure I would, to have that agent interconnectivity, it would actually be very easy to set up. All right, then Jasel asked how is n8n different from LangGraph. So in my eyes they're not actually comparable at all, because n8n is a no-code workflow automation tool, you build workflows to connect to different services like Slack and Gmail, and you can build AI agents kind of like what I'm doing right here with the AI Agent tools node. LangGraph on the other hand is completely a coding framework, n8n is no/low code, LangGraph is all coding, and it's more for agent orchestration, so having different agents work together. That kind of thing you could set up in n8n, but LangGraph does a lot for you to handle different steps of your agent workflow, managing human approval for things, and conversation memory, and checkpoints in your workflows, there are so many things that it handles that it's very much built for agent orchestration, the kind of thing I wouldn't try to do too much in n8n. So I hope that makes sense, I'd say they're very separate platforms, and again they can work well together, like you could have n8n agents that are called as a part of your LangGraph agent workflow just by making the agent an API endpoint, like I was just showing with the small business researcher. So yeah, I hope that makes sense, what else we got, still scrolling. Are the agents in your studio available on your GitHub? Yes they are, so going back to the Live Agent Studio, you can click on view source for any of these agents, and you can
also click on this link in the footer here which is going to bring you to the GitHub repo and so all of the agents that you see on the studio are open source and available here and so like I can click on like sample python agent and then boom there's also super extensive documentation as well so all of these um and the way they integrate with the live agent Studio are all open source and available for you which by the way for the hackathon competition that I'm hosting let me go back over that I got
to plug it again because this is just super exciting for all of us here. For the hackathon competition that I'm hosting with a $5,000 prize pool, where you just have to build an agent for the Live Agent Studio, all of these agents right here are really good resources for you to look at and see how you can implement an AI agent, not just for yourself and your business, I mean that's why they're open source, but also so that you can see how to implement them for the Live Agent Studio, so definitely check that out
as well also just because I'm plugging the hackathon again here I'm going to paste a link in the chat so go go ahead and register for this hackathon if you're interested in building AI agents and you want to win some prize money and showcase your AI Mastery to the world definitely register for this hackathon it's uh it's happening very soon here so we got registration open until the 8th and then the competition goes for a couple of weeks in January and then I will be announcing the winners after we have a community voting on the
best agents so it's going to be an absolute blast and that's a big reason I'm doing this miniseries as well to help you understand how to build AI agents effectively for yourself and then also so you can you know use this as a resource to compete in the hackathon as well and that's why in a little bit here I'll actually show how to turn this agent into one that works for the live agent studio just kind of as the last part of the live stream here so yeah um Rita asked in the future will you
give us a step-by-step tutorial on Supabase locally hosted and connected with n8n. I actually will, because what I'm planning on doing is taking that local AI starter kit that I've covered a couple times on my channel and replacing Qdrant and vanilla Postgres with Supabase, and that also includes n8n in the package. So it's this local AI package that has Ollama, and it's going to also have Supabase and n8n and then Open WebUI as well, and I think I'm missing one as well, but it's all that package together, and I'm going
to show how to run that entirely locally, so definitely we'll be doing that in the actually very near future, because that local AI starter kit that I've covered a lot on my channel is the most viewed video on my entire channel, like over 300,000 views on my first video on this, so I'm going to keep expanding it going into this year, definitely. So yeah, let's see what else we got. Pinecone or Supabase for RAG in n8n? Both are great, I would say my recommendation is always Supabase to start because it's very simple, and it's nice how you can use it for your conversation history and for RAG because it's also a SQL database. But with Pinecone I would say you sometimes get a little bit better performance for actually retrieving the right knowledge from your knowledge base, especially when you start to have hundreds of thousands or millions of records in your vector database, but until you get to that point I would say it makes more sense just to use Supabase because it's simple and you're sticking to one platform. So yeah, Top Travel said Supabase as well, so I got some backup on that. Mark asks what would be your best guess as to how long it will be before all the processes you're doing now will be automated by agents completely. Well, I guess I'm curious what you mean by all the processes I'm doing, you mean like everything I'm doing for prototyping an agent with n8n and then building it out with custom code as well, maybe that's what you're getting at. And I would say that in general I don't think that AI is going to
completely autonomously build entire things like this for a while. What's going to keep happening though is it's going to be more and more of us using AI as sort of a pair programmer taking the wheel, and us just kind of guiding it more in what we want to build, and we're already starting to see that happen with AI coding assistants out there like Bolt and Windsurf and Lovable and Cursor, all those are doing so much for us, but it still requires us to give that high-level direction of what we're actually looking
for, like maybe I could have an AI agent that builds an entire n8n workflow for me, but it's still going to be me saying, oh yeah, I want this tool to get the file structure and I want this tool to go and get the content of specific files. And so that kind of human oversight I think is going to be around for a very very long time, we just can't trust agents at this point to do really complex things like building out a full, complex, production-ready agent without our input a
lot um I mean we'll probably get there at some point maybe like maybe later this year maybe 2026 I hard to say um but yeah I mean personally myself I would be very uncomfortable allowing an agent to actually fully build and deploy an agent like even for the live agent Studio I'd still very much want to uh take care of that myself and and just have ai help me along the way I mean that's the main thing um so yeah let me keep scrolling here what is the best option for hosting open source llms to
be used in tools like n8n or Flowise. So if you want to completely host the LLM yourself, there are a lot of really great cloud platforms if you don't want to be running it on your own computer, so if you want it running in the cloud 24/7 to be used in your workflows, then you can use a service like RunPod or Novita AI. There are a lot of cloud providers that offer GPU instances, as they're called, cloud instances that have really good GPUs to run large language models, that would be my recommendation
And then if you don't actually want to host the LLM yourself, you just want to use an open source large language model through an API, then I would recommend going with OpenRouter. OpenRouter is a fantastic platform where, with an API just like using OpenAI's API, you can use all these different open source large language models like DeepSeek and Qwen and Llama super easily, and it's dirt cheap as well, like the new DeepSeek V3 model is basically as good as GPT-4o and Claude 3.5 Sonnet, and it is dirt cheap.
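As a rough illustration, OpenRouter exposes an OpenAI-compatible chat completions endpoint, so calling an open source model looks something like this. The model slug here is just an example, so double-check the exact one on their site.

```javascript
// Minimal sketch of calling an open source model through OpenRouter's
// OpenAI-compatible API (model slug and key handling are illustrative).
async function chat(prompt) {
  const response = await fetch('https://openrouter.ai/api/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
    },
    body: JSON.stringify({
      model: 'deepseek/deepseek-chat', // example slug, verify on openrouter.ai
      messages: [{ role: 'user', content: prompt }],
    }),
  });
  const data = await response.json();
  return data.choices[0].message.content;
}
```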
It's something like seven times cheaper than Claude 3.5 Haiku and just as good as Claude 3.5 Sonnet, so I would recommend using OpenRouter if you want to use those models without hosting them yourself, so I hope that helps. All right, let me keep scrolling here. How would you verify the JS code, any tips? So I assume you're talking about this code that I'm running right here, and the way that I'd verify it is just kind of what I was doing, where I'd run my test workflows, so if
my code was broken right here I would you know do the chat I would say something here and then it would like try to call the tool and then it would break and then I'd go into the executions look at the tool call and see what went wrong within my JavaScript function if that actually was the issue and then there'll be an error here that should help you understand like what actually went wrong in the function because it'll give you like the raw error from the the console um when it fails so that's how I
would debug and verify any JavaScript code that I'm using within an n8n workflow, so I hope that helps. Oh wow, there are so many more questions. Sandep asked about OpenAI Swarm. I do have a video on Swarm on my channel, I'm not really going to focus on it in this series because I'm going to use Pydantic AI and then potentially LangGraph together, and I think that combo very much covers what Swarm would typically do. I think Swarm is great, but it's definitely a more experimental framework
from OpenAI, I don't know how much they're really going to be supporting it and developing it, it might just be more of, I mean honestly, something they did for marketing just to continue to provide open source value and build up their reputation. We'll see how much they actually build it out, but I'm just not sure on that, so I won't be focusing on Swarm a ton, but I definitely did like it when I used it in my video, very similar to CrewAI. Do you have any dedicated
courses for AI agents that we can buy I do not right now but that is something I am planning on doing for 2025 because as much as my channel is a great place to provide resources for you to learn about building AI agents and working with local Ai and all of that there's definitely an opportunity for me to go much deeper with something like a dedicated course and so I'm definitely planning on doing a lot of that going into this year uh it just takes a lot of time a lot of time to develop that
out, and I've just been so busy providing you guys so much free value with these videos and the hackathon and the Live Agent Studio and all of that, but I'll get there, I'll definitely get there. Okay, so Cloudways AI asked a very good question that I will be talking about a lot in my video on the 1st. He said 2025 will be a turbulent time in my opinion, and I definitely agree for AI, and he asked what are your views on what will be the big changes that we will see. So very
good question, I don't want to tease too much about what I'm going to be talking about in my video on the first, but I'm going to be answering that question exactly. Just to name a few things very quickly here: first of all, reasoning large language models like QwQ, o1, o3, and Gemini 2.0 Flash Thinking, these kinds of models are going to absolutely dominate 2025 and they're going to be so important for building agent workflows. Which, speaking of agent workflows, AI agents are also going to be massive going into 2025, they already kind of
are but they're just going to be doing so much for us uh going forward and we've already kind of set the groundwork in 2024 with best practices for AI agents and creating large language models that are actually good enough for agents to make good decisions like we have all this now now is the time to just build on top of that and build agents for our businesses and we're just going to see that like so much all the way from small businesses all the way to corporate Enterprises um so those are a couple of things
Also, local AI I think is going to be huge for 2025. So yeah, I'll just leave it there, got an awesome video coming on the 1st at 7:00 Central Time that you don't want to miss, where I'll really be talking about all of that. Let's see, wow, there are so many questions, unfortunately I really don't know if I'm going to be able to get to all of them. So I'll get through a couple more questions and then I want to make this AI agent compatible
with the live agent studio so I'll do that and then I'll end off with some more Q&A cuz this live stream has already been going on for an hour and 40 minutes I'm going to lose my voice today honestly um so yeah I got to protect my voice so I can actually talk when I have my time off for the end of the year otherwise that wouldn't be very fun but yeah let me get through a couple of questions I mean it's totally worth it but still it' be nice to not lose my voice so
let me get through a couple more questions here and then I will make this agent compatible with the Live Agent Studio. So, next question: might be a stupid question, no question is stupid, but can you recommend any courses or books for non-techies to understand and work with AI agents, there's a bombardment of info but a majority of it leaves me lost. That is totally fair, so the first thing that I'll say is that my series that I'm working on, with this live stream and the videos that I've got coming out on
my channel, I think this is actually going to be a really good resource for non-technical people that are trying to become more semi-technical, that's especially why I'm focusing on n8n a lot to start and building with no/low code tools. So I don't just mean to plug my own content, but I know I'm working hard to make this something good for you, so it definitely is worth mentioning. And then in general, there are not a lot of books that are really good for AI agents at this point, like the
whole sphere of AI agents is so much the wild west right now, there's not just some big book that I can recommend (let me adjust the lighting here), there's not some big book that I can recommend like I could for, you know, growing a business, or understanding AI in general, or deep learning, all those more well-established concepts, so there's not something big I can point to. As far as courses though, I would just say YouTube University, as I always call it, just Googling things like understanding AI agents and building AI agents with tools like n8n, YouTube is honestly the best place for that, and it's usually free, so that's definitely what I'd recommend. And then also getting involved with a community that's interested in AI is super important for learning and growing with other people, and that's kind of a whole course of its own, just to network with people, share what you're building, have them share with you, and learn from each other. So yeah, maybe not exactly the answer you're looking for, it's just hard because everything's just
so new with AI and especially with AI agents um so yeah just a couple more questions here and then I'll I'll get back to it can we do predictive analysis from analytics data platforms that's a good question I don't know if it's really related to this stream though maybe if you expand on the question more I'd love to answer um but yeah let me go to the next one here yeah so uh this is a good question if I have an agent that I want to do something with in notion like to take a bunch
of actions in notion do I have to create a separate node for each action and the answer is yes you do so if you want to um add a new note in notion or edit one or delete one or add a new member to your team like whatever action might be within the platform those all have to be separate tools that you set up either through separate workflows or you might just have like a notion tool right here so notion tool and then I would select what I want to do like I want to um
yeah like create a page or I want to um let's see database uh get many I want to get many databases like each of these you'd have to set up as different nodes in your workflow or if you're coding an agent they'd have to be different functions that you set set up and give to your agent again using the title and description and parameters to tell it when and how to use each of those functions within notion so I hope that makes sense um let's see what are your thoughts on all right this is my
last question, then I'll get on to actually making this compatible with the Live Agent Studio. So, what are your thoughts on building an agent to retrieve information/knowledge on applications within a company from technical documents stored in SharePoint and DOCX format. Maybe you could clarify what thoughts you're looking for, I mean that's definitely a really common use case within companies, a ton of companies use SharePoint, and the format that you're storing in SharePoint doesn't really matter too much, I mean any format you're going to be able to take in from the
SharePoint API and then add to your knowledge base just turning whatever you have in raw text whether it's docs or PDFs or text documents share or um PowerPoints like whatever it is like you can just turn that into raw text and then ingest it um but yeah I mean that's super common use case for for AI agents typically like one thing I will say is like typically you're not going to have the agent do any kind of like this vectorization or indexing in real time usually what you would do is you would have a process
that runs in the background, that searches through everything in SharePoint and updates everything in the vector database every so often, like every hour or 15 minutes or every day or whatever, and then separately you'd have your n8n agent, or your agent that's custom coded, whatever, that would answer questions based on that knowledge. So you have the whole indexing separate from the agent itself, because a lot of times taking in all the files from SharePoint or Google Drive, wherever you have your knowledge base, can take a while, so you don't want to actually have the agent
do that in real time typically, so I hope that makes sense. But yeah, I think with that we can actually go into making this agent compatible with the Live Agent Studio, and this is going to be the last thing that I do before I close off this stream, and luckily it's actually very easy to do. And so I'm not going to build this from scratch, instead I actually want to just show you what I already have built, let me paste this in here, I want to show you what I already have
built so that way we don't have to go through this entire process because I know that we've already gone pretty long with this stream but I want to walk you through and kind of compare between the two what I did to make it work for the live agent studio so first of all if you want to build an agent for the live agent Studio it's a fantastic way for you to contribute to an open- Source Community Building awesome AI agents together and I have this guide right here which this is also what you use if
you want to participate in the $5,000 prize pool hackathon as well you'll use this guide as well to build your agents for the platform now the reason that there has to be a guide in in the first place and you can't just build any agent you want is obviously because I host all these agents myself and have it all on one platform with my database there has to be a standardization a way that the agents accept input and output information and the way that they handle conversation history because without all that being standardized there's no
way for me to host all these agents on a single front end and have them all available for you to try in a very standardized way and so this guide is what we have to follow when we take our agent and make it compatible with the AI agent or with the live agent studio and so this documentation is super super long but the only reason for that is because it's super comprehensive I tell you everything that you need to make your agent compatible and I give you sample agents as well so if I click on
Sample n8n agent, this is going to bring me straight to our oTTomator agents open source repo that has all the agents I was showing earlier, and we just download the workflow JSON. So I can use the agent node example, I can just take this and download this file, download it, boom, and then I can import it into n8n. So actually let me do that quickly here, one second, I'm going to make a new workflow, bring this into here, and then I'm going to import from file, downloads, boom, there we
go all right so that's all I did so I downloaded it from GitHub and then I brought it into my n8n and what this shows right here is an example of how you build an AI agent in n8n using the tools agent node that we have in our workflow right here to make it compatible with the live agent studio and you can see that it's not actually that complicated we turn our agent into an API endpoint that's what I did right here and then we have all these specific input parameters that we need to handle
So going back to the guide here, we have all these specific input parameters that our agent needs to handle, and I explain all of this in the documentation, I even have this video right here to walk you through it if you prefer watching a video. And so yeah, let me go back down here, so we have everything you need to understand the input parameters, everything to understand the output parameters, and what it takes to manage the conversation history. I make it so easy, and this template gives you everything that you need.
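To give a feel for that contract, here's roughly what the input the webhook receives and the output it returns look like, written as plain object literals. The field names follow what I describe in this stream (query, user ID, request ID, session ID going in and a success flag coming out); the guide itself is the authoritative reference.

```javascript
// Rough shape of a Live Agent Studio request and response, per the description in this stream.
const exampleInput = {
  query: 'Describe this repo: https://github.com/stackblitz-labs/bolt.diy',
  user_id: 'user-123',        // illustrative IDs
  request_id: 'req-456',
  session_id: 'session-789',  // identifies the conversation this message belongs to
};

const exampleOutput = {
  success: true, // set to false if the workflow hit an error
};
```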
So we take in all of our parameters right here, and then we have our AI Agent tools node, and yeah, there's not much we actually have to do here, because I made the Live Agent Studio chat memory compatible with the way that the tool node sets it up. So when we have our Postgres memory right here, which is literally just connected to my Supabase account, that's all that it takes to have the conversation memory for the agent be compatible with the Live Agent Studio. And then for the chat model here I can
use Anthropic, like I'm using Claude 3.5 Haiku, I can set up Google Gemini chat if I want to use Gemini Flash, I can set up whatever model I want, and then all the tools right here I can set up just like I have right here. So it's super easy to make an agent, you just take whatever you built in n8n and build it on top of this template right here, and then boom, there you go, you now have an agent that is compatible with the Live Agent Studio. So you can go ahead
and, right here, go over to the hackathon, you can register, and then you can go to submit agent and submit your agent, and as long as you did everything to make it compatible with the Live Agent Studio, your submission is good, and then boom, you've now entered the competition and you have a chance to win a bunch of money for participating in the hackathon. So that's all it takes to build an agent. And specifically for this GitHub agent, this n8n workflow we're looking
at right here this is exactly what I have hosted right now on the live agent studio so this agent right here that you can try right now that I showed at the start of the live stream here I'll even show it again quick so this agent right here that I can talk to with a GitHub repository link what I just showed in n8n is exactly what I have hosted right here like node for node I didn't do anything else um and so yeah I can just go over this really quickly here so I have this
webhook right here, and I have this pinned data, so this example request that has my query, and then it has the user ID, request ID, and the session ID. The session ID is what I use for basically determining the specific conversation that the message is a part of. And then for my chat memory I'm just hooking into my Postgres account, and by the way, to get your credentials for Postgres, I can just show this really quickly here, all you have to do is go to your Supabase dashboard and then go to Project Settings and
then click on API, and here's everything that you need, so you have your URL, your anonymous API key, your service role key, if you're using Supabase you get all your credentials here. And then you just have to go into n8n, let me go back here, click on create new credentials, and then yeah, you can see that everything that you need is all right here. Well actually, that's the wrong tab, you need the database tab, sorry, so you go to Supabase, you go to Project Settings and then you go to your database tab,
this gives you everything that you need for connecting to your database. Oh okay, they actually moved it, that is super interesting, you can find project connection details by clicking on Connect in the top bar, this is actually brand new, okay cool, that is good to know. All right, so yeah, you click on Connect at the top and this gives you all the information that you need for your host, port, database, user, and then you just enter all of that into the Postgres credential, so that's how you hook up Supabase, it's super easy. And
and the beauty of using the AI agent node in n8n is it automatically creates that messages table for you so this is the table that handles the conversation history in Supabase and this node automatically creates it for you so once you hook up your credentials and you say I want to use Postgres with Supabase for my memory it'll automatically create that messages table which by the way I'll just show that really quickly here messages boom like this is my messages table right here and it creates that automatically within n8n which is super neat
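I haven't memorized the exact SQL the node runs so treat this as an approximation, but the table it creates for you is shaped something like this, with the table name and column types being my assumption of what it sets up

```python
import psycopg2

# Approximate shape of the chat history table the Postgres Chat Memory node
# creates automatically. The exact table name and column types may differ,
# this sketch just shows what "it creates the messages table for you" means.
DDL = """
CREATE TABLE IF NOT EXISTS n8n_chat_histories (
    id         SERIAL PRIMARY KEY,
    session_id TEXT NOT NULL,   -- ties each row to one conversation
    message    JSONB NOT NULL   -- e.g. {"type": "human" or "ai", "content": "..."}
);
"""

# Placeholder connection string
conn = psycopg2.connect("postgresql://postgres:YOUR_DB_PASSWORD@db.YOUR-PROJECT-REF.supabase.co:5432/postgres")
with conn, conn.cursor() as cur:
    cur.execute(DDL)
```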
and then for my tools these are exactly the same like what we're looking at right here is almost exactly the same as this prototype that we just built the only difference is I have a couple of extra oh wait I deactivated a node okay I did not mean to do that all right the only difference here is I have an extra step for both of my tools that adds an AI message to the messages table and this is pretty neat because what this does is allow me to report in real time what the agent is
doing so right here when I'm chatting with the agent and I say something like uh what are the contents of .example it's pretty neat because before it actually gives me the final response it'll tell me what it is doing like it'll say oh I'm getting the contents of this file and then it'll give me the final response and the way that it's able to do that and report back to me in real time before it's even giving me the contents of the file like we're seeing right here is it's inserting a message into the
database and so if I click on this I'm inserting into the messages table in Postgres and I'm giving the session ID cuz that's for that specific conversation and then the contents of my message is just a JSON object of type AI because it's an AI message and then I'm saying I'm getting the contents of and then the file path that I passed into this tool super easy to set this up and now in real time we're actually getting these updates which is super cool
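conceptually that extra tool step is doing an insert like this, assuming the same table and columns as the memory sketch above, and the helper function name here is just made up for illustration

```python
import psycopg2
from psycopg2.extras import Json

def report_progress(conn, session_id: str, file_path: str) -> None:
    """Insert an AI status message so the UI can stream what the agent is doing
    before the final answer arrives. Table and column names follow the earlier
    sketch and may differ from the real schema."""
    message = {"type": "ai", "content": f"I'm getting the contents of {file_path}"}
    with conn, conn.cursor() as cur:
        cur.execute(
            "INSERT INTO n8n_chat_histories (session_id, message) VALUES (%s, %s)",
            (session_id, Json(message)),
        )

# usage sketch:
# conn = psycopg2.connect(...placeholder credentials...)
# report_progress(conn, "session-abc", "README.md")
```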
and if it were to um you know understand the contents of multiple files we would get a bunch of messages streamed here with all the different files that it's taking a look at and so that's a way for us to get visibility into the agent and what it's doing for the tool calls and that's all possible with the live agent Studio because I just have to add inserts into the database and so if you're curious how this agent works more specifically you can literally just go to my ottomator-agents repository here I'll even link this right here let me send this
in the chat really quick I'm going to send a link to this Git repo you can download the workflow JSON for the GitHub assistant which is exactly what I have hosted in the live agent Studio here it's exactly what I'm showing with this n8n workflow and you can bring it into your own n8n to use as an example for building an agent with the live agent studio for the hackathon or beyond whatever you want to do this is just a resource that I have for you to make it so easy to understand not
just how to build something for the live agent Studio but also just how to build a robust agent in general I mean this agent is pretty sweet especially just for a prototype um and so that is pretty much all I have to do and then the only other thing that you need to do to make it compatible with the live agent studio is essentially just produce that final JSON that says whether or not the workflow succeeded so you can catch any errors and say that the success is false or if it worked then you just say the success is true
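so the final response body really is about as small as it gets, something along these lines

```python
# The only thing the workflow returns to the API call is the success flag,
# since everything the user actually sees is already in the messages table.
def build_response(worked: bool) -> dict:
    return {"success": worked}

print(build_response(True))   # {'success': True}
print(build_response(False))  # {'success': False}
```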
and again that's all going along with the documentation that we have here for the developer guide so we talked about the um input parameters and we talked about storing messages in the database and how that automatically works with the AI agent node and then we get to the output here where you just say success is true or false and the reason that this is all we have to output is because all of the AI messages are stored directly in the database which is really nice so our output from the
API call is actually very very basic so I hope that makes sense it's pretty neat that we're using Gemini 2.0 Flash for all of this again because it's such a fast model and that's super important let me go back to that conversation here um let me refresh here oh no here we go all right look at this output it's completely massive like we need to have a fast model if we're going to have something that outputs this much text
and so that's why this is a really good example not just to build a cool agent for the live agent Studio but specifically to take advantage of Gemini 2.0 Flash because if we had a slower model like o1 or something this would be way too slow for the amount of output it has to produce here or if I had a local LLM that was running on my computer putting out like 10 tokens a second this would take forever to output but with Gemini 2.0 Flash it's super fast which is
really neat um so yeah the other thing I want to say for the live agent studio in general is please use these resources to understand how to build your agents for the hackathon or just for the live agent studio in general I have some sample agents as well that I call out in the documentation I already showed that so we took a look at the sample n8n agent but also there's a sample Python agent so if you want to know how to build an agent with Python specifically because you want to get into some custom
coding then definitely check this out um because I have really extensive documentation for how you can use both Supabase and just any Postgres database to build an agent in Python and so this template allows you to basically just add the little part in the middle where you actually get the response from your agent and you could use Pydantic AI or LangChain or something entirely custom with the OpenAI API or Anthropic like whatever you want to do you can do that within here you get the agent response and then I have all the logic
here to manage the conversation history and the authentication for the request to the agent like using FastAPI super easy to get started
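this isn't the actual template from the repo, just a minimal sketch of the shape I'm describing where FastAPI takes the request, you plug your own agent call into the middle, the history goes to the database and you return the success flag, and the helper names and token here are made up for illustration

```python
from fastapi import FastAPI, Header, HTTPException
from pydantic import BaseModel

app = FastAPI()
API_TOKEN = "YOUR_TOKEN"  # placeholder bearer token

class AgentRequest(BaseModel):
    query: str
    user_id: str
    request_id: str
    session_id: str

def store_message(session_id: str, role: str, content: str) -> None:
    """Persist one message row (sketch only, wire this to Supabase/Postgres yourself)."""
    ...

def run_agent(query: str, session_id: str) -> str:
    """The part you swap in: Pydantic AI, LangChain, a raw OpenAI call, whatever."""
    return f"(placeholder answer to: {query})"

@app.post("/api/agent")
def handle_request(body: AgentRequest, authorization: str = Header(default="")):
    # Reject requests without the expected bearer token
    if authorization != f"Bearer {API_TOKEN}":
        raise HTTPException(status_code=401, detail="Invalid token")
    store_message(body.session_id, "human", body.query)
    try:
        answer = run_agent(body.query, body.session_id)
        store_message(body.session_id, "ai", answer)
        return {"success": True}
    except Exception:
        return {"success": False}
```

run that with uvicorn and you have the same request in, messages to the database, success flag out shape as the n8n version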
so don't feel like you're limited to using n8n to build your agents it's very easy to do with Python as well I just offer no/low code because n8n is awesome and a lot of people don't want to custom code an agent anyway so that is that and then the very last thing I want to say is that you can also use Voiceflow so Voiceflow is an awesome platform let me go to it really quickly here voiceflow.com Voiceflow is an awesome platform for building agents without code and it's not free and open source like n8n but I still think there's a really good time and place for it especially because of how easy it makes it to build knowledge-base agents with RAG n8n gets a little tricky with that but Voiceflow takes care of it so well it's an awesome platform and I developed a custom integration with the live agent studio in Voiceflow so you can build any agent that you want
in Voiceflow and it will automatically be accessible and working with the live agent Studio without you having to do any of the stuff that we see here managing the API call and the parameters that come in and doing specific things with the database and things like that like all that is handled automatically with Voiceflow which is super cool so those are kind of like the three big options for building an agent for the live agent Studio I'll even show this back in the developer guide here as I talk about the platforms so
any Python framework like I was saying n8n Flowise as well and then Voiceflow and then also the other thing is you can use anything as long as it's within a Docker container so if you want to build a Go agent for example or if you want to build an agent with Rust like be my guest I don't want to host these things outside of Docker because then my infrastructure has to have everything set up for Go and Rust and all these other languages but if you put it within a Docker container and
you handle the API request and the conversation memory and everything within your Go or Rust agent or whatever then I can instantly host it for the live agent studio in minutes and so yeah there you go you can pretty much do anything you want you just have to get a little bit creative if you are going to do it in a Docker container but also my example here for the um sample Python agent if I go back to it here I also have a Dockerfile so I show how to containerize
it as well so you can use that as a starting point even if you're building an agent not with Python so I hope that's helpful so I'd love to have you guys participate in the hackathon I'm just going to plug it one more time quick as we close out this stream here I'll do a Q&A and then end the stream yeah register for the hackathon it's going to be a blast registration is open until the 8th so definitely sign up and start planning and building your AI agent and I'm going to keep pumping
out a ton of content and resources for this hackathon and really this whole miniseries that I'm doing helping you build AI agents part of it is to help with the hackathon as well I mean I wanted to do the series anyway and so I'm providing a lot of value even outside of building an agent for the live agent Studio but it's part of the focus because it's just really good timing to have this be another resource for you as you're building an agent for the live agent studio for the hackathon so yeah that's everything
that I got I think uh with that I will do some Q&A and then turn off the stream here all right wow there are so many questions I appreciate it guys like this has been a blast of a stream and the fact that concurrent viewers has stayed at or above 200 is just amazing so thank you everyone for being here and supporting and checking out everything and yeah building an agent together I've never done that live on a stream before so even though it was just a prototype something
pretty simple within n8n it was still a blast to do that so thank you everyone um wow okay so many questions I'm just trying to scroll up right now to where the last question that I answered was and I'm going up pretty far here um okay here we go I finally found it all right by the way I've been watching every AI YouTuber every video for 2.5 years you are the best absolutely number one thank you very much E I appreciate that a lot um yeah I mean that means the world
um yeah let's see here all right where can I get the templates that you are showcasing so yeah fantastic question for all of the agents on the live agent studio all you have to do is go to the URL studio.ottomator.ai you can try all these agents here and then to view the source for them you just have to click on the View Source button and that'll bring you straight to this GitHub repository that I've been showcasing here where you can download all these agents yourself so the workflow JSON to import into your n8n
or if it's like a Python agent you can download all the code super easy to get started implementing any of these agents yourself so I hope that helps Spiritual AI said n8n makes sense for small businesses and I definitely agree like it's so easy even for a non-technical business owner to automate a lot with n8n um without even having to spend too much time so yeah I definitely agree with that especially when you don't want to pay a bunch of developers to build complex things for you um okay Timers asked I just added other
AIs into one big AI in n8n how can I do this I apologize I don't quite get your question you added other AIs into one big AI well the one thing I will say is if you have like a custom agent set up somehow that you want to bring in I would make it an API endpoint and then within n8n you can call it as an API endpoint that's what I would say I hope that helps answer your question because basically if there's not a direct integration with n8n
and whatever you're trying to call just turn it into an API endpoint and then you're going to be able to call it and you can have authentication for the API endpoint and everything like that um so that's what I would say
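to make that concrete, once your custom AI sits behind an endpoint like that, the call you'd configure in n8n's HTTP Request node boils down to something like this, with the URL and token being placeholders for whatever you deploy

```python
import requests

# This is essentially what the n8n HTTP Request node would send to your
# wrapped agent. The endpoint URL and bearer token are placeholders.
resp = requests.post(
    "https://your-agent.example.com/api/agent",
    json={
        "query": "Summarize this repo for me",
        "user_id": "example-user-123",
        "request_id": "req-0002",
        "session_id": "session-abc",
    },
    headers={"Authorization": "Bearer YOUR_TOKEN"},
    timeout=120,
)
print(resp.json())
```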
um when creating a chatbot with a knowledge base for a client using an n8n workflow what do you recommend for the interface full screen with URL Flowise or Voiceflow um so that's a good question it depends a lot on what you are building like if you're building a customer facing chatbot that's going to sit as like a widget on a website then I would just use Voiceflow or I mean n8n also has some widgets that you can embed as well they don't look quite as nice but you can use that as well that's what I'd recommend if it's like an internal facing site where you want like a full screen window for someone to interact with an agent then I would recommend creating a custom front end because you can use tools like Bolt or Lovable to build them really easily and deploy them
super quick and so you can do that very easily so I'd just be creating a React front end with an AI coding assistant um or another good option is Streamlit so Streamlit is a Python UI library that you can use an AI coding assistant with as well to help you build very easily um I like Streamlit a lot they have a lot of integrations specifically for creating LLM chatbots which makes it really nice like managing the conversations in the UI and things like that so I would definitely recommend checking that out as well
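here's a tiny Streamlit sketch of what I mean just to show how little it takes, the placeholder response is where you'd call your agent or API endpoint

```python
import streamlit as st  # pip install streamlit, then: streamlit run app.py

st.title("Client chat agent (sketch)")

# Keep the conversation in session state so it survives Streamlit reruns
if "messages" not in st.session_state:
    st.session_state.messages = []

# Replay the history so far
for msg in st.session_state.messages:
    with st.chat_message(msg["role"]):
        st.markdown(msg["content"])

# New user input
if prompt := st.chat_input("Ask the agent something"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)

    # Placeholder: swap this echo for a call to your agent or API endpoint
    answer = f"(agent response to: {prompt})"
    st.session_state.messages.append({"role": "assistant", "content": answer})
    with st.chat_message("assistant"):
        st.markdown(answer)
```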
so I hope that helps um is it possible to connect a human mind to AI I think it's being done and how can we get to know what's going on in Silicon Valley USA that is a good question I wish I had a good answer because I would love to know everything that's being done there as well cuz sometimes I wonder like what if all these big things we're working towards have already been achieved and we're just waiting for them to be released I mean who knows connecting
a human mind to AI I mean Elon Musk basically already did that but I'm sure there's going to be a lot more coming this year for that kind of stuff um let's see Rita said I remember now why I couldn't get Supabase to work locally you can't set the embedding size huh is that really true you can't set the embedding size like with Qdrant you have to use OpenAI's embeddings with size 1500 I think that depends cuz when you create the embedding column with pgvector you should be able to set whatever size you
want so you just match whatever the size is for the embedding model you want to use like if you want to use the text-embedding-3-small embedding model from OpenAI with a different dimension size I think you can just change your embedding column to match that number in Supabase so I'd be a little confused if you weren't able to that's super unfortunate though if you can't because that's super important but I know that I've done it with the cloud Supabase so it should be doable with the local one as well
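if you want to sanity check that yourself, the dimension of a pgvector column is just whatever you declare, so a sketch like this should work, where 1536 matches OpenAI's text-embedding-3-small and you'd swap in whatever your model outputs, and the connection string is a placeholder

```python
import psycopg2

# The vector column is whatever size you declare -- just match the dimension
# your embedding model outputs (1536 for OpenAI's text-embedding-3-small).
EMBEDDING_DIM = 1536

conn = psycopg2.connect("postgresql://postgres:YOUR_DB_PASSWORD@db.YOUR-PROJECT-REF.supabase.co:5432/postgres")
with conn, conn.cursor() as cur:
    cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")  # pgvector must be enabled
    cur.execute(
        f"""
        CREATE TABLE IF NOT EXISTS documents (
            id        BIGSERIAL PRIMARY KEY,
            content   TEXT,
            embedding VECTOR({EMBEDDING_DIM})
        );
        """
    )
```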
um yeah that's interesting um okay so Cloud where X asks where do you get all your API access from direct from Anthropic or OpenAI or through third-party services so typically it's directly through Anthropic or OpenAI if I'm using those models specifically also my favorite option in general for working with a huge range of LLMs is using OpenRouter cuz OpenRouter is fantastic they have access to basically any model you could ever want so you send a request to their API and then they'll direct
it to the right provider and so you can also use a lot of cheaper LLMs that are still very powerful like my recommendation would be to use OpenRouter to connect to DeepSeek V3 it's a new open source model that was released recently well maybe not open source cuz I think their training data is proprietary but it's an open-weights model that you can download yourself um that is accessible through OpenRouter super cheap it's like seven times cheaper than Claude 3.5 Haiku and basically as powerful as Claude 3.5 Sonnet so definitely check that out through OpenRouter
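OpenRouter exposes an OpenAI compatible API so a sketch like this is roughly all it takes, the DeepSeek model ID here is my best guess at the time so double check their model list for the current identifier

```python
from openai import OpenAI  # pip install openai

# OpenRouter is OpenAI-compatible, so you just point the client at their base
# URL. The model ID below is an assumption, check OpenRouter's model list.
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="YOUR_OPENROUTER_API_KEY",  # placeholder key
)

completion = client.chat.completions.create(
    model="deepseek/deepseek-chat",
    messages=[{"role": "user", "content": "Give me three ideas for an n8n agent."}],
)
print(completion.choices[0].message.content)
```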
um let's see will you post this live stream as a video to rewatch yeah so live streams are automatically rewatchable you just have to go to the live section of my channel and then also I'm probably going to be posting some sort of concise version of this live stream just because I want people to be able to follow this miniseries of building an AI agent without having to watch a two-hour live stream I mean it's been a blast doing this with you all um but for someone just
wanting to quickly digest the series I think it makes sense to um build the agent as part of a concise video as well so that's probably coming early next month as well is it legal to self-host n8n within a business so you're allowed to use n8n for free without paying unless you are commercializing it so I think as long as you're using it internally it's fine cuz that's technically not commercializing it but if you are creating a user-facing interface that is using n8n workflows then you would have to buy
their commercial license which is still way cheaper than most things like if you're comparing n8n to make.com or Zapier or something it's still way cheaper um but you do have to pay for that so that's something important to keep in mind I believe that's how their licensing works uh do I prefer Cursor or Windsurf and why uh so yeah this is a debate of the century for sure and honestly I haven't tried Cursor too much recently like even after the Cursor Composer features and stuff so I wouldn't say my
opinion is like the most trustworthy for this but in general I do prefer Windsurf because of how well it works agentically as an AI pair programmer like the way that it's able to take my question and actually understand which files to look at and change and what information to give back to me to describe what it did I find that to be a lot more powerful um but Cursor is also open source while Windsurf is not and so that's definitely a huge plus for Cursor overall so that's what
I would say um is Phi-4 better than o1 I don't think that Phi-4 is better than o1 o1 is an insanely powerful closed-source reasoning model Phi-4 is open source which is cool but I don't think it's better than o1 though um can you run the Hugging Face inference API with the n8n AI agent I tried but okay um I'm not sure if there is an integration let me check that quick we'll go back to the agent and see if we can replace it like I'll delete this connection here and see if there's Hugging Face um
there is not you know I think the problem is the Hugging Face inference API doesn't support tool calling at least not entirely I've tried to use Hugging Face within my LangChain chat agents and it doesn't work with tool calling right which I think is why it's not one of the options here um for the AI agent node in n8n I'm pretty sure that's the case I think they're working on it like I heard that it was an in-progress feature but that was a while ago so not entirely sure where they're at
with that let's see front-end suggestions for a chat agent for a client yes so I kind of covered this in a recent question already but I definitely recommend using an AI coding assistant like Bolt or Lovable to create like a React front end super easy to build these without even that much of a technical understanding and you can deploy them super easily and then also Streamlit is a really awesome Python UI library um that you can also use an AI coding assistant like Windsurf or Cursor with to build really easily so those would be my two main
recommendations um so yeah let me see what else we got here um where can I get your Reddit business research n8n workflow template so yeah if you go to studio.ottomator.ai and find the business idea researcher agent you can click on View Source and then boom it takes you right to the open source repo where you can download the workflow JSON for this guy and then import it right into your n8n instance so yeah super easy and accessible that's the goal of open source after all Cam said what up bro another super helpful video
keep it up can't wait to see what you build in 2025 thank you Cam I appreciate it my man uh well I think I missed a question it's really hard to scroll through the YouTube comments within my streaming window here um do I think AI is over the wall or down the wave I assume you're asking like are we still going to be moving fast going into next year or are things going to slow down and my answer is things are not going to slow down not at all there's way too much that's happening with
reasoning LLMs and AI agents and local AI and just the fact that so much groundwork has been laid in 2024 going into this year there's going to be incredible things coming out continuously just like there has been this year yeah not slowing down all right does the live agent Studio accept more parameters besides success or failure so that's just for the output when you output the result of your workflow basically responding to the API request you just say if it's true or false for the success but for whatever
you want to have the AI output you just insert those as messages in the database you can do whatever you want so yeah definitely check out the developer guide that I have right here on the live agent Studio it covers everything for um how you can have the AI respond in the database for everything that it's doing and its final response as well um so yeah anything that you want to output you can definitely do that just through the database instead of the API response uh Timers said it kind of worked I used
it I assume you're talking about the GitHub agent and the fact that it kind of worked I mean that's all I'm looking for right now because it is just a prototype it's not always going to answer every question right uh maybe it'll look at the file structure and try to answer a question just with that instead of looking at specific files like the README I've seen that happen before with this prototype as I'm testing it out but overall it's a good prototype um so yeah Digital Alchemist good morning by the
way it's nice to have a free moment for a change glad that you have a free moment sometimes life can be crazy I am with you man all right let me keep scrolling I do not have paid Gemini I'm just using the free tier right now actually um the issue I'm facing in n8n is my agent is hallucinating when I have too many messages stored in the memory uh that's interesting one thing you can do to address it is change the context window size so this is the window buffer memory
and then if I go into my Postgres chat memory in my live agent Studio agent I can change the context window length right here which actually the fact that this is only at one is kind of concerning I should probably up this but this is how many past interactions the model will receive as context so if this number is too low it's going to forget prior messages and that's probably why you're seeing hallucination so if you up this to like 20 or something it's going to actually remember everything in the conversation which will definitely help prevent hallucinations
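and if you're managing memory yourself in custom code the same idea is just keeping the last N messages, here's a trivial sketch of what that setting is doing

```python
def trim_history(messages: list[dict], window: int = 20) -> list[dict]:
    """Keep only the most recent `window` messages, the same idea as the context
    window length on the memory node: too small and the model forgets earlier
    turns, larger and it carries more of the conversation into each call."""
    return messages[-window:]

history = [{"role": "user", "content": f"message {i}"} for i in range(50)]
print(len(trim_history(history)))  # 20
```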
um so yeah what are the times for this miniseries I don't have that established yet but I'm basically going to be doing it for the month of January um so probably either all of my Sunday videos or all of my Wednesday videos are going to be on this miniseries all right uh great stream thank you very much appreciate that I'm going to answer a couple more questions and then I think I'm going to call it here because I'm definitely starting to lose my voice like it has been over two hours
of talking so yeah a couple more questions here how can I contact you I want to give you my AI well I would just say if you want to give me your AI to put up on the live agent studio just go to the submit agent page here on studio.ottomator.ai because you can just submit with this form and then boom there you go that is what I would say um let's see front end UI for a client agent I kind of already answered that a couple of questions ago how did you make the
thinking indicator I assume you're talking about the one on the live agent Studio when it answers the question um I mean I just did that with a custom React component this entire website by the way is built with Next.js that's how I built the live agent studio for those of you who are curious uh please suggest some other AI influencers that I think you should follow that's a good question so there's a lot that I can recommend I would say um Matthew Berman is a really good YouTuber that covers a lot of AI news
in general and so as you're trying to keep up with everything going on uh he's someone I'd recommend it's not like you're going to get super technical tutorials with him like you would with me and some other YouTubers but he's a good guy for news in general and then also since we're on the topic of n8n and we're building all of these n8n workflows uh Zubair over at AI Workshop he is a phenomenal YouTuber doing cool stuff with n8n and then also Nate has a channel doing a ton of awesome stuff
with n8n as well so those are the guys I'd recommend um kind of just on topic right here with what we're doing with n8n there's not a lot of YouTubers I know that get super technical with actually building AI agents in custom code like using Pydantic AI and things like that um I will say the LangChain YouTube channel is a really good resource if you want to get more technical with some of the more complex architectures and stuff with AI agents so definitely check out the LangChain YouTube channel I think that's the
biggest thing I'd recommend if you want to also get more technical so I hope that helps let's see oh I got to delay the ads again I've been delaying the ads as much as I possibly can so I hope there haven't been too many ads for you guys um let's see here do you recommend the ASUS board you use if you're building your AI rig today or would you choose something else yeah so I honestly am forgetting exactly which ASUS motherboard I'm using um ASUS Creator Pro I think oh no that
was the old one I got um hold on one moment let me pull this up for you because I have it on PCPartPicker uh is this the right one motherboard no hold on I might have to circle back to this one um oh I'm not logged in that's why okay let me log in quick also I'm signing into PCPartPicker by the way on my other monitor um but yeah I'd love to share quickly what motherboard I have uh where's my part list saved part lists there we go
okay all right here we go so this is my computer build for those of you who are curious I did get two used 3090s so I didn't actually pay 1,400 for each of my GPUs but this is the build that I have and the motherboard is the X670 Creator and I love it I mean this motherboard has been working fantastic my GPUs are working well um I will say I think ASUS is putting out some new versions of this motherboard so you might want to wait and they might actually already be out so this
could already be a slightly outdated motherboard but aside from that it's absolutely fantastic so yeah this is my full build right here it all fits without even having to scroll so if you're curious what I'm using for my AI setup this is it and it's definitely more pricey um but it is a very worthy investment for me because AI is my life my channel is my life everything I do for the live agent studio and with my platform oTTomator that is my life and so it's definitely a worthy investment because I
do a ton of stuff with local AI and it's only going to be more and more going into 2025 so that is my setup right here that I've got for my AI rig um can I do a deep dive video on n8n Supabase and Postgres I mean honestly what I'm doing with this series is pretty much a deep dive and I'll be putting out a lot more content on n8n and Supabase of course going into it and then obviously Postgres as well um since that's what Supabase uses under the
hood um yeah okay I think that is good for now um yeah I've pretty much gotten to the end of the questions and I'm totally losing my voice so I think I'm going to call it here uh so yeah I appreciate all of you guys being a part of this live stream building a prototype of an n8n agent for this miniseries that I am doing the last thing that I'll do is plug the link to register for the hackathon again definitely come be a part of this competition and build some awesome AI agents following a
lot of what I actually did in this live stream like we actually built an agent together and then turned it into one compatible with the live agent Studio using this template that I have in that open source repository so super exciting stuff a lot more coming for this miniseries soon and also stay tuned for the super exciting video that I've got on the 1st so the first of January at 7:00 Central Time my usual posting time just happens to fall on the 1st which is perfect because it is a video specifically about AI going into
2025 so you don't want to miss it and probably a lot more live streams coming in the future as well not probably definitely because I had a blast with this um yeah and I mean I liked how casual this one was as well especially compared to the other one that I did for all the bolt.diy announcements that was a fun live stream as well definitely not as casual though so it was fun just to be sipping on my coffee which I never actually finished on this stream but yeah I got my cool n8n mug
so it's just fun to have that but um yeah thank you everyone for watching and I will see you in my next video on the 1st