The only video you need to Master N8N AI agents (For complete beginners)

Simon Scrapes | AI Agents & Automation
Video Transcript:
This is the only video you'll need to get started with n8n. It will take you from a complete beginner all the way through to an n8n master, whether you're a business owner who wants to implement AI, automate your work, and earn back your time, or you're someone who wants to build and sell AI automations to other businesses. This is the video I wish I'd had access to when I first started my automation and AI journey. Back then I wanted to take advantage of this new tech and implement it within my own business and my personal life, but most importantly I didn't want to get left behind. The problem is, I got overwhelmed very quickly: there was too much stuff out there, some of it useful, but most of it unnecessary, overhyped, and just a bit complicated. Only after making hundreds of workflows within n8n, and earning a top-rated Upwork profile selling AI agents and automation to other businesses for all sorts of tasks (email inbox agents, content creation agents, invoice management agents, and many more), did I get to see which workflows were actually useful for businesses and which ones they were willing to pay for. I made this video to help you shortcut your path, save you a lot of time and headaches, and share with you only what works today. So get your notes ready, and let me show you exactly what we'll be covering in this course.

We'll start from the basics: what is n8n, and why would you choose n8n over the alternatives? We'll cover whether you need to know how to code (hint: you don't, it's all no-code and easily accessible to everyone). We'll cover what you can actually build for your business with a tool like n8n, and how to set it up. We'll then master the basics, learning how to work with different data and building out your very first automation. We'll then work with data from your business, working out how to handle different file formats, and building your very own invoice processing agent that you can use every day in your business. And it wouldn't be a good n8n course if we didn't cover AI and how to use it in your workflows, so we're going to break down exactly what an AI agent is versus something like ChatGPT, and what useful things we can build for your business using these AI agents to help you every day. We'll then move on to some more advanced techniques for gathering data from outside your business, covering from scratch what an API is, when you'd need one for your business, and how to connect to any data source you need in less than two minutes. And finally we'll talk about how to make your life incredibly easy, with techniques for scaling our workflows and making them reusable with a modular design, as well as all the shortcuts I've learned over my journey. As you can see, whether you're a complete beginner or somebody already using n8n, this video will help you advance and get even better, so I encourage you to use the chapters within the video to skip to your current level. And by the way, for those of you who are more serious about implementing AI within your business, this course is part of my paid Skool community. Inside you'll find a network of over 300 entrepreneurs from all over the world on exactly the same journey as you, a template library of n8n workflows that you can implement inside your business today, and many more resources to help you shortcut your path even more. If you feel like that's you, check out the link in the description of the video, which will take you to my community on skool.com. But for now, let's get started with the course.

So, getting started, we're going to be talking about workflow automation. Simply put, workflow automation is about getting technology to automate your repetitive tasks.
Let's look at a few examples before we go to n8n and understand what it does. Say a customer fills out a form on your website: they might automatically be sent a confirmation email and then automatically be added to your database. That is a workflow that's been automated, hence workflow automation. When you receive an invoice and it's automatically sorted, processed, and sent for approval, that's just another process that's been automated. When a new employee you need to onboard joins and their accounts and access are automatically created based on their role in the company, that's another really clear example of something that's already been automated within a business. But we can automate so much more, and we can do it ourselves really simply using something like n8n. The main benefits, if they're not already obvious: it saves you time and reduces your manual workload, there are fewer human errors, we get consistent results every time because a computer is doing it for us, and it takes away all the mundane day-to-day work, allowing you to focus on the important tasks within your business, not the ones you don't want to be doing.
n8n is simply a workflow automation platform, one of a few commonly known platforms out there. It enables you to connect to all your favourite services that you currently use within your business: Slack, Trello, Google Sheets, Outlook, Intercom, your database. It has over 400 pre-built integrations, which means we don't have to build anything from scratch. If, for example, you wanted to post to LinkedIn (this is inside an n8n workflow, which we'll get to), you can see they've got a pre-built integration with LinkedIn, so it's plug and play: we don't need to code anything, we can just connect it to our LinkedIn account and tell it what data we want to send. You might have seen, or already use, alternative services such as Zapier or Make.com. All of these are viable solutions for building out your workflows, but there are some key differences, or key benefits, that n8n offers over the alternatives, and that's why we've chosen n8n.

Firstly, pricing. Let's compare something like Zapier to the n8n price. At the basic level for Zapier you're going to be spending $30 per month, with restrictions on what functionality you can use, and if you're a bigger business and start to run more tasks per month it really adds up: you're quickly at $200 per month without all of the features. Make.com is cheaper than Zapier, but again has limited functionality versus n8n. When I see Make.com automations posted they often look like this, with a lot going on, and then I see comparisons of n8n flows, built with a better feature set, that achieve the same thing in fewer steps. That makes n8n far more flexible, and in terms of pricing it's the cheapest option of the lot. Firstly, you can set it up directly on n8n for £24 a month and get 2,500 workflow executions, so that's 2,500 tasks. Secondly, n8n uses a sustainable-use license, which means we can actually host it ourselves, and with unlimited workflow executions it comes in at around $15 to $17 per month: significantly cheaper, and the feature set is far more advanced, letting you do things in fewer steps.
n8n is no-code software, so you don't need to know any coding to start this course and build your first automation, and from day one you can have useful things running automatically for your business. You might be wondering at this point: what could I actually build with something like n8n? The list is endless. Here on screen is a LinkedIn and Twitter post generator: it automatically creates all the content my business needs and posts it to the relevant platforms, and I get to approve it in my database, exactly on my terms. Here, in just seven steps, is an example of a workflow automation that monitors my inbox for customer queries; for any that need escalation, it sends me a direct message asking what I want to say in reply to the customer, and that feedback is then drafted automatically for me so I can just push send in my inbox. We have other examples that create and write a blog automatically for us based on tailored SEO keywords, and a complete invoice management system that you're going to build out in this course, step by step. The possibilities are endless in terms of what you can build for your business; you just need to take the first step and get started.

So let's talk now about hosting and how to set up n8n. There are two options for hosting. The first is hosting directly through n8n: if you're happy to use my affiliate link, you'll find it in the video description and it will take you to n8n's homepage, or alternatively go straight to n8n.io. Either way, you'll land on the homepage.
What we're going to do is run through the sign-up process. You can get started for free with a 14-day free trial directly through n8n: fill out all of your details, give it an account name like your business name, and start the 14-day trial. Once that has completed, you'll be put onto the Starter plan, which is around £24 a month. This is the most straightforward way to get set up. However, self-hosting has other benefits and is really simple to set up if you follow the steps provided; we can do it in a few minutes. Think of self-hosting as your n8n service being hosted not on your own computer but on a server somewhere in the world, which means you don't have to keep your laptop or computer on at all times, and your automations will continue to run 24/7. When we sign up through n8n directly, they're just using a server and we're hosting through their server; if you're self-hosting, you're doing the same thing, just setting up your own server elsewhere.
Sometimes that comes with having to manage the infrastructure yourself, but I found a service called Elest.io, which we're going to set up now, that fully manages all the infrastructure for you. So you get the benefit of the low cost of self-hosting (unlimited workflows for around $15 to $17 a month), plus the benefit of managed infrastructure: it automatically updates the software, so you don't have to worry about any changes, and we still get the low cost and the Community Edition features that come with it. You're going to go to Elest.io and set up an account. Once you've set it up you'll come into a dashboard like this, with no services yet. We're going to create a new service in the top right, and thankfully they've already pre-set up an n8n service for us, which takes away all the technical details, so we're going to select n8n. We then have a choice of cloud service providers; we're just going to click Hetzner, because it's the cheapest and reliable, and then choose the region closest to you (it will automatically pick that for you, so leave it as it is). We then need to select a service plan. Don't worry about the details here: the basic, cheapest plan is definitely good enough for your business when you're starting out; I run hundreds of workflows on my server and still have a medium-sized service plan. On the right-hand side you can see the estimated monthly price of $15, and we're going to hit Next. We then give it a name that makes sense for us, an admin email, and we'll stick with level-one support. Later in the course we'll talk about backups and error handling and all of that, but we don't need any extras from Elest.io, so we're going to go down and click Create Service. That's going to deploy our service, which takes between one and five minutes. Coming back here, you can see five minutes have gone, the service is now running and shown in green, so we're going to click into it.
Then we're going to click Display Admin UI, which has all the details about your server; if you ever need to change anything, you'd come to this dashboard. We're going to click on the link it provides underneath, which opens up the n8n sign-up page; fill out the details here and hit Next, and that is basically it for setting up your own self-hosted server. Incredibly easy. We'll just fill out the n8n survey. You get three additional paid features forever if you register your Community Edition, and they're really helpful; we'll touch on them later in the course. The first is workflow history (we know version control is very important), then advanced debugging, and you're also able to search your past execution logs. So I definitely recommend getting that free license key and registering it, because those features are really helpful. You then come into the dashboard for n8n, and this is where you start creating your own automations. A personal preference of mine is changing the theme: I go down to Settings, then Personal, and in here we can change it to the light theme, which I just think is a bit more aesthetically pleasing. And that is it: you've set up your very own self-hosted instance of n8n. It's always going to be hosted at this address, so you can come back and access it with your username and password, and it will continuously run your automations while you sleep.
What we're going to move on to now is navigating the dashboard and working through the basics, step by step, to create our very first automation. n8n is a workflow automation platform, and what that enables is for us to automate pretty much any of our work by connecting different software visually in a workflow. I'm going to run through the fundamental building blocks of n8n first, which are workflows, credentials, and executions. You'll open your n8n environment on a screen like this, and what you'll see is an Overview on the left-hand side, which is where we store all of our flows, as you can see down here. We've then got a link to the n8n templates, which we'll cover later as another tip; we've got Variables, which you shouldn't worry about for now; and then we've got a help and support section. The first things you'll come across here are workflows, credentials, and executions. You can think of workflows as the logic that connects all of our different pieces of software. The credentials are just our keys: how do we get into each of those pieces of software? We need our password, we need our username, and that is where your credentials are stored. And you can think of executions as our history of all of the logs: when we execute our logic, we need our keys to make sure it works and that we get into the right accounts, and then the executions show us the history of all previous workflow runs, the errors as well as the successes. Understanding those fundamental building blocks will help you build out your workflows and connect all your software.
We're going to jump into an actual workflow now, so we'll hit Workflows at the top. You'll probably see an empty screen here; we're going to go to Create Workflow at the top, which opens what we call the canvas, and there are a few key details here. We've got the name of our workflow, which we can come in and edit, so we're just going to say "test workflow", but you want to be as descriptive as possible about what the workflow is doing. We've then got tags. Tags enable you to easily filter and search through your workflows; for example, we're going to tag this one with "demo", and if we go back (we've saved it), we can see it's got the demo tag and we're able to filter by it. You can see our new workflow has appeared on our home screen, and we can click into it to get back in. When we're in the canvas we have a few other options. We have the editor mode, where we visually create our workflows, which we'll start doing in a moment, and then we have our Executions tab, which shows all of the history (the executions we spoke about before) for this workflow. So within an individual workflow you can see all of the previous executions and look through whether each was successful and what data was sent; it's a really powerful feature. If we go back to the editor, there are a few more things. If we make a change to the canvas by adding a first step, it will prompt us to save in the top-right corner, or we can hit Cmd or Ctrl+S to save. We then have a Test Workflow button at the bottom, which we'll cover more later, but if we hit Test Workflow it runs the current workflow that's saved on the canvas. Now, when you come in here for the first time, it's easy to get overwhelmed by the number of options available, so we're going to break them down into four key categories that will make it really simple to understand how to build out your first workflow.
If we come back to this blank canvas (delete any nodes you've got on there), we can add a first step: click the plus to open the nodes panel, or alternatively just hit Tab on the keyboard, and that opens the options for the first node. A node is just an event taking place on the canvas, and the first thing we need is a way to activate the workflow. We always need to activate a workflow somehow, and one option is a manual trigger. We'll open up the manual trigger; it's mainly for when we're testing things, and it just means whatever is attached to this first manual trigger node will run whenever we hit the Test Workflow button. There are other ways to trigger a flow, and those are the ones you'll use in production. We might trigger on an app event, where different software can fire the trigger: for example, Airtable has a trigger for a new Airtable event, so whenever something happens in Airtable it will trigger a corresponding reaction in whatever we connect it to. We'll remove the Airtable node and go back to the triggers.
We might want something to run once daily, so we can add a schedule trigger and determine, inside the trigger itself, how often or how frequently we want it to run. If we remove that and go back to the triggers, we're now getting to the less frequently used but still important ones, starting with a webhook call. This may not make sense immediately, but a webhook lets an external service pass data to us to activate our workflows. We'll go into full detail later on APIs and webhooks and how to use them; they're not as complicated as they seem.
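As a quick taste of that (we'll cover it properly in the API section later), triggering a webhook is nothing more than sending an HTTP request to the URL shown on the Webhook node. A minimal sketch, where the host, path, and payload are all made up for illustration:

```js
// Hypothetical call from any external service or script to an n8n Webhook node.
// The URL comes from your own Webhook node; this path and body are examples only.
await fetch("https://your-n8n-host/webhook/invoice-intake", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ source: "website", customerEmail: "jane@example.com" }),
});
```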
Then we've got a few other triggers. There's On Form Submission: n8n actually allows us to submit a form through its user interface that starts off a workflow, passing in the details from the form. We can also activate workflows from other workflows, triggering one workflow from another, which is really important for scalability and reusability, and we'll cover that later. And finally we've got On Chat Message, where we can have a chat window on the screen, type a message into it, and it will activate the workflow.

Once we have a trigger (we'll just put in the manual trigger, as we're testing for now), the node enables you to connect to other nodes, and that's where we come to the second element: actions. Actions are how we connect to all of our business software, whether that's Google Sheets, Outlook, Slack, or Notion; we need to take an action to connect to our business systems. As you can see on the n8n integrations page, they have 1,228 pre-built integrations, and for anything that isn't pre-built we're able to connect to its API, which effectively lets us connect anything to everything. Really powerful software for all of your business needs. We're going to go through some of the key actions in a moment, but the first action is connecting to a platform we use in our business. Back in the n8n canvas we have our trigger, and now we're going to connect some actions and see what's possible.
When you open up the nodes window, there are different categories. We have Advanced AI, which we're not going to cover right now. We have Action in an App, which is what we just covered: we can connect to any of these apps. We have Data Transformation, which is a specific action taken on the data we're using: for example, we might take data from our Google Sheets and manipulate it by adding some data, removing some data, or stripping out some formatting. That's what the data transformation nodes are for. Then, within Flow, we're able to execute conditional logic, which means that if our data meets certain conditions we can branch and take one action on it, and if it doesn't meet those conditions we can take an entirely separate action. It means we can interact, on the canvas, with the data from the software we're using in our business.
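For instance, a branch condition boils down to an expression that evaluates to true or false. A minimal sketch using the demo fields we set up later in this section (the field name and value are assumptions; expressions themselves are covered in depth shortly):

```text
{{ $json.Status === "In progress" }}   // true takes one branch, false takes the other
```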
We then have some Core nodes; these are just made up from the four categories anyway and contain some of the most popular nodes. Then, newly, we have Human in the Loop, where we're able to get feedback from a human when we're using AI. Finally, it gives us the option of multiple triggers, which we'll cover later: we can add a second trigger to a workflow, so that we don't just have a test workflow trigger but maybe also a schedule trigger, or a when-executed-by-another-workflow trigger. There can be multiple ways to trigger a workflow, and all of them can be active at once.

The fourth tip is about connecting easily to any of our software platforms, and this is the easiest way to do it. We're going to hit Tab again, or hit the plus, and open up an app; for this example we'll use Airtable, since we know we use Airtable frequently, so we'll click on the Airtable node. We now have to select the action (this doesn't stop you changing the action in future), and for the sake of the demo we want Search Records, in order to pull all of our records from the data; there are other actions we could take.
If I go back to the canvas, you can see that this node is not connected to our test workflow trigger at all, so if we were to hit Test Workflow it wouldn't run the Airtable node. What we need to do is click the cross and drag a connection over to Airtable. The next part is: how do we actually connect to an Airtable that lives outside of n8n? You can see that on the left-hand side we've got our inputs, in the middle we've got the software we want to connect to (Airtable), where we make our transformations, and on the right-hand side we've got our outputs. It's a three-panel sequence where we can visually watch our data flow through: we can see what's been received into the Airtable node, we can see and take actions on that data in the middle, and then we can see exactly what's returned in our outputs. But first, before we see any inputs and outputs, we actually need to connect to the system.
Inside every node you'll have a series of steps to complete, where it asks you to choose from dropdowns to select the most appropriate options, and the first one is always going to be a credential to connect with. Right now we don't have it connected to any external Airtable, so this may appear empty for you, and you're going to create a new credential here. These differ between apps, but they share the same core concepts, and n8n always links some really useful documentation, so you can go directly to the documentation for that specific node (i.e. Airtable) and see exactly where to access the credentials. For Airtable we need a personal access token, which we can get by going to the Personal Access Tokens page, opening it up, and clicking Create New Token. It gives us this page, and for the example we're just going to call it "n8n test demo". Back in the credentials, it tells us which scopes we need to add to our token. Scopes are just the access we're giving that token: is it able to read our records, is it able to write to our Airtable, is it able to delete records? We need to give it the right scopes so it can perform the right actions on our behalf, so we're going to add data.records:read, data.records:write, and schema.bases:read. We've added those three scopes, and now we need to tell it which tables it's able to access, so we've got to add a base.
If you click Add a Base and scroll down, it gives us all the different options for connecting; you can see I have a lot of different bases here. To do this we need to have already set up a table, so go to Airtable and set up a demo table. You can see I've set up a base here, "Scrapes.ai Tutorials", with two tables, Financial and Demo Data. For now we're just going to use the Demo Data table, and we're going to put in a test name and random notes, assign it to me, and mark it in progress. What we now need to do is connect our access token to this table, so we go back to the Builder Hub, hit Add a Base, and since we know it was called Scrapes.ai Tutorials, it should appear there. Once we create the token, we've given it permission to access that table, and I'm going to copy the token by clicking here. We'll go back to our workflow, where we were setting up our Airtable credential, and all we're going to do is paste it into the access token field. Critically, we're going to rename this credential, because otherwise you'll end up with too many named "token 1", "token 2", "token 3", and when you come to use them in the future you won't have any idea which platform they relate to. So we'll give it a sensible name like "Airtable test API token" and save it, and it should then confirm in green if it has successfully connected to our software. There we go: connection tested successfully. Perfect, we can close that.
So we come back to our canvas, where we've connected our test workflow trigger to our Airtable node (you can see it's now got the name Airtable). Tip number five is about visually seeing the inputs, the transformations, and the outputs. We're going to fill out some options in the middle in order to pull our data. Right now we're receiving no inputs, because the test workflow trigger just pushes an empty set of data to our Airtable node, but that's enough to let the Airtable node run. We're searching for records, and like I said, we can change any of the operations here, and they've all got a description underneath; we want to search, or list, all of our records. We're going to connect it to a base (this is Airtable-specific naming; each node you connect to will have its own specific names here, and Airtable uses bases), so we'll pick it from the list, and you can see that's now given us access. If we had not filled out the credential correctly and given it the correct access, the base would not appear in this list and we'd have an error at this point. Now that we know we can connect to the base successfully, we'll choose the Demo Data table, and we're not going to filter by anything, because we want all of our records returned. We can test that step, and you can see on the right-hand side it has returned the records we just put in the table.
If you want to move things around visually, you can click and drag and drop all of these different fields, and we can now see the outputs much more clearly: the test name, the random notes, and the assignee. You'll notice this is in a strange format: this is a JSON format, and tip six is about managing all of the different data formats and understanding them properly. So tip number six is interpreting table data, JSON data, and schema data, and figuring out the difference; it's actually really simple when you distill it to its principles. To make this easy to understand, I've expanded the data set, so we've now got Name, Task, and Status fields, and I've put some demo data in our Airtable. We're going to pull that data using Test Step, and you'll see we have a full record of all 20 items in our JSON data view. You have the same views for the inputs as well, but because we've got no inputs at the moment, we can't format those. Table view, first, is a really easy way to visualize your data: it's how we conventionally see data in a business, in a spreadsheet table, where each row is its own unique record, exactly as we see it in Airtable, so this is the way a human would probably interpret it. The schema view, you can see, distills everything down to a single record, and the reason is that the schema just describes the fields in our data rather than the data itself. It tells us we've got an ID field, a created-time field, a name, a task, and a status field, and the "A" next to each just symbolizes the data type.
In this case it's just a string, which is just text; all of them are text here. You may notice we didn't have ID and created time in our initial fields: that's because they're hidden fields created by Airtable itself to help us manipulate records, which we'll come on to later, but it's got the name, the task, and the status we had before, so you can see the different fields there. Now, JSON, in contrast to our table, does not display things in rows. Instead we get the full set of columns within curly braces, and that is a unique JSON object; it effectively flattens our table into smaller objects that we can then manipulate easily. So you can see we have an individual object for the record relating to David Kim, with his task to optimize database queries and its status, and that corresponds to one row in the table.
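That object looks roughly like this; the field names match our demo table, while the ID and timestamp values are made up for illustration:

```json
{
  "id": "recAb12Cd34Ef56Gh",
  "createdTime": "2025-01-15T09:30:00.000Z",
  "Name": "David Kim",
  "Task": "Optimize database queries",
  "Status": "In progress"
}
```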
JSON is really good for data inspection: we can see the full 20 items, across multiple pages here. Table data is for quick data scanning and spotting trends; you can easily see that it's the easiest to interpret. And schema we can use when we're just looking for a certain field name: it's much easier to read it off here than out of a complex JSON structure, so from the schema we can just grab either the data type or the field name.

Now for tip number seven: mastering and working with static data. We're going to connect a node up to the Airtable node, and it's a node you're going to use very frequently, called a Set node (Edit Fields, or Set). It allows us to pass data along, or extract certain pieces of data from our software or our business use case. We're going to open this Set node, and you can see it opens up as an Edit Fields node; again we've got the inputs, the transformation we want to do, and the outputs. Really simple data visualization. It gives us the option to drag input fields across, but actually all we're going to do is note that the outputs from the Airtable node have now become the inputs of our Set node. We're going to click in this box, which gives us options for the fields we want to pass through.
Inside the name here we're just going to put "test name", and we're going to give it a data type. There are five different data types: a string, which is something like text; a number, which is self-explanatory; a boolean, which is a true or false; and then some more complex structures, like the JSON we're passing through: an array, which sits in square brackets, and an object, which could be a JSON object like the one on the left-hand side. For now we're just going to pass through a string, and we'll say "hello world". At this point we hit Test Step again; that runs the previous nodes, passes the data through, and outputs the new data, which we can see has come through as test name: hello world. You're probably wondering at this point why it ran 20 times when we just wanted to pass one hello world through. It's because, by default, all nodes in n8n operate on a per-input-item basis: we've given it 20 records from Airtable, so it runs 20 times. If you want it to execute only once, you can go to Settings and turn on Execute Once; test the step again and it will execute for the first item only and output a single hello world. We're going to turn that back off and run it again. You can see that we've received the hello world but none of the Airtable data we wanted to pass through and transform. There are two ways to deal with that. One is Include Other Input Fields, where we tell it to include specific fields or all of them, so we can pass the data through that way. But the whole point of using the Set node (the Edit Fields node) is that we actually just want to take two things through, the name and the task; we're not interested right now in the status, so we just want to take those two fields only.
Now, tip number eight is where it starts to get really valuable if you're a business. There aren't many times you'll be passing static data through like "hello world"; you might have a fixed value like an API key, but the majority of use cases for n8n involve transforming our data, or taking it from one platform to another, and that usually requires dynamic data. So we're going to remove Include Other Output Fields in the Set node and run it again, and it just comes through with our 20 hello worlds. What we're going to do first is look at how to include dynamic data just by writing it. We want to take the name and the task through to the next stage, so we'll copy those out and call the first field "name"; that's a fixed field name, and we're expecting a string because it's a name. For the value, what we want to say is: every time, for all 20 of these records, pull this name field. There are two ways to do that, but they both involve expressions. The simplest is to drag the field from our inputs on the left-hand side across and drop it into the value box, and you can see it pops up with a result preview showing, for each record, what the value would be, like an indicator of what it's going to look like on the other side. The second way is to reference it by name, and it ends up one and the same.
If we were doing this from scratch, we'd add the field "name" again and switch the value to Expression. Then, every time we reference data in n8n (this is just the default), we type two opening curly braces, and that opens up our expression. It immediately prompts us with all the different values we might want to reference; the most important for now is $json, which, as we saw before, is the data being passed from one node to the next. If we click on Json we get $json, which references the entire object output by the previous node, the Airtable output. To see that in greater detail there's a little button here you can click, and then on the left-hand side you've got our expression, $json, and on the right-hand side you've got the full JSON object we're passing through, for each record, which visually shows us exactly what we'd be passing. Now, you can see that this is the full object, and actually we just wanted to extract the name. The JavaScript for doing this is just a dot, which lets us access an item within an object, and we get prompted with the different fields we can extract; in this case we want to grab the name, so we hit name, and you can now see we've got the result we wanted. Another way to reference it, instead of using the dot, is square brackets with single quotation marks around the field name. These are just different ways of referencing an item, or a field, within an object in JavaScript.
You just want to stick to one convention, and I generally use dots. Now, dots are only good when the field name is a single word with no spaces. If it were "first name", for example (I'll go back and change the data object in a second), then .first name, with a space, is not going to reference the correct value. I've gone back and changed the field to "full name", so we'll run that again, and you can see all of these references have now failed, because we've changed from name to full name. If I open it back up and reference the full name, it automatically uses the second convention (the square brackets with single quotation marks) instead of the dot convention, and that successfully references the field; if I try the dot method it won't find it and shows invalid syntax. So they're just different ways to reference things. By convention, I would always make a field name a single word, either by joining the words together or putting an underscore in between, just because I prefer dot notation.
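Side by side, the two conventions look like this (the annotations are just labels, not part of the expression):

```text
{{ $json.name }}            // dot notation: fine for single-word field names
{{ $json['full name'] }}    // bracket notation: needed when the name contains spaces
```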
With your business data sets, though, you don't always get to decide the field names, and then you can use the other method instead. And like I said, if you type out $json and then hit the dot, it will still give you all of the available options, and if you click on the full name it will automatically pre-fill the right syntax for you; we can see all our results on the right. Tip nine is an extension of this: mastering dynamic data. This one, I promise, is going to save you a ton of time when you're creating workflows and having to go back and edit them.
We just passed through the data for the name, using $json.full_name. But $json only ever references the previous node, and often you'll come back into a workflow and add another node in between. So we're going to open up the options, add a second Set node, delete the existing connection, and wire it in between, and we'll just call this one "middle" to indicate it's the middle step. Now we're passing our data through a node that sits between our Airtable node and our Edit Fields node, so any time we reference $json in the final node, we're actually referencing the previous node, which is no longer Airtable: it's now middle. Say we don't pass any values through it, or actually, we'll pass just one field called "test": it passes through 20 tests, like we saw before. But now, in the next node, we want to access the full names from the Airtable node, and because $json only references the previous node (i.e. middle), we can't get to them that way. So how do we access them? There are two ways again. There's the drag-and-drop style, which is probably the easiest at first: in the schema view we can actually see all previous nodes, so we find full name under the Airtable node, drag it into our expression value, and drop it. You can see it has pre-filled everything we had before, the .full_name part (or the bracketed version), and it has also prepended a node reference in front. If we run it again, you can see we are still retrieving the names, as well as the data from the previous node. That's the first way to do it.
The second way is actually writing it out again. If we add the field "name" and open an expression, it immediately suggests referencing the previous node via $json, but we know we want to reference an earlier node. So we scroll down, and since we want to grab data from the Airtable node, we hit Airtable, and it gives us the reference to that node's object. Now we need to go inside it: we hit dot, then item, which grabs the matching item from that node, then dot again to access the data, then json, exactly as before, and a final dot gives us all of the values inside the object. We hit full name, and we've got exactly the same result as we had before.
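Written out, the two kinds of reference look like this (node and field names as in our demo, assuming the underscore-joined field name):

```text
{{ $json.full_name }}                       // the immediately previous node's output
{{ $('Airtable').item.json.full_name }}     // a specific, named node's output
```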
Knowing this is actually super critical. It might seem trivial, but you will go back and edit a lot of your flows, and you will add nodes between nodes where you didn't have them before, and it will cause a lot of headaches if you don't reference like this from the start: if you just use the plain $json format, you will get undefined errors all the time. So, starting from scratch, I would highly recommend referencing nodes by name like this; it will save you a lot of time in the long run. You now have nearly all of the core fundamentals of handling data between nodes: you understand triggers, you understand what a node is, and you understand the inputs, the transformation, and the outputs.

When you're handling large data sets, thousands of records, as you often do in a business, this next one is going to come in really handy, and that is pinning data. Inside our Airtable node you can see we've got our inputs, our transformation, and our outputs, with the table data, the JSON data, and the schema data, and then we've got two controls over here. The first is the edit field, which is great for testing: you can go into edit and change any of these values, so if we wanted to say a record is "done", it will actually save it as done, letting us quickly tweak values rather than going back to our original data source just to test something. The second is pinned data, and it has automatically pinned the data because we've edited it. If we go back and unpin: pinned data allows us, with thousands of rows, to save a lot of time when we're testing. It also enables us to take values from a workflow run that caused an error and rerun them through the flow; you'll often hit errors, and you can push that data back into our test environment here. But for our purposes, pinning data just stores data in the node so we can rerun the flow on the same data. If you're running thousands of rows, it's not going to execute as quickly as it does now: you can see it already took a couple of seconds with 20 records; with 4,000 records it would take 10, 15, 20 seconds, and that adds up when you're testing. So what you want to do is just come in here and pin the data, and the node will always output this data instead of executing. It's basically storing the data, and it gets this little pin icon, and you'll see how quickly it executes now: basically immediately.
So when you've got 4,000 rows, or lots of data, which you will have with your business systems, pinning data is critical for testing. We're now nearly at the end of the core concepts and fundamentals of n8n, and then we can get into the real nitty-gritty of building out workflows. The next one is important because, at some point in time, your nodes will fail: it's about retries. Inside our Airtable node we'll unpin the data, go back, and execute the workflow again. It executed successfully the first time, but that won't always happen, particularly with AI agents, HTTP requests, or services that are less reliable than Airtable, so we're going to want to implement some retry logic. Inside the node settings we have a few options: Always Output Data, which we won't worry about now; Execute Once, which we've covered already; and then Retry On Fail, which, when active, tries to execute again when the node fails. There's no reason not to turn this on, because if we're reaching out to Airtable and for some reason the server responds in a way that means our data doesn't come back, the whole workflow fails, and all of the execution we did up to that point is worth nothing. Clicking Retry On Fail gives us the option to change the maximum number of tries (if it fails, it will try three times) and it will wait one second between tries. I normally leave these at the default values; sometimes, if you've got rate limits or similar, you can push the wait between retries higher. Then you get to say what happens if an error does occur: do we want it to stop the workflow entirely? We'll cover much more on error handling later, because it's super critical for business implementations, but the three options are: stopping the workflow completely; continuing, and passing the error message as a regular output; or continuing and passing the error message to a different branch. Say we wanted to record the error somewhere in a log, we'd use that last one; if the Airtable result weren't important we'd use continue, because the error shouldn't stop the workflow. But it's critical to our workflow here, because we're trying to grab tasks and full names, so we actually just want to stop the workflow when that happens.
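For reference, if you export a workflow, these per-node settings show up as plain fields in the workflow JSON, roughly like this (a sketch from memory of exported workflows; treat the exact field names and values as illustrative, with the wait given in milliseconds):

```json
{
  "retryOnFail": true,
  "maxTries": 3,
  "waitBetweenTries": 1000,
  "onError": "stopWorkflow"
}
```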
Tip 12 is the most overlooked part I've seen in all the templates on the n8n template library and in YouTube videos. This is so critical if you're delivering client projects or you want your workflows to be maintainable, and that tip is naming consistency. Right now, if you were somebody in a business and you came to this workflow, could you tell me what happens? We've got When Clicking Test Workflow, we've got Airtable, we've got Middle, and we've got Edit Fields; from looking at this, I have absolutely no idea what this workflow is going to be doing. We'll cover this in much more detail later, when we talk about client standards and consistency, but to start with the basic fundamentals: name your nodes. The way you do that is to go into the node itself by double-clicking, hit the edit icon, and rename it there. It's important to have consistency in naming so you can revisit workflows and understand their logic; it makes them far more reusable. Personally, I follow a convention that makes it easy to reference the node but also describes the action taking place. All nodes carry the symbol of the software anyway (we've got Airtable here, and if we bring in Google Sheets you'll see it has a Google Sheets symbol), so I don't necessarily feel the need to repeat the service name within the node, though you can if you wish. But the fundamental actions you'll use with a service are: getting data from the service, i.e. reading data, like we did from Airtable; posting data, which is actually sending data to the service; updating, which is partly posting, but where we're checking whether the record currently exists; and deleting, which is removing data from the service. This is also the convention you use with API requests, which we'll touch on in the API mastery section, but for now just treat get, post, update, and delete as standards for naming nodes. So instead of "Airtable" here, what are we actually doing? We're getting the task records, so we might call this GetTaskRecords.
If we rename it and come back out, it's now immediately so much more obvious to me, as an outsider looking at this, what I'm doing. I don't know exactly what the tasks are, but where it previously just said Airtable, I now know we're getting some sort of records from Airtable. I've put this in Pascal case, where the words are joined and each starts with an uppercase letter; you could alternatively have spaces between them, like normal case, or you could use camel case. It's entirely up to you which convention you use; I just use Pascal naming to make it easier to reference those previous nodes. You might want to call out specific things like triggers with a separate naming convention: instead of an action like get, post, update, or delete, you might want to call it a manual trigger, and the way I do that is with an underscore, so I have Manual_Trigger. We then have GetTaskRecords, then we're setting values in the middle, and then we're setting values again, which isn't very descriptive, because those nodes aren't taking many actions; but this last one, for example, is getting the name, so we'll call it GetNames. You can see the flow is now immediately more readable.
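The renamed flow now reads as a chain of actions, and single-word names slot straight into node references (the middle node's name below is my own placeholder, not from the video):

```text
Manual_Trigger → GetTaskRecords → SetMiddleValues → GetNames

{{ $('GetTaskRecords').item.json.full_name }}   // clean reference to a renamed node
```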
We've not yet got to the point of adding notes to the flow, which is entirely possible and recommended, but the starting point is making your nodes easy to look at, so you understand exactly what they're doing without having to go into every single one. That's fine when you have a workflow this small, but I'll show you an example now of where it won't be OK. I've jumped into an example here of a template from our Skool community where we parse financial documents. You can immediately see I've used this naming convention on these nodes, and it's immediately obvious, even if you took away all of these notes (we'll just move those up there), what is happening at the outset: we have some sort of Google Drive file trigger, we're then obtaining the file, we're extracting data from the file, we're formatting it in some capacity, and then we're processing items one by one and using an AI node to extract data and update the financials into Airtable. So immediately, without any notes, I can see from the names that we're taking data from a file, extracting and formatting that data, and then uploading it to Airtable. Your client will think the same: if they have access to the n8n flow, they'll go in and be able to immediately understand, at a high level, what the workflow is about. Really, really important. That brings us to the end of the core concepts section of the course.
We'll now get started and dive deeper by building out practical business use cases using your data. We're going to use the example of an invoice parser: we're going to build it and talk about the data fundamentals next. We'll end up with something like this, where we're monitoring a file source for our business, like a Google Drive store of all our documents. We're going to cover how to process all of the different file types we might use; we'll see multiple inputs to our flows and multiple triggers and how to handle them; we'll even touch on how to set up our first AI agent, why we've set up a loop here to handle incoming documents one at a time and why that's important, and then how to update our database on the back end to make sure we're storing the correct information and validating it.

Before we start building any workflow, a good first step is to understand and plan out the workflow you're going to build, just so you understand the logic you'll follow throughout. You'll obviously iterate as you go, but it's a really good starting point, especially if you're working with clients, to plan out exactly what the flow will look like, make sure you're on the same page, and then move forward with the first version of the flow. To do that we're just going to use some sticky notes: open the nodes panel again and type in "sticky" and you'll get the sticky notes, or, as a shortcut, just hit Shift+S and it will open one up. You can see there are a few options on it. We can change colors: I always use white as a standard for informational notes, purple for supplementary ones, and red for things that have to change, but it's entirely up to you; use whatever you want. Sticky notes are written in markdown format, so if you double-click into the note, a double hash makes a second-level heading, a single hash makes an even bigger heading, and three hashes make a smaller one.
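So inside a sticky note, the heading levels look like this:

```markdown
# Biggest heading
## Second-level heading
### Smaller heading
Plain note text underneath
```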
That's just a bit of markdown for you. All we're going to do is plan out the steps we think we'll use in this template, to make sure the logic flows correctly. If we're parsing financial invoices, we know we're going to need to get a file, and that's probably going to be our trigger: we'll either do that from Google Drive or Gmail, and in this example we'll pull it directly from Gmail. So our first step is: how do we retrieve the file, or the email? The next step is probably some pre-processing: you'll see in a minute that loads of data comes through in an email, and we want to strip out all of the irrelevant parts, because that makes it really easy and clean to work with. So we'll have a pre-processing node where we do some data formatting. Once we've formatted that email or file, we'll then want to extract key information from it, so we'll just put "key info": that might be the invoice name, the supplier name, the invoice number, the price, all of the information we want to pull; we'll do that in this stage. Once we've got the clean data, we'll extract that key info, and we're probably going to use an LLM, or an AI agent, to pull it, because it will be able to interpret multiple different file formats and multiple different emails and pull consistent info from all of them. Then we're probably going to want to put that into our database, so once the information is extracted, we output the data into our database. You can see we've already got the outline for this flow, and now we can move on to actually building the first version. These steps may change, but it gives you a good visual idea of the different categories you'll need, and therefore how to start building it out.
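Put differently: whatever format arrives, we want the extraction step to hand our database one consistent, structured record. As a sketch, the kind of shape we might aim for (field names and values are illustrative, not from the video):

```json
{
  "supplier_name": "ElevenLabs",
  "invoice_number": "INV-2025-0042",
  "invoice_date": "2025-01-15",
  "total": 22.00,
  "currency": "USD"
}
```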
For the first part of building out our invoice parser, we're going to come back to our workflow and remove everything we currently have, because instead of a manual trigger, this time we're actually going to monitor emails, or a Google Drive folder. So we'll get rid of the manual trigger for now, click the plus icon, type in "email", and we've got Gmail here; let's set up Gmail to show you how to set up the Gmail trigger. You can see there's one trigger, On Message Received, so every time a message comes in we'll be monitoring it. When we open it there are a few different settings. We've got to connect to the correct credential. We've got the poll times, and we can monitor every minute; there's no disadvantage to polling every minute given we're on the self-hosted plan for n8n with unlimited executions, it just takes up server time. We're monitoring for messages received, and for now we won't simplify the response, so we can see everything we pull in. If you hit Test Event, it pulls in your last email; right now we're monitoring a Gmail of mine, so it pulls the latest email received in that inbox. What you can see on the right-hand side is the JSON format, with a lot of different data: inside the JSON we've got the ID, we've got which thread the email was part of, we've got all of these headers, and we've got the HTML content, the raw content of the email. You can see this is an overwhelming amount, and even in the table format we've got all this information, with the text of the email arriving as one big block that's really hard to read. So what we're going to do now is understand how to strip out the bits we actually want and take those forward for processing.
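To make that concrete: out of that huge raw payload, the pre-processing step would keep only a handful of fields, something like this (hypothetical field names and values; the real Gmail node output differs in its details):

```json
{
  "id": "18f2a9c0d4e5b6a7",
  "from": "billing@example.com",
  "subject": "Your invoice",
  "text": "Invoice INV-2025-0042, total $22.00, due on receipt.",
  "attachments": ["invoice.pdf"]
}
```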
Now, setting up Google credentials: they've recently changed this in n8n, so there are now a few more steps, but I'm going to run through how we do it for Gmail, and it's exactly the same for Google Drive, Google Docs, Google Sheets, etc. You only need to set it up once, and then you can connect each of your nodes. We're going to go into the trigger (the Gmail node), go up to the credential dropdown, and set up a new credential, and you'll see it asks for a few different things. We've got the OAuth redirect URL here; we're connecting using the OAuth2 method, which is just a standard used across the industry to connect to different accounts (it's effectively behind the Google sign-in page you're used to). We then have two values we need to go and collect, the client ID and the client secret, and we can rename the credential as usual; we'll call it "Gmail account test" for now. What you're going to do is go to console.cloud.google.com, and in this window you'll be able to see the navigation menu on your left-hand side, with APIs & Services. We're going to go to Enabled APIs & Services. If you've not previously set up a project, it will probably prompt you at this point to set one up, but if you have set up a project before, it will just pull you into that project. You're then going to actively enable the APIs you want the project to be able to access. I've already got them enabled, but you'll go to Enable APIs and Services, type in things like Google Drive, and click on it, and where mine says Manage, yours will say Enable, or something to that effect; click that to make sure it's enabled. Do the same for every app you need: Google Calendar API, Sheets, all of that, and in this case mail, so we'll go to the Gmail API and enable that too. Great, that's the first step: they're all enabled. We're going to go back to APIs & Services, and the next thing is setting up our credentials. We've given the project access to the relevant APIs; we now need to give it access to our account.
to our account so we're going to go up here and we're going to click on create credentials if you've not done this before your oo 2.0 client ID is will probably be empty so you're going to go to create credentials and oor client ID if this is the first time you're doing this it might ask you to set up a an account just follow the steps through and then it once you set that up as an external web app account you'll be able to come back to this part in the tutorial and set up this
client ID so we then have create oo client ID we're going to choose a web application because we're just connecting it to n let's just call it n test demo here you can call that anything you want now this is really important we're going to need to input our authorized redirect Ur Uris and that just enables the server from Google Cloud to connect to our n nodes so we're going to go back into our test workflow you see this o oor redirect URL we going to click to copy on the right hand side to go
back we're going to add that as a authorized redirect URI there and we're going to create that's going to appear then with our client ID and client secret which are the values we need to take back to our environment so we're going to copy that go back paste them in here whoops so we'll take the client secret we'll go back and then it's going to prompt us to sign in with Google which we're going to need to do to just valid date that so I'm going to sign in with the account that I on at
the moment click Advanced and then go to alaso doapp this might say something like render or whatever your service is hosted on but we're going to go to it it's then going to ask us to confirm the access and the permissions brilliant connection successful if we go close that window we'll go back to Here We Now can close that and we're all connected to our account we fetch the test event you'll see we're pulling in an email from that account so we're just going to pull that Trigg up to our planning so that we have
We'll go into the trigger and pull the latest email; I've just sent myself an email with an invoice attached so we have something useful to process. Inside the email we have a bunch of formatting, which shows up as HTML code and makes it really hard to read, and then the important information we want from the invoice: the price, the provider (ElevenLabs here), the invoice numbers, and line items, all of which we can break out when we process the data. With the Gmail trigger node open and the email pulling through, I'm going to do what we did before and pin the data, so it doesn't change and we can test with consistent data. On the right-hand side we've got the output in a table format, and you can see this email contains a lot of supplementary information that will make it hard to process, both for us as humans and for the LLM later on. What we really want is just the text from the email, plus key things like the subject (which might tell us what it is), who it's from, and the key IDs and thread IDs; the rest we're not interested in at all, so I'll show you how to strip it out. Click the plus icon and add our trusty Set node, or Edit Fields node, and we're basically going to cherry-pick the fields we want to pull through. One that's immediately obvious is who it's from, so we'll drag that across; instead of keeping the autofilled $json reference, we'll apply our rule about proper referencing and point it at the node by name: the Gmail trigger, then .item, then .json.headers.from, and you can see it's now pulling the sender. We'll also pull the subject, drag and drop, and do exactly the same there. Then, in case anyone wants to reply to the email in future, we'll need the IDs and thread IDs, so we'll pull those across as well. We'll also pull the text as HTML; you can see that's a load of code rather than the readable text, but it contains the text of the email, and I'm pulling it so that in the next stage I can show you how to strip out the HTML yourself if your source doesn't already do it for you. Finally we'll pull the plain text too. Because these fields reference the node by name, if we add any nodes between them they will continue to reference the correct node, and we can always come back in future and add another value. For now, out of that great mass of information, these are the only things we want to pull through and focus on, and that's why the Set node is really powerful.
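Here's what those drag-and-drop references look like written out, as you'd use them in a Code node (expression fields wrap the same references in double curly braces). The node name 'Gmail Trigger' and the field names below are assumptions; yours must match your own workflow exactly:

```javascript
// A sketch of reading another node's output by name with $('Node Name').
// Field names are assumptions -- check them against your own trigger output.
const trigger = $('Gmail Trigger').first().json;

return [{
  json: {
    from: trigger.headers.from,       // who the email is from
    subject: trigger.headers.subject, // the subject line
    threadId: trigger.threadId,       // needed if we ever want to reply
    textAsHtml: trigger.textAsHtml,   // the HTML body
    text: trigger.text,               // the plain-text body
  },
}];
```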
The Set node strips out just the fields you want to look at. We'll rename it Set Email Values to match our naming convention, and when we test the step we've pulled just the values we want: a much cleaner set of data to work with than before. We still have the value stored as HTML, so let me quickly cover what HTML and markdown are. I've got a page open from dev.to that visually shows what markdown looks like. Headings use hashes, and like we said before, the number of hashes sets the heading size: one hash for the first heading level, two for the second, and so on. You then have bullets or dashes for list items, and other syntax like double asterisks for bold text and single asterisks for italics. That's really easy for us to read, and it's also a really good format to pass into an LLM later down the line. Then we have the foundation of web development, which conventionally uses HTML. If you open up the console for a given web page, you'll see lots of HTML tags, like head and so on; these are XML-style tags that open and close, so we've got an opening head tag, its closing tag, and the page title within. Written as HTML it's much harder to understand: long, very confusing, full of tags that are hard to read. Written as text it's much easier; it still has characters representing new lines, but we can identify the actually useful information much more quickly. The way we do that in n8n: if we open up the nodes panel, there's a Convert Between Markdown and HTML node. Inside it we choose HTML to Markdown, drag and drop the text-as-HTML field in, and it gives us back the same readable text. This is just to show that whenever you're dealing with HTML formats, you can convert them directly with this node rather than relying on the Google node to convert for you. Scroll down to the results and you can see we've received the data as a much more readable, stripped-out set of values; we can actually see the content of the email, and that's how we're going to pass it into the LLM later on. So we've processed that data and it's now in a much more readable format.
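Here's the same snippet of content written both ways, to show why the conversion matters; it's a made-up invoice fragment, purely illustrative:

```javascript
// The same content in HTML and in markdown. The markdown version is far
// easier for a human to scan, and cheaper to send to an LLM (fewer tokens).
const asHtml =
  '<h1>Invoice</h1><p>Total due: <strong>$22.00</strong></p>' +
  '<ul><li>Creator plan</li><li>Usage fees</li></ul>';

const asMarkdown =
  '# Invoice\n\nTotal due: **$22.00**\n\n- Creator plan\n- Usage fees';
```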
We're now going to move these steps across and check that we're outputting the correct data, and this will conclude your first automation completely from start to finish: we take in an email, clean the values, and output them to Airtable, to show you how easy your first automation is to make. Connect a node here, type in Airtable, and open it up. What we want is to create a record: for each email we're processing, we create a new record. We can see the data we're receiving, our credential is already set up from earlier, and the operation is creating a new record, so we'll choose the tutorials base we set up earlier. Now we need to go back to Airtable to set up an appropriate table containing all of these columns: from, subject, ID, and so on. We'll come back into Airtable to create a new table, but when we click the plus there's a quicker way: hit the "23 more sources" option, go down, and there's "Paste table data". We'll type in the names we need for our columns: thread ID, from, subject, text as HTML, and text (the version with the HTML stripped out). Choose auto-detect delimiter and the different field headers appear underneath; click "Import pasted data" and it asks where we want to import, which is a new table. It will treat them all as long-text fields, but we can change the data types up here: thread ID is a single line of text, subject again single-line text, text as HTML should be long text, and text long text as well. Import those, and the table is created quickly for us. Now we've got all the values we need, so go back to the workflow and hit refresh on the list, and the imported table appears. I forgot to rename it, so we'll go back, rename it to Emails, then refresh the list in n8n again and select our Emails table.
There are two options here: we can map the columns from our data to our Airtable manually, which is what you'll normally do because your data sources don't always share a naming convention with your other data tables, or, if you know the columns match exactly, you can map them automatically. For example, "from" is lowercase in both our data and our Airtable, so auto-matching would find it and add the value; but most of the time you'll use "Map each column manually". Doing it manually is as simple as dragging and dropping from the input on the left into the correct field: subject in there, then thread ID, then text as HTML and text. We'll rename the node Post Emails, which tells me we're pushing our emails to the Airtable database, then give it a run and see if it's successful. It's come up successful, and the output shows the fields that have been added to our Airtable. If we go back to Airtable, we can see all of those fields added directly to the table, including the full text of the email. Immediately we can see all the information from that email, though it's still in a pretty poor format. But that is your first automation: getting from an email directly to a row in Airtable. Well done on getting to this point; the first one is always the hardest, and you've done a really good job if you've stuck with it. We're now going to expand on this, figure out how to format and process the data into something genuinely usable for your business to parse invoices, while also learning all of the steps it takes to master the n8n fundamentals. Coming back to our workflow: we've completed a really cool first automation here. We take an email from the Gmail account we've connected, extract only the values we want and need using the Set node, strip out all the complex HTML formatting, and post the result to Airtable, so we could keep a record of every single email that comes in. To activate the workflow so it actually runs continuously, hit Active up here. It may warn you the first time that the workflow will now run continuously; what that means is that every single minute, any email that comes in will be processed through these steps, and those runs will all appear in the executions log we mentioned earlier.
Alongside all the successful test executions we've done, the production executions will also appear in this log, so you can see the historic runs. However, one thing we did not cover is how to handle different file formats: if we fetch the test event in the Gmail trigger again, we're only pulling the text values from the email, not the attachments, which were PDFs attached to the original message. That's what we'll cover now. To resume our flow, we'll copy and paste everything below, so our initial automation is still saved at the top and we've got a new copy underneath to work on. We'll delete the bits we don't need by highlighting them and hitting delete, and to stop any tests running on the top flow (we're done testing it), click each node and hit Deactivate (or D on your keyboard), or highlight the whole lot and toggle them all at once. We're doing that so we can focus on testing the next flow, where we'll actually deal with multiple data formats. We'll get rid of the Convert to Markdown node, because that was just an example, and we'll detach Post Emails and move it later in the flow, since we'll still need to post to Airtable and can reuse that node. You'll notice that after copying and pasting there's a "1" appearing after each node name; that's because no two nodes in n8n can share exactly the same name. To keep things clean, we'll give them more specific names: since we're now looking at files rather than emails, something like Set Binary Values or Set File Values, and you can see the "1" disappears. To actually get the file from our email, go into the settings of the On Message Received trigger, go down to Options, switch on Download Attachments, and fetch a new test event.
It will warn us that this unpins and overrides our pinned data; that's fine for now. We've now pulled the attachment data into the Gmail trigger, and immediately you can see a new data type, binary, with some attachments here (attachment 0 and attachment 1) and options to view and download them. Let's download one and have a look. It's an invoice addressed to me, with lots of details: it was issued on February the 2nd, here's the invoice number, there's a total, a link to pay, and all of these line items, which by the end we'll be able to extract into an Airtable base so we can reconcile our invoices automatically from this flow. Back in the flow: that was attachment 0, and attachment 1 is just a receipt version of the same document. Going back to the different formats, you can see we've got a new one: the table, JSON, and schema views all still exist, but we've also got the attachments, which appear as binary files. If you're not familiar with the term, binary is how computers store data: this file is made up of millions of ones and zeros that the computer can read and translate into something readable for us. Any attachment or file format that isn't plain text comes through as binary; that includes docx files, XML files, text files, Google Sheets, and PDFs, and they'll all appear in the binary section. n8n can download the file, and then we can use the data inside it to get what we need. To make sense of the text within a binary file, we need to convert it from binary into a format we recognise, like text or JSON. Open the nodes panel and naturally we go to Data Transformation; scroll down and there's a Convert Data section, all about converting files from one format to another, and this Extract From File node says "convert binary data to JSON". It gives us all of these different options, and we know we're extracting from a PDF here: pulling the values out of a PDF and turning them into text so we can process them into our Airtable. Open the node; you can switch file types in the Operation field, and the Input Binary Field is called "data" by default, so we'll leave it as is.
We'll drag this node down, connect it into our flow, and run a test from the canvas. And we've come across our first error: "Problem in node Extract From File: this operation expects the node's input data to contain a binary file, but none was found." Reading that (you can also see it inside the node itself): it expected a binary file to convert, but no binary file was found, so make sure the previous node, i.e. Set File Values, outputs a binary file. Basically the error is saying "there's no binary file in what you've passed me", and that's because our Set File Values node only passes text; it never grabs the binary file. There are two ways to fix this. Let's disconnect here and connect the Extract From File node directly to the Gmail trigger, which does carry the binary file, and run that. We get another error, and the reason is that what I said a moment ago was slightly wrong: the binary property is not always called "data" by default. When it comes from a Gmail trigger it's called attachment 0, attachment 1, and so on, so we either need to rename the property or go into Extract From File and specify which attachment we want. Oddly, when I put in attachment 1 it gives us attachment 0 (I'm not sure why; the fields are just lined up that way in n8n), but we know we want to process attachment 0, so I'll enter the name that returns it, and you'll see from the output that it does. Most of the time you'll be dealing with one file at a time, usually from Google Drive, in which case the property is just called "data"; but when you're pulling from the Gmail trigger, it's important to know you can update this Input Binary Field name to match whatever appears up here. So attachment 0 has been passed through. And here comes the importance of referencing node names correctly: inside Set File Values we made sure to reference all of these fields by node name, which means we can now put this new node between them and Set File Values will still continue to work, whereas previously a plain $json reference would have errored and not found any of our values.
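If you're ever unsure what a trigger has named its binary properties, a tiny Code node will tell you. This is a hedged sketch using the standard Code-node helpers; the binaryKeys field it adds is made up purely for illustration:

```javascript
// List the binary property names on each incoming item (e.g. "attachment_0",
// "attachment_1" from a Gmail trigger, or "data" from Google Drive).
for (const item of $input.all()) {
  item.json.binaryKeys = Object.keys(item.binary ?? {});
}
return $input.all();
```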
So we've now got the text values from our original email as well as the file values. Clicking into Extract From File, the output shows the binary file but also the translation we wanted into text: all of the text from the actual invoice attachment of the email, not just the email itself. Now, if we consider our table again, we saw how unstructured the text we received was. We got rid of the HTML formatting, but there are still unnecessary spaces everywhere and it's really hard to read. We don't want to pass that along: firstly, we don't want to look at it in our Airtable, because it's hard to read; and secondly, when we pass it into an LLM we pay per character, or rather per token, that we send, so stripping it reduces our cost. What we need to do in n8n is strip out all of that unnecessary formatting, to make it really easy for us and for the large language model (the AI or ChatGPT-style node) to work with. There are built-in ways to do some of this: in our Set File Values node, if we go to the end of the text expression and hit dot inside the object, n8n suggests functions we can chain on. We can concatenate with other values, which we don't want; we can removeMarkdown, which strips markdown formatting; we can removeTags, which is another way to remove some of the HTML; there's replace, for swapping certain words or key values; and there's a whole set of helpers like trimming the end or trimming all spaces. But what we want is a catch-all that removes all of the unnecessary formatting that's made it through, and you won't manage that easily with these inline expression functions.
That's going to require a separate Code node. Click the plus icon, go into Core, and there's a Code node in there. You'll see it presents some JSON-shaped data: as usual we've got our inputs on the left (all our values), the code in the middle, and the output on the right. For anyone who's not written any code: don't be alarmed, this isn't as difficult as it looks, and we're going to use ChatGPT or Claude to create these nodes for us; I'll show you exactly, step by step, how to do that, and it's really simple. First, let's go over the node. We've got a mode, which by default is "Run Once for All Items"; we're only considering one input here, so that's fine, or we can run once for each item. Then we can choose JavaScript or Python; I choose JavaScript because it's the stable option here. The default code says it will loop over the input items and add a new field called myNewField to the JSON of each one; in other words, it takes the JSON from our input and performs an action on it. If you've not written code before, all this says is: for each item in our input, which we reference by calling $input.all(), take this action; and at the end, return everything you've got.
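The starter code in the node looks roughly like this (it may differ slightly between n8n versions):

```javascript
// Loop over every incoming item, add a new field to its JSON, then return
// everything -- this is the whole pattern most Code nodes follow.
for (const item of $input.all()) {
  item.json.myNewField = 1; // add a new field called myNewField to each item
}
return $input.all();
```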
If we run it, it returns everything we currently have plus a new field, myNewField, equal to 1: exactly what we asked it to do. I wouldn't worry about manipulating this code directly; worry about the business logic behind what you want to do. We said before that we want to take the text we've received and strip out all of these \n characters, which are new lines, plus extra spaces and all the unnecessary formatting. To do that we need to understand a concept called regex (regular expressions), and I've asked Claude to explain it simply, because it is quite a unique concept. We have an example with a phone number here, and you can see it contains parentheses and an exclamation mark, but we just want the digits. Claude has given us code that strips out just the numbers: to clean the phone number, we pass it in and apply this regex pattern, which means "match anything that's not 0-9 (i.e. not a digit) and replace it with nothing". Some common patterns for stripping characters: removing all the spaces; removing all punctuation entirely, which we may not want here, because we're dealing with numbers, and removing the full stops would change the value of certain numbers; and lastly, a catch-all that removes everything except letters and numbers. The point is that there are patterns you can use inside regex that will strip out exactly what you don't want.
You don't have to go Googling all of those; there's a really simple way to do this. We'll copy the example Claude gave us, paste it into Claude again (or ChatGPT if you prefer), and at the top write: "create an n8n code node in the below format that takes in text with unwanted characters and strips out new lines and all formatting except commas and full stops". Asking it to follow that format means it returns the code in the n8n structure; and since we're referencing a certain field, we should also mention that the input field is called text. That will probably return a 95% version that we might need to modify slightly, but you don't need to know the code to do any of this. It returns a JavaScript function, which we can just copy, and it explains: this will replace all new lines with a single space, remove all special characters except commas and periods, and convert multiple spaces into a single space. If it doesn't work the first time, just bring the new code back into Claude, re-prompt it or ask what's missing, and you'll get there eventually; it's a very simple, quick way to create code nodes without writing them from scratch. This is JavaScript, so it's fairly readable just by looking at it: again, we're taking the items from all the inputs, and we've got something it's called dirtyText, referencing our text field; good job, we told it what the input was. Then it takes the dirty text, replaces the new lines with a space (that's the regex here), and removes everything except word characters, spaces, commas, and periods. You could find these regex expressions online if you wanted to write this yourself, but Claude came up with it for us in seconds. So we're going to run that.
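For reference, here's a sketch of the kind of cleanup code Claude produced for us; adjust the field names (text in, cleanText out) to match your own data:

```javascript
// Clean the extracted text: collapse newlines, strip special characters
// except commas and periods, and squeeze repeated spaces into one.
for (const item of $input.all()) {
  const dirtyText = item.json.text ?? '';
  const cleanText = dirtyText
    .replace(/[\r\n]+/g, ' ')  // replace all new lines with a single space
    .replace(/[^\w\s.,]/g, '') // remove everything except letters, digits, spaces, commas, periods
    .replace(/\s+/g, ' ')      // convert multiple spaces into a single space
    .trim();
  item.json.cleanText = cleanText;
}
return $input.all();
```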
It outputs a cleaned version in a cleanText field. If we scroll all the way down to the bottom, our text output now has none of the formatting you can see on the left-hand side; it's stripped out everything we asked it to, really simply. It's still not 100% human-readable (there are some invoice links and so on), but it's a lot more readable than the left-hand side, and it reduces our token usage when we pass it to the AI node. Finally, we'll rename the Code node to what it actually does, Format Text, and save it there. The next top tip is around conditional routing. What we mean by that is we may want to process binary files one way, and any emails that don't have binary files, i.e. plain text emails, another way. So how do we get the workflow to route them based on the inputs and take different actions on each? We'll come up and add a node in here,
and we want the Flow category and the If node. There are a few different ways to do this: we could filter on conditions, so if we only cared about emails that have files, we could filter out the ones that don't. But since we want both kinds of email, just handled with different actions, we can use either the If node or the Switch node; for this example we'll use the If node, which opens up with a set of conditions. Let's wire it up, directly to our Gmail trigger. You can see at this point that we can have multiple routes coming directly off one node, and n8n processes them sequentially from top to bottom. If I put the If node first, at the top, and run, it actually runs the If node first and then the nodes down here; so it processes top to bottom, and left to right within each branch. If I put the If node below, it runs the branch above first and then the second; you can see the top branch run, then the second, and left to right always runs first. In this case we want to process the two cases differently: if the Gmail trigger has an attachment, we take route one; if it doesn't, we take route two. Inside the If node we have the inputs on the left-hand side as usual, the conditions (or transformations) in the middle, and the outputs on the right.
At the moment we've got no conditions, and what we're looking for is: does the input from the Gmail trigger contain an attachment? In the conditions panel we have a few options, including any of the names from our JSON data. For example, suppose we wanted to exclude all emails from a certain address: we'd drag and drop the from address into the top box and say "whenever the from address is equal to" some fixed email. Whenever that's true, the item passes down the true branch; whenever it's not, it's false. So an email from that address goes down the true branch and is processed along one path, which we can connect to any node, and anything else goes down the false branch to another node. Here I've put "is it from example@email.com", and it's gone down the false route, because it was actually from my own address. What we need now is the real condition: does this item have a binary file attached or not? We just stuck with the default condition type, String, when comparing the from addresses, but a binary attachment is in fact an Object type. We're asking "does that binary object exist?", and we can reference the binary value directly by opening the expression brackets and typing the Gmail trigger node name, then .item; when we hit dot again it offers .json or .binary, and you can see it lights up green, suggesting the binary does exist. Set the operator to "exists", call the condition "if file exists", and it now goes through the true branch, because there are indeed files attached to this email.
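The same check, written as a Code node instead of an If condition, looks something like this; the hasAttachment flag is hypothetical, just to show the logic the If node applies for us in the UI:

```javascript
// Flag each item according to whether it carries any binary attachments.
for (const item of $input.all()) {
  item.json.hasAttachment =
    Boolean(item.binary && Object.keys(item.binary).length > 0);
}
return $input.all();
```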
So what we're saying is: if the file exists, follow this route, Extract From File and so on; if it doesn't, just process the text from the email directly down the false branch. At this point you can see that we've connected two nodes into a single node, and you're probably wondering whether that's possible at all and how Format Text will react. In this case, only one input will ever arrive at a time: an email either follows the true branch (it has attachments) into Format Text, or it follows the false branch, sets the file values from the email text, and then goes into Format Text. One thing I didn't mention is that the If node passes through all of the data from its input, so anything referencing the previous node sees exactly the same data. So yes, this works, and items are only processed one at a time. The only way it works, though, is if the node receiving both inputs handles them exactly the same, i.e. the incoming fields are named the same. We can see that Set File Values outputs a field called text, and if we run Extract From File again, its JSON output also has a field called text, so the input to Format Text is always going to be called text. And remember, in the Format Text code node we referenced that field name, so it doesn't matter which branch the item arrives from: it will be processed, and we'll never go down both branches at once and have the trouble of receiving two inputs simultaneously. So a node can handle multiple incoming connections as long as they share the same reference, like text here. But what if we do have two inputs and want to process both at once? Say this If node didn't exist: we get rid of all the connections and have the Gmail trigger follow both paths when it fires, one path extracting the data from a file if it exists, the other setting the file values, i.e. just pulling the text. How does that work, now that we're processing two lots at once?
It still works correctly, for the same reason: both outputs are called text. In this case Format Text simply runs twice, which you can see in the runs here: run one is the binary file, along with the text stripped out of it, and run two is just the text from the email, so we actually get both. However, if Format Text required both inputs to be complete before it runs, we'd have to introduce the Merge node. Go to the Flow category again and you'll find Merge; drag it over and delete these connections. The Merge node effectively lets us combine, append, or otherwise manipulate our two inputs, and we can add more inputs inside the node, up to ten. The first option is to append them: run that and we get one after the other, effectively two sets of table data in two different rows. If we combine instead, we choose a field to combine by, which is what you'd use to match up two sets of data; we write the field names in, and if the two sides use different names we can mark that. For the outputs, we also get to choose: both inputs merged together, so all the data in one row, or just the data from one input. We don't have to combine by matching fields, either; we could combine by position, and since we've got two inputs from only one initial email, that might seem to make sense, as it would just combine all of our fields together. But in this case, because there's a field called text in our file input and a field called text in our email input, combining by position would lose data: we'd end up with just one text output. If you wanted to get more complex, you could run an SQL query to do the merge, but 99% of the time you don't need that; append and combine will do the job.
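A tiny illustration of the difference, with two one-row inputs like ours; these are plain JavaScript objects standing in for n8n items:

```javascript
// Two inputs that both carry a field called "text".
const fileInput  = [{ text: 'text extracted from the PDF' }];
const emailInput = [{ text: 'text from the email body' }];

// Append: two rows, nothing lost.
const appended = [...fileInput, ...emailInput];

// Combine by position: one row -- the second "text" overwrites the first.
const byPosition = { ...fileInput[0], ...emailInput[0] };
```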
There is one final option, choosing a branch, and this lets us simply wait for both branches. Sometimes we want two branches like this to both complete before we pass the item through, so we use the Merge node purely to take both inputs, wait for all of them to arrive, and then output everything: an easy way to wait for all the data to execute before continuing. In our case, though, we don't actually need the Merge node, because Format Text handles both branches directly, so we'll delete it and reconnect the flow up here.
Now, every time we run this automation we're filling up server memory, adding more and more data to it. Think of it like your brain: you can only hold so much before you're overloaded and can no longer do any tasks, and the same goes for our server. If you're hosting on n8n Cloud, i.e. directly through n8n, you probably have a significant amount of memory; if you're self-hosting, you chose how much memory to allocate. If you're handling thousands of rows and you keep running these test workflows, that data will continuously be appended to memory and stored, more and more, until you hit an error telling you there's no memory, or not enough memory, to perform the action. A simple tip to prevent this: between test runs, go down to the bin icon and click it; that deletes the current execution data and removes it from memory. A super simple tip, but it will save you when you come across that error.
Now, one of the most powerful features of n8n is that it can handle multiple inputs at any one time. We're pulling our emails every single minute here, and if you have a lot of email coming in you could well be processing more than one per minute; if two emails arrive in the same minute, they'd both be pushed through this flow at the same time. Sometimes that's fine, but in flows where we're processing heavy amounts of data we might want to spread the work out, so we only process one file at a time: get it through the loop, get it done, then process the next. That's why loops are a really powerful feature of n8n and one you'll end up using a lot. Open the nodes panel, and under Flow there's Loop Over Items, with "Split in Batches" in brackets, because that's what it used to be called. Click the node open and you can choose the batch size in here; we'll choose a batch size of one. One thing to watch: sometimes when you connect a node while a previous node is selected, n8n throws it on top of the workflow and connects it to a bunch of different things. I'll just cancel that, click somewhere empty on the canvas, type in Loop again, and bring it over here disconnected. You can see the loop arrives as a two-node piece, and the reason is that it's almost a demo of the mini loop you need to build: there's the Loop Over Items node, then a placeholder which you replace with the nodes you want to loop over, and after the final node in the loop you connect back to the Loop Over Items node so it processes the second item, then the third, then the fourth, one at a time. The easy way to set this up is to delete that placeholder node and its line; now we have Loop Over Items, and we'll disconnect these and consider just the file-only flow. Connect the loop in the middle, so the process for each individual item is: extract the data, format the text, and output to our Airtable. Attach those all together and loop it back round. Now, every minute, if multiple emails come in here, say two, we follow this loop once per email, rather than two emails going into Extract From File together, two into Format Text, two into Airtable; it separates them into individuals, which prevents any problems. This only matters where you want to process items individually; because we're going to include an LLM node that processes the input, we want it to handle each specific email separately. So we'll add the loop there, and I'll just disconnect things so we can run the next part of the flow.
We've essentially got to the point where we extract from a file and format the text, so we're nearly there with creating a full invoice extraction and processing workflow. There's one key stage next: how do we extract the key info? For that we're going to use the AI nodes. If we come into the nodes panel and go to Advanced AI, we're first going to run over what each of these means. Starting with AI agents: you've probably heard a load of different definitions of "AI agent", and you're thinking, what the hell is an AI agent, and how is that different from things like ChatGPT, AI workflows, AI nodes? There are so many different terms, and here I'm going to distill the exact differences between all of them, so that by the end of this module you'll be really clear on exactly what an AI agent is, what an LLM is, what the advantages are for your business of using an AI agent versus an LLM, and what some good business use cases are for an AI agent versus a non-agent across the various areas of your business: customer support, sales, operations, document processing, we're going to cover it all. We'll start with what an AI agent is not, because that's probably clearest given what you already know. You'll probably be familiar with ChatGPT. ChatGPT is just a layer on top of a large language model, or what we call an LLM: it takes our queries as inputs and produces outputs, so we type a message in and it gives us an output.
The way that works: a large language model is trained on billions of parameters and lots and lots of data, and it works by predicting the next most likely word in a sequence. We ask it a question, and it uses its training data to provide an answer made of the most likely words in that sequence. The next concept is really, really important in distinguishing between an LLM and something like an AI agent: agentic versus non-agentic workflows. Once you understand this concept, you'll see that AI agents always sit within agentic workflows, but you can also have agentic workflows that don't have AI agents inside them. A non-agentic workflow is like prompting ChatGPT with a single query, what we call zero-shot prompting: something like "please write an essay on topic X, from start to finish, in one go, without using backspace". We prompt the large language model and it gives us a response based on its training data. An agentic workflow breaks that task down into multiple subtasks and makes decisions based on the data, and also on the research it conducts. For example, given "write an essay on electric cars", it might ask itself: do we need any web research to support this? Perhaps it goes out and grabs that research from a tool connected to it; then it writes the first draft based on that research; then, in the next step, it considers which parts need revision or more editing, tells itself which bits to edit, and edits them itself. So instead of a single linear flow from start to finish, the agentic workflow breaks down the different subtasks and actually thinks and revises based on the information and feedback it provides back to itself. This can be inside one single AI agent that gives itself steps and plans out a task, or it can be multiple LLMs chained together to create a sequence of feedback and decisions. The important part: while both of these use (or can use) AI, the agentic workflow exhibits agency, i.e. it becomes agentic through its ability to make decisions and adapt its strategy based on the content it encounters, rather than following a fixed processing path. It makes decisions dynamically based on the inputs and takes dynamic paths based on that information. This will all become really clear in the next few examples, where we cover a business case: if we were building a workflow, what would it look like side by side, agentic versus non-agentic? We'll use the example of building an invoice parser.
We're receiving an email with an invoice attached, and we're processing that invoice and extracting the key information, like supplier details, total price, invoice number, and various other details, from the attachment. That attachment can come in any format: a PDF, an XML file, a docx, or just the text inside the email. We have two flows here, showing on the left how this could be done non-agentically and on the right how it could be done agentically. On the left-hand side is a single linear flow: our email comes in, an AI node or LLM extracts all the information it can, and we're looking for template matches, i.e. supplier number, invoice number, VAT references. It extracts what it can from the text, and then it takes a linear path: was it a success? Yes, we add it to the database; no, we flag an error, and no further action is taken. It's non-agentic from start to finish because it makes no decisions: it follows a simple linear path from email receipt to storage or error handling. There's a single LLM extraction attempt, so it doesn't matter what type of content has come in; if this was only built for PDFs, or only built for text, it will just try to extract based on the instructions we've given it, and it won't adapt its approach. It's a one-size-fits-all attempt to pull the text from the PDF, and if it doesn't have the functionality for that format, it fails. We then have a binary success/failure outcome, with no option to retry the process, and it tries to match a fixed template regardless of what format the invoice comes back in. In summary, it's non-agentic because there are no decisions or adaptation in the flow; there's no feedback loop where it iterates on its own understanding; it can be connected to tools, but no decision is made based on the input about which tools to use (and in this example we've not connected any); and there's a fixed output every time, the same output. This contrasts with the agentic invoice parser, which, as we go through it, makes decisions and provides feedback to itself. We receive the email in, then classify the content, i.e. work out what data format it's in: has it come in as a PDF, as text, as an XML file? We then pass it to the AI or LLM node here.
use and this example we've not connected it to any tools and then there is a fixed output every time same output this contrasts to the agentic invoice passer which you can see when we go through makes decisions and provides feedback to itself so we receive the email in we're then classifying the content I.E understanding what data format is it in has it come in as a PDF has it come in as text has it come in as XML file we're then passing it to the AI or llm node here and this could also be an
AI agent we'll come on to the specifics of what makes it an AI agent and then based on the content it's received it's then taking a decision okay I've received a PDF so I'm going to take this route so it's making a dynamic decision based on the inputs okay I've received an image so I'm going to go and scan the image using an OCR tool or I've received text and therefore I'm going to take this route so it's dynamically making that decision based on the input data whereas previously in the non- agentic flow it was
just a linear flow it didn't matter what the input was it would always take the same actions we've then got a validation step so we will have another AI or llm node that tells us and feeds back to the previous step was that attempt successful if it wasn't then we have the chance to actually do it again and feed it back into the dynamic field extractor so in summary this is an agentic invoice passer because it features multiple extraction routes based on these different file types it includes confidence-based processing and actually feeds back information to
itself to make sure that it's correct it has access to these different tools so it can actually dynamically choose which route to take and which tool to use and importantly it can refine its own strategy to improve extraction success So based on the feedback it can run again to make sure that we've got a successful run so the thing that makes this agentic is yes there were decisions on actions to take and it took those decisions yes there is a feedback loop I.E it can process it own its own output again in a second run
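To make the contrast concrete, here's a hedged sketch of that control flow in plain JavaScript. Every name here (llm.classify, extractors, flagForHumanReview, and so on) is illustrative, not a real n8n node or API:

```javascript
// Agentic invoice parsing, sketched: classify the input, route it to a
// format-specific extractor, validate the result, and retry on failure.
async function parseInvoice(email) {
  const kind = await llm.classify(email); // 'pdf' | 'image' | 'text' | 'xml'
  for (let attempt = 0; attempt < 3; attempt++) {
    const raw = await extractors[kind](email);   // dynamic route per format
    const fields = await llm.extractFields(raw); // supplier, total, invoice no.
    const ok = await llm.validate(fields, raw);  // the feedback step
    if (ok) return fields;                       // confident -> store it
  }
  return flagForHumanReview(email);              // still failing -> escalate
}
```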
So let's go across now and understand how this presents itself in our workflow automation software, n8n. When we open up the nodes and go to the AI nodes, we have a load of different options down the right-hand side, and I'm going to explain which of these are LLM nodes versus AI agents, and cover the specifics of what an AI agent is within an agentic workflow. Going back to agentic versus non-agentic: LLM nodes and workflows can be agentic, but only if you make them so. If we add feedback or decisions, or chain multiple of them together to create dynamic routes and actions, they become agentic; but they can also be non-agentic, as with a single Basic LLM Chain, where we're effectively using it like we would ChatGPT, inputting a message and retrieving a response based on its context. I'm going to quickly run through all of the different LLM nodes and which ones you'll most frequently use for automating your business workflows in n8n. I've put them all on the canvas so you can see exactly what they look like, and they're all connected to one single OpenAI chat model; you can connect them to individual models, and they don't have to be OpenAI, they could be any other platform. It's just a demonstration that you can connect them all to one model.
On the right-hand side, in the tools bar, you'll see the different options, and each has been designed so it's easy to pick the right one for a specific use case; but under the hood, each is effectively an LLM with a different system prompt designed for a specific task (there's a short sketch of this idea just after this rundown). Take the Information Extractor: it "extracts information from a text in a structured format". That's just an LLM node; we're sending a prompt and an input to something like ChatGPT, a model like GPT-4o mini or 3.5, and it's been told to extract information; inside the node we tell it what information, and what text to extract it from. Then there's Sentiment Analysis, which is told to classify the sentiment of the input text as positive, neutral, or negative; that's really useful when you want to understand, from a large piece of text or customer feedback, whether things were positive, neutral, or negative. We could do the same with a plain LLM prompt by writing out what makes a text positive, neutral, or negative, but these are predefined nodes with the prompts already written for us, making certain actions easier. Then the Text Classifier, which again is just a prompt that says "classify this text into distinct categories", with clear examples of the categorisation underneath; at the end of the day it's still an input and an output based on the text we pass in. We then have the Summarization Chain, same idea: we're using an LLM to summarise a block of text we pass in. And then there's the most generic one, the Basic LLM Chain, which you'll probably use the most because it's the most flexible: we can adapt the prompt to the output we want to see, give it clear examples, and so on. The Basic LLM Chain is simply a chain to prompt a large language model.
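To make "an LLM with a canned system prompt" concrete, here's a hedged sketch; llm.chat is a made-up helper standing in for whatever chat-completion call your model provider exposes:

```javascript
// Roughly what the Sentiment Analysis node does under the hood: a fixed
// system prompt plus your input text, sent to an ordinary chat model.
const sentimentPrompt =
  'Classify the sentiment of the following text as positive, neutral, ' +
  'or negative. Reply with exactly one word.';

const result = await llm.chat({
  system: sentimentPrompt, // the canned prompt the node ships with
  user: inputText,         // the text we feed into the node
});
```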
Using it is the same as going to ChatGPT, choosing the model you want, like GPT-3.5, and putting in a set of instructions, a role, and some examples of the output you want; we can do all of that in the Basic LLM Chain. But it's still not an AI agent, and not necessarily an agentic workflow: if we use it in a linear flow with no feedback, no actions, and no decisions being made on the information, just input, LLM, output, then it's a non-agentic workflow. And for all of the LLM nodes we've covered here: none of them can connect to tools, none of them can look at the historic messages they've received (so they have no memory), and none of them can perform function calls. They are simply input text in, output text out, based on the content you provide and their training data. Finally, there's the Question and Answer Chain, which looks a little like an AI agent because it has something attached to it: we can attach a database like a vector store, and it's an LLM chain designed to answer the input questions based on the attached documents. But this still sits in the non-agentic, LLM category, because it's a linear flow; we're not asking it to make decisions based on the data, we're asking it to retrieve data based on our inputs. So you can see there are quite a lot of LLM nodes in n8n, designed to make specific tasks really easy; if we want to keep things more generic, we use the Basic LLM Chain and write our own prompt, which is still a really powerful tool. And remember: these can be part of agentic workflows, but only if you make them make decisions, add feedback, attach them to tools, and so on; they can also just be plain LLM nodes if that's all you need them for. The idea is to leverage the best tool for what your business needs.
There's a snippet I took from the n8n documentation that I think is a really powerful sentence distilling what an AI agent is versus an LLM: while LLMs only process input to produce outputs, AI agents add goal-oriented functionality; they can use tools, process their own outputs, and make decisions to complete tasks and solve problems. The docs include a little table of the features of an LLM versus an AI agent, which simplifies the view of what each does: an LLM makes no decisions, it just generates text based on our input query, whereas an AI agent is actually looking to complete a task and making decisions on how best to complete it. So yes, it makes decisions, and yes, we can connect it to tools and APIs. There are two clear examples at the bottom: an LLM can generate a paragraph, whereas an AI agent could schedule an appointment. And the reason we progress from LLMs to AI agents is when we want them to perform complex, real-world tasks where they actually make decisions on our behalf. So we've finally got to it: AI agents.
understand exactly what an AI agent does so I've got this great graphic I found um made by cobus gring and I saw it first on LinkedIn and it distills quite nicely everything we've spoken about so far that would make something an AI agent so we've got the user input coming in on the left hand side then we've got got the llm so we always have some sort of llm interaction here however the difference here is that the llm is taking decisions and also to able to access certain tools so for example here it might have
access to the web it might have access to certain tools like a weather API a mass library or calculator it might have access to a document database like a rag database it's then going to make an observation and based on that observation understand if it's completed the task or not and often it won't have completed the task on the first go and we're actually going to cycle back and speak to the llm again which is going to give us the next set of tasks that we might want to take so it might now say okay
we want to search the web for more research or we want to edit the content we've currently got what tools do we need to do that it's then going to pass the observation and it's going to cycle through this Loop probably multiple times especially for complex tasks and when it observes and understands that it's actually completed the required task at that point a final answer is reached and it will send the output so instead of just an llm where we'd have the user input the llm straight to the output we've got this observation and decision
phase that could include multiple external tools until the final answer is reached, and it's able to iterate on its own understanding as it goes, to get to a better output
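to make that loop concrete, here's a minimal sketch of the cycle in Python, the helper functions call_llm and run_tool are invented stand-ins for a real chat-model call and real tool integrations, not anything n8n exposes:

```python
# A minimal sketch of the agent loop: the model decides, a tool runs,
# the observation is fed back in, and we repeat until the model declares
# a final answer. call_llm/run_tool are invented stand-ins.

def call_llm(history, tools):
    # Stand-in for a real chat-model call with tool-calling support.
    # Pretend the model finishes immediately so this sketch actually runs.
    return {"type": "final_answer", "content": "Done: " + history[0]["content"]}

def run_tool(name, arguments):
    # Stand-in for executing a real tool (web search, calculator, API call).
    return f"result of {name}({arguments})"

def run_agent(user_input, tools, max_steps=10):
    history = [{"role": "user", "content": user_input}]
    for _ in range(max_steps):
        decision = call_llm(history, tools)        # the LLM decides what to do next
        if decision["type"] == "final_answer":     # observation says: task complete
            return decision["content"]
        observation = run_tool(decision["tool"], decision["arguments"])
        history.append({"role": "tool", "content": observation})  # feed the result back
    return "stopped: step limit reached without a final answer"

print(run_agent("schedule an appointment", tools=["calendar", "web_search"]))
```

the max_steps cap is the one extra detail worth noting, it's what stops a confused agent from cycling forever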
so you might be wondering at this stage, when in our business would we use an AI agent versus just using an LLM, and we're going to get to that, but first we're going to cover the AI agent nodes inside our workflow automation tool n8n so in contrast to the LLMs, if you have an AI agent in your workflow then it's an agentic workflow by nature, because the AI agent is making those decisions, taking action, connected to the tools so we're going to run over the key AI agent nodes within n8n, how you use them and why you'd use different ones the most familiar agent will probably be the tools agent, so inside our nodes we can type in tools agent or AI agent and it'll be the first dropdown within the AI agent section, the description for this is that it utilizes structured tool schemas for precise and reliable tool selection and execution, it's recommended for complex tasks requiring accurate and consistent tool usage, but is only usable with models that support tool calling so there's a couple of key things we can take from that description, the first is that we use this for complex decision making that might require external tools so on the left hand side you can see that we've connected this tools agent to a series of different tools, the first is a database of memory, we might use this to understand what a user has queried in the past, say for example we're a customer service chatbot, we'd have that memory here, we might attach a company knowledge base so that it can search the company knowledge base and find the appropriate answer, we've then attached some more functional tools like an email workflow, so we can connect up other n8n workflows that conduct specific tasks, we might for example have an auto-emailer that sends emails based on the user query, this tools agent might have the ability to schedule calendar invites, and we'd prompt that inside the LLM prompt, telling it what information it is likely to send to the calendar, but again it is an agent, so it might be able to interpret from the input data, one, if it's missing any information, and two, if it's got all the information, what information it should send through and inside these nodes, when we go and build them out, you'll see that we're defining specific input parameters that we're sending to these tools, so for example for Google Calendar we know that to set up a calendar invite we need several key bits of information, we need to know who it's with and their email, we need to know what the invite is about, and we need to know the
time so the AI agent, the tools agent here, is able to determine that info, or work out that it doesn't have the right information, based on the user input, and then use the tool if appropriate and then finally we've connected it to a HTTP request, so we're able to connect it to any API that we want, and this might be to PDF.co for document processing, so say we receive a document in, it might understand at that point that we want to merge or split out a PDF and actually use that tool
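to picture what defining those specific input parameters looks like for the calendar example, here's a rough Python illustration of a tool description in the JSON-schema style used for LLM tool calling, the names here are invented for illustration rather than n8n's internal format:

```python
# Illustrative only: a tool described to the model in JSON-schema style,
# covering the three things the calendar invite needs (who, what, when).
# The agent fills these in from the user input, or flags what's missing.
calendar_tool = {
    "name": "create_calendar_event",
    "description": "Schedule a Google Calendar invite",
    "parameters": {
        "type": "object",
        "properties": {
            "attendee_email": {"type": "string", "description": "who the invite is with"},
            "title": {"type": "string", "description": "what the invite is about"},
            "start_time": {"type": "string", "description": "when it starts, ISO 8601"},
        },
        "required": ["attendee_email", "title", "start_time"],
    },
}
```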
here is it's making a decision based on that and providing itself with feedback on the success of what it has done so far we then have the second most commonly used which is a conversation agent this is most similar to our llm chain but the difference here is we can give it a conversational memory so this is great for chat Bots customer service chat Bots things like that or company knowledge chat Bots where we want to retain memory of what the user has said before so we can connect it to a memory like an A
computer memory buffer memory but we can also connect it to all the different tools that we mentioned and on the right hand side you can see a sample of all of the different tools that we can actually connect it to it includes things like Google Mail Google calendar calculator there's a lot of different different tools and they are always bringing out more tools that we can connect these agents to making them even more powerful out the box and any that we cannot connect to we can usually connect to through the HTTP request tool through the
apis there are some other AI agents that we've not touched on here that are available inside n and have each their own capabilities you'll use the tools agent and the conversation agent most but if you have specific tasks then you might use use the functions agent which is excellent for tasks requiring specific structured outputs and specifically working with open AI models because they support function calling you might use the plan and execute agent where actually your task is more planning based and you need to solve multi-stage problems and the agent is able to iteratively work
through that problem give itself feedback as it works through the react agent is similar where it combines reasoning and action so where we require careful analysis and step-by-step problem solving then a react agent may be good for your use case and then finally if we want to interact with our SQL databases then an SQL agent is specifically trained on generating queries SQL queries to reach from our database so all of this theory is no good if we don't know how to apply it in certain situations so we're going to go through now a few different
business cases where we might use a non-agentic workflow or an agentic workflow, and what an agent might actually do in that scenario, just to give it some life and help you understand exactly how we might use this in our business so we first have a customer support workflow, we want to enhance our customer support processes and we're going to do that using workflows, we could have a non-agentic workflow, and this might be where we have a linear flow, a chatbot follows a predefined response template, so we have a series of questions and answers that it's able to use, and some fixed escalation paths based on certain keywords that trigger it, so this might be a chatbot that's just trained on our FAQs but isn't able to think on its feet or perform more actions than sending an escalation or returning an FAQ answer if we were to make that agentic, then the things that we might include are a human feedback loop, so we might be able to escalate to a human who gives us feedback and then our agent or workflow is able to adapt to the human's feedback, we might have dynamic response generation, so it's able to think about, based on some context, what it should respond rather than sticking to the predefined response templates, and then we may have different scenario routing, based on things like does the customer want to place an order, then we'll take this action with the customer or send them down this route, does the customer want to escalate to a manager, then we'll take them down this route, so it's able to think about what the customer is inquiring about, use the appropriate routes or tools, and dynamically choose those and an AI agent within that might be able to use those different routes that we mentioned, so multi-channel support management, but also use its understanding of sentiment analysis to work out how we prioritize the queries coming in going down to sales, we might have manual lead qualification in a non-agentic workflow with fixed outreach templates, whereas an agentic flow might be able to work out a lead score based on the inputs, dynamically choose the route that that lead goes down, and choose a personalized approach for that customer, and the AI agent may help with that lead qualification and scoring as well as personalized outreach or meeting scheduling for content creation you can see it follows a similar gist, which is that non-agentic workflows may be creating content based on our template inputs, whereas an agentic flow may be able to conduct real-time topic research and optimize and change the templates based on the content or research, the AI agent as part of that might be doing the topic research, conducting web searches or SEO research, as well as referring to our brand voice guidelines and dynamically adapting our content to the brand voice and the final example we have here is document processing, so again non-agentic would be a linear flow with no feedback loops, and we covered this in the invoice parser part of document processing, an agentic flow would be able to dynamically extract data based on the content that's input and its understanding of what we want to output, so if for example we told it to split the PDFs, it might reach out to the split PDF API, understand that it needs to perform that action, and understand whether that action has been performed correctly and whether the inputs we received in the first place were in the right format to do so so that rounds up AI agents, agentic workflows and non-agentic workflows, as well as how all of these things can be used to enhance certain processes in our business now coming back to the context of what we actually need AI for here, we are processing some data from a file and trying to extract key info using an LLM, or large language model, so we could use something like
a prompt, but like I said most of the time we're actually going to use an LLM chain where we can give it a set of instructions and tell it these are the different items I'd like you to look for in the email, please extract those and put them in a certain output, and we'll run through in a moment how we get a certain output and what the prompt looks like we don't necessarily need to give it any tools, we're not outputting data into different tools like a CRM, it's all going into one place, and we don't need any web search, so we're actually not going to use an AI agent in this case, we're just going to use a basic LLM chain so we're going to drag that up here and the first thing you'll notice is that we've got a model down here so we're going to open up the node, the basic LLM chain, and you'll have two options here, you'll have connected chat trigger node, which will be there by default, which means we can pass in a chat message from a previous window as the prompt, the prompt is just what you would normally type into ChatGPT or Claude, it's just the message or the task you're asking it to perform, we're going to define that below and it will just appear as this text field, and you can see we've got these warnings because nothing's been filled out, you can see here we've got require specific output format, for now we're going to leave that unticked and you'll see the variable outputs you get and how to correct those, and then down here we have these additional options, it will start with none but by default we will add a system message, the system message it should use, and then all we're going to define in the user message up above is what inputs we're actually providing, so we're going to want to pass the text in, because we're going to extract key info from the text which has been pulled from our file, it's important to note at this point you cannot pass a binary file into the LLM, it only understands the text that we pass into it, and that's why it's so important early on that we pre-process that data and format it in this way, so that when we pass that text through, you can see it's very clean text that it can just read through and extract key info from now you'll see a lot of contradicting advice around how to prompt properly, but also lots of useful advice, you need to work out the style that suits you, for me I fill most of the information into the system message and then I tend to just put the inputs directly, once the LLM understands what its role is and what the outputs
I'm looking for are, the inputs are the only thing I put in the user message, but try out different methods, see what works for you, you can add all of this prompt into the user message in the text up here or you can add it all into the system message, it's entirely up to you so I've got a prompt here that I've previously used for an invoice parser, I'm going to paste that in to save us some time, and what we're going to do now is run through the important details in this prompt so I've effectively given it this role, given the following invoice in the invoice XML tags, so I'm going to pass in an invoice, or the text of an invoice, extract the following information as listed below, so I've told it what its role is, then if you cannot find the information for a specific item, leave it blank and skip to the next, sometimes LLMs can get stuck if they can't find certain information we've asked for, this just gives it an out so that it will not keep searching, it will save its time and just jump to the next item, and not all of these are required, as you'll see I've then got a bullet point list, which I've done with stars because that's markdown format, but again this could be a numbered list for you, it could be dashes, it doesn't matter, it's still going to interpret it the same, I just tend to use markdown because LLMs are trained on markdown so we've said gather these items, the description, a short one to two line description of the invoice, e.g. monthly database usage for Supabase, and the reason I've given it examples is because you should treat a prompt like you are training a human how to get the same output, so if you were to just give these items to somebody you were training, without giving them examples, you would probably get variable results, whereas if you give them an example of what good looks like then they can consistently relate to that example and actually pull more relevant information, so examples in your prompt are really key to getting a really good, consistent output, which you need for business use so we've told it to pull a description, we've told it that we want a type, and I've given it some options here, you can either choose monthly recurring, annual recurring or single payment, we're referring to invoices here and most of mine fall under one of these, and I want to know when I'm reconciling, was it a one-off single payment or is it a monthly recurring thing that I need to check out, we then pull key info around invoice date and payment date, because that's not always the same as the invoice date, the invoice number, and you can
see I've specified that these are required, and the LLM chain, or the model that we connect to, will understand that it needs to pull those every time, and then we've got lots of different information that we're pulling from the invoice and you can go in here and change it and remove things, but we're effectively specifying all of these different things that we deem important to pull from the incoming text, if it finds them in the incoming text, I've then added an almost catch-all, additional information you deem important that isn't included above, and then we've said additional important information such as payment terms, payment methods, tax exemption references, delivery terms, special instructions, so things that it might not pick up in those categories but we might want to know when we're reconciling the invoices it pulls so you can see I've given it clear examples of things I want it to pull, and that is all in the prompt, but there are no inputs here yet, we've told it to expect an invoice input inside these invoice XML tags, so to make that really clear we're going to go
into the user message up here and we are going to add these XML tags, the opening and closing tags, and again you don't need to work with XML, that's just a preference, you could instead just say invoice, or just pass in the text, and it would understand that that is the invoice, but I prefer to specify in the system prompt and then call out the same tags in the user message, because when you've got multiple inputs that becomes more important
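as a condensed illustration of that whole structure (this is my paraphrase, not the exact prompt from the video), the system and user messages end up looking something like this:

```python
# Condensed illustration of the prompt structure: the role, the rules,
# a markdown bullet list of items with an example, and the XML-tagged input.
system_message = """You are an invoice parser. Given the invoice inside the
<invoice> XML tags, extract the items listed below. If you cannot find the
information for a specific item, leave it blank and skip to the next one.

* description: a short 1-2 line description, e.g. "Monthly database usage for Supabase"
* type: one of monthly recurring / annual recurring / single payment
* invoice_date (required)
* invoice_number (required)
* total_price
"""

# The user message just wraps the pre-processed text in the same tags the
# system prompt refers to; extracted_text would come from the extraction step.
extracted_text = "...clean text extracted from the PDF..."
user_message = f"<invoice>\n{extracted_text}\n</invoice>"
```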
so we're going to give that a run now, but you'll notice that it will not run, we're missing one final thing, which is a model to connect it to so down here we can click plus on the model and on the right hand side we're given the options for all of the different model providers that we can connect to so ChatGPT for example uses OpenAI, it's a model from OpenAI, Anthropic gives us Claude, there's Google Gemini, the majority of the time you're going to use OpenAI or Claude, and a way to do that is actually to use OpenRouter instead so if you've been keeping up with AI news you'll notice that there's a new model coming out every single week, all of the providers like OpenAI, Anthropic and DeepSeek are competing against each other to get models out that are better and faster and better at certain things like planning or research, there's lots of deep research models coming out at the moment, so as a business you want to stay model agnostic and use a platform that enables you to switch models very very quickly and very very easily so instead of hardcoding a model like the OpenAI chat model in here, being stuck with OpenAI and having to update our keys when the better model changes to a different provider, we can use something like OpenRouter, which in n8n has its own node, and we can use OpenRouter to connect to any model we want with one single key so if we go to OpenRouter's website it says a unified interface for LLMs, you can access any large language model, thousands of them, through one key for very minimal cost, basically the cost of what it takes to use that API through the official routes anyway, and on here we can go to models and it will have the full list of models, but we know the one we conventionally use is GPT-4o mini, so if we type in 4o mini that will appear below and you can see that if we click on it it will give us all the details around cost, so it's $0.15 per million input tokens and $0.60 per million output tokens, treat tokens like characters, so 15 cents per million is going to be hundreds of pages for 15 cents, so running this automation is going to be extremely cheap unless we have hundreds of thousands of invoices coming in, models that are better at writing, like Anthropic's Claude 3.5 Sonnet, are much more expensive, you can see it's $3 per million input tokens, so I recommend starting with a cheaper model like 4o mini, seeing how it performs, and then if it's not performing up to standard you can switch the model, that's the beauty of OpenRouter so we're going to go into our OpenRouter chat model
here and we're going to go to the credentials and we're going to create a new credential, and you can see up here that we can rename it, so we'll say test for demo API key OpenRouter, obviously you'd use better naming than this, but we go back to OpenRouter, and you need to sign up for an account first, so once you've signed up, sign back in, and on the top right hand side on the dropdown you can go to keys and set up your new key, we'll create a test demo key and it will show your key on screen, we're going to copy that key, and I'll delete this one afterwards, we go back to the test workflow, copy and paste that into our API key, and once that's saved it will confirm that our credential is ready to use, you need to make sure that you've got enough money in the account, so you need to go to credits and make sure you top up $5 to $10, you can see 14 days ago I topped up $10 and I've only used 90 cents since then, and I use this consistently every day for various things, so it's very cheap if you use the right models we'll come back to the test window and we're going to come in and refresh the list here and this will give us access to all of the different models if you're experiencing this issue, which I've been facing with the OpenRouter model, there is an alternative way to access OpenRouter which I'll show you now, I've just tried to run it and it's come up with an authorization failed error, so I was going to show you how to
use OpenRouter directly, but if yours also doesn't work then you can use it through the OpenAI chat model, and what you can do is create a new credential with that same API key we mentioned before, if we come in and open the credential you'll see here's where we'd put our API key, and the base URL you need to update to be OpenRouter and not OpenAI, so it's this address exactly, openrouter.ai/api/v1, and that will be enough to connect to OpenRouter
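the same trick works outside n8n with any OpenAI-compatible client, because OpenRouter exposes the OpenAI API shape at its own base URL, here's a short sketch using the openai Python package (the key is a placeholder):

```python
from openai import OpenAI

# OpenRouter speaks the OpenAI API format, so pointing an OpenAI client at
# its base URL is the whole trick; one key then covers many providers.
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="YOUR_OPENROUTER_KEY",  # placeholder
)

response = client.chat.completions.create(
    model="openai/gpt-4o-mini",  # OpenRouter prefixes model names with the provider
    messages=[{"role": "user", "content": "say hello"}],
)
print(response.choices[0].message.content)
```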
back in n8n, instead of choosing from the list, if the model doesn't show up we can copy the model name from OpenRouter and put it in as an expression, but it should show up from the list and allow us to pick GPT-4o mini, which is the model we were going to choose so now we've connected that up, let's just rename that node OpenRouter so that we're clear on what it is, and we'll delete the official OpenRouter node so now we've connected to a model we can actually run it, and now that we've run it you can see it's come out with our first outcome, so we've got the file, we've processed that into text, and we've actually then returned an output with all of the text we've asked for, so it's given the description, the type, the invoice dates, the supplier name and the invoice number, and we can go and check the invoice number, ending in 00004, against the actual invoice, if we scroll up we can see that the invoice number is in fact from that correct invoice, and the total amount due is 2640, we go back here and it's recognized the total price is 2640, so it has actually pulled all the information we need, all the information we've asked for, or where it could pull that information, however if we ran this several times it might come back with different headers, although we've told it in the prompt to specify all of these outputs, it's still using its own judgment on what to return because we've not actually given it a specific output format to require so if you click require specific output format we're then going to tell it exactly the output format we need, the way it's outputting right now is no good for us for inputting consistent data into our invoice system or our Airtable where we're storing the invoices, because we'd then have to break up all of these different fields again by split lines and it wouldn't be easy to do that consistently, the output we've received here is a text paragraph, and to extract the consistent data which we need for our business to parse and reconcile invoices we'd have to split all the different fields here so there is a much easier way, which is actually just specifying a JSON
output format for the model to adhere to, so every time it gives us the same output format and makes sure it's correct before it outputs the data so we're going to turn this require specific output format on and that's going to give us a second thing to connect to here, we're going to click inside that and we've got three options, we've got the auto-fixing output parser, which is the one we're going to use, and what this does is automatically fix the output if it's not in the correct format so we're going to specify an output format that we want, but this will automatically fix it by calling the LLM again, rather than us having to tell the LLM that's not right, try again, it will do that automatically and we don't even need to prompt it to do so, and then the other two are an extension on that, which are what type of format we want it to return so we're going to click the auto-fixing output parser and you can see that we connect a model and an output parser again, so for the model we're just going to connect our OpenRouter model, and the output parser we then open up and we have two options, we can return the results as a list of separate items or we can return a structured JSON format, I recommend using the structured JSON format because then we can consistently return the same data every time and it's really really accurate, and that's something we need when we're reconciling invoices, we need the exact fields to be present each time so there's two ways we can generate this, the first is from a JSON example like below, and we've been dealing with JSON data all throughout, or the second is to define the schema manually, we will just do the first one to show so we're going to put in description here and we're going to put in total paid here, and all we're going to do is give it an example of $20 in total paid and a description of ElevenLabs bill so we're going to run that now, and it doesn't matter what we specified in the prompt to output, it will
fix the output to be just those two fields so you can see now in the output that it's output as we expected, description and total paid, it's given the description there and total paid is 2640, which is extracted from the text so that works well, and if we want to pull all the fields that we specified in the initial prompt, which you can see were a lot, description, type, invoice date, payment date etc., then we could go through and write all those fields out, the alternative to that is generating it from a JSON schema, and the benefit of generating it from a JSON schema is that you can be more specific with the type of data being passed through and also whether the field is required or not so you can see this is more complex and it's a JSON schema format, don't worry about this being complex, there's a really easy way to generate it, if you copy and paste this into ChatGPT or Claude as an example, and you also copy in all of the outputs from the prompt and what we want
to achieve or what we want to output, and say take these outputs and turn them into a JSON schema like the attached JSON schema, and include this as an example, it will then come up with a JSON schema in this format that's really specific and consistently gets good results from our LLM so I'm just going to copy and paste in the JSON schema that I previously made from that and you can see it's got all of our different fields here, all the different nested fields, and it also specifies which fields are required and the data types like strings, dates etc., and that's important when we're passing back into our Airtable because we need to match the data types there
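to give you a feel for it, a cut-down version of that kind of schema, with invented field names covering just a few of the invoice fields, looks roughly like this:

```python
# A cut-down JSON Schema of the sort the structured output parser accepts:
# types are pinned down and the key fields are marked required, which is
# what keeps the output identical in shape run after run.
invoice_schema = {
    "type": "object",
    "properties": {
        "description": {"type": "string"},
        "invoice_number": {"type": "string"},
        "invoice_date": {"type": "string", "format": "date"},
        "total_price": {"type": "number"},
        "line_items": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "item": {"type": "string"},
                    "amount": {"type": "number"},
                },
            },
        },
    },
    "required": ["invoice_number", "invoice_date", "total_price"],
}
```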
so if we run that again as a test you will see the LLM start running, it will call the OpenRouter model and also check against the required output format to confirm it's met the requirements, and it will show green once it has so we now go to the results of that and you can see it's got a much more detailed line-by-line JSON format that has all of the fields that we asked for, including supplier name, customer name, any additional information etc. so we are at the point now where we've almost implemented a full document parser with just a few key stages we've also covered the key fundamentals and the key building blocks in n8n workflows, how to connect those and pass through the inputs and outputs, as well as how to manipulate data, and in the other workflows that you end up building for your business, or for other businesses, you will use all of these fundamental building blocks at a certain point in time, the fact that you've got this far means that you're already ahead of 90% of others that are building using n8n, so well done for that now let's do the last step of connecting your data, and then after that we will move on to more difficult data types and how to master connecting to external data sources and external APIs so the final step here is connecting back to our database, our Airtable, where we're going to put the data so we're going to go to Airtable and we're going to
open up a new table and import like we did before for the emails, a table called financial or invoices or whatever, and give it these fields, invoice number, description, invoice date, payment date, type, total price, currency, line items, purchase order number, supplier name, supplier VAT number, additional info and invoice URL, the invoice URL is only going to be relevant if we are pulling the file initially from Google Drive, so because we're doing this with Gmail it's not going to populate with anything useful, but if you're pulling from a Google Drive in a future workflow, or you want to replace the trigger, then the invoice URL will link you directly back to the invoice you're going to see our results populate exactly like this once we run the flow and once we've mapped the fields if you want access to the Airtable to just copy it directly, as well as the finished workflows, then you're more than welcome to join the school community, it's school.com scrapes, and within the community there is a classroom section, you can go to the free course and there'll be all of the core concepts that we've covered today and all of the templates within those, so just find the appropriate lesson for this one and there will be a template attached at the bottom that will give you the fully working flow anyway, back to the flow, let's go back to our n8n flow, we now know we have the output, so what we're going to do is connect our previously made Airtable node and we're just going to need to map those outputs again, so it's no longer going to be post emails, it's going to be post invoices, and you can see all of the fields that we had before are no longer relevant because we need to update the table we're putting it into, so mine is called financial, and then again we're going to map each column manually, and we have all of the columns from our basic LLM chain output that we can just drag and drop in there, and remember for these that we want to make it future proof by adding in the actual node name, so we've got something like $('Basic LLM Chain').item.json and then the field, so we can drag and drop and then update those so I'll go through and add those now, whoops, that's the wrong one, so I'll just make sure it's the correct field, that's invoice number now, so we'll come back into the node and you can see I've mapped all of those data fields now referencing the basic LLM chain outputs, and what we're going to do is just test this step okay, so we've tested this step and it's actually thrown an error, and this will happen often when you're running these, so it says cannot parse
value 26.4 for field total price, and this will more often than not be a data type issue so in our scrapes table we've got the total price, which we've given a data type of long text, so we've called this a string, however when we were creating the structured output parser, if we go to total price you can see that we've given it a data type of number, and we've done that with a few other things as well so there's two ways we can tackle this, one is the slightly easier way, which is to add options and typecast the responses, this just means that the Airtable API will effectively attempt to map the values that we're passing to it onto the fields that it has so let's try that first, and actually that's done it successfully, it's effectively converted the number that it received into a string value in our table, and that's fine because we can manipulate that in Airtable directly, and you can see that it has now put in the correct values for our invoice that we've been processing, so it's even got a breakdown of all the different line items as per the invoice, the supplier name, the VAT number, additional information etc. now you can notice the invoice above is one I've already processed in the past, and they've got the same invoice number, so there's two things to note here, one is that it's adding records and not updating records, so if we want to change that we can go back and do it in a second, because we don't want duplicate invoices appearing here, the second thing is that some of the information has changed between the first time we ran it, monthly recurring, and the second time, and this is just the state of what LLM outputs are like, it's not always going to perfectly identify that information, but it does 95% of the job for us, you would know by reconciling this invoice that it's actually a monthly recurring cost, not a single payment, and you'd then change that there, it's also misidentified the currency here as GBP versus USD, so again it's not perfect, but it does 90% of the job so that we can just come in here and reconcile the invoices on a monthly basis so we're going to go back to the Airtable node and just make sure that it's updating instead of adding a new record, and then each time it will check the invoice number and it will then update or change the details on that invoice directly so we're going to go up to here and instead of the operation create we're going to have create or update, it's going to give us one additional field here, which is columns to match on, so like we said we're going to match on invoice number, and now this has changed to invoice number being used
to match, and if we delete this row here you'll see that this will change to a single payment and this will change to GBP, like we saw before, because it's just updating this invoice so we go back here and you can see the automation has come in and it was changing all those values, so those are all updated, it's now updating, or appending if the invoice doesn't exist, a really really useful business tool that you have just created from scratch
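for reference, both of those options also exist on Airtable's own web API, here's a rough sketch with Python's requests library, the base ID, table name, token and field names are placeholders and the exact parameter names may differ slightly from Airtable's current docs:

```python
import requests

# Rough sketch of the equivalent call against Airtable's web API:
# "typecast" asks Airtable to coerce values into the field types, and
# "performUpsert" matches on a field instead of always creating new records.
resp = requests.patch(
    "https://api.airtable.com/v0/appXXXXXXXX/Financial",   # placeholder base/table
    headers={"Authorization": "Bearer YOUR_AIRTABLE_TOKEN"},
    json={
        "performUpsert": {"fieldsToMergeOn": ["Invoice Number"]},
        "typecast": True,
        "records": [{"fields": {"Invoice Number": "INV-00004", "Total Price": 2640}}],
    },
)
print(resp.status_code, resp.json())
```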
and that concludes the second automation that you've built completely from scratch, a really powerful one that takes into account a lot of different concepts that you will reuse again and again when you're creating new automations using n8n we're now going to look at how to manipulate and read external API data sources, because quite often you'll be interacting with software where there are no pre-built nodes, and I'm going to show you exactly step by step how to do that so we've now completed the core concepts stage, where we built the fundamental building blocks, showed you around the canvas and the different data types and how to use them in n8n, we showed you how to work with your own data, used an example where we built out a mini automation to send our emails to Airtable, and then from start to finish created a complete invoice parser system that connects to your emails, extracts the invoices, parses them using an AI node and then puts them into Airtable what we're now going to do is step one level up, and where the data is not accessible, or where you cannot connect to the software directly inside n8n, we're going to show you how to connect to anything using APIs so the best way to explain an API, if you've not thought about them before, is to think about them in terms of what we've learned already, the n8n nodes, so we might have something like Airtable that has a pre-built node in n8n, in the back end of n8n Airtable is actually just being connected to using their API, which stands for application programming interface, which is just a fancy way of saying that Airtable allows you access to their data, or to your data that's behind the API, by a preconfigured or pre-set-up connection, i.e. the API so when we're using n8n, n8n is actually just a series of APIs connected to certain servers like Airtable now if we go back into n8n you can see in the action in an app list that there are hundreds, and I think thousands now, of services you can connect to with n8n, Twilio for voice calls, Webflow, WordPress, YouTube, a lot of different services, but sometimes a service won't have a pre-built node connecting to its API for you, a node like the Airtable node
here makes it really easy because it's connecting for us to the Airtable API, but sometimes you want to connect to a service that has an API but does not have a pre-built node, and that's where we can use APIs in n8n directly so to connect to a service like Airtable we'd need to send data to Airtable itself, and what we're receiving is data back, the same way in which when we send data using the Airtable node it then puts it into Airtable and we receive a response from that so the way to do this is just called a HTTP request, and, you guessed it, there's a node for a HTTP request in n8n that we can leverage to do this so when we are interacting with APIs they commonly use standard terms, we use post when we are sending data, we use get when we are retrieving data, or reading data from a service like Airtable, we use put where we want to update all the data, not as commonly used, post and get are the most commonly used, we use patch when we want to update some data, and we use delete, self-explanatory, to delete data so these are the requests we are sending to the Airtable server or API, and it gives us back a response, that response will tell us if the request has been successful or unsuccessful, so we will receive codes back along with data, a code starting with 2, like 200, indicates a success, while certain other codes, like 404, mean the resource was not found, you don't have to worry about memorizing the codes, but you will receive a code back that's either success or failure with detailed information so you can think
of it like this, a HTTP request is sending or asking for data from a service like Airtable that has pre-set up a series of retrievable information, we retrieve those from what we call endpoints, and an endpoint, you'll see, is just the URL that we're contacting, or the end of the URL that we're contacting, to get that information, this will become really clear when we start interacting with those APIs, and we'll show you exactly how to read API documentation in order to get any data from any API
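in plain code terms, here's a hedged sketch of the same idea using Python's requests library, the URL is a made-up stand-in for a real service:

```python
import requests

# get reads data, post sends it; the status code on the response tells us
# whether the request succeeded. https://api.example.com is a stand-in.
base = "https://api.example.com/v1"
headers = {"Authorization": "Bearer YOUR_KEY"}  # placeholder credential

read = requests.get(f"{base}/records", headers=headers)              # retrieve data
write = requests.post(f"{base}/records", headers=headers,
                      json={"name": "new record"})                   # send data

for resp in (read, write):
    if resp.ok:     # any 2xx code counts as success
        print("success:", resp.status_code)
    else:           # e.g. 404 resource not found, 401 bad credentials
        print("failed:", resp.status_code, resp.text)
```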
so you can see how powerful this becomes, because for your business use cases you can manipulate your data but also share it with other systems automatically, without having to lift a finger so if we come back to the workflow here we're going to again open up the nodes, we're just going to type in HTTP request, which is exactly what we were talking about, we'll make sure it's not connected to any other nodes and we're going to start a new flow down here we're going to demonstrate this by using a service that automatically parses invoices for us, a free-to-use service where we get a free trial, and instead of doing all of this processing we can just contact that service, send the information about our invoice, i.e. the PDF file attached, and it will return to us exactly what we've done here with the LLM node, so it will show you a like-for-like comparison using an external service that's pre-built exactly for this so we're going to give it a go with a service called PDF.co, they are a service that is API accessible and they automate tasks like PDF conversion, editing, extraction and other things like invoice parsing so say for example you were setting up an automation in n8n, if you connected to PDF.co you could really easily do what we've already done and quickly parse the invoices directly using their service and their API, but you could also just switch the endpoints, which are the connection or function points for us on the HTTP request, to do anything else, like convert the PDF to anything, parse the documents, classify the documents, all of these different features that might be readily accessible out of the box using an API but are not native to n8n so you can see how this can become really powerful for your business use cases, because there are always services like this that are accessible externally through a HTTP request so I've not used PDF.co before, so it will also be a learning experience, and you'll see exactly how I navigate the documentation to understand how to create the HTTP request here so we're going to sign up, and once we have signed up we'll be greeted with this app.pdf.co homepage, it tells us we can view our API key here, they have all the different tools listed out, they have your API call log history, and you can manage your subscription plan, right, we've got 10,000 free credits and there is our API key, which we're going to need, like we do with all of the other nodes, to contact the service and authorize that it is us sending the HTTP request so we're going to go to the documentation, if we go back to the main homepage they have an API docs button, we'll click on that, and it's already a good sign because they
integrate with low code and no code software, they've got n8n and Make on there, which means we'll absolutely be able to connect to this API from n8n we'll go to the API documentation, and often it will be very heavy on text, don't be put off by this, if it's got more text then there are likely more endpoints that we can contact, more things we can do, it kind of opens up the opportunity for us I can already see immediately on the left there are some things that are interesting, we've got the AI invoice parser, we've got a document parser, so we might need to use one of those, in particular the AI invoice parser, we've then got some endpoints for PDF to CSV, PDF to text, so conversions to different formats, and lots of different functions that we can use through one single API, so really really powerful I would break down an API or HTTP request into two things, the first is your authorization, so we need to authenticate that it is us sending the request, and we'll do that in a second through the traditional HTTP request credentials, and the second thing is what are we asking for, are we retrieving data using a get request or are we sending data using a post request, and in this example we're going to be sending a PDF directly from n8n and actually parsing it like we did previously, but using this HTTP request to show the difference so we're going to go back to n8n and we've got the HTTP request here, we're just going to jump into the node and you can see immediately that we've got all these things we spoke about, we've got the get, we've got the post, we've got the delete, we've got the put so first of all we're just going to change this to post, because we know we're posting data, the URL in here is the URL we're going to make a request to, and the documentation will tell us what that is so we'll now tackle the first and most important part, which is authorization, validating that it is in fact us contacting the API so in the documents you're going to click on authenticating your API request, and it leads us to here, where it says to authenticate you
need to add a header named x-api-key using your API key as the value, so we know we need to pass a header into our HTTP request, it needs to be called x-api-key, and we need to add our API key as the value so if we go back to n8n, inside authentication we've got two options, we've got a predefined credential type, so if there's a service already set up inside n8n then it might have a predefined credential type, like Google, if not, however, like PDF.co, then we're going to use a generic credential type, there are multiple here, but it's told us we're going to use a header auth type, and this might get pre-populated, but what we're going to do is go and create a new credential here, as always we're going to rename it because it's going to be easier to find in the future, and we'll rename it with the auth type and the name of the service, so PDF.co header auth, it told us in the documents that we need to put in x-api-key, so we'll go and copy and paste that, and then it told us in the documents that we need our API key as the value there, so we'll go back to the dashboard and we're going to copy our key, which is obfuscated at the moment, we're going to go back into the value column and we're going to save that now that's going to confirm it down here, but it doesn't necessarily mean it's correct, we'll go and test that in a moment the other type that you'll often see for header auths is typing the word Authorization, with a capital A, in the name, and, I'll
just delete that out so I can show you, we'd have Bearer with a capital B, then a space, then your API key afterwards, that's a common header authorization format that you'll see again and again, so that might come up, but for this one we know that it's just x-api-key and our API key as the value, we're going to save that and come back out, so that's our header sorted
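seen as raw requests, the two header styles we just covered look like this, the keys and the test endpoint path are placeholders:

```python
import requests

# Style 1: a named API-key header, which is what PDF.co asks for.
pdfco_headers = {"x-api-key": "YOUR_PDFCO_KEY"}

# Style 2: the "Authorization: Bearer <key>" format many other APIs use.
bearer_headers = {"Authorization": "Bearer YOUR_API_KEY"}

# Either way, the header simply rides along with every request you make;
# the path below is a placeholder, not a real PDF.co endpoint.
resp = requests.get("https://api.pdf.co/v1/some-endpoint", headers=pdfco_headers)
print(resp.status_code)
```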
now we will look at the different values that we pass in the body, so we're going to look for what URL we need to contact for the endpoint, and also what data to send or retrieve and how to do that so in the docs it gives us this really important note, and we will always talk about the base URL, all of our endpoints will sit on top of a base URL, and that will make sense when we come to it, but this API base URL is going to form the starting point for the URL that we're reaching out to with the request, so any endpoint will have a forward slash and it'll be endpoint one, endpoint two, and that might be invoice parser, we'll see what it is in a minute, but it might be something like invoice parser, and it will always share this base URL so we'll get rid of that for a second, go back to the documentation, and we're going to have a look at the documentation for the AI invoice parser so we'll click on that, you can immediately see the available method is post and it's told us the endpoint, which is /ai-invoice-parser, and here we've got a more detailed method and endpoint now API documentation will all be slightly nuanced, it might read differently on different services, but the core concepts remain the same, we have a method, are we retrieving data or are we sending data, an endpoint, as well as the attributes that we are sending in that request, the HTTP request, in order to get a response so here we can see that the endpoint is /v1/ai-invoice-parser, and it's actually just ai-invoice-parser for us because we already had /v1 on the base URL, so like that we just append that endpoint, and we can try to just do a get request there and test that step and see if it responds, and of course there was no get method, so it says the method is not found, but sometimes we can just use that to test that the authorization, or the header we're sending, is correct so we'll go back to the post method, we will try it again, and it's telling us invalid input URL, so what I'm reading from that is that the header auth is working but it's telling us we need to append more data, because what we're actually asking it to do is
we're asking it to receive the data we're sending, and right now we're sending no data so we'll go back to the docs again and you can see that for this AI invoice parser it gives us two different attributes that we can send, we're going to send a URL, and it's also asking for a callback, but only the URL is required so the URL will be a URL linking to our source PDF, and you can already see how that's going to be an issue, because we have a binary data file and we don't have our PDF uploaded anywhere so we'll tackle that in a moment, and then the callback is a slightly different concept and it will only be used sometimes, it's not required here, the callback URL is effectively saying that once we send the invoice and it's processed on PDF.co's server, instead of us waiting for that and calling again, it will just send the details back to a webhook, where we will receive them instead of calling again to retrieve them, so it's a way
of them sending data to us, where the webhook is acting as our API, and the way you can set that up, and we won't go into this right now, is to go to the nodes and hit webhook, there'll be a webhook trigger with an address here that we could send to PDF.co as our callback URL, we'd use this production URL here, but for now we're going to delete that webhook and just see what our response is when we send the URL
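for completeness, the callback variant would just mean adding one extra field to the body we send, something like this, both URLs are made up:

```python
# Illustration of the callback idea: alongside the source file URL we hand
# the API a webhook address (an n8n production webhook, say), and PDF.co
# posts the parsed result back to it when the job finishes, no polling needed.
payload = {
    "url": "https://example.com/my-invoice.pdf",        # placeholder source PDF
    "callback": "https://your-n8n.app/webhook/abc123",  # placeholder n8n webhook URL
}
```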
as we've already mentioned, our file that we were using before, our invoice, has not been uploaded anywhere, so often services like PDF.co will have a way to upload a document temporarily to get a URL, so we can send that URL in the HTTP request, so we might need to make another HTTP request before this one in order to upload our binary document first so we'll have a look through the docs here and I can see a file upload endpoint, you can upload files as temporary files into PDF.co, i.e. they'll be stored for an hour and then auto-removed, so we can upload a binary data file up to 2 gigabytes in size using a different HTTP request, and it gives us the steps here, it says call this endpoint, which again we will do in a second, and it will generate a link for uploading, so we call that first with a get request, we then post our binary data, our PDF file, to that URL, and we'll be given a URL that we can then use in the next request to process or parse our invoice so it's a slightly more complicated API setup, but it gives you a really good idea of how to read through this documentation and work your way through it, and I'll show you afterwards a really quick way to set up these API requests that will save you a lot of time, but first we're going to go through step by step how to set this up so back in the API documentation we have the steps here, we need to first use a get request to get this pre-signed URL, which we're going to upload our binary file to so let's create a new HTTP request here, and what we can do is just copy and paste this HTTP request, it will appear somewhere on the canvas, we'll rename them shortly, we'll go into it and we're going to make it a get, and we're going to make sure that we have the right endpoint here, so we have /file/upload/get-presigned-url, and because it's a get request we don't need to send any information, we're just going to rename this get URL for upload so it's really clear what we're doing, let's test that and see what it comes back with so it's come back with a successful response and it's given us this pre-signed URL, which it told us it would, and also a URL so now let's just go and understand what those are, use the pre-signed URL to upload your file, once you upload your file to this pre-signed URL using put, so not post, we use a put request, you can use the URL link to access the uploaded file okay, so we upload the binary data to our pre-signed URL so we're just going to copy this Gmail trigger down along with the nodes, so we're
keeping a good record, we're going to run the test event again, and it's the same one we had before, we're going to change this from a get to a put because the documentation told us to so we've got the binary file coming in there, we've got our URL here which we need to upload to, and then we're going to use the learnings that we applied earlier and actually merge those together, so that we pass through both the file and the URL so we're going to pull the merge node up, and if we just add a manual trigger, and on our merge node we just wait for all inputs to arrive, we can then run a test, and that should run both of our branches on the left hand side so on the right hand side it's passing through our data, and that just means that both of these have now run, so we've got the pre-signed URL, which we need to upload the binary data to, and then we've got the binary data from our Gmail trigger here, the binary data as a reminder is just our PDF file so now we need to upload our PDF file using put to this pre-signed URL so one important thing to note here is that it's a put method, but also we need to replace the endpoint with our pre-signed URL, and then it's given us a reminder that we need to add a content type header and tell it what content we're sending, in this example it will be application/pdf so let's do that now, we're going to create a new HTTP request, I'm going to paste in the URL and go back to our previous node to get the base URL again so we're going to upload to our pre-signed URL, we already know that we've got our pre-signed URL here, get URL for upload returned it, so we're going to need to reference that using an expression and replace the URL here with our pre-signed URL so we're going to go back to the schema view, which is an easy way to visualize our inputs, and we're going to drag the pre-signed URL into the box here now you'll see that this comes up with no path back to node, and this commonly happens when you're merging nodes, it can't find the path back to this pre-signed URL, so all we're going to do, because we know that there's only one pre-signed URL, is change this from detecting each item of every iteration to just the first item, and now you can see it's attached the pre-signed URL to the end of the request we're going to make sure our authentication is correct, so we need our header
auth, and we need it to be the PDF.co header auth, and then let's go back to the documentation and see what we're passing so we're adding a content type header, so we will send headers and we will put content-type, and we know we're sending application/pdf, I'm just going to copy it from across there, I believe that's correct, yes, application/pdf, and then in the body we're actually sending a binary file, so we're going to send a body, but instead of the body content type being JSON we're sending an n8n binary file, and as we saw before we know the attachment name, attachment one for example, and we're just going to rename this request send PDF to PDF.co and there's one thing I've forgotten here, which is that this should be a put request, so actually I'm just going to rename it put PDF to PDF.co, and then I know exactly what is happening, it's updating the URL on PDF.co, because remember put updates values so we're going to test that step now and see what it comes back with so this is a good lesson in reading the documentation correctly, because I misunderstood and I ran it and I was actually contacting the wrong endpoint, so I've gone back to the documentation and what it actually said was that the put endpoint is just the pre-signed URL, so no base URL this time, we just have that pre-signed URL and then send our binary data so I've put in the URL, our pre-signed URL, which is on this temp-file Amazon AWS server, we've put in our headers, we've put in the content type as application/pdf, and then we've attached the binary file, we've run that and it's come through successfully, but it's also returned nothing, so it doesn't give us much of an idea so the next thing we need to do is actually retrieve that URL, or rather pass that URL into our invoice parser, so we've effectively uploaded the file, it's going to stay live for an hour, and we now need to create the next step, which was the starting point, our original HTTP request where we pass through the URL to get processed as an invoice so we'll go
so we now need to create the next step which was the starting point our original HTTP request where we pass through the URL to get processed as an invoice so we'll go back to the AI invoice parser docs where we're posting the URL and all we need to do is go back to the request we're just going to rename this post PDF file so we know it's processing what came through the previous node in the body we're going to send Json content type this time and we're going to fill out these attributes and we know that only one is required and that's the URL so the way we do that is we go into our body parameters and you can either do this using Json and just copy and paste it in here which would be perfectly valid and we'd replace the value with the expression or we can use the easier option which is using the fields below we just put in URL and again we know we're pulling the URL from a previous node our get URL for upload which returned the URL here and again we've got the same issue where we should just reference first we can actually test this by going directly to that URL and it should show us the file in question so there we go at that address it's now temporarily uploaded our file so that we can send it into the invoice parser so that should be everything it doesn't say to add anything else so if we now run this post PDF file it's going to send the file off to get parsed
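so the Json body we're posting boils down to something like this, with the expression being illustrative and pointing at wherever your uploaded file's URL actually lives:

```json
{
  "url": "{{ $('Get URL for Upload').first().json.url }}"
}
```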
and it's given us some status details so it's given us a job ID was there an error no status 200 which we discussed means success we've used 100 credits we've got 9886 remaining and the action took no time at all but now it's going to be processing in the background we didn't give it that callback URL to send the result back to a webhook so now we've got to retrieve the result ourselves and just to mention at this point not all API or HTTP requests are this complex handling binary data like this is definitely far more complex than just posting or retrieving data once you've got to grips with this everything else will be really straightforward and I'll also show you in a second how to speed up the process of making these HTTP requests and reading the documentation so I'm reading the documentation here and under the AI invoice parser it mentions that we can use the job ID to identify the corresponding callback response and use the job check API endpoint to poll the job status
so we're going to go down to the job check endpoint here and use the job ID to make another request to check on the status so we'll copy this endpoint go back to the canvas we'll set up a new HTTP request or copy and paste one of the old ones and it's going to be to the job check endpoint and we know we're going to use the base URL here I'm just going to move this all down a little bit so it doesn't clash and we're going to set up our credentials again and this time they're saved and it's actually a post request so we need to post the job ID in order to retrieve the details so we're going to send a body again and the value was job ID which we've got from the previous node post PDF file and again for good standards we're going to reference it by its actual node name rather than json dot and then we can name this node post job check and test that step
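conceptually the job check body is tiny, something like the below, though the exact parameter name is whatever the pdf.co docs specify so double-check it there (the expression is illustrative):

```json
{
  "jobid": "{{ $('Post PDF File').first().json.jobId }}"
}
```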
and it comes back with the status of the file it gives us some more details here it was a success we got the page count we got the URL of the file and you can see this has now come back with exactly the information we were looking for all of the different supplier details like vendor name address contact information customer ship to invoice payment details all of the things we were stripping out ourselves we've managed to get through an external service albeit it was a little more complicated because we were dealing with uploading data which is never as straightforward as just sending text and it comes through at the same level of detail that we were pulling with our llm node but the advantage here is we now know how to work with pdf.co so if we wanted to use any of the other functions like converting our PDF to text we could just find the relevant endpoint upload the same URL again and receive that response back all we'd need to do is swap out this final node where we post the PDF file for a different endpoint and now we've got something plug and play that does everything for our PDFs we can merge them we can split them we can do any of these because we know how to work with the pdf.co API and given it's one of the more complex API use cases once you get your head around this all other API documentation will be really easy to use I'm going to show you now how to get there even faster with any API documentation in the HTTP request node let's say we want to use the AI invoice parser again in fact we'll start from scratch with a fresh HTTP request and try to replicate this as quickly as possible
so we're going to go to the documentation back to our AI invoice parser and scroll down and what we're looking for is a curl request so you see this curl here curl is essentially a computer's way to access an API and it lays out everything we need in one simple request so all we're going to do is copy that and you can see it's got all of our headers like content type application Json it's got a header with an API key and it's got the URL in the body which is marked with -d then there's a very clever trick inside the HTTP request node in n8n you've got this import curl option what we do is paste the curl command there and import and that will prefill most of the information so you can see the API key has now been added to the header parameters the endpoint and the method have been updated and it's prefilled all the Json it's filled out all of the information
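for reference the kind of curl you'd copy out of the pdf.co docs looks roughly like this, treat the endpoint path, header name and example URL as illustrative and copy the real thing from the docs:

```
curl --request POST 'https://api.pdf.co/v1/ai-invoice-parser' \
  --header 'x-api-key: YOUR_API_KEY' \
  --header 'Content-Type: application/json' \
  --data '{ "url": "https://example.com/sample-invoice.pdf" }'
```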
now we didn't use a callback URL so we're going to delete that and this URL is a dud URL that we're going to need to swap for our own similarly putting your API key as a header parameter directly in the node rather than through the authentication type is not best practice because if you share your Json or your workflow with others they'll be able to see your API key so I'd always recommend setting up authentication properly we'll go back and just use the generic credential type header auth pdf.co which we've already set up whoops and then remove it from the headers there so we don't need to send any other headers but effectively we set all that up from one curl request super quick without reading all the endpoints and inputting all the details the only thing left is to update the URL with the expression so we connect it to the previous node come down here delete the dud value put in the URL from one of our previous nodes and again replace it with first and there we have it
the same node but done in 30 seconds just by grabbing the curl request from the documentation an even quicker way when you don't have access to the API documentation or can't be bothered to go look at it is to go to a service like perplexity and type in okay pdf.co curl request for AI invoice parser and it will search the web for the API documentation and come back immediately with the curl request so without even visiting the docs you can type something like that into perplexity which is a web search engine backed by an llm and it will give you the curl request straight away we can copy that and add it in and it's also given us some information about optional parameters and response format so it's effectively digested all of that documentation in about 5 seconds giving us instant access to any API really powerful for business use cases to round up this section I have gone through and pinned all the data that we've collected from that run
so that we can put the data into our Airtable and compare it to our own invoice parser so we built our own invoice parser using an llm and now we've built one that just connects to an API does exactly the same job and returns the data for us so we'll put that into our Airtable we're going to copy this node here and paste it it's commonplace to copy and paste nodes and just change the names and inputs because it's a lot quicker we're going to take post invoices and call it post invoices API just to distinguish it from the one above but this time we're going to have to map it to the body being received from the API request so we'll have to remap all of these we're just going to refresh the columns here we already know we're attached to the right table the financial table which just to remind you has all of these fields we're just going to put a dash one on there so that when we go to update the invoice we can compare below how this version processes it
so we're going to go in here look inside the body grab all of these details and fill them out individually so you can see it's grabbed invoice number for example and as usual we're going to reference nodes by name rather than refer to Json directly great so I've filled out those details now in post invoices API but you can see that because this is a service where we're not controlling the outputs like we did with our llm we don't get to pick exactly what fields they have so we have to map to those we haven't got a field for a generic description which is a shame but we've got things like invoice date payment date invoice number total price which comes with the currency attached the line items the supplier name and some additional info so we're missing some of the info we were able to build with our own llm but nonetheless it's a really good service we're going to test that now and see it go into our table so that's been added to our table we'll go back and we can see these are the same two invoice numbers but one's got a dash in so perhaps one parser is better at picking up dashes and that's something you might specify in your own llm node make sure to include dashes in invoice numbers all of the details seem to be relatively the same though obviously some fields are not filled out in the same way and we can even compare the line items in here actually this one from the pdf.co API picked up more line items
and then we've both got payment terms or payment methods here so yeah both really good methods and that rounds up the API Mastery section you can see how building your own can sometimes be more beneficial because you choose the structure and tell the llm in the prompt exactly what information you want but sometimes it's just easier to connect to an API because now we've set it up it can run continuously it just took a little longer to set up initially because of dealing with all the data formats but if you know how to do both methods you can choose which one suits you best on a case-by-case basis so if you come back to the canvas and look at what you've built we've effectively created three whole automations getting more complex as we go so we initially created a simple first automation where we got a file from our Gmail converted the format and posted it to our emails table in Airtable so we've manipulated data and sent it
to a table we've then created an entire end-to-end automation from scratch that includes an llm call in the middle as part of a non-agentic workflow as we discussed earlier where we take files from our email extract data from the files format that data in code nodes and then literally prompt the llm to pull specific information and post it to our database and finally for the third automation built from scratch we've dealt with complex API documentation you've learned how to read API docs the core concepts of API documentation and how to do that quickly and efficiently to achieve the same results we could achieve with an llm these are core skills you will need when dealing with your business data so if you've made it this far really well done we're going to move on now to scalability building workflows in isolation like this is great but we need to understand how to find the resources to build workflows more quickly but also how to structure our workflows so they're really scalable and anyone on our team can come into these workflows and understand exactly what's going on
so that's what we're going to cover now this is about how to make your life and your team's life a lot easier by implementing some best practice but also understanding where to get the right resources you'll notice immediately if we zoom out that we've got two almost identical workflows one is really clear and the other is not and I'm wondering if you can spot the difference between the two it's the sticky notes this may sound obvious but I've seen a lot of content out there where people don't properly label their flows if you were to give this to a client or a colleague they would come in here and not be able to understand at a high level what's going on in your workflow if you came into this flow here you could absolutely tell me at a high level what's going on we get some sort of file from Gmail we do some sort of data formatting although we don't know exactly what we know we're formatting the data in some way we're then grabbing key data using an llm chain and then we're outputting that data so we know at a high level exactly what's going on whereas down here we've not labeled as we've gone along so we're going to need to retrospectively label those in a clear consistent manner to make sure that when we pass this on to someone else they know exactly what's going on and more importantly when you come back to this workflow which you will trust me I do it
all the time I'll go back to my workflows and need to see exactly what's going on to isolate a problem or change something in one area and labeling makes it really scalable because I can come back and do that very simply and so can others so I've labeled up all of those now you don't need to label every single thing but you should label sections and functions of work so we've got the inputs here a file trigger but also the upload URL we've got a section here which is about uploading the file to pdf.co and returning the output and then we're outputting data into our data table you can see how that's immediately so much more obvious for both me and anyone else returning to this the next tip around scalability and making your life easier is your workflow naming so if I go back to my overview here you can see I've got a lot of workflows and when you've built out as many as I have it becomes very difficult to navigate through them
I hear rumors that n8n are finally rolling out a folder structure so that may come soon and make it a lot easier but you also need to give your workflows sensible names so you might have a convention like content template admin and then append a name describing the function it actually performs something like test workflow demo doesn't say anything about the workflow and when you open new workflows they'll be called my workflow 1 2 3 which makes it really difficult to find the right project when you're navigating back through a lot of them the second thing is tagging you can go into the workflow and set up your own tagging system maybe you want to tag around AI agents document extraction error handling whatever you decide and you can then filter by your tags and see for example all of those tagged with AI agent we spoke briefly about memory issues earlier and running into memory problems on your workflows
n8n have improved their canvas over the last few months so it runs much smoother than it used to but having multiple workflows on the same canvas is generally bad practice there are a few reasons for that the first is the execution log you want your execution log to be a history of all executions of one given workflow if you've got a canvas with multiple workflows on it all of these executions are going to be very confusing and when there's an error you're not going to know which workflow the error is part of we've just built it like this for ease of visual use but if you were building production systems you would separate each of these into a new workflow with n8n you can just highlight whatever you want to copy and copy it to a new workflow so we could copy this here open up a new workflow and paste in these values and we now have a separate workflow with a separate unique URL that we can execute separately the second reason this is important is that although these ones are not memory intensive if you've got a lot of larger workflows running on
one canvas it's going to cause issues with loading and memory because they're all on the same workflow so it's a good idea to separate them out completely into separate workflows it's a bit different here because we've got two workflows effectively doing exactly the same thing in different ways one through an llm one through the API but in general you'd separate these into separate workflows the final thing I'll mention is parallel execution if you're wondering whether this could run at the same time as this yes it can on the same canvas however again it could cause more issues than you want so you'd probably want to separate them so that if a Gmail came in each flow would run on its own canvas we're going to cover now modular design you want to design your workflows in a modular fashion so they're plug and play and we can reuse certain core elements again and again an example might be this invoice parser that we created we'll
just copy that go to a new workflow we're going to call it something like invoice parser test for now and paste that in here there might be occasions where we receive invoices in our email but we might also be getting them through WhatsApp for business if our suppliers are on there through telegram or uploaded directly to Google Drive so in that case we might need to use this pre-processing stage and the extract key info and post invoices steps again and again so we have options here we could copy and paste this down here and change the trigger so we'll now have a WhatsApp for business trigger on message so we're going to find that trigger and put that in and we also said we might have a Google Drive upload so we'll add a Google Drive trigger on changes to a specific folder and attach that so you can see that would work right we have multiple inputs that
go through the same flow but what if we wanted to change the prompt we'd have to come into basic llm chain basic llm chain one and basic llm chain two so we'd have to update it in three places an alternative is to attach the triggers directly to the one flow so we can get rid of all these duplicated bits let me just move that get rid of all that and we can attach multiple triggers that connect to the same flow that is an option we might have slightly different processing per trigger but we could standardize that with a set node in the middle I just realized here we had not reconnected the loop which we should have done so we can connect multiple triggers and that's absolutely fine but maybe there's some standard processing we do to Gmail some standard processing we do to WhatsApp and some standard processing we do to Google Drive so we might have a data processing stage like this let's just call it data processing and we'll ignore the details for now it's just
the concept that's important and we'll copy that so we have separate data processing rules for WhatsApp and separate data processing rules for Google Drive so we could absolutely do it this way with separate data processing rules for each method but what if in other flows we're also going to take in a Gmail a WhatsApp message or a Google Drive file and do exactly the same processing again well in that case why don't we split out the triggers and the data processing entirely and we can do that in n8n it's really simple and makes for really modular designs so if we break the connections here we now know we need to call this workflow from other workflows and to do that we just use the execute sub-workflow trigger which says that this workflow the invoice parser test can be called by another workflow so what we might do is separate these into completely separate trigger workflows that just do the data processing so we'll have one for Gmail and you might not structure it in this
way but the whole point is modularity anything you're going to reuse you should only have to change in one place and that's really important from a reusability perspective and will save you a lot of time so what we'd connect to the end of this is an execute sub-workflow node and we can choose our workflow which was invoice parser test now whenever this runs it will execute our sub-workflow so we test this workflow here go back and fix the issues that were in the invoice parser test and change the input data mode to accept all data you can also define the fields that are going to be passed in if you only want to pass some or define them using Json but let's accept all data for now so it's ready to accept data we'll go back to our Gmail trigger and if we now test that it will pass through and start executing our other workflow and you see this one continues to execute until it receives a response to say the sub-workflow has fully completed and returned.
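by the way if you do define the fields instead of accepting all data the trigger lets you describe the expected input with a Json example, conceptually something like this (field names purely illustrative):

```json
{
  "fileUrl": "https://example.com/invoice.pdf",
  "source": "gmail"
}
```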
so from this Gmail trigger workflow we've managed to trigger our invoice parser run it wait for it to respond and then take the response at the end you can see how that creates reusable blocks say we wanted to reuse that core set of functionality again and again we could just call it from other workflows the same with having multiple triggers we could set up multiple triggers that have different data extraction or data formatting and then execute a certain workflow inside the sub-workflow we can see that it's been executed and if we go back to our other workflow we'll also see it in the execution log so that has just run with the inputs and binary files that came through from the other trigger so what we might do then is set up another input keep this part of the functionality exactly the same but change the inputs that we're passing in so that is how to create modular workflows and I'm sure your imagination is running wild with the possibilities of creating
reusable workflows with this structure it's a tremendously helpful set of nodes when you're working across multiple clients and you just need a core set of reusable functionality it would not be a good workflow automation for business course if we didn't touch on error handling and how we tackle that there are a few built-in features you must know about when it comes to error handling in n8n inside each node there are settings we've covered this at the start of the course so I won't dwell on it too much but you can ask it to retry on fail we've then got the different options for on error and most of the time what we want to do is pass the erroring item to an error output what we can then do is open up the tools and add a stop and error node and if we connect the error output to that it enables us to give a custom error message so if post invoices fails we'll say error inputting into Airtable so that when we get our error notification we get that readable message instead of just the node name and a long error code.
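as a rough sketch the custom message in that stop and error node might look like this, assuming the item arriving on the error output carries an error object, check your own execution data for the exact path:

```
Error inputting into Airtable: {{ $json.error.message }}
```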
so that's the first thing implementing some error handling on a node-by-node basis some nodes you might want to continue on error some you might want to stop the workflow the second is the settings for the workflow itself if you come into the settings you can set up a few standard things like time zone whether you want to save your production executions anything that's not a test run in the editor counts as a production execution when the workflow is active whether to save your failed ones which you usually do whether to save your successful production ones depends on your memory if you don't have a huge amount you probably don't want to save successful production executions when it's executing every minute for example save manual executions we'll always keep on because we want to see our tests and save execution progress is up to you there's also this timeout after setting and I tend to set it to 3 to 5 minutes but when you're dealing with complex llm or AI agent workflows you might want 10 or 15 whatever feels necessary based on your testing it will just time out and stop the execution if it reaches that limit for something like this invoice parsing flow dealing with document data maybe five minutes at the outside though it should really finish within a minute so those are the settings the other thing I'd advocate is to set up an error
handler that by default sends all of your errors to one place I've got one in here which I'll share in the community that attaches a default error handling flow to every workflow if I've not gone into a workflow's settings and set this error workflow as its default it will go through all of my active n8n flows and set it as the default error workflow what that does is force n8n to run this workflow when an error actually occurs which triggers this error trigger down here and you can set it up to do various things for me whenever one of my workflows errors it sends me a telegram message with the workflow name the node which failed the error message and a link so I can go directly to that workflow just a super helpful thing when you're setting this up for clients or your own business you absolutely want to know which workflows are erroring without having to go to the logs yourself you can send it by email WhatsApp whatever you want.
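as a sketch the telegram message body might be built from expressions like these, the field paths are from memory of n8n's error trigger output so verify them against a real failed execution:

```
Workflow failed: {{ $json.workflow.name }}
Failed node: {{ $json.execution.lastNodeExecuted }}
Error: {{ $json.execution.error.message }}
Open execution: {{ $json.execution.url }}
```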
but the point is you have one default way to handle errors for all workflows and to set that up we just go to the settings and assign it as our default error handling workflow and now as soon as anything errors that workflow contacts me by telegram next is debugging in the workflow if we come into the executions we can see these failed error logs and if you go into the node you can see the data that was passed through and where it failed it's really difficult then to go back to the editor and try to replicate that so n8n have this built-in feature called debug in editor what we do is click that and it will copy and pin the failed execution's data back onto the workflow in the editor so we can replicate exactly what happened and try to fix it directly from our workflow now to get this you need to make sure you've registered your community edition so if you come down to settings and go to usage and plan it will prompt you to register your community edition once you've done that activate the key that's sent to your email and you get a few features including debug in editor for free talking of community the n8n community is a really great asset and you should definitely leverage what's out there in terms of resources so we're going to cover the basics of that and trust me this is so important you will use these things again and again the first are community nodes if you have the self-hosted version you'll be able to get these community nodes you can go down here
go to settings and in settings you'll find the community nodes feature you'll see that I have a lot of these community nodes one for Apify one for Browserless one for 11 Labs there are loads out there you can install you go to install and you can go to npm which is effectively a marketplace for different JavaScript packages and you'll be able to see all of the n8n community node packages a community node package is essentially just a node like Airtable or Google Cloud that n8n hasn't made native to the software but somebody in the community has decided to build 11 Labs is a really good example and deep seek before it became a native node was another however npmjs isn't great to search and this is where the community is even better somebody really active in the community has created a much easier search functionality for n8n community nodes it's at n- community- nodes. octo oicc and it makes it super easy to search rather than the clunky method so we can type in YouTube there's one for YouTube transcription there's one for YouTube info if
we go 11 Labs you can see it gives you information about what the node does when it was last maintained how many downloads per month and we can click on that and it will link us directly to the page and within the page you'll see it has a name like n8n-nodes-elevenlabs and what we can do is go back to our edition here and just put in the name of the package we want to install I've already got this one installed so it's going to tell me that I can't install it again but it would then install it on your system if we go back into our workflow and open up the nodes here if I search 11 Labs you can see it's been added and it's also got this cube next to it which signifies that it's a community node so we can open it up and you can see they're as detailed as the normal nodes somebody has created all of these actions within that node it looks exactly like a normal node but it's maintained and built by the community now
imagine you're building out a workflow for your business and you want to post content to social media you could absolutely build that from scratch every time but you should leverage the n8n pre-built community templates or workflows you can see on the n8n workflows site they've got 1,325 workflow automation templates that you can leverage and they're kept up to date so if we click on AI here they're basically updated every week we've got one here that's paid and multiple here that are free most of them are free and if for example you wanted one for posting to Twitter or X we'd just search Twitter and we've got one here AI powered social media amplifier automatically promote your YouTube video on X post new YouTube videos to X spreadsheet to tweet automation so you can see how these are super easy and if we grab one that's free here and come into the template it will tell you the title of the template what nodes are used who's created it and you can go
to their creator profile to see what other workflow templates they've created and then you can see some information around how it works what the requirements are and how to set it up all of these have a base level requirement of telling you how to set them up you can see this one has a bunch of different nodes we can zoom in and see them and even open the nodes directly if we double click into them now there are several ways to get this into your n8n environment the first is literally just highlighting the nodes and copying and if we go back into our environment we can paste them straight onto the canvas and it will put in that workflow directly so we don't even need to leave or download anything but there are other ways to use the workflow we can import it directly into our self-hosted instance if it's connected we can copy it to clipboard which is exactly the same as what I've just done or if somebody sends us a
Json file then we can just go into our workflow choose import from file and import the Json file directly and it will open up in the canvas so lots of different ways to do that definitely go and check out the templates library it's one of the best things about n8n being on the sustainable use license everyone is really friendly and shares their templates all the time getting help so you'll be building workflows for your business or for other businesses and you will hit issues there's no doubt like when you start any new skill you're always going to come across issues you've never seen before there are two resources I recommend for this the first is the n8n community at community.n8n.io the community has lots of different posts that might cover your issue so you can just search in here say you've got a Gmail issue there'll be lots of suggested posts where you can review previous comments they also post their announcements on here for new updates to the software they have tutorials they have lots
of helpful resources and everyone's super nice and super helpful in this community and they will come and answer your questions you can see the activity is all within the last couple of hours so everyone is really active the second resource I definitely recommend is signing up to perplexity.ai they do free accounts it's effectively web search with an llm in the back end so it can reach all of the latest documentation for n8n and we can ask it things like what are the latest n8n updates so if you're troubleshooting this is a really good step you can literally paste in Json values or content from your prompt give it some context and ask it to help you with a specific error code copy and paste the error code in here and this will get you 90% of the way it's a really helpful tool you can see on what are the latest updates it's come back with the latest version released on Feb the 6th and it's now Feb the 19th if you have an error code paste it in here
and I guarantee it will get you most of the way there things like code nodes are really helpful to build iteratively with perplexity by the way in perplexity you can choose different models so if you want deeper research models like R1 or reasoning models like o3-mini or deep research you can access those but it runs with things like Claude 3.5 Sonnet and GPT-4o mini in the background anyway so it just powers up your searches with the web after building hundreds of workflow automations for clients and all sorts of businesses I decided
to create this community and shortcut the path to AI agents and workflows for you whether you're a business owner a content creator or someone who wants to build a business around AI agents and automation this community is designed to get you started as simply and as quickly as possible when you join the community you'll find this start here post you can introduce yourself share what mission you're on and your business goals and you'll get connected to like-minded people on the same mission as you with similar values from all over the world so Roberto here for example has 25 years of automation knowledge as a controls engineer and his goal is to create an agency using these automations Sahil has been running an agency for a year now and working with AI for roughly 2 years he's got a London based agency and wants to connect with like-minded people similar to you and then Mary here has a fascinating background she's living in Cape Town and was a software tester for 20 years but since the AI buzz started she's decided to dive right
in with this community and learn how she can automate her own business besides that she's also a trained life coach and always looking for ways to build out that work and her coaching then if you head into the classroom you'll find a bunch of free courses that will take you from beginner to AI expert each course is broken down into core concepts to make it more digestible and easier to mark where you've got to here I share all my top tips on what I've learned and how to shortcut your journey to mastery of n8n and workflows if you're not looking to learn from scratch then we also have pre-built templates for you in our template library these are all production grade templates from working with a ton of clients and building over 100 workflows we've got some great examples of business use cases in here including AI agents that help us manage our invoices manage our email inbox complete our research scrape any website a ton of business use cases that I've earned thousands of dollars building out for my clients and if you still have any
problems then you can head to the questions section within the community and we'll tackle it directly in the community or on a call with you I hope you learned a lot from this course thanks so much for watching check out my other content and hit like and subscribe if you want to see more