MASTER Prompt Engineering in 50 min

Ben AI
In this video I'll break down everything you need to know about prompting efficiently inside of AI a...
Video Transcript:
hey guys, so in this video I'll break down everything you need to know about prompt engineering inside of AI agent and AI automation systems. I know there's a ton of content out there on prompting, but I find most of it either very theoretical or without a focus on prompting inside of AI agents and AI automations. So in this video I'll try to be very practical, and really focus on prompt engineering for these AI agent and AI automation systems. Many people think they know prompting because they can use ChatGPT efficiently, and trust me, I've been there too, but prompting inside of these AI agent and automation systems is actually very different from prompting within ChatGPT. With ChatGPT you can go back and forth as much as you want to get the right output, so understandably most people don't really worry about best practices, and to be fair you don't really need them there. But when you're designing these AI automation or AI agent systems, you first of all don't have the luxury of that back-and-forth tweaking: you need to get it right with a single prompt. And second, when you actually start implementing these systems inside of businesses and running them on a large volume of tasks, you need your system to be consistently reliable, even with the edge cases, and that really does require good prompt engineering. I've learned this the hard way, so in my opinion good prompt engineering is the number one core skill to master when you want to deliver reliable AI systems to companies. If you don't know me yet, I'm Ben. I've been implementing AI agents and AI automations inside of companies since the end of 2023. Just a disclaimer: I'm not pretending to be some genius prompt engineer, I don't have a PhD in AI or anything like that, and I'm still learning every day myself. I'm also very aware that there are many different ways to prompt efficiently, and except for a few proven and studied techniques, there are lots of ways to get consistent outcomes. This is just an overview of what I've learned and what works for me, and hopefully it can help you.
But again, this is not the only way to do it. I've broken this video down into five sections. In the first section I'll give you a high-level overview of some key concepts that I think you should know when prompting inside of these AI automation systems. Then I'll go over my prompting frameworks, which include short structured prompting, long structured prompting and agent prompting, and I'll also give you my initial take on how to prompt the new o1 model. Then I'll go over the most common prompt use cases you'll actually find when you're designing these AI automation or AI agent systems, and I'll give you my recommendations for each use case on which prompt framework to use, some examples, and also which model to use to get the most ROI out of your system. Then I'll go very quickly over some other prompting use cases you'll need in agent systems, like tool input prompting and sub-agent prompting. And lastly I'll go over my recently updated agent prompting tool. It doesn't replace your agent prompt, but it will help you write them better and faster, and I'll quickly explain how to use it. Now, before starting: why do we actually need good prompt engineering? You've probably heard the phrase "English is the new programming language" by now, and I think it's very true, because non-coders like me can now build extremely powerful applications and automations with just plain English. But communicating with LLMs is different from communicating with humans, so learning how to communicate efficiently with LLMs is, in my opinion, the number one core skill in building these AI systems. It will increase the reliability of your system, it will increase the accuracy of your system, and it will lower the cost of your system, which becomes very important when you actually start implementing these systems inside of businesses and running them on large volumes of tasks. So here are the key concepts I want to go over very quickly. First of all, as I said, there's a big difference between conversational prompting and structured prompting. Conversational prompting is what we do inside of ChatGPT, where we go back and forth until we get the right output.
We don't have that flexibility inside of our systems, and that's why we use structured prompting. Structured prompting basically follows a framework and incorporates proven best practices. I want to keep this video practical, so I'm not going to go over them in depth, but we want to incorporate studied and proven techniques like role prompting, few-shot prompting, chain of thought, emotional manipulation and markdown formatting. And to make sure we incorporate those best practices, we use a framework. That framework serves to bake in the best practices and give your prompt a good structure, but maybe most importantly, it gives you a framework for every time you need to write a new prompt: by going through it yourself, you make it easier to optimize your prompt and to make sure you include everything you need to include.
Then lastly, with structured prompting we always want to go for single-task optimization. LLMs are not good at performing multiple tasks, so we always want to optimize each prompt for one specific, small task. And if we have a complex task that has to be done, that's where chain prompting comes into play: we break that complex task down into separate tasks and put them in a chain, where the output of the previous language model step becomes the input of the next one, and through that chain prompting we can increase the reliability and consistency of our system. Just as an example, here's a tool I delivered to a client, a reformat-CV tool. This client needed to take an original CV from a candidate, put it inside a new layout, and also take out the contact information. Instead of trying to do all of that inside one LLM step, which would not be very reliable and would make mistakes, we break that complex task down into smaller tasks: first organize the text, then reformat the CV, then take out the contact info, and then format the new CV as HTML, roughly like the sketch below.
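Here's a minimal sketch of what that kind of chain could look like in code, assuming a hypothetical call_llm helper built on the OpenAI Python SDK; the prompts are abbreviated placeholders, not the exact ones from the client tool.

```python
# Minimal chain-prompting sketch (illustrative, not the exact client tool).
# Each step is a single-task prompt; the output of one step feeds the next.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def call_llm(prompt: str, model: str = "gpt-4o-mini") -> str:
    """Run one single-task LLM step in the chain and return its text output."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def reformat_cv(original_cv: str) -> str:
    # Step 1: organize the raw text
    organized = call_llm(f"Organize the text of this CV into clear sections:\n\n{original_cv}")
    # Step 2: reformat into the new layout
    reformatted = call_llm(f"Reformat this CV into the new layout described below:\n\n{organized}")
    # Step 3: strip contact information
    anonymized = call_llm(f"Remove all contact information from this CV:\n\n{reformatted}")
    # Step 4: produce the final HTML version
    return call_llm(f"Convert this CV into clean HTML:\n\n{anonymized}")
```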
So do we always need these really long structured prompts? In my experience, no. Sometimes we can get away with short prompts, depending on the use case and the complexity of the task. In my experience, language models sometimes actually get confused by really long prompts for simple tasks. Second, we're also trying to make the system as cost-efficient as possible, and the longer the prompt, the more it's going to cost us. And thirdly, it saves you time. So depending on the use case, for some of the simpler tasks we can use a short structured framework. That gives us three different frameworks: the short structured framework, the long structured framework and the agent framework, because for agents we also use a slightly different setup to make them as efficient as possible. Now, when do we use which one? That really depends on the use case for your prompt.
Basically, I've identified six main use cases for prompts that you'll find when you're designing these AI systems. The first one is extracting data, then we have generating content, classification or categorization, evaluation, data transformation and decision-making. Of course these are very broad categories and there are a lot of specific use cases underneath each of them (if I'm forgetting a big one, please let me know in the comments below), but I've basically boiled it down to these six. Later in this presentation I'll give you my recommendation on which prompt framework and which language model to use for each of these use cases to get the most ROI out of your system, plus an example for each. But before that, I want to go over the three prompting frameworks. First I'll do the long structured prompting framework, then the short prompting framework, because it's basically a reduced version of the long one, then the agent prompting framework, and then I'll give you my initial tips on prompting with the o1 model, which is a little bit different from the other models. Before showing you the framework, very quickly: if you don't know it yet, markdown formatting is basically a way to structure your prompt with headers, bold text and so on, which OpenAI actually use themselves in system prompts, so it's probably a good idea. It seems these language models can understand the different sections and the importance of things through markdown formatting, and it also helps you structure the prompt yourself. If you use Relevance AI, you'll see it rendered like that automatically. The way you do it: one hash (#) for an H1 header, a double hash (##) for H2, three hashes (###) for H3, two asterisks (**) on each side of a word or sentence for bold, a dash (-) for a bullet point, and three dashes (---) to get a horizontal line. Those are the ones I use, and I think that's everything you need in terms of markdown formatting.
So let me go over the framework for the long structured prompts. Again, this is not the only way you can do this, there are multiple ways; this is just how I do it and what works for me. I have seven sections inside these long structured prompts: the role, the objective, the context, the instructions, the examples, the variables and the notes. I'll go through each one very quickly and explain what you should do in it, and if you want to look at this in detail, I'll make sure to add the link to this presentation inside my free community. First we have the role, also sometimes called the persona. These names really don't matter that much; again, it's more a framework for you to make sure that you include everything you need inside your prompts. Role prompting has been studied quite extensively: if you give these language models a role, they seem to perform better. Besides assigning a role, we also want to assign qualities, qualities for the specific task we're trying to execute with this prompt. The way to think about phrasing this part: imagine giving the greatest description of someone you could ever give, because besides assigning the role and the qualities, we also really want to hype it up in terms of how good it is at that specific task. Useful sentences are things like "You're a world-class ... with a particular expertise in ...". In the example you can see I use the markdown formatting to get an H1 header: "You're a world-class CV formatter with a particular expertise in carefully reformatting an original CV into a new layout." So: give it a role and some qualities. If you're using Relevance AI, you can actually add this role part of the prompt inside the system prompt: if you click on advanced settings in your LLM step you'll see a field called system prompt, and you can add that part of your prompt there; some people say it actually gives you better outcomes. For the second part we have the objective, which some people call the task. It's basically a direct description of what needs to be done, and you can think of it as what you would tell ChatGPT. It's one of the most important parts of the prompt.
Some useful sentences: you can always start with something like "Your goal is to ...", and we can also incorporate chain of thought in this section, which is also proven to improve outputs, by using "think step by step". For example: "Your goal is to carefully read the original CV and reformat it into the new format. You'll think step by step through the following process to ensure a good outcome: one, carefully read the original CV; two, add content from the original CV into the new layout for the new CV; and three, add any information that was present in the original CV but doesn't have a clear place in the new CV to the end." So just a high-level overview of what it has to do. Then we have the third part, which is the context. Here you're basically describing to the language model why it is doing this task: you show it how the task fits into the bigger picture and why we do it. It's been studied that giving language models an idea of the bigger context and why they're doing something can improve outcomes too. Some useful sentences here are things like "it is vital to my career". It sounds crazy, but these so-called emotional manipulation tricks also seem to improve outcomes, so why not use them. In this case, for example: "This task is crucial for our recruitment firm. The reformatted CV will be sent to potential clients, and therefore it is vital to my career that the CV is reformatted in the specific format mentioned below. It is also vital to my firm that information from the original CV doesn't get lost or changed, as this would undermine the quality of the CV." So just a high-level description of why this task is important or how it fits into the bigger picture. Then we have the fourth part of our prompt, the instructions, which some people also call rules or specifics.
In this part we're really going to go into detail. Usually we want to include here all the rules it has to follow, and also output structure specifications if we have them: if it needs to output in a specific format like JSON, or in a specific structure, we lay that out here too. The way I usually approach this section is that I start predicting, before I actually run it, what could go wrong in the output, and that's how I come up with these rules. Again, some useful sentences: you can play around a bit with these emotional manipulation tricks. It really sounds crazy, things like "I will tip you $1,000 if ...", or putting "it is vital to my career" or IMPORTANT in caps next to a rule, but it seems to actually work, so why not use it. Here's an example: "It is absolutely vital to my career that you do not add text, make new facts up, or edit anything from the original text." And then I go through all the rules: things you can already predict might go wrong, or things you notice going wrong when you run it, you put them in here. In this case I also have output specifications, and you can see I used an H2 header there to let the language model know this is still part of the instructions section.
So: "Here's the new format for the CV", and then I put in the output specifications, and that's it for the instructions. Then we have examples. Examples are of course one of the most important parts of the prompt: it's been really well documented by now that giving input and output examples increases the reliability of these prompts so much more, so this is really important to do. One tip: if you don't have any examples, just leave this section out at first, run the prompt once with the best model you can, get an output, change it if you need to make it better, and then use that as an example input and output. It's worth those two minutes of extra work, because it will increase the reliability so much. The way I do it: I have the H1 header "Examples", then an H2 header "Example 1", then in bold "Input:" followed by the input, and in bold "Output:" followed by the output. In this case, to not make it really long, I only put in one, but in general it's good to do at least two. Then the next part is the variables: if you're going to feed the prompt a variable, put it in this section. This section will change depending on what the prompt is about; in this case it's "CV you will reformat today", and then I put in the variable that holds the original CV. Any variable you might have, I usually put under a separate header in this sixth section.
Then lastly we have the notes, and in my experience (I've only incorporated this in the last few months) they've been really important, because LLMs actually take the beginning and the end of a prompt more into account than the middle, and that's especially the case for these longer prompts. So in this notes section we really want to double down on all the important things: the important rules, the output format, which is of course also very important, and best practices like chain of thought ("think step by step"). It's also really useful when you see your model making mistakes or not doing what you want it to do: add the rules for the things that go wrong into the notes section and you'll see it starts correcting itself very quickly. You can of course always add them to the instructions section too, but because the notes are at the end of the prompt, it usually corrects itself very quickly when you add them here. Again, you can use some emotional manipulation and chain of thought. Here's just an example: "It is absolutely vital to my career ...", so basically I just repeated some of the core rules here. And that's it for the long structured prompting.
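To make the seven sections concrete, here's a skeleton of the long structured framework, written as a Python string so it can be reused as a template. It's a sketch assembled from the sections described above; the CV wording is abbreviated and the layout details are omitted, so it's not the full client prompt.

```python
# Skeleton of the long structured framework (role, objective, context,
# instructions, examples, variables, notes), abbreviated for illustration.
LONG_PROMPT_TEMPLATE = """# Role
You're a world-class CV formatter with a particular expertise in carefully
reformatting an original CV into a new layout.

# Objective
Your goal is to carefully read the original CV and reformat it into the new
format. Think step by step: 1) read the original CV, 2) add its content into
the new layout, 3) append any information that has no clear place in the new CV.

# Context
This task is crucial for our recruitment firm; the reformatted CV is sent to
potential clients, so it is vital to my career that nothing is lost or changed.

# Instructions
- Do not add text, make new facts up, or edit anything from the original text.
## Output specifications
- Use the new CV format described here (layout details omitted in this sketch).

# Examples
## Example 1
**Input:** <original CV text>
**Output:** <reformatted CV text>

# CV you will reformat today
{original_cv}

# Notes
It is absolutely vital to my career that you follow the rules above.
Think step by step.
"""

# Fill in the variable section before sending the prompt to the model.
prompt = LONG_PROMPT_TEMPLATE.format(original_cv="<paste the original CV here>")
```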
Now for the short prompt framework, we basically reduce that long one down to the most important parts, which are the objective plus the instructions (the rules), the examples, and the variable, if a variable applies of course. You can see the example here: "Please identify the name of the person in the CV below." All we need to do in this prompt is get the name from that CV. That's a really easy task, so we don't have to go with the whole long prompt, and we don't want to spend that amount of tokens on a long prompt for such a simple task. So here I have the objective in one sentence, "Please identify the name of the person in the CV below", and then the instructions, the rules: "Only output the full name. No summary, no explanation, nothing but the full name." That's a useful sentence, by the way, if you struggle with your model outputting more than just the name, because if you don't instruct it like this, a lot of the time your language model will say something like "The name of the person in the CV is ..." instead of only the name. And sometimes we really only need the name, because we're going to save it somewhere or there's a condition after it. So if you want to make sure it only outputs, say, a URL, or in this case only the name, this is the phrasing I've found works really well to never get an explanation around it. Then I've put in another rule that's good whenever there's a chance the information it's looking for won't be available: "If you don't find a name, only output 'No name found'. Nothing else, no summary, no explanation." It's very important to define what to do if it can't find it, because if you don't, the model will start hallucinating or it will use example outputs from your examples section. So it's very important to instruct these models what to do if they can't find it. Now, in this case I haven't even put in an example input, because the input is a huge blob of text, a whole CV, which would cost us a lot of tokens, and it's such a simple task that all I do is give it an example output so it understands I only want the name. For other use cases, where you don't have a huge blob of text, it's generally good practice to put in the input too. And then of course the variable: "CV you have to identify the name from", followed by the variable. That's the short prompt framework. If you see your model struggling, you can slowly add more sections to make it work: you'd probably first try adding the example input, then the role prompt, then maybe the context, and lastly, if it's still struggling, you can add the notes section to double down on the rules. So think of it like this: if it's a simple task, start with the simple template, and if it's still struggling you can always add more on top.
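Here's roughly what that short structured prompt looks like end to end, as a sketch using the OpenAI Python SDK. The helper and the model choice are my assumptions; the prompt wording follows the example above.

```python
from openai import OpenAI

client = OpenAI()

# Short structured prompt: objective, rules, example output, variable.
SHORT_PROMPT = """Please identify the name of the person in the CV below.

- Only output the full name: no summary, no explanation, nothing but the full name.
- If you don't find a name, only output "No name found". Nothing else.

Example output: Jane Doe

CV you have to identify the name from:
{cv}
"""

def extract_name(cv_text: str) -> str:
    # A simple extraction task, so a cheap model is usually enough.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": SHORT_PROMPT.format(cv=cv_text)}],
    )
    return response.choices[0].message.content.strip()
```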
Agent prompting is the hardest type of prompting, and that's because agents usually have multiple responsibilities, and as I said before, language models are not good at performing multiple tasks. Agents are usually in charge of quite a lot of different things, and that's why the prompting becomes very important, and it's also why these agent systems sometimes fail. So first of all, it's really important to give as few responsibilities to our agent as possible. Ideally the agent's only responsibility is decision-making: when to use which tool or sub-agent, and in which order. Of course our agent also needs to communicate, either with us or with its sub-agents, and that's already multiple responsibilities, so we really want to offload all the other work in the system to tools and sub-agents. We want to reduce the agent's responsibility down to the bare minimum; that's the ideal for a functioning AI agent system. And because this is hard, language models struggle with multiple tasks and this is decision-making, we always want to use the best models here. For example, we never want our agent to, besides using tools, also categorize an email, write an email or generate content itself. Instead we want something like: "Your job is to manage my inbox and decide the right course of action to take according to the instructions given to you. You have the following tools available to you: categorize email, write email, generate content." So it isn't doing those things itself; that's done by the tools or the sub-agents. I also wanted to mention a quote I recently heard from someone who's built a lot of AI agents, which stuck with me because I think it's very true: "the solution to problems of AI is usually more AI." I think that's very true for the AI agent space. If your agent doesn't behave or perform the way you want, which most people have seen, the solution is usually more AI in the form of tools or sub-agents: the things it can't do, or the things that go wrong, you can usually solve by adding another tool or another sub-agent. So what is the framework for our AI agent prompting? It's very similar to the long structured prompting, but with two main differences. First, it has a section called the SOP: because the agent is usually in charge of this decision-making process, we want to give it more context, an idea through an SOP of what to do in which case. And second, most agents have access to tools or sub-agents, so we have to add a section where we describe what those tools can do and how to use them. Those are the two main differences.
So very quickly: the role or persona is very similar to the long structured prompt: "You're Ben's world-class personal assistant manager agent, with a particular expertise in the meticulous and accurate management of his personal administration related to his email inbox, calendar and Notion databases." For the objective or task, we can just give a high-level overview; we don't have to go into the detailed SOP here just yet, just its general responsibility. Again we can use sentences like "think step by step", and in this case "using your team of specialized agents": "Your goal is to help Ben perform tasks related to his email inbox, calendar, to-do list and YouTube content calendar using your team of specialized sub-agents. Think step by step to ensure thorough and accurate task management. When Ben sends you a request, you have three key responsibilities: one, decide which sub-agents to delegate tasks to; two, communicate clearly to the sub-agents what needs to be done; and three, communicate any work that has been done by your sub-agents back to Ben using the Send WhatsApp tool." The third part is the context, why it is doing this task, again very similar to the long structured prompt: you're triggered by a request from Ben, you're Ben's world-class assistant, that kind of thing; not that important. Then we have the SOP, which is unique to these agent systems and very important in them. Break the steps down in a numbered way (what are the steps to take in the process) and you can use if-statements for conditions. Also, if you're using Relevance, you should double down on the SOP inside the flow builder; that's really where it translates to and where you can reinforce it, but more on that later. In this case it's quite a simple SOP, and there are more difficult ones: "It is vital to Ben's career that you always think step by step through the following process ...", so you also want to use that chain-of-thought prompting in the SOP. Then "decide which agent ...": in this case it's very similar to the objective, but if you have a more detailed SOP or a more complex agent, you'd go into more detail here: one, two, three, and sometimes 1.1, 1.2, 1.3 if a certain condition applies, and then two. You give it exactly what it has to do, in the right sequence.
Then we have our instructions or rules again: the important rules, doubling down on the important rules from the SOP, and communication rules with the sub-agents; again you can think of it as "what can go wrong in the output", very similar to the long structured prompt. Then we have the tools and sub-agents section. In this part you want to give your agent as much context as possible on what these tools do, when to use them and how to use them. In this example we're describing sub-agents, and in Relevance AI, by the way, you can use a slash to insert your tools and your sub-agents, just to make it even clearer. So in this part you can see I first have "You've been equipped with one tool and three sub-agents to achieve your goals." Then the first sub-agent, the Email Manager Agent: "This agent can handle all tasks related to Ben's inbox, including getting emails, sending emails and writing emails", so I describe what it can do. Then I have another section, "when to use": for any task related to Ben's email inbox. And then "how to communicate". This one is important, because we want our manager agent to communicate clearly to the sub-agent; if it doesn't, the sub-agent can't do its work properly and the system will fail. So: "When you delegate a task to the Email Manager Agent, be as detailed as possible when instructing it what to do, and include all relevant details", and I even add an example: "For example: please retrieve all unread emails from Ben's email inbox from the last month." And then we go through all of them. For tools we do the same, except we don't have to specify how to communicate, because you don't converse with tools; instead we can describe the expected output of the tool and things like that: "This tool sends messages to Ben through WhatsApp. You will always use this tool to communicate any updates, questions, etc. back to Ben."
Then lastly, the examples. This is an interesting section, because in these agent systems it's more about following a process, an SOP, and not really about an input and an output, so you might ask what to put in the examples section, and many people don't even use it. In my experience it's actually really good to still use it, and it will improve the outcome a lot. Instead of an input/output pair, what I think works best in agent systems is to put in an input, an example request, and then an example of the SOP for that request, so it understands what to do with one or two requests; in my experience this really helps. For example, you can see example one here. Input: "Please schedule a meeting with John Doe tomorrow at 3 p.m. to discuss the new project. Also send him an email confirming the meeting invite." Then we show the SOP it should follow: decide which agents to delegate to, the Calendar Manager Agent and the Email Manager Agent; then instruct the Calendar Manager Agent, and we even write out what it would have to say: "Please schedule a meeting with John Doe tomorrow at 3 p.m. São Paulo time to discuss the new project", and the same for the Email Manager Agent. The last step is to communicate back to me: report everything that has been done by your sub-agents back to Ben via the Send WhatsApp tool, and then again an example: "The meeting with John Doe has been booked at 3 p.m. São Paulo time, and the email confirming the meeting has been sent." You can even add in the email itself, so it knows how to communicate back to you. In my experience this really helps when you see your agent struggling to follow its SOP. And by the way, what I noticed helps a lot: if you have edge cases where it goes wrong (in general things go well, but some edge cases don't), put those edge cases in as the examples. The same goes for prompting inside AI tools with a long structured prompt: if you give them examples of how to handle those edge cases, you'll see the outcome for those edge cases gets a lot better. And then lastly the notes, same as in the long structured prompt: double down on any important rules, and double down on anything important in the SOP, etc.
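Putting those sections together, here's an abbreviated sketch of an agent prompt along the lines described above, written as a Python string. The wording is reconstructed from the examples in this section and heavily shortened, so it's an illustration, not the exact production prompt.

```python
# Abbreviated agent-prompt skeleton: role, objective, context, SOP,
# instructions, tools & sub-agents, examples, notes.
AGENT_PROMPT = """# Role
You're Ben's world-class personal assistant manager agent, with a particular
expertise in the meticulous and accurate management of his email inbox,
calendar and Notion databases.

# Objective
Your goal is to help Ben with tasks related to his inbox, calendar and to-do
list using your team of specialized sub-agents. Think step by step. You have
three key responsibilities: 1) decide which sub-agents to delegate tasks to,
2) communicate clearly to the sub-agents what needs to be done, 3) report any
work done by your sub-agents back to Ben using the Send WhatsApp tool.

# Context
You are triggered by a request from Ben; accurate task management is vital.

# SOP
1. Decide which sub-agent(s) to delegate the task to.
2. Instruct each sub-agent, including all relevant details.
3. Report everything that was done back to Ben via the Send WhatsApp tool.

# Instructions
- Always be as detailed as possible when instructing a sub-agent.

# Tools & sub-agents
You've been equipped with one tool and three sub-agents.
## Email Manager Agent
- What it does: handles all tasks related to Ben's inbox (get, send, write emails).
- When to use: for any task related to Ben's email inbox.
- How to communicate: be as detailed as possible, e.g. "Please retrieve all
  unread emails from Ben's inbox from the last month."
## Send WhatsApp tool
- Sends messages to Ben; always use it to communicate updates back to him.

# Examples
## Example 1
**Input:** "Please schedule a meeting with John Doe tomorrow at 3 p.m. to
discuss the new project, and send him an email confirming the invite."
**Your SOP:** delegate to the Calendar Manager Agent and the Email Manager
Agent, instruct each in detail, then report back to Ben via Send WhatsApp.

# Notes
It is vital to Ben's career that you always follow the SOP above, step by step.
"""
```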
Lastly, very quickly, on prompting the o1 models. This is probably going to change prompting a bit for these specific models, because they're fundamentally designed in a different way. We're very new to this and it will also depend on the use case, but at least what OpenAI have put out there is that keeping prompts a little simpler seems to work better for these types of models, so we might not have to go extremely long. I think it's still good to have the framework in mind, but we might not have to give as much context as we do in the long structured prompt, because they say these models are trained to work best with simple, straightforward prompts; so the task or objective part of the prompt becomes very important. They won't need extensive guidance, because they can find the optimal path themselves. They also recommend avoiding chain-of-thought prompting, because chain of thought is already baked into the model: adding your own reasoning in the prompt won't help and might even hinder performance. And thirdly, it seems you should still use markdown formatting, just like in the long structured prompt, because it still seems to perform better with it; but we'll see how this develops over the coming weeks and months. I think the o1 model will become very interesting for these agent systems, because as I said, agents have a lot of responsibility at the moment, and that's why they sometimes fail. If we can get an o1 model to generate the SOP for our agents, we take some responsibility out of the agent's hands, and it can just execute the tasks instead of coming up with the SOP for each specific request; I think that will improve the reliability of our systems a lot. So I see a lot of potential in using o1 to generate a specific SOP for each request before letting our agent execute on those tasks and use the tools, because of course o1 can't use tool calling at the moment, so we can't use it as our actual agent.
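Here's a small sketch of that idea: use an o1-class reasoning model to draft a request-specific SOP before handing the request to the agent. The model name and the helper are my assumptions, and the sketch only sends a plain user message since these models handle simple, direct prompts well.

```python
from openai import OpenAI

client = OpenAI()

def draft_sop(user_request: str) -> str:
    """Ask a reasoning model to write a numbered SOP for this specific request."""
    response = client.chat.completions.create(
        model="o1-mini",  # assumption: any o1-class reasoning model
        messages=[{
            "role": "user",
            "content": (
                "Write a short, numbered SOP for an AI assistant that manages an "
                "email inbox, calendar and to-do list. The SOP should cover only "
                f"this request:\n\n{user_request}"
            ),
        }],
    )
    return response.choices[0].message.content

# The generated SOP can then be injected into the agent's prompt before it runs,
# so the agent only has to execute the steps rather than come up with them.
```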
So now I want to go over the most common prompting use cases and my recommendation for which prompting framework and which language model to use for each. First we have extracting data. Again, this is a very broad category, there are lots of types of data extraction, and I've also included summarization in it, but you'll come across these use cases a lot when designing these AI systems: extracting URLs from Google search results, getting specific data points from a scraped LinkedIn profile, really any data you capture out of a longer text, or even summarization of a longer text, I consider extracting data. LLMs are actually really good at identifying specific data points inside a longer text, so in my experience we can get away with short structured prompting here. Of course, take that with a grain of salt, because it really depends on the complexity of the task; if we have a really complex data extraction prompt, we'll go long. But in general you can get away with short structured prompting. And because these language models are so good at it, we can also get away with the cheaper models. I'm a big fan of GPT-4o mini: very cheap, and it seems to perform really well, but any of the cheaper models will probably do. So for this use case we can usually get away with short structured prompting and cheap models.
Then we have the second one, which is classification or categorization. Common use cases are, for example, email categorization or topic classification. This depends a little on the complexity of the categorization. If it's a really easy one, like the example here, "classify the email below as action needed or no action needed", a really simple way of categorizing, then these LLMs are really good at it, and we can get away with short structured prompting and with cheaper models. But when you get to subjective classification or categorization, not "action needed" or "no action needed" but something more subjective like high priority, medium priority and low priority, we have to give the model context on when we believe something is medium or low priority, and then we do need a long prompt to actually give the rules and instructions for when to classify something which way. So this really depends on the type of classification or categorization, but in general, even with these subjective classifications, in my experience you can still get away with cheaper models.
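As a concrete example, here's a minimal sketch of that simple email classification with a cheap model. The labels follow the example above; the helper and the model choice are my assumptions.

```python
from openai import OpenAI

client = OpenAI()

CLASSIFY_PROMPT = """Classify the email below as "action needed" or "no action needed".

- Only output one of the two labels, nothing else.
- If the email is empty or unreadable, output "no action needed".

Email:
{email}
"""

def classify_email(email_text: str) -> str:
    # Simple, objective classification: a cheap model is usually enough.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": CLASSIFY_PROMPT.format(email=email_text)}],
    )
    return response.choices[0].message.content.strip().lower()
```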
Then we have the third one, which is generating content. Again a really broad category, but I basically mean any prompting use case where you're generating content out of nothing: writing blog posts, writing analyses, writing emails, anything generative, even image generation. Because this is the hardest thing for LLMs, and it's where they hallucinate the most and make the most mistakes, we really want to go with long structured prompting here and give as much context as possible to get the best outcome, and we also always want to use the best language models. So o1, although some people say that for creative writing they still prefer GPT-4o, so you have to see what works best for you. But for generating content we want to use the best models, because this is the hardest thing for these LLMs. For the next one we have evaluation. A common use case I use a lot in my systems is evaluation of a vector search output. I have a quick example here: "You're an AI assistant responsible for verifying the relevance of search results from vector data against the user's question." So we actually check whether the result we get from a vector search, for example when answering a question from a customer, actually makes sense, whether it relates to the original question; the model has to evaluate whether that result is right. Other examples could be sentiment analysis or LLM output quality assessments, which I'm starting to use more and more too. Evaluation is a little bit subjective in general and depends highly on the context, so in general I go for long structured prompts, giving as much context and detail as possible to get the best outcomes, and in general, in my experience, use the best models for this too, because these tasks are usually very important for your overall system functioning well. So that's my recommendation for the evaluation use case.
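Here's a sketch of that vector-search relevance check, with a constrained output so it can drive a condition downstream. The wording is paraphrased from the example above, and the model choice is my assumption.

```python
from openai import OpenAI

client = OpenAI()

EVAL_PROMPT = """# Role
You're an AI assistant responsible for verifying the relevance of search
results from a vector database against the user's question.

# Objective
Decide whether the search result below actually answers or relates to the
user's question.

# Instructions
- Only output "relevant" or "not relevant", nothing else.

User question:
{question}

Search result:
{result}
"""

def is_relevant(question: str, result: str) -> bool:
    # Evaluation is usually worth a stronger model, since it gates the system.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": EVAL_PROMPT.format(question=question, result=result)}],
    )
    return response.choices[0].message.content.strip().lower() == "relevant"
```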
Then we have data transformation, which is also very common in these AI systems. For example, you need specific formatting, like the example I gave before where we need the CV in a specific new format, or you need JSON outputs, or HTML formatting for building a nice Google Doc, whatever it may be. In general, models are pretty good at this data transformation, so we can do short structured prompting, but again it depends, because for that CV reformatting tool I used a long structured prompt, since the output specifications were very specific and very long and there were quite a few things that could go wrong. If it's just HTML formatting or something like that, in my experience we can get away with short structured prompting, although we really do have to put the input and output examples into these short prompts. And even if we go short, for data transformation it's better, in my experience, to go with the higher-quality models to get consistent outcomes.
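As an illustration, here's a sketch of a short data-transformation prompt with the input/output example included, which is the part that matters most here. The HTML structure shown is just an example, not a required format.

```python
# Short data-transformation prompt: objective, rules, one input/output example,
# and the variable. The example pair does most of the work.
HTML_FORMAT_PROMPT = """Convert the CV text below into clean HTML.

- Only output valid HTML, no explanation and no markdown fences.

## Example 1
**Input:** Jane Doe - Software Engineer - 5 years of experience
**Output:** <h1>Jane Doe</h1><h2>Software Engineer</h2><p>5 years of experience</p>

CV text:
{cv_text}
"""

prompt = HTML_FORMAT_PROMPT.format(cv_text="<paste the CV text here>")
```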
And then lastly, of course, we have decision-making, which is again our AI agents. We're always going to use our best models here, because this is the hardest kind of task for these LLMs and it usually involves multiple responsibilities, and we use the agent prompting framework. Now lastly, if you're using Relevance, there are a few more prompting use cases: tool input prompting, sub-agent prompting, labeling and naming tasks, and the flow builder. I'll go over these very quickly. First we have tool input prompting. When we make a tool, we have the name of the tool and the description of the tool, and these are actually read by our agent, so we want to write them in a way that makes it easy for the agent to understand what the tool actually does: "Get email: this tool retrieves emails from my inbox." Just describing very quickly what the tool does will help your agent use it better. Then we have basically the same for the user inputs: the name of the user input will also be read by your agent, for example "unread emails: use only unread", so the description of the user input is basically a prompt to your agent on how to fill it out. We can tell it directly "use only unread when retrieving new unread emails", and so on. So here we want to describe what our agent should do, what it should choose in a dropdown, what it should put in, or a default value: we can really prompt it here.
Also important, in my experience, is to send clean data back to your agent. For example, when we scrape a website or get 50 emails back from the inbox, we don't want to send that completely unorganized blob of text back to our agent, because we'd basically be giving the agent another task, another responsibility, which is cleaning up data, and our agent is already a little bit overwhelmed with the amount of things it has to do. We want to make its life as simple as possible and send clean data back. So here's a quick example, from the get-email tool: "Summarize each email below in 30 words, in a concise way. Still include all relevant info and make sure to mention action items." That way I send the agent a very concise summary of each email instead of a blob of text with 30 emails that it would then have to summarize and process before sending anything back to me.
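Here's a sketch of what "send clean data back to the agent" can look like inside a get-emails tool: each raw email is compressed into a short summary before anything goes back to the manager agent. The function names are hypothetical placeholders, not a Relevance AI API.

```python
from openai import OpenAI

client = OpenAI()

SUMMARIZE_PROMPT = """Summarize the email below in about 30 words, in a concise way.
Still include all relevant info and make sure to mention any action items.

Email:
{email}
"""

def get_emails_tool(raw_emails: list[str]) -> str:
    """Return a clean, compact digest to the agent instead of a blob of raw text."""
    summaries = []
    for email in raw_emails:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # cheap model is fine for summarization
            messages=[{"role": "user", "content": SUMMARIZE_PROMPT.format(email=email)}],
        )
        summaries.append("- " + response.choices[0].message.content.strip())
    return "\n".join(summaries)
```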
Then we have sub-agent prompting, important prompt techniques for sub-agents. In the sub-agent section of your manager agent there's a setting called "prompt for how to use". This is where you can double down and tell your manager agent again what this agent does, and that's all you do there: "This agent gathers competitor data from review sites, Google News, blogs, etc." So again we double down and let our manager agent know very clearly what this agent can do. Then there's another option there, which is a template for communication. As I said before, we want to make sure our manager agent clearly communicates to our sub-agent what it has to do, because if it doesn't, our system fails. We can do this through a template for communication. This example is from another agent, but it's something like "please do competitive research on these companies, on these platforms, for the past so many days." So we can specify that it always has to phrase the request like this to our sub-agent, to make sure the sub-agent receives all the data it needs to actually perform its actions. And then we can add in variables: for these variables (you can see "companies" here) we define, again in another prompt to our agent, how to actually fill them in. So I tell it the companies that have to be researched, the platforms that have to be researched, and so on. There's a lot of prompting here, but it can be helpful.
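Here's a sketch of such a template for communication, with the variables the manager agent is prompted to fill in. The variable names and values are illustrative, not Relevance AI's exact fields.

```python
# Illustrative template for how the manager agent should phrase its instruction
# to the competitor-research sub-agent; {companies}, {platforms} and {days}
# are the variables the manager agent is prompted to fill in.
COMMUNICATION_TEMPLATE = (
    "Please do competitive research on {companies} across {platforms} "
    "for the past {days} days, and report all findings back to me."
)

instruction = COMMUNICATION_TEMPLATE.format(
    companies="Acme Corp, Globex",                    # the companies to research (hypothetical)
    platforms="review sites, Google News, blogs",     # the platforms to research
    days=30,
)
```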
I've noticed that even just adding how to communicate in the system prompt already works quite well, but you can really double down on reliability through these templates for communication. Then there's another way to reinforce what our sub-agent does: in the agent profile of the sub-agent we have the agent description and agent name, and again we want to describe there very well what the sub-agent does, because our manager agent will read that. Another thing I do with sub-agents, and I'm not sure if this is actually necessary or if it's just hard-coded in, is that I always tell my sub-agents to report the information back to my manager agent. I don't know if it's needed, but I usually add it to the SOP as the last step ("you report all the links back to your manager agent"), I make clear what it has to send back, and I add that in the flow builder as well. Then we have a few more prompting things inside Relevance AI specifically. The first one is labeling tasks: we can organize our dashboard and have labels for what has been done with each task, and this is basically a prompt too. We can say "blog posted" (that's the actual label) and then we prompt when to give it that label: after the post-to-Webflow tool has successfully run. You can do this if you want to organize things a little better and have a better overview. In my experience I'm not a huge fan of it, because I notice it sometimes runs into errors because of this feature specifically; from my understanding it's basically just a tool that Relevance added in the background. But if you want better organization, you can use it.
Then we have one more thing, which is naming tasks: basically the same idea, but instead of labels we name the different tasks in a specific way. In the advanced settings there's an option for instructions for naming tasks, where we can tell our agent how to name them, for example "use the video title for naming the task". Again, I'm not a huge fan, because we're usually already overwhelming these agents with things to do, and if we add this on top you can imagine the reliability might go down a bit. And then lastly we have the flow builder, also specific to Relevance AI. This becomes important when you have a very specific or complicated SOP that has to be followed. You can of course double down on that SOP in your prompt, but it's sometimes very difficult to write out an SOP in natural language; writing out conditions and so on inside a prompt is not very easy. So we can use this flow builder to double down on the SOP and make it clearer, and in my experience, when you have one of these complicated SOPs, it really helps to do this. I'm not going to go into detail, but basically what you can do is add instructions (do this, then do this, then do this), and if you want, you can add in a condition, and then you have a separate branch it has to follow depending on whether that condition was met. So use it if you have a complicated SOP.
Now lastly, I just updated my AI agent prompt helper tool. Again, it doesn't replace prompting, but it helps you write agent prompts better and faster, because writing these good agent prompts can be quite time-consuming and they are very important; that's really what this tool does. I'll make sure to share this agent prompt helper tool inside my community for free, so you can use it yourself. You can see here: this tool is meant to help you save time and optimize prompts for AI agents and AI agent teams. You can keep your input short and fast and the tool will do the rest; your SOP is the only input that should actually be detailed, because my prompt helper can't come up with your specific SOP for you. So the idea is that you type things in very quickly and it automatically improves clarity, expands a little on the prompt, and adds in those proven best practices automatically. Maybe most importantly, it auto-generates the examples section and the notes section, which can also be quite time-consuming to write, although it is important to double-check that the generated examples section actually makes sense before you put it to use. It also formats your prompt in markdown. I put in a quick example here: you can type in very quickly "You're a personal assistant for Ben; you handle his tasks related to his email inbox", so you can be very short, and the prompt helper will expand on it, add those proven best practices, and add context: you're doing this to help Ben manage his personal administration. The SOP I actually wrote out, the instructions (the rules) very quickly, and the tools and sub-agents. Then we run the tool, without adding any examples, and you can see it generated the AI agent prompt: "You're an expert personal assistant for Ben ... with a particular knack for managing ...", so it added that expert or world-class role prompt and the qualities. "Your goal is ...": it expanded a bit on the objective and added "think step by step". The SOP is still the same, and in the instructions it added the emotional manipulation tricks ("it is vital to my career", "I will tip you ...", and so on), and it expanded a bit on the tools and sub-agents. You can also see it generated examples: "schedule a meeting with Dr. Smith at 10 a.m. next Monday", and the output is an example of the SOP to follow. Let's see if it actually did it right: decide which agent to use, the Calendar Manager Agent, correct; communicate clearly to each, "please schedule a meeting with Dr. Smith at 10 a.m. next Monday, São Paulo time", done well; and always communicate updates back to Ben, "meeting with Dr. Smith scheduled for 10 a.m. next Monday". So it did the example perfectly, and you don't have to waste time on that. It did another one too, adding "buy groceries" to my to-do list. And it also added the notes section, where it doubles down on the most important rules and things to follow.
So again, it doesn't replace your prompting, but it might help you write prompts a little bit better and a little bit faster. That's almost it for this video, and I hope it was useful. I added one more thing here, which is an LLM pricing calculator, which I've found very useful when you're trying to price a system for clients, because it's sometimes very hard to predict the LLM cost when you start running these systems on a large volume of tasks. What I usually try to do is run the system once through an OpenAI API price calculator: you can basically put in the prompts and outputs, and through that you can see the cost per run. So you run your whole system through it once, which is a bit of a hassle, to see the price per run, and from that you can calculate how much it will cost to run on 5,000 or 10,000 tasks.
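Here's a back-of-the-envelope version of that calculation in code. The per-million-token prices below are placeholders you'd replace with current model pricing, and the token counts are assumed to come from one full test run of your system.

```python
# Rough cost estimate for running a system at volume.
# NOTE: prices are illustrative placeholders; check current model pricing.
PRICE_PER_M_INPUT = 0.15    # $ per 1M input tokens (placeholder)
PRICE_PER_M_OUTPUT = 0.60   # $ per 1M output tokens (placeholder)

def cost_per_run(input_tokens: int, output_tokens: int) -> float:
    """Cost of one full run of the system, given measured token counts."""
    return (input_tokens / 1_000_000) * PRICE_PER_M_INPUT + \
           (output_tokens / 1_000_000) * PRICE_PER_M_OUTPUT

# Example: token counts measured from one full test run of the system.
one_run = cost_per_run(input_tokens=12_000, output_tokens=3_000)
print(f"Per run: ${one_run:.4f}")
print(f"5,000 runs: ${one_run * 5_000:.2f}")
print(f"10,000 runs: ${one_run * 10_000:.2f}")
```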
Now, that's it for this video. Thank you so much for watching if you're still here, and if you got any value out of it, I'd highly appreciate it if you could like and subscribe, and maybe leave a comment. Again, thank you so much, and I'll see you in the next one.