Prompting a model is basically all about asking the question that you want the model to answer. Simple enough, right? But it's actually a lot more nuanced than that. There are a number of techniques you can use to get the model to provide the answer you actually want; otherwise you may get answers that are downright frustrating. All of these techniques boil down to one main thing you need to remember: the key to getting the model to work for you is to not assume that the model knows what you're talking about. You need to be explicit about what you're looking for, and that usually means spelling out your needs in more detail than you might initially think. So in this video in the Ollama course, we're going to look at the most common prompting techniques that you can use to help get better answers from your models. I'm Matt Williams, and I was a founding maintainer of the Ollama project. It was amazing being there at the beginning, but now I am focused on building out this channel and getting all of you up to speed with all things local AI. You can find out
more about Ollama by visiting the ollama.com homepage, clicking the big button in the middle of the page, and getting it installed on your system. If you need help using Ollama, look at the rest of the videos in this free course. Okay, let's start with the simplest kind of prompt: the zero-shot prompt. This is what most folks use all the time; it's a simple question. It works well when the model has been trained on the topic in the question. For instance, my zero-shot prompt might be: determine the sentiment of the following text: movies are expensive. This is a great starting point, but it shows one of the problems that come up with simple questions. The model couldn't read my mind, so it didn't know that I wanted it to answer with only a single-word sentiment. I can edit the prompt to say what I actually want to see, and now when I run it again it gives me a good answer. Change the text, and again it answers correctly. Let's try one other bit of text. These all worked great, and it was all just a matter of giving the model just a little bit
more detail. What kinds of details could you add? Well, there are lots of things. One is the persona you want the model to emulate: tell the model "you're a surgeon", or "you're an experienced YouTube SEO expert", or "you're a lethargic idiot who sits on your butt all the time". Well, maybe not that last one; maybe that's just that old friend from high school and not someone you actually want the model to emulate. There's also the context, which helps the model know exactly what you're talking about, and the tone, which is the feeling you're going for in the answer.
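Persona, context, and tone can be stitched onto an explicit zero-shot task programmatically. Here's a minimal sketch; the helper name and all the prompt strings are my own illustration, not something shown in the course:

```python
def build_zero_shot_prompt(task, persona="", context="", tone=""):
    """Assemble a zero-shot prompt from an explicit task plus optional
    persona, context, and tone details."""
    parts = []
    if persona:
        parts.append(f"You are {persona}.")
    if context:
        parts.append(f"Context: {context}")
    if tone:
        parts.append(f"Use a {tone} tone in your answer.")
    parts.append(task)
    return "\n".join(parts)

prompt = build_zero_shot_prompt(
    task=(
        "Determine the sentiment of the following text. "
        "Answer with a single word: positive, negative, or neutral.\n"
        "Text: Movies are expensive."
    ),
    persona="an experienced film critic",
    tone="neutral, matter-of-fact",
)
print(prompt)
```

You could then feed the result to a local model, for example by piping it into `ollama run` with whatever model tag you have pulled.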
But sometimes a zero-shot prompt can't cover everything, so a few-shot prompt can be useful. The main idea here is giving the model examples of how to answer; essentially, you're training the model to answer in a specific way with a question and a prepared answer, or answers. There was a paper that discussed this back in 2020, and it gave some examples, some strange examples. The first takes a sentence where there is something wrong with the way it's said, and the model should simply correct it. It isn't in the form of a question, so the model should see the input sentence "I'd be more than happy to work with you in another project" and correct it to "I'd be more than happy to work with you on another project". Sometimes you don't actually need multiple examples but can instead use just a single example, like this one: it defines a new word, a "whatpu", and provides an example of how to use it in a sentence, and then it defines a "farduddle" and expects another example sentence, which the model is able to produce. You know what needs no definition or example sentence? Liking and subscribing. You've heard it so often it's almost cliche, right? But it really helps a channel like mine, so if you enjoy content like this, please like and subscribe. Okay, back to the prompts. In most cases, coming up with a single example is easier than coming up with many examples. Some will refer to this as a single-shot prompt, but others will call even a single example a few-shot prompt.
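The shape of a few-shot prompt — an instruction, one or more worked examples, then the new input — is easy to assemble in code. A small sketch; the helper name and the Input/Output scaffold are my own, not the paper's formatting:

```python
def build_few_shot_prompt(instruction, examples, query):
    """Format an instruction, a list of (input, output) example pairs,
    and a final query into a few-shot prompt."""
    blocks = [instruction]
    for given, expected in examples:
        blocks.append(f"Input: {given}\nOutput: {expected}")
    # Leave the final Output blank for the model to complete.
    blocks.append(f"Input: {query}\nOutput:")
    return "\n\n".join(blocks)

few_shot_prompt = build_few_shot_prompt(
    instruction="Correct the grammar of each sentence.",
    examples=[
        ("I'd be more than happy to work with you in another project.",
         "I'd be more than happy to work with you on another project."),
    ],
    query="She no went to the market.",
)
print(few_shot_prompt)
```

With a single tuple in `examples` this is a one-shot prompt; append more tuples to make it few-shot.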
Let's move on to the next prompting technique: chain of thought, which you might see abbreviated as CoT. The paper that talks about this is from 2022, and one of the main examples asks the model to figure out how many apples a cafeteria has at the end of the day. To do this, the model is asked to think about the process of coming up with an answer step by step. So here's the prompt: "Roger has five tennis balls. He buys two more cans of tennis balls. Each can has three tennis balls. How many tennis balls does he have now?" Then an example answer is given, working through the problem: "Roger starts with five balls. Two cans of three tennis balls each is six balls. 5 + 6 = 11. The answer is 11." And finally the actual question is asked: "The cafeteria has 23 apples. It used 20 to make lunch and bought six more. How many apples do they have?" The model will try to replicate the example in the prompt. In the paper, the answer given is: "The cafeteria had 23 apples originally. They used 20 to make lunch, so they had 23 - 20 = 3. They bought six more apples, so they have 3 + 6 = 9. The answer is 9."
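That worked example plus the real question can be packaged as a single chain-of-thought prompt. A sketch — the layout is my own, though the numbers come straight from the paper's example:

```python
# One worked example that shows the reasoning, then the real question
# left open for the model to answer in the same step-by-step style.
cot_prompt = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
    "Q: The cafeteria had 23 apples. It used 20 to make lunch and bought 6 more. "
    "How many apples do they have?\n"
    "A:"
)

# Sanity-check the arithmetic the model is expected to reproduce.
assert 5 + 2 * 3 == 11
assert 23 - 20 + 6 == 9
print(cot_prompt)
```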
The paper provides a few more examples of using this style of prompt. But then there was another paper from that year suggesting that just adding the phrase "let's think step by step", or some variation of it, might be just as effective. Sometimes the problem is a bit more complicated than what can be described with chain of thought. Since we saw that the instruction to think step by step is sometimes just as good, we can also ask the model to think of all the subproblems that must be solved before answering the question, then to think step by step through each of the subproblems, and finally to use all of that to answer the original question. A paper in 2023 referred to this as least-to-most prompting. Sometimes this can be done in a single prompt, and other times you may have more success using an agentic approach: generating the subproblems in one prompt, then asking the model separately to solve each subproblem, and finally using all that information to come up with the final answer.
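That agentic flavor of least-to-most can be sketched as a three-stage pipeline. Everything here — the function names and the stand-in `ask` callable — is my own illustration; in practice `ask` would wrap a call to your local model:

```python
def least_to_most(question, ask):
    """Three-stage least-to-most pipeline: decompose the question,
    solve each subproblem, then answer using the accumulated work."""
    # Stage 1: ask for the subproblems, one per line.
    subproblems = ask(
        "List the subproblems that must be solved before answering:\n" + question
    ).splitlines()
    # Stage 2: solve each subproblem, feeding earlier answers forward.
    notes = []
    for sub in subproblems:
        context = "\n".join(notes)
        notes.append(ask(f"Given what we know so far:\n{context}\nSolve: {sub}"))
    # Stage 3: answer the original question using all the workings.
    workings = "\n".join(notes)
    return ask(f"Using these results:\n{workings}\nAnswer: {question}")

# A stub `ask` that just echoes the last line of its prompt, so the
# pipeline runs deterministically here without a model.
def echo(prompt):
    return prompt.splitlines()[-1]

result = least_to_most("How many apples are left?", ask=echo)
print(result)
```

Swapping the `echo` stub for a real model call gives you the one-prompt-per-stage behavior described above.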
Hopefully, as you've watched all the videos in this course, you'll have learned how models work: they simply predict the next most statistically likely word given the context. When you ask the same question multiple times, you're likely to get a different answer each time. So you could try asking the model to give you a few variations of the answer. How many variations? Well, it might be hard for you to process an infinite number, but three is a good set of answers to be able to go through.
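One common way to use those variations is to sample several answers and keep the one that comes up most often, an approach the literature calls self-consistency. A sketch — the helper and the hard-coded sample answers are mine for illustration; real samples would come from repeated model calls with a non-zero temperature:

```python
from collections import Counter

def most_common_answer(answers):
    """Pick the answer that appears most often across several samples."""
    return Counter(answers).most_common(1)[0][0]

# Pretend we asked the same question three times and got these back.
samples = ["The answer is 9.", "The answer is 9.", "The answer is 8."]
print(most_common_answer(samples))
# → The answer is 9.
```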
These are probably the most fundamental prompting techniques used today, but there are many others that are variations and combinations of them. You'll hear names like meta prompting, sequential symbolic reasoning, ReAct, and others that take these core ideas and tweak them a bit. It's definitely worth looking into some articles about each to give you inspiration on how to get better answers from your models. You wouldn't be that far off if you realized a lot of this is mostly just common sense, but it's often great to get reminded of even the most obvious things when working with LLMs. Some folks like to call this prompt engineering, though I'm sure every engineering school will think that it's a disgusting title, since the rigors of engineering hardly apply; Microsoft got sued over the use of the title "sales engineers". But you can call it whatever you like. Are there other techniques that you like to use when working with prompts? Share them in the comments below. Thanks so much for watching. Goodbye.