In the next few lessons I will introduce you to the fundamentals of prompt engineering through various examples. We'll start by understanding the importance of prompts, followed by the types of prompts, and we'll then explore a few examples of how to craft these prompts. I'll also explain what hallucination is and how to avoid it through prompt engineering. Finally, we'll take a closer look at the key techniques used in prompt engineering. Let's get started. So in this lesson I'm going to introduce prompt engineering in generative AI. Essentially, generative AI turns English into a new programming language,
because natural language is the new syntax of generative AI. Similar to how optimized code produces efficient applications, well-crafted prompts enable AI models to produce accurate output. So designing a prompt is more of an art than a science. Prompts influence the model to generate the right output without explicitly training or fine-tuning it. Fine-tuning is a technique that we'll explore in the upcoming lessons, but without explicitly fine-tuning or retraining the model, you can get what you want by crafting the right prompt, and that is what is called prompt engineering. So prompt engineering is
the practice of carefully designing and refining input prompts to guide an AI model's response in a desired direction. It can be especially useful when dealing with large language models like GPT, where the quality and nature of the output can be significantly influenced by the wording and structure of your input prompt. Prompt engineering involves both an understanding of how the model processes the input and a creative approach to designing prompts that guide the model towards the desired output. It is often used to fine-tune the performance of an AI system without really changing the underlying model or
the training data. Prompt engineering essentially influences the model to respond based on a specific requirement. Lack of prompt engineering leads to inaccurate and factually incorrect responses, which are often called hallucinations. So it's important to understand the significance of prompt engineering in the context of generative AI. Essentially, what you ask is what you get: the more detailed and crisp your prompt, the better the output. So in the coming lessons we'll take a closer look at some examples that will drive the LLM towards generating the
desired output. So, having understood the significance of prompt engineering, let's take a look at the key types of prompts that are often used with LLMs. There are a variety of techniques, and obviously this section only deals with some of the important ones; prompt engineering really deserves an entire course, so we're not going to go into all the details, but I'm going to cover some of the important aspects and types of prompt engineering. As you can see, there are a variety of prompts, and we'll take a closer look at each of them. So there is an explicit
prompt, something like "write a short story about a young girl who discovers a magical key" and so on. Now here, if you carefully notice, we are explicitly asking the LLM to write a short story, and the theme is unlocking a hidden door to another world. So this is being as explicit as possible, and this basically drives the LLM towards generating an output that matches your desired or expected outcome. The more explicit you are, the better the output is.
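To make this concrete, here is a minimal sketch of sending that explicit prompt to a model with the openai Python package; the client usage is standard for the v1 library, but the model name and the exact prompt wording are my assumptions for illustration:

```python
# A minimal sketch of an explicit prompt, assuming the `openai` package
# (v1 style) is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

# Explicit prompt: we state the task (a short story) and the theme directly.
explicit_prompt = (
    "Write a short story about a young girl who discovers a magical key "
    "that unlocks a hidden door to another world."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; use whichever model you have access to
    messages=[{"role": "user", "content": explicit_prompt}],
)
print(response.choices[0].message.content)
```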
Similarly, there are conversational prompts, which you typically use when you're dealing with a chatbot. In this case we are interacting with the chatbot as if we are talking to a human: "can you tell me a funny joke about cats?" Now this is a conversational prompt, and the exchange can continue; the model can tell you a joke, you can ask something more about it, and the conversation just goes on and on. Such prompts are called conversational prompts.
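A conversational prompt is really just the first turn of a growing chat history. Here is a rough sketch of how the follow-up question gets appended to that history, with the same assumed model name as before:

```python
from openai import OpenAI

client = OpenAI()

# The conversation so far: each turn is appended to the history so the
# model can respond in context.
messages = [
    {"role": "user", "content": "Can you tell me a funny joke about cats?"},
]

reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
messages.append({"role": "assistant", "content": reply.choices[0].message.content})

# Continue the conversation by asking a follow-up about the joke.
messages.append({"role": "user", "content": "Ha! Can you explain why that's funny?"})
reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(reply.choices[0].message.content)
```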
Then there are instructional prompts, which will generate more useful, structured content, something like a blog post. In such a prompt we are literally instructing the LLM to write a detailed blog post discussing the benefits and drawbacks of renewable energy, we specify that the outline should be structured as follows, and we also define the sections of the blog post. Now this is being very instructional, or prescriptive, with your prompt: you are essentially handholding the LLM and leading it towards the expected outcome. That is what is called an instructional prompt.
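The transcript doesn't reproduce the exact outline from the slide, so the section names in this sketch are placeholders; the point is simply how an instructional prompt spells out the structure it expects:

```python
# A sketch of an instructional prompt. The section names below are
# hypothetical placeholders, not the exact outline shown in the lesson.
instructional_prompt = """Write a detailed blog post discussing the benefits
and drawbacks of renewable energy. Structure the post as follows:

1. Introduction
2. Benefits of renewable energy
3. Drawbacks of renewable energy
4. Conclusion

Keep each section to roughly two paragraphs."""
```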
Then we have a context-based prompt, and we will explore this further as we go along, but context-based prompts provide sufficient context and backstory to the LLM before you ask a specific question. So it's a combination of two things: the context, and a conversational prompt. In this case I'm asking it to suggest tourist attractions and local restaurants based on my planned trip to Paris next month. If you see, I have given sufficient context, which is my planned trip to Paris next month, and the model comes back to me based on the prompt that follows the context. So here the context is the trip to Paris, and the prompt itself asks for recommendations about restaurants, tourist attractions and so on. That is a context-based prompt. The more you feed in as context, the better the LLM's answer will be. You can even copy and paste some context from other websites or your private data, and this will literally enable the LLM to derive additional context. Think of it like the backstory that you're providing to the LLM before it answers your question.
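In code, a context-based prompt is often just the backstory concatenated with the question. The extra details below (length of stay, food preference) are hypothetical, added only to show what richer context looks like:

```python
# A sketch of a context-based prompt: backstory first, question second.
context = (
    "I am planning a trip to Paris next month. I will be staying for five "
    "days and I enjoy art museums and vegetarian food."
)
question = "Can you suggest tourist attractions and local restaurants for me?"

# The context could just as well be pasted in from a website or your own notes.
context_prompt = f"{context}\n\n{question}"
```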
Then there are open-ended prompts, which are very broad: they don't really have any context, they don't have any conversational style, and they force the LLM to be creative and come back with a typically pretty large answer. In this case we are asking about the impact of AI on society, which is a very broad topic. Notice that we are not adding any context and we are not instructing the LLM; we are simply asking a very open-ended question. This is a typical example of an open-ended prompt.
Then there is something called a bias-mitigating prompt. As you understand, LLMs are trained on large corpora, which include publicly available datasets, and those datasets are inevitably biased because they are ultimately generated by humans, and humans tend to have biases in their thinking and in their writing. So when you train an LLM on data that's available in the public domain, you obviously see certain biases. To avoid that, you can steer the LLM towards bias mitigation. Now in this example I am picking a very sensitive topic, which is caste-based reservations in India. While I give this topic as input to the LLM, I also provide additional instructions that say: avoid favoring any particular group, ideology or opinion; focus on factual information supported by reliable sources; and strive for inclusivity and fairness. This is a pretty detailed prompt, and it very clearly instructs the LLM to stay away from any bias or any opinion that is skewed towards a specific group, ideology or community. This is how you derive objective and factually correct information from LLMs without letting them hallucinate or generate biased content. So when you are crafting a prompt on any topic where you know there are biases and highly skewed opinions, this could be one of the ways to avoid the LLM's response being biased: by specifically mentioning what to filter and what to avoid when it generates the answer. That is a bias-mitigating prompt.
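As a sketch, a bias-mitigating prompt pairs the sensitive topic with the explicit guardrails quoted in the lesson; the surrounding wording here is my own:

```python
# A sketch of a bias-mitigating prompt: the topic plus explicit guardrails.
topic = "caste-based reservations in India"

bias_mitigating_prompt = (
    f"Explain the debate around {topic}. "
    "Avoid favoring any particular group, ideology or opinion. "
    "Focus on factual information supported by reliable sources, "
    "and strive for inclusivity and fairness in your answer."
)
```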
Now you can actually take all of these and try them with ChatGPT. The resources section has a PDF with all these prompts, so feel free to copy and paste them into the ChatGPT window and see the responses for yourself. Of course, the responses might be slightly different from what you see on the screen, because LLMs respond differently each time; they are non-deterministic. But all these prompts are available for you to play with. And finally, the most interesting part of prompt crafting, which is code generation. LLMs have been trained not just on the textual data that is available in the public domain but also on various code snippets and code repositories, and that obviously makes them capable of responding to prompts that deal with code. In this example I'm asking GPT to write a Python function that takes in a list of integers as input and returns the sum of all the even numbers in the list. Now this is a slightly involved code snippet; of course it's not very complex, but for a beginner it would require some research, looking up docs or Stack Overflow and so on. ChatGPT is smart enough to come back with a code snippet that you can literally copy, paste and see for yourself. Here it actually creates a function called sum_even_numbers and then writes the entire logic. It also creates another code snippet that invokes the function with some sample data, and it comments on what the output would be. So this is a very helpful mechanism to generate code that can go into your applications. Of course, you've got to be slightly cautious: you've got to test this thoroughly before you include it in your production code. But you can definitely rely on LLMs to generate code to accelerate your programming.
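The transcript doesn't include the exact code ChatGPT produced on screen, but a function matching that description would look something like this:

```python
def sum_even_numbers(numbers):
    """Return the sum of all even integers in the given list."""
    return sum(n for n in numbers if n % 2 == 0)

# Invoke the function with some sample data.
sample = [1, 2, 3, 4, 5, 6]
print(sum_even_numbers(sample))  # Output: 12
```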
So to summarize, we have seen some of the common examples of prompt design that will steer the LLM towards delivering the desired output or outcome, without hallucination and without giving you random answers. So that's about this lesson, where we have seen the examples. In the previous section we saw various examples and mechanisms of crafting the prompt, but there are some well-proven techniques that we can use to get the desired output from the LLM, so in this lesson we'll explore some of those key techniques. Now, essentially, when you are crafting a prompt, it can follow one of
three techniques: zero-shot prompting, one-shot prompting, or few-shot prompting. Let's take a closer look at each of those. Zero-shot prompting is a technique that allows an LLM to perform a task without being explicitly trained on that task. This is done by providing the LLM with a prompt that describes the task, and the LLM uses its knowledge of the world to generate the response. Obviously this is based on the pre-training data that was used to train the model. For example, you could give an LLM the prompt "write a poem about" some topic, and the LLM would be able to generate that poem even though it was never explicitly trained on the task of writing poems; this is again derived from the pre-training data. Zero-shot prompting is made possible by the fact that LLMs are trained on massive datasets of text and code. These datasets contain a wide variety of information, including information about different tasks and how to perform them, and this allows the LLM to learn the general principles of how to perform a task even if it has never seen such a task before. So zero-shot prompting is basically asking a direct question without any examples and without any additional context. These are some of the examples: translate this sentence from English to French; summarize this article in 100 words; answer the following question. If you see, there is no precedent and no example that we are setting, and in many cases we don't even provide the context, but the model is able to generate the output just based on a singular prompt. This is called zero-shot prompting.
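Here is what one of those zero-shot prompts looks like as a plain string: a single direct instruction with no examples attached (the sentence to translate is my own placeholder):

```python
# Zero-shot: a direct instruction, no examples, no extra context.
zero_shot_prompt = (
    "Translate the following sentence from English to French: "
    "'The library opens at nine in the morning.'"
)
```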
Now, one-shot prompting is a slightly more advanced technique. It allows a large language model to perform a task after being shown a single example. This is done by providing the LLM with a prompt that describes the task, and the LLM uses the example to learn how to perform it. So you give at least one example to the LLM, and it uses that as a reference to finish the output. One-shot prompting refers to providing a single instruction, together with a single example, to an LLM and receiving a coherent and complete response. One-shot prompting is made possible by the fact that LLMs are able to learn from very small amounts of data; essentially, that's the context or the examples that you provide. This is because LLMs are trained on massive datasets and have the ability to look at the preceding examples and learn from them, even if they are not completely familiar with the domain. So when an LLM is given a single example, it can use that example to learn the specific details of how to perform the task. One-shot prompting is still an evolving technique, but if you use it you're going to see a significant improvement in the responses. So let's take a few prompts that use the one-shot prompting technique. Here you provide a prompt: "write a short story about a detective solving a mysterious murder"; you are giving enough hints and examples that the LLM can leverage. "What are the symptoms and treatment options for seasonal allergies?" This is basically a one-shot prompt where you have the instruction plus an example. Now, "provide a step-by-step guide" is the instruction, and "on how to make a classic Margherita pizza" is the hint, or the one-shot example, of what you want the LLM to generate. So if you look at the difference between zero-shot and one-shot, they basically differ in the way the prompt is constructed: in zero-shot we don't provide any examples or additional context, whereas in one-shot prompting we have some kind of a hint in the form of an example, and that's what is actually called one-shot.
So obviously there is one more technique, which is more powerful, and that is called few-shot prompting. Few-shot prompting is a way of instructing LLMs where the model is given several examples and is expected to understand the task based on those examples. Here we expand one-shot to include more examples, and it then becomes the few-shot prompting technique. This method is often used to gently nudge the model into understanding the context and format of what's expected. By providing several examples within the prompt, we give just enough context for the LLM to derive the pattern; once the LLM analyzes the prompt and understands the pattern embedded within it, it continues to generate similar content. So this is almost pushing, or nudging, the LLM a little bit to generate the output that we really want. Take a look at this example: we describe an animal and then give the name of the animal itself. For the first two we have described the animal and also called out its name, but in the third one we describe the characteristics of that animal without providing the output, and we leave it for the LLM to fill in. This is actually a good exercise for you; again, these prompts are available in the resources section, so feel free to copy and paste this prompt and see what ChatGPT comes back with. So this is called few-shot, where you are providing certain examples and expecting the LLM to fill in the blanks by populating the output. For example, if you want to translate a few words from English to French, you give the translation in a few examples, like two or three of them, and then you give another word without the translation; the LLM will come back and populate it, because it now knows what the first three examples are showing and figures out what needs to be done with the fourth. That is called few-shot prompting, and it's a very powerful technique.
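Here is a sketch of that English-to-French few-shot prompt; the specific word pairs are my own placeholders:

```python
# Few-shot: several worked examples establish the pattern; the last line
# is left blank for the model to fill in.
few_shot_prompt = """Translate English to French.

English: cat
French: chat

English: dog
French: chien

English: bird
French: oiseau

English: horse
French:"""
```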
I strongly encourage you to explore and craft your own prompts that follow the zero-shot, one-shot and few-shot prompting formats.