Prompt Engineering Tutorial – Master ChatGPT and LLM Responses

freeCodeCamp.org
Learn prompt engineering techniques to get better results from ChatGPT and other LLMs.
Video Transcript:
Learn how to get ChatGPT and other LLMs to give you the perfect responses by mastering prompt engineering strategies. Ania Kubow is one of our most popular instructors. And in this course, she will teach you the latest techniques to maximize your productivity with large language models.
Hi everyone and welcome to this course on prompt engineering. My name is Ania Kubow and I'm a software developer as well as a course creator here on freeCodeCamp, as well as on my own channel. This is going to be a unique course for me, as there is going to be a lot less coding going on but a lot more understanding about the topic of prompt engineering, and why some companies are paying up to $335,000 a year, according to Bloomberg, for people in this profession.
And no, no coding background is required. So what are we waiting for? Let's do it.
In this course, we will learn what prompt engineering is exactly, get a brief introduction to AI, a look at large language models or LLMs such as ChatGPT, a look at text-to-image models such as Midjourney, and a look at emerging models. This would include text-to-speech, text-to-audio, and speech-to-text, as well as the prompt engineering mindset, best practices, zero-shot prompting, few-shot prompting, chain of thought, AI hallucinations, vectors, and text embeddings, and also a quick intro to ChatGPT. So let's start off with looking at what exactly prompt engineering is in the first place.
Prompt engineering in a nutshell is a career that came about off the back of the rise of artificial intelligence. It involves humans writing, refining, and optimizing prompts in a structured way. This is done with the intention of perfecting the interaction between humans and AI to the highest degree possible.
Not only that, however: a prompt engineer is then also required to continuously monitor those prompts, ensuring their effectiveness with time as AI progresses. Maintaining an up-to-date prompt library will be a requirement placed onto the prompt engineer, as well as reporting on findings and, in general, being a thought leader in this space. But why do we need it?
And how did it come about from AI? Before moving on, let's actually make sure we are on the same page about what exactly AI is. Artificial intelligence is the simulation of human intelligence processes by machines.
I say simulation as artificial intelligence is not sentient, at least not yet anyway, meaning it cannot think for itself as much as it may seem it does. Often, and this is certainly the case with tools such as ChatGPT, for example, when we say AI, we are simply referring to a term called machine learning. Machine learning works by using large amounts of training data that is then analyzed for correlations and patterns.
These patterns are then used to predict outcomes based on the training data provided. So for example, here we are feeding data saying that if a paragraph looks like this with this type of title, then it should be categorized as international finance. The second paragraph should be put in the category of earning reports and so on.
With some code, we should be able to train our AI model to correctly guess what future paragraphs are about. And that's it. Of course, that is a super basic example and we would need way more data than these five paragraphs right here, but you get the idea.
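To make that idea concrete, here's a toy sketch in JavaScript, not a real machine learning model: we "train" on a couple of labeled paragraphs and guess the category of new text by simple word overlap. All of the example data here is made up for illustration.

```js
// A toy sketch of the idea, not a real machine learning model: we "train"
// on labeled paragraphs by collecting their words, then guess the category
// of new text by simple word overlap. All example data here is made up.
const trainingData = [
  { text: "currency exchange rates rose across global markets", label: "international finance" },
  { text: "the company reported strong quarterly revenue and profit", label: "earnings reports" },
];

function categorize(paragraph) {
  const paragraphWords = paragraph.split(" ");
  let best = { label: null, score: -1 };
  for (const example of trainingData) {
    // Score each training example by how many of its words appear
    // in the new paragraph.
    const score = example.text
      .split(" ")
      .filter((word) => paragraphWords.includes(word)).length;
    if (score > best.score) best = { label: example.label, score };
  }
  return best.label;
}

console.log(categorize("global markets saw currency rates climb"));
// "international finance"
```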
If you want to build your own AI model and understand the concept of machine learning better as a total beginner, please check out my video on it on my channel, Code with Ania Kubow. As of now, rapidly improving generative AI techniques can create realistic text responses, and even images, music, and other media, thanks to the huge amounts of training data and talented developers working on it today. Why is prompt engineering useful?
With the rapid and exponential rise of AI, even its own architects struggle to control it and its outputs. This might be a bit hard to understand, but think of it this way. If you were to ask an AI chatbot what is four plus four, you would expect it to say eight, right?
The result of eight is pretty indisputable. However, imagine you are a young student trying to learn the English language. I'm going to show you just how different responses can be based on the prompts you feed and in turn your learning experience.
For this example, I'm going to be using ChatGPT's GPT-4 model. So let's start with the basics. If you were to type, correct my paragraph, and then paste in a badly written paragraph.
So just like this, today was great in the world for me. I went to a Disneyland with my mom. It could have been better though if it wasn't raining.
Great, the young English learner has a better sentence, but it kind of stops there and the learner is just left to their own devices. And honestly, the sentence really isn't that great anyway. What if the learner could get the best sentences possible from a teacher who understands their interests to keep them engaged?
With the correct prompts, we can actually create that with AI. So let's give it a go and let's write a prompt to do this. So here's the prompt I'm going to give it.
I'm going to write, I want you to act as a spoken English teacher. I will speak to you in English and you will reply to me in English to practice my spoken English. I want you to keep your reply neat, limiting the reply to 100 words.
I also want you to strictly correct my grammar mistakes and typos. And I want you to ask me a question in your reply. Now, let's start practicing.
You could ask me a question first. Remember, I want you to strictly correct my grammar mistakes and typos. So there's my prompt.
You can also go a step further and ask it to correct your factual errors too, which I think would be an excellent addition to the prompt that will benefit the young learner. Okay, so let's go ahead and add that to the prompt. Okay, and let us do its thing.
And great, so now this is way more interactive. As you will see, it's asking you a question and telling you what to do and will provide you with corrections if needed. So in a way, you're communicating with the AI.
It's giving you suggestions and you're learning along the way. It's a completely different experience thanks to the prompt that we wrote. Pretty cool, right?
We're gonna be diving into a bunch of these concepts soon, but first let's start with the basics. Linguistics. Linguistics is the study of language.
It focuses on everything from phonetics, so the study of how speech sounds are produced and perceived. Phonology, so the study of sound patterns and changes. Morphology, the study of word structure.
Syntax, so the study of sentence structure. Semantics, so the study of linguistic meaning. Pragmatics, so in other words, the study of how language is used in context.
Historical linguistics, or the study of language change. Sociolinguistics, or in other words, the study of the relation between language and society. Computational linguistics, so the study of how computers can process human language.
And psycholinguistics, or the study of how humans acquire and use language. So, a lot. Linguistics is key to prompt engineering.
Why? Understanding the nuances of language and how it is used in different contexts is crucial for crafting effective prompts. Not only that, but knowing how to use the grammar and language structures that are universally used will result in the AI system returning the most accurate results.
As you can imagine, the sheer amount of data the model is trained on is most likely to use that standard grammar and language structure. So sticking to the standard is key. Language models.
Imagine a world where computers possess the power to understand and generate human language. A world where machines can chat, write stories, and even compose poetry. In this magical realm, language models come into play.
They are like the wizards of the digital realm capable of understanding and creating human-like text. A language model is a clever computer program that learns from a vast collection of written text. It takes in books, articles, websites, and all sorts of written resources, allowing it to gather knowledge about how humans use language.
Just like a master linguist, it becomes an expert in the art of conversation, grammar, and style. But how does this all work? Well, imagine you feed it a sentence.
The language model will then analyze the sentence, examine the order of words, their meanings, and the way they fit together. Then the language model would generate a prediction or a continuation of the sentence that makes sense based on its understanding of the language. It will weave words together one by one, creating a response that seems like it was crafted by a human being.
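As a toy illustration of that next-word prediction idea, here's a tiny JavaScript sketch that counts which word tends to follow which in a scrap of training text. Real language models use neural networks rather than simple counts like this, but the core idea of predicting what comes next from patterns in training data is the same.

```js
// A toy illustration of next-word prediction using simple bigram counts.
const trainingText = "the cat sat on the mat the cat ate the fish";
const words = trainingText.split(" ");

// Count which word tends to follow which in the training text.
const bigrams = {};
for (let i = 0; i < words.length - 1; i++) {
  const current = words[i];
  const next = words[i + 1];
  bigrams[current] = bigrams[current] || {};
  bigrams[current][next] = (bigrams[current][next] || 0) + 1;
}

// Predict the most likely next word after a given word.
function predictNext(word) {
  const candidates = bigrams[word] || {};
  return Object.keys(candidates).sort((a, b) => candidates[b] - candidates[a])[0];
}

console.log(predictNext("the")); // "cat", seen most often after "the"
```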
Imagine having a conversation with the language model as if you were exchanging ideas with a digital friend. You ask a question, and it responds with a well-crafted answer. You tell a joke, and it counters with a witty remark.
It's like having a language expert by your side always ready to assist and engage in conversation. Now, you may wonder where these language models are used. They can be found in various places, from your smartphone's virtual assistants to customer service chatbots, and even in the creative realm of writing.
They help us find information, offer suggestions, and create content. But remember, while language models possess incredible abilities, they still rely on the human to create and train them. They are, in fact, a fusion of human ingenuity and the power of algorithms, blending the best of both worlds.
Let's start off by looking at the history of language models, starting with the first AI, Eliza, back in the 60s. Eliza is an early natural language processing computer program created from 1964 to 1966 at MIT by Joseph Weizenbaum. Eliza was designed to simulate a conversation with a human being.
Eliza had a special knack for mimicking a Rogerian psychotherapist, someone who essentially listens attentively and asks probing questions to help people explore their thoughts and feelings. Eliza's secret weapon was its mastery of pattern matching. It had a treasure trove of predefined patterns, each associated with specific responses.
These patterns were like magical spells that allowed Eliza to understand and respond to human language. When you engaged in a conversation with Eliza, it would carefully analyze your input, seeking patterns and keywords. It would then transform your words into a series of symbols, searching for patterns that matched those symbols in its repertoire.
Once a pattern was detected, Eliza would work its magic by transforming your words into a question or statement that aimed to explore your thoughts and emotions. It was as if Eliza was holding up a metaphorical mirror, encouraging you to delve deeper into your own thinking. For example, if you said something like, I feel sad, Eliza would detect the pattern and respond with a question like, why do you think you feel sad?
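Here's a tiny JavaScript sketch of that style of pattern matching. The rules here are invented for illustration; the real Eliza had a much richer set of scripts.

```js
// A tiny sketch of Eliza-style pattern matching. These rules are invented
// for illustration; the real Eliza had a much richer set of scripts.
const rules = [
  { pattern: /i feel (.*)/i, response: "Why do you think you feel $1?" },
  { pattern: /i am (.*)/i, response: "How long have you been $1?" },
];

function elizaReply(input) {
  for (const rule of rules) {
    const match = input.match(rule.pattern);
    // When a pattern matches, echo part of the input back as a question.
    if (match) return rule.response.replace("$1", match[1]);
  }
  return "Please tell me more."; // fallback when no pattern matches
}

console.log(elizaReply("I feel sad")); // "Why do you think you feel sad?"
```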
It encouraged reflection and introspection, just like a caring therapist would. But here's the delightful twist. Eliza didn't truly understand what you were saying.
It was just a clever illusion. It used pattern matching and some creative programming tricks to create the illusion of understanding while in reality, it was just following a set of predefined rules. Yet, even though Eliza was a simple program, people were often captivated by its conversational abilities.
They felt heard and understood, even if they knew they were talking to a machine. It was like having a digital confidant who was always ready to listen and offer gentle guidance. Eliza's creator, Weizenbaum, intended the program as a method to explore communication between humans and machines.
He was surprised and shocked that individuals, including his secretary, attributed human-like feelings to the computer program. But that is a whole other topic of conversation in itself. Eliza's impact was profound, sparking interest and research in the field of natural language processing.
It paved the way for more advanced systems that could truly understand and generate human language. It was the humble beginning of a grand adventure in the world of conversational AI. Fast forward to the 1970s, when a program named SHRDLU appeared.
It could understand simple commands and interact with a virtual world of blocks. Although SHRDLU wasn't a language model per se, it laid the foundation for the idea of machines comprehending human language. But true language models began to emerge around 2010, when the power of deep learning and neural networks came into play.
Enter the mighty GPT, short for generative pre-trained transformer, ready to conquer the world of language. In the year 2018, the first iteration of GPT emerged, created by the company OpenAI. It was trained on a large amount of text data, absorbing knowledge from books, articles, and a large chunk of the internet.
GPT-1 was a taste of things to come: an impressive language model, but small compared to its descendants that we use today. As time went on, the saga continued with the arrival of GPT-2 in 2019, followed by GPT-3 in 2020. This was a titan among language models, equipped with a huge number of parameters, 175 billion to be precise.
GPT-3 dazzled the world with its unparalleled ability to understand, respond, and even generate creative pieces of writing. The arrival of GPT-3 marked a real turning point in terms of language models and AI. At the time of writing, we now also have GPT-4, trained on pretty much the whole internet, rather than outdated large data sets, as well as BERT from Google and so much more.
It would seem we are only just at the start when it comes to language models and AI. So learning how to harness this data with prompt engineering is a smart move for anyone today. The prompt engineering mindset.
When thinking of good prompts, it is always best to get in the correct mindset. Essentially, you want to just write one prompt, right? And not have to waste time and tokens writing lots of different prompts until you get the result you desire.
So essentially, kind of the same as when you Google stuff, right? How good are your Googling skills now, as opposed to five years ago? I'm assuming a lot better.
We have grown to intuitively know what to type into Google the first time round so as not to waste time. The same mindset can be applied to prompt engineering. Mihail Eric of the Infinite Machine Learning Podcast says it well: I personally like the analogy of prompting to designing effective Google searches.
There are clearly better and worse ways to write queries against the Google search engine that solve your task. This variance exists because of the opaqueness of what Google is doing under the hood. We are gonna keep this in mind for the remainder of the course.
A quick intro to using ChatGPT by OpenAI. So, as I said, for the examples in this course I will be using ChatGPT's GPT-4 model. In order to follow along, or just to understand how we are going to be using the platform, please head over to openai.com and just go ahead and sign up. I've already signed up, so I'm just gonna go ahead and log in, and that will take me to the platform in which I can choose what I want to interact with. For this tutorial, we are going to be interacting with ChatGPT.
So please go ahead and click on here, and that will take you to the platform. And then I'm just gonna go ahead and switch to the GPT-4 model, which is the latest one. Okay, so great.
You will see here all the previous chats that I've had. I'm just going to minimize this. And if we want to create a new chat, all we'd have to do is click the new chat button.
Okay, so here, for example, what I can do is just go ahead and ask any questions. So what is four plus four? And then hit send.
And that will essentially give me a response. So I'm now interacting with ChatGPT's GPT-4 model. On this occasion, I can actually build on the previous conversation.
So what I can do is say, great. Now, can you add another five to that? What is the answer?
And it will take into account everything that I have said previously. Okay, I am building on top of the knowledge that it already has. Great.
So again, this is just a very quick introduction to how to use this; we will be doing a deeper dive into it throughout the course. So once again, to create a new chat, all you'd have to do is click the new chat button. And if you want to delete the old one, just go ahead and click that delete button.
And that will delete it. And then a new chat is brought up automatically. Wonderful.
Now, you might have seen my course on OpenAI, in which we also use the API. To use the API, just go over to the API reference. And all you would have to do is get an API key.
And the API key can be viewed here. And then you just go ahead and create an API key. And that will allow you to interact with the open AI API in order for you to build out your own platforms just as the one we saw before.
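If you're curious what that looks like in code, here's a minimal sketch using the openai Node.js package as it was at the time of this course (the v3-style Configuration and OpenAIApi interface; the package has since changed). Notice how the whole message history is sent with each request: that's how the model appears to remember the earlier four-plus-four exchange.

```js
// A minimal sketch using the openai Node.js package from the era of this
// course (the v3-style Configuration and OpenAIApi interface). Run
// `npm install openai` and set OPENAI_API_KEY in your environment first.
const { Configuration, OpenAIApi } = require("openai");

const openai = new OpenAIApi(
  new Configuration({ apiKey: process.env.OPENAI_API_KEY })
);

async function main() {
  // The full message history is sent with every request; that is how the
  // model appears to "remember" the earlier four-plus-four exchange.
  const response = await openai.createChatCompletion({
    model: "gpt-4",
    messages: [
      { role: "user", content: "What is four plus four?" },
      { role: "assistant", content: "Four plus four equals eight." },
      { role: "user", content: "Great. Now can you add another five to that?" },
    ],
  });
  console.log(response.data.choices[0].message.content); // expect thirteen
}

main();
```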
So if you're interested in that, please do head over to my tutorial on it, again on the freeCodeCamp channel. Otherwise, let's continue using ChatGPT. So once again, I'm just going to log in and head back to the platform here.
And that is it, we are ready to go. Another thing I want to discuss is tokens, as you might find that you have run out of the free tokens you have to interact with ChatGPT.
So I'm just going to head over to the documentation and talk to you a little bit about tokens. GPT-4 essentially processes all text in chunks called tokens. One token is approximately four characters, or 0.75 words, for English text. And we essentially get charged by token. If you want to know exactly how many tokens you are using, you can check out the tokenizer tool, and it will give you a rough count.
So for example, I can say, what is four plus four. And with that piece of text, the total count of tokens is going to be six. Okay, so there we go.
That is exactly how many tokens will be used in order to produce an answer for this request. Great, so here is the URL for that. You can play around with it.
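If you just want a rough estimate in code rather than the exact count from the tokenizer tool, the four-characters-per-token rule of thumb is easy to sketch. This is a heuristic only, not the real tokenizer:

```js
// Rough heuristic only: the exact count comes from the tokenizer tool.
function estimateTokens(text) {
  return Math.ceil(text.length / 4); // roughly 4 characters per token
}

console.log(estimateTokens("What is four plus four")); // 6, matching the tool
```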
I hope you have fun. If you want to see your usage, you can head over to account and then you can manage your account. And this should be able to show you your usage.
So it will show you your usage right here, per month. And then you can, of course, also add billing in order to carry on using ChatGPT in case you've used up all your tokens and can't use it anymore. So once again, this is under account, billing overview.
So just go check it out. Okay, and now back to the platform. Best practices.
The biggest misconception when it comes to prompt engineering is that it's an easy job with no science to it. I imagine a lot of people think it's just about constructing a one-off sentence such as correct my paragraph that we saw in the previous example. When you start to look at it, creating effective prompts relies on a bunch of different factors.
Here are some things to consider when writing a good prompt. Consider writing clear instructions with details in your query. Consider adopting a persona, as well as specifying the format. Use iterative prompting, meaning if you have a multi-part question, or if the first response wasn't sufficient, you can continue by asking follow-up questions or asking the model to elaborate. And avoid leading the answer.
Try not to make your prompts so leading that it inadvertently tells the model what answer you're expecting. This might bias the response unduly. And finally, limit the scope for long topics.
If you're asking about a broad topic, it's helpful to break it down or limit the scope to get a more focused answer. Let's look at some of these now. In order to write clearer instructions, we can adopt writing more details in our queries.
And to get the best results, don't assume the AI knows what you are talking about. Writing something like, when is the election, implies that you are expecting the AI to know what election you are talking about and what country you mean. This may result in you asking a few follow-up questions to finally get the result you want, resulting in time loss and frankly, perhaps some frustration.
Consider taking the time to write a prompt with clear instructions. So, for example, instead of writing, when is the election, you could write, when is the next presidential election for Poland? So let's go ahead and run this.
And this is much more precise; the model knows exactly what we are asking about. It's not gonna go guessing and waste our time, as well as our resources, in other words the tokens we are using, and therefore money, and we get the right answer the first time.
Here are some other examples of how you could write clearer prompts. So for example, we have this prompt here, which says write code to filter out the ages from data. And if you run it, you don't really know what language it's gonna come back with, let's see.
So for example, here it's using Python. I actually didn't wanna use Python, okay? So now we've lost some tokens asking this.
We've also lost some time and we just haven't got the right response. This could have been so easily avoided. So I'm gonna stop this from generating and let's try again.
So this time, let's be more specific by writing, write a JavaScript function that will take an array of objects and filter out the value of age property and put them in a new array. Please explain what each code snippet does. In this example, I am not assuming the AI knows what computer language I like to use and I am being more specific about what my data actually looks like.
On this occasion, it's an array of objects. Not only that, I'm also asking the AI to explain why it's doing each step so that I in turn can understand and not just copy paste the code without gaining any knowledge from it. Okay, so here you can see a live example of what's coming back to us from GPT-4.
It's given us the correct code, so I have checked that, and it also gave us an example of how you would use the function, which is something I didn't ask for, but which is super useful. It's kind of gone above and beyond in helping me understand what is going on.
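The exact code varies from run to run, but it was along these lines. Treat this as a representative sketch of GPT-4's answer rather than the verbatim output:

```js
// A sketch of the kind of function GPT-4 returned (the exact output varies
// from run to run, so this is representative rather than verbatim).
function getAges(data) {
  // Map over the array of objects and pull out each object's age property.
  return data.map((item) => item.age);
}

// The usage example it threw in, which I didn't ask for:
const people = [
  { name: "Ania", age: 30 },
  { name: "Helena", age: 25 },
];
console.log(getAges(people)); // [30, 25]
```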
Let's look at another example. We can write, tell me what this essay is about. So I'm going to just type this and then just paste in an essay and hit go.
And then ChatGPT will do its thing. It's going to give me a summarization as it thinks best. So on this occasion, it is essentially giving me numbered points about what this essay is about.
They're really long. I really didn't want to read this much. It's pretty much looking to be the same as the original essay.
So this is not something I wanted. I should have been way more specific in telling it what I need. So what I'm going to do is just add to this conversation.
So it's going to build on what I wrote previously. And I'm just going to specify to use bullet points to explain what this essay is about, making sure each point is no longer than 10 words long. So I am being super specific in providing the instructions of what I want, and let's hit go.
So now this is a lot shorter. Okay, as you can see, each point is no longer than 10 words long. And then at the end I get a summary that is a bit longer, summarizing the essay that I pasted in above.
So great, of course you can do this on any essay or piece of text that you wish and you can set your own clear and specific instructions as well. Okay, so I hope you can see why the second one is better. It's because I am not assuming the AI knows what kind of format I want the summarization of the essay to be in.
I am being specific that I want very short notes on the essay in bullet point format with a short conclusion at the end. If I did not put this, the summary could have been just as long as the essay itself and the prompt could be considered useless in my eyes. Next up, we can also adopt a persona.
When writing prompts, it is sometimes helpful to create a persona. This means you're asking the AI to respond to you as a certain character. So exactly like the English language teacher example we saw earlier, using a persona in prompt engineering can help ensure that the language model's output is relevant, useful, and consistent with the needs and preferences of the target audience, making it a powerful tool for getting effective responses that meet the needs of users.
Let's look at some examples of adopting a persona. So for example, you're gonna have this prompt right here. Write a poem for a sister's high school graduation that will be read out to a family and close friends.
So let's go ahead and run this and let's see what comes back. So there we go. It is quite good: "In a room filled with kin and close ties, we gathered to honor, the mist in our eyes, for a journey has ended, another begun, as our dear sister steps into the sun."
Okay, so it's coming out with a poem, I can see here. It is quite a good one, I guess, probably better than anything that I would have written.
And it is maybe a little bit generic. Maybe that's what you wanted. I think we can do better than this.
So let's try this. This time I'm gonna write a prompt with a persona. So this time I'm gonna specify who I'm writing as.
I'm gonna do write a poem as Helena. Helena is 25 years old and an amazing writer. Her writing style is similar to famous 21st century poet Rupi Kaur.
Writing as Helena, write a poem for her 18-year-old sister to celebrate her sister's high school graduation. This will be read out to friends and family at the gathering. Okay, so let's check it out now.
We're writing as Helena, she's 25, she's an amazing writer, and we've also assigned a writing style. So let's check it out. Now ChatGPT should be using anything it knows about Rupi Kaur, hopefully from the internet, in order to apply that style to this poem.
Okay, and as you can see, this is maybe a little bit more affectionate. We've said sister, and it's obviously a younger sister, so the words little sister are being used. And in general, I think this is a much higher quality poem, and if Helena truly does have the style of Rupi Kaur in her writing, it will be almost indistinguishable who wrote this poem, ChatGPT or Helena.
So here we go, here's the full thing. It starts off: "In the garden of our youth, I watch you bloom, from bud to blossom, from child to woman, 18 summers passed." Again, utilizing the fact that we fed it that she was 18. And: "every winter's chill only made you stronger, a force of nature still."
So yes, in my eyes this poem is a lot better: it's much more refined, it's much more personal, thanks to the prompt that we wrote. We've already had a brief look at specifying format when we limited the word count of our bullet points in a previous example. That was a great example; limiting words is one that I use often.
However, we can do a bunch of other things, including specifying if something is a summary, a list or a detailed explanation. Heck, you can even create checklists. So I'm going to go ahead and show you how to do this.
Here is an example prompt, and I'm just going to run it. And there we go, we have created a checklist. There are so many things you can do in ChatGPT.
Just make sure to specify the type of format you want, and it should be able to do it. Okay, great. Now that we've looked at some best practices, let's move on to some more advanced topics in prompt engineering.
In this section, I'm going to talk about two types of prompting we can do, zero-shot prompting and few-shot prompting. Zero-shot prompting leverages a pre-trained model's understanding of words and concept relationships without further training. And few-shot prompting enhances the model with training examples via the prompt avoiding retraining.
So essentially, in the context of the GPT-4 model, we don't really need to do much. We are already using all of the data that it has when we ask, when is Christmas in America? So let's go ahead and do some zero-shot prompting.
When is Christmas in America? And hit go. Okay, so it clearly already has the data for this.
We don't need to add any examples or anything like that. So as you can see, zero-shot prompting refers to a way of querying models like GPT without any explicit training examples for the task at hand. In the context of machine learning and not just GPT, zero-shot typically means that a model performs a task without having seen any examples of that task during its training.
Okay, great. So now let's look at few-shot prompting. So once again, with zero-shot prompting, we gave our language model a prompt and got a response.
But sometimes that's just not enough and we need a bit more training. So let's use few-shot prompting and level up our language model by showing a few examples of the tasks we want it to perform. So instead of zero examples, we give it a tiny bit of data.
So let's think, what would GPT-4 not know? I guess it would not know my favorite types of food. So for example, let's check: what is Ania's favorite type of food? Plural, okay.
Plural, okay. And I mean, it can guess, but no, it's just telling me that it doesn't know. So that is absolutely fine, let's stop generating.
So now let's feed it in some example data. I'm gonna feed it in a tiny bit of data. Ania's favorite type of food includes, sorry about my English.
Let's go with burgers, fries, I love fries, and pizza, and hit enter, okay? So we are essentially giving ChatGPT some information. So now if I type, what restaurant should I take Ania to in Dubai this weekend, and hit enter, it should hopefully understand that my favorite types of food are burgers, fries, and pizza, and given that, find me some restaurants in Dubai that I would like to go to.
So here are some; this data is only current as of September 2021, but you know, these are pretty good ones. So I would totally go to these. This is a great example of few-shot prompting, in which it wouldn't have been able to answer this question if I hadn't given it some example data, or just trained the model a little bit more, in order to get the response that I want.
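Few-shot prompting works just as well through the API: you put the example data in the messages themselves, and no retraining is involved. Here's a minimal sketch, again assuming the v3-style openai package from earlier:

```js
// A minimal sketch of few-shot prompting through the API (same v3-style
// openai package as earlier). The example data lives in the prompt itself;
// no retraining is involved.
const { Configuration, OpenAIApi } = require("openai");

const openai = new OpenAIApi(
  new Configuration({ apiKey: process.env.OPENAI_API_KEY })
);

async function main() {
  const response = await openai.createChatCompletion({
    model: "gpt-4",
    messages: [
      // The "shots": example data the model could not otherwise know.
      { role: "user", content: "Ania's favorite types of food include burgers, fries, and pizza." },
      // The actual question, answered using the examples above.
      { role: "user", content: "What restaurant should I take Ania to in Dubai this weekend?" },
    ],
  });
  console.log(response.data.choices[0].message.content);
}

main();
```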
Great, AI hallucinations. Now we're delving into something you probably never thought you'd hear in an AI context and that's hallucinations. So what exactly are AI hallucinations?
And no, they're not when your AI assistant starts seeing unicorns and rainbows. It's actually a term that refers to the unusual outputs that AI models can produce when they misinterpret data. A prime example of this is Google's DeepDream.
You know, that project that turns pictures of your dog into a nightmarish blend of dog faces and, well, more dog faces. DeepDream is an experiment that visualizes the patterns learned by a neural network. It is built to over-interpret and enhance the patterns it sees in an image, or in other words, fill in the gaps with images.
But some of the time those gaps can be filled with the wrong thing. This is an example of an unusual output that AI models can produce when they misinterpret data. Now, why do these hallucinations happen, you ask?
These models are trained on a huge amount of data, and they make sense of new data based on what they've seen before. Sometimes, however, they make connections that are, let's call it, creative. And voila, an AI hallucination occurs.
Despite their funny results, AI hallucinations aren't just entertaining. They're also quite enlightening. They show us how our AI models interpret and understand data.
It's like a sneak peek into their thought processes. This example is actually an AI image hallucination, as I thought it would be a fun one to show. AI hallucinations can also happen with text models.
An example of this is us asking a text model about a historical figure and the text model not having an answer and hallucinating one instead, resulting in an inaccurate response. Vectors and text embeddings. To finish off, I'm going to leave you with a slightly more complex subject.
We're going to take a quick look at the topic of text embedding and vectors. In computer science, particularly in the realm of machine learning and natural language processing or NLP, text embedding is a popular technique to represent textual information in a format that can be easily processed by algorithms, especially deep learning models. In the context of prompt engineering, LLM embedding refers to representing prompts in a form that the model can understand and process.
This involves converting the text prompt into a high-dimensional vector that captures its semantic information. So essentially, the word food is represented by this, if using the create embedding API from OpenAI.
Okay, so you will see the word is represented by an array of lots and lots of numbers. But why do this? Well, think about it this way.
If you ask a computer to come back with a similar word to food, you wouldn't really expect it to come back with burger or pizza, right? That's what a human might do when thinking of similar words to food. A computer would more likely look at the word lexicographically, kind of like when you scroll through a dictionary and come back with foot, for example.
This is kind of useless to us. We wanna capture a word's semantic meaning. So the meaning behind the word.
Text embeddings do essentially that, thanks to the data captured in this super long array. Now, I can find words that are similar to food in a large corpus of text by comparing text embedding to text embedding and returning the most similar ones. So a word such as burger, rather than foot, will come back as more similar.
To create a text embedding of a word, or even a whole sentence, check out the create embedding API here from OpenAI. So here's the URL that you should visit, and to create your own text embedding, you're gonna have to make a POST request to this endpoint. Okay, so that's what you can do.
You can also take the code from here. So this is an example request written in Node.js, and you would have to install the OpenAI package and take these two methods from it in order to pass through your OpenAI API key, which I showed you how to get earlier.
As a refresher, you should go here and view API keys to create your own API key. And once you pass that through, you can then use the configuration and pass it through here in order to use OpenAI and all of its wonderfulness, including the create embedding method in which you pass through an object. That object has to include the model and it also is gonna include the input.
So in this case, it is the text that you want to turn into an embedding. So this is the code for doing that. And here is the response, which comes back as an object.
And then you would use dot notation to get the data and go into the array in order to get the embedding. So this is gonna be the embedding that we saw earlier: an array made up of numbers, a lot of numbers, okay?
So not just these three: this does spill out, and it will be a very, very long array of numbers that gives you the semantic meaning of whatever text you pass through, okay? So please have a go at playing around with this API to create your own text embeddings and compare them against other text embeddings in order to find similar text, kind of like the sketch below.
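To tie it together, here's a sketch of creating embeddings and comparing them with cosine similarity, using the food, burger, and foot example from above. It assumes the same v3-style openai package, and text-embedding-ada-002, which was OpenAI's embedding model at the time:

```js
// A sketch of creating embeddings and comparing them (same v3-style openai
// package; text-embedding-ada-002 was OpenAI's embedding model at the time).
const { Configuration, OpenAIApi } = require("openai");

const openai = new OpenAIApi(
  new Configuration({ apiKey: process.env.OPENAI_API_KEY })
);

async function embed(text) {
  const response = await openai.createEmbedding({
    model: "text-embedding-ada-002",
    input: text,
  });
  return response.data.data[0].embedding; // a very long array of numbers
}

// Cosine similarity: closer to 1 means more semantically similar.
function cosineSimilarity(a, b) {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

async function main() {
  const [food, burger, foot] = await Promise.all(
    ["food", "burger", "foot"].map(embed)
  );
  // "burger" should score closer to "food" than "foot" does.
  console.log("food vs burger:", cosineSimilarity(food, burger));
  console.log("food vs foot:", cosineSimilarity(food, foot));
}

main();
```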
Thank you so much for watching this course on prompt engineering, in which we covered what it is. We covered an introduction to AI, as well as looked at linguistics, language models, the prompt engineering mindset, using GPT-4, best practices, zero-shot and few-shot prompting, AI hallucinations, and vectors and text embeddings, plus this recap right here. So I hope you've enjoyed it, and I'll see you again on the freeCodeCamp channel.