Discover Prompt Engineering | Google AI Essentials

90.34k views · 4,316 words
Google Career Certificates
This video is a preview of Module 3 in Google AI Essentials, available on Coursera. In this module, ...
Video Transcript:
Prompt engineering involves designing the best prompt you can to get the output you want. Think about how you use language in your daily life. Language is used for so many purposes: to build connections, express opinions, or explain ideas. And sometimes you might want to use language to prompt others to respond in a particular way. Maybe you want someone to give you a recommendation or clarify something. In those cases, the way you phrase your words can affect how others respond. The same is true when prompting a conversational AI tool with a question or request.
A prompt is text input that provides instructions to the AI model on how to generate output. For example, someone who owns a clothing store might want an AI model to output new ideas for how to market their clothing. This business owner might write the prompt: "I own a clothing store. We sell high-fashion women's wear. Help me brainstorm marketing ideas." In this section of the course, you'll focus on how to design, or engineer, effective prompts to achieve more useful results from a conversational AI tool. My name is Yuang, and I'm an engineer at Google.
I first became interested in prompting because getting useful responses from language models was time-consuming. Sometimes it was even quicker for us to do the work without the use of AI. I was inspired to help our tools be more efficient, not less. I'm excited to help you learn more about developing effective prompts. First, you'll discover how LLMs generate output in response to prompts, and then you'll explore the role of prompt engineering in improving the quality of the output. Prompt engineering is the practice of developing effective prompts that elicit useful output from generative AI.
You'll learn to create clear and specific prompts, one of the most important parts of prompt engineering. The more clear and specific your prompt, the more likely you are to get useful output. Another important part of prompt engineering is iteration. You'll learn about evaluating output and revising your prompts; this will also help you get the results you need when leveraging conversational AI tools in the workplace. We'll also explore a specific prompting technique called few-shot prompting. Writing effective prompts involves critical thinking and creativity, and it can also be a fun process.
It's a very important skill to practice if you want to use AI effectively in the workplace. Are you excited to get started on prompt engineering? Let's go. It's helpful to understand how LLMs work and to be aware of their limitations. A large language model, or LLM, is an AI model that is trained on large amounts of text to identify patterns between words, concepts, and phrases so that it can generate responses to prompts. So how do LLMs learn to generate useful responses to prompts? An LLM is trained on millions of sources of text, including books, articles, websites, and more.
This training helps the model learn the patterns and relationships that exist in human language. In general, the more high-quality data the model receives, the better its performance will be. Because LLMs can identify so many patterns in language, they can also predict what word is most likely to come next in a sequence of words. Consider a simple example to get a basic understanding of how LLMs predict the next word in a sequence. Take the incomplete sentence "After it rained, the street was ___." An LLM can predict what word comes next by computing the probabilities for different possible words based on the available data.
The word "wet" might have a high probability of being the next word, the word "clean" a lower probability, and the word "dry" an extremely low probability. In this case, the LLM might complete the sentence by inserting the word with the highest probability of coming next in the sequence, "wet," or it might be another high-probability word like "damp." An LLM may vary in its response to the same prompt each time you use it. LLMs use statistics to analyze the relationships between all the words in a given sequence and compute the probabilities for thousands of possible words to come next in that sequence.
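To make the idea concrete, here is a minimal Python sketch of that prediction step. The words and probability values are invented for illustration; a real LLM computes probabilities over thousands of possible words.

```python
import random

# Toy probabilities a model might assign to the next word after
# "After it rained, the street was ..." (illustrative numbers only).
next_word_probs = {"wet": 0.70, "damp": 0.15, "clean": 0.10, "dry": 0.05}

# Greedy choice: always pick the highest-probability word.
greedy = max(next_word_probs, key=next_word_probs.get)

# Weighted sampling: pick a word at random in proportion to its probability,
# which is one reason the same prompt can give different completions.
sampled = random.choices(
    list(next_word_probs), weights=list(next_word_probs.values()), k=1
)[0]

print("greedy:", greedy, "| sampled:", sampled)
```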
This predictive power enables LLMs to respond to questions and requests, whether the prompt is to complete a simple sentence or to develop a compelling story for a new product launch or ad campaign. Although LLMs are powerful, you may not always get the output you want. Sometimes this is because of limitations in an LLM's training data. For instance, an LLM's output may be biased because the data it was trained on contains bias. This data may include news articles and websites that reflect the unfair biases present in society.
For example, because of the data it was trained on, an LLM may be more likely to produce output that associates a professional occupation with a specific gender role. The training data that informs an LLM can be limited in other ways as well. For instance, an LLM might not generate sufficient content about a specific domain or topic because the data it was trained on does not contain enough information about that topic. Another factor that can affect output is the tendency of LLMs to hallucinate. Hallucinations are AI outputs that are not true.
While LLMs are good at responding to many kinds of questions and instructions, they can sometimes generate text that is factually inaccurate. Let's say you're researching a company and you use an LLM to help you summarize the company's history. The LLM might hallucinate and provide incorrect information about certain details, such as the date the company was founded or the current number of employees. A number of factors can contribute to hallucinations, such as the quality of an LLM's training data, the phrasing of the prompt, or the method an LLM uses to analyze text and predict the next word in a sequence.
Because of an LLM's limitations, it's important that you critically evaluate all LLM output to determine if it is factually accurate, is unbiased, is relevant to your specific request, and provides sufficient information. Whether you're using AI to summarize a lengthy report, generate ideas for marketing a product, or outline a project plan, be sure to carefully check the quality of the output. Finally, it's important not to make assumptions about an LLM's capabilities. For example, just because it produced high-quality output for a persuasive letter to a customer, don't assume you will get the same quality output if you use the same prompt again in the future.
Large language models are powerful tools that require human guidance for effective use. Being aware of an LLM's limitations can help you achieve the best possible results. How can you write prompts that produce useful output? It's generally true that the quality of what you start with greatly affects the quality of what you produce. Consider cooking, for example. Let's say you're preparing dinner. If you have fresh, high-quality ingredients, you're more likely to produce a great meal. Conversely, if you're missing an ingredient or the ingredients aren't high quality, the resulting meal may not be as good.
In a similar way, the quality of the prompt that you put into a conversational AI tool can affect the quality of the tool's output. This is where prompt engineering comes in. Prompt engineering involves designing the best prompt you can to get the output you want from an LLM. This includes writing clear, specific prompts that provide relevant context. To gain a better understanding of the context LLMs need, let's compare how a person and an LLM might respond to the same question. Suppose a vegetarian asks their friend, "What restaurant should I go to in San Francisco?"
The friend would likely suggest restaurants with good vegetarian options. However, if prompted with the same question, an LLM might recommend restaurants that are not suitable for a vegetarian. A person would instinctively consider the fact that their friend is a vegetarian when answering the question, but an LLM does not have this prior knowledge. So to get the needed information from an LLM, the prompt must be more specific; in this case, the prompt needs to mention that the restaurant should have good vegetarian options.
Let's explore an example that demonstrates how you can use prompt engineering to improve the quality of an LLM's output. Let's take on the task of planning a company event: you need to find a theme for an upcoming conference. Let's write a prompt to Gemini to generate a list of five potential themes for an event. You can use similar prompts in ChatGPT, Microsoft Copilot, or any other conversational AI tool. Now let's review the response. Well, this isn't what we wanted. We've gotten a list that seems more related to party themes than themes for a professional conference.
Our prompt didn't provide enough context to produce the output we needed; it wasn't clear or specific enough. Let's try this again. This time we'll type the prompt: "Generate a list of five potential themes for a professional conference on customer experience in the hospitality industry." This prompt is much more specific, making it clear that it's a professional conference on customer experience in the hospitality industry. Let's examine the response. This is much better. We engineered our prompt to include specific, relevant context, so Gemini is able to generate useful output. When you provide clear, specific instructions that include necessary context, you enable LLMs to generate useful output.
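As a rough sketch, here is how those two prompts might be sent from Python. This assumes the google-generativeai package, a placeholder API key, and a model name such as "gemini-1.5-flash"; the package, model name, and setup in your environment may differ.

```python
import google.generativeai as genai  # assumed SDK; install/setup may differ

genai.configure(api_key="YOUR_API_KEY")            # placeholder key
model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model name

vague_prompt = "Generate a list of five potential themes for an event."
specific_prompt = (
    "Generate a list of five potential themes for a professional conference "
    "on customer experience in the hospitality industry."
)

# The vague prompt leaves the model guessing what kind of event you mean;
# the specific prompt supplies that context directly.
for prompt in (vague_prompt, specific_prompt):
    response = model.generate_content(prompt)
    print(prompt, "\n", response.text, "\n")
```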
Keep in mind that, due to LLM limitations, there might be some instances in which you can't get quality output regardless of the quality of your prompt. For example, if you're prompting the LLM to find information about a current event but the LLM doesn't have access to that information, it won't be able to provide the output you need. And like in other areas of design, prompt engineering is often an iterative process. Sometimes, even when you do provide clear and specific instructions, you may not get the output you want on your first try.
When our first prompt didn't produce the response we wanted, we revised the prompt to improve the output. The second iteration provided instructions that were clear and specific enough to produce a more useful output. There are multiple ways to leverage an LLM's capabilities at work to boost productivity and creativity. A common one is content creation. You can use an LLM to create emails, plans, ideas, and more. As an example, you can ask an LLM to help you write an article about a work-related topic. Let's prompt Gemini to create an outline for an article on data visualization best practices.
The article is for entry-level business analysts. Notice that the prompt begins with the verb "create." It's often helpful to include a verb in your prompt to guide the LLM to produce useful output for your intended task. The output provides a helpful outline for a first draft of the article. You can also use an LLM for summarization: an LLM can summarize a lengthy document's main points. For example, you might ask Gemini to summarize a detailed paragraph about project management strategies. We'll begin the prompt with the verb "summarize" and specify that we want the output to be a single sentence.
Then we'll include the paragraph we want Gemini to summarize. The output provides a convenient one-sentence summary of the paragraph. While this example shows how you can summarize a single paragraph, you can ask an LLM to summarize longer text and documents too. Classification is another possible use. For instance, you might prompt the LLM to classify the sentiment, or feeling, in a group of customer reviews as positive, negative, or neutral. Let's prompt Gemini to classify customer reviews about a retail website's new design as positive, negative, or neutral.
The prompt includes the verb "classify" to guide the output. The prompt also contains the reviews; in this example, there are four reviews. The output accurately classifies the first two reviews as negative, the third as positive, and the fourth as neutral. Consider how you could leverage an LLM to efficiently complete large classification tasks.
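A classification prompt like this one can be assembled as a simple string: state the verb and the allowed labels, then append the reviews. The four reviews below are invented placeholders, not the ones shown in the video.

```python
# Build a sentiment-classification prompt. The reviews below are invented
# placeholders; in practice you would paste in the real reviews.
reviews = [
    "The new layout is confusing and pages load slowly.",
    "I couldn't find the checkout button anywhere.",
    "Love the cleaner look and the bigger product photos!",
    "It's fine, about the same as the old design.",
]

prompt = (
    "Classify each of the following customer reviews of our website's new "
    "design as positive, negative, or neutral.\n\n"
    + "\n".join(f"Review {i + 1}: {text}" for i, text in enumerate(reviews))
)

print(prompt)  # paste this into Gemini or another conversational AI tool
```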
Or you can use an LLM for extraction, which involves pulling data from text and transforming it into a structured format that's easier to understand. Suppose you have a report that provides information about a global organization. You can prompt Gemini to extract all mentions of cities and revenue in the report and place them in a table. Then we'll include the report in our prompt. Please be aware that you should not input confidential information into LLMs, but in this example the report is not confidential. The output displays a table with columns for city and revenue. This presents the information in a well-organized format that's easy to review.
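An extraction prompt follows the same pattern: name the verb, describe the structure you want back, then append the source text. The short report below is an invented placeholder; as noted above, never paste confidential material into a prompt.

```python
# Placeholder report text; a real (non-confidential) report would go here.
report = (
    "Our Berlin office reported revenue of 2.1M EUR, the Toronto office "
    "reached 3.4M CAD, and the Osaka office closed the year at 310M JPY."
)

prompt = (
    "Extract all mentions of cities and their revenue from the report below "
    "and present them as a table with the columns City and Revenue.\n\n"
    f"Report:\n{report}"
)

print(prompt)  # paste this into Gemini or another conversational AI tool
```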
Another use is translation. You can leverage an LLM to translate text between different languages. For example, you might ask Gemini to translate the title of a training session from English to Spanish. The output includes a variety of Spanish translations to choose from and explains the reasoning behind each translation. This information can help you choose the most useful option for your audience. Or you can use an LLM for editing, such as to change the tone of a section of text from formal to casual, or to check if the text is grammatically correct. For example, Gemini can help you edit a technical analysis about electric vehicles by making the language more accessible for a non-technical audience.
We'll start the prompt with the verb "edit" and specify that the language should be easy for a non-technical audience to understand. After this, we'll include the technical analysis. The output provides a version of the analysis that an audience less familiar with the technical details can understand. This is just one example of how an LLM can help you edit documents; LLMs can quickly customize the tone, length, and format of documents to fit your needs. One more use for an LLM we'll discuss is problem solving. You can use an LLM to generate solutions for a variety of workplace challenges. When planning a company event, for example, you could prompt the LLM to find menu solutions that accommodate the food restrictions of multiple guests while following a holiday-themed menu.
And here's another example. Let's say you are an entrepreneur who recently launched a new copy editing service. Let's ask Gemini to solve a problem related to the copy editing service: we'll ask for suggestions for increasing the client base. The output provides specific suggestions for reaching new clients, optimizing services, and growing the business. I love these ideas. Let's ask Gemini to draft an email so we can easily share these ideas with others. LLMs can help you brainstorm solutions for many different types of problems.
I'm definitely excited by the variety of ways we can leverage LLMs when completing workplace tasks. It's a very important skill to practice if you want to use AI effectively in the workplace. Coming up, we'll focus more on evaluating output and iterating on your prompts. Have you ever created a presentation for a client or designed a website for your new business? If so, you may have used an iterative process to achieve your goal. In an iterative process, you create a first version, evaluate it, and improve upon it for the next version. Then you repeat these steps until you get the desired outcome.
For example, if you're developing a proposal, report, or other document to share with your co-workers, you might produce multiple drafts and make improvements on each draft until you are satisfied with the result. Taking an iterative approach is often the most effective way to solve a problem or develop a product. An iterative process is also effective in prompt engineering. Prompt engineering often requires multiple attempts before you get the optimal output; most of the time, you won't get the best result on your first try.
If you try something and it doesn't work, don't get discouraged. Instead, carefully evaluate the output to determine why you didn't get the response you wanted, then revise your prompt to try for a better result. Let's consider possible reasons you might not get useful output even after creating a clear and specific prompt. First, differences in large language models can affect output. Each LLM is developed with unique training data and programming techniques and has different background knowledge about specific domains. For this reason, different models might respond to similar prompts in different ways and might fail to provide an adequate response to some prompts.
Taking an iterative approach with the LLM you're using will produce the best results. Second, LLM limitations: previously, you learned that LLM output may sometimes be inaccurate, biased, insufficient, irrelevant, or inconsistent. You should critically evaluate all LLM output by asking yourself the following questions: Is the output accurate? Is the output unbiased? Does the output include sufficient information? Is the output relevant to my project or task? And finally, is the output consistent if I use the same prompt multiple times? If you identify any issues when you evaluate output, iterating on your initial prompt can often help you resolve these issues and get better output.
To begin, if you notice any context is missing from your prompt, add it. Your choice of words can also significantly impact an LLM's output. Using different words or phrasing in your prompts often yields different responses from the model, so experimenting with different phrasings can help you obtain the most useful output. Now that you know more about iterative prompting, let's consider an example. Suppose you work as a human resources coordinator for a video production company. The company wants to develop an internship program for students who are exploring careers in animation and motion graphics design.
The company is based in the United States, in the state of Pennsylvania, my home state. Your team wants to partner with local colleges to provide internship opportunities for students in Pennsylvania. As a first step, you need to create a list of colleges in Pennsylvania that have animation programs. The list should include necessary details about the colleges and be in a well-organized format that your team can quickly review. Let's review an example using Gemini: "Help me find colleges with animation programs in Pennsylvania." Next, we examine our output. The output lists colleges in Pennsylvania that have animation programs, along with further information related to these programs.
This is helpful information, but it isn't structured in a way that your team can quickly reference when contacting the colleges. Organizing the information in a table would make it easier to read and understand, especially for stakeholders like your manager who may have limited time. We can iterate on the prompt by adding context to specify the desired format of the output. We'll type: "Show these options as a table." The output displays a table that provides useful information about the location of each college and the specific type of degree it offers.
Now the list is in a well-organized format that's easier for your team to follow. Although the table contains most of the information your team needs, it doesn't include a key detail: whether the school is a public or private institution. Your company wants to offer internships to students from both public and private colleges, so we'll add a new request for Gemini to include the relevant information in the table: "Can you add a column showing whether they are public or private?" Now the table includes a column that indicates whether a college is private or public.
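The follow-up prompts work because a chat session carries the earlier turns as context. Here is a hedged sketch of the same three-step exchange, again assuming the google-generativeai package and its chat interface (start_chat and send_message); names may differ in your setup.

```python
import google.generativeai as genai  # assumed SDK; install/setup may differ

genai.configure(api_key="YOUR_API_KEY")            # placeholder key
model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model name
chat = model.start_chat()  # a chat session keeps earlier turns as context

# The first prompt, then two refinements that build on the previous answers.
prompts = [
    "Help me find colleges with animation programs in Pennsylvania.",
    "Show these options as a table.",
    "Can you add a column showing whether they are public or private?",
]

for prompt in prompts:
    reply = chat.send_message(prompt)
    print(reply.text, "\n")
```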
To share this information with your team in a format that's easy to review and understand, you can use the Export to Sheets feature. This will allow your team to easily access and analyze the data and make informed decisions based on the results. You should apply the same iterative approach to further tasks. When you develop prompts for additional tasks, be aware that previous prompts made in the same conversation can influence the output of your most recent prompt. If you notice this is happening, you may want to start a new conversation.
Iteration is a key part of prompt engineering. By taking an iterative approach to prompting, you can leverage LLMs to provide the most useful output for your needs. Have you ever created something new by building upon previous examples? Perhaps you used a well-received report as a reference when writing a similar report, or maybe you used a relevant and engaging website as a model when designing your own website. Examples are also useful for LLMs. Including examples in your prompt can help an LLM better respond to your request and can be an especially effective strategy to get your desired output.
We're going to explore how to use examples in prompting, but first let's briefly discuss the technical term "shot." In prompt engineering, the word "shot" is often used as a synonym for the word "example," and there are different names for prompting techniques based on the number of examples given to the LLM. Zero-shot prompting is a technique that provides no examples in a prompt, one-shot prompting provides one example, and few-shot prompting is a technique that provides two or more examples in a prompt. Because examples aren't included in zero-shot prompts, the model is expected to perform the task based only on its training data and the task description included in the prompt.
Zero-shot prompting is most likely to be effective when you are seeking simple, direct responses; it may not be effective for tasks that require the LLM to respond in a more specific, nuanced way. Few-shot prompting can improve an LLM's performance by providing additional context and examples in your prompt. These additional examples can help clarify the desired format, phrasing, or general pattern. Few-shot prompting can be used for a range of tasks. For example, you might use few-shot prompting to generate content in a particular style.
Let's say you work for an online retailer. You need to write a product description for a new skateboard. You already have descriptions for existing products, such as a bicycle and roller blades, and you want the skateboard description to follow a similar style and format. We'll start with a prompt that begins with some general instructions: write a one-sentence description of a product, and it should contain two adjectives that describe the product. We also specify that we want Gemini to review the examples we provide and write the description of the skateboard in the same style.
Because this is a few-shot prompt, we need to provide examples that model the style we want. Each example contains a label indicating the product being described, a bicycle and roller blades, and each description is one sentence long and contains two adjectives: sleek and durable for the bicycle, and smooth and stylish for the roller blades. Next, we type the label "skateboard." When we add this label and leave the product description blank, we indicate to Gemini that we want it to complete the description of the skateboard, just as it did with the other two product descriptions.
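Here is one way the few-shot prompt described above could be laid out. The bicycle and roller-blade sentences are invented stand-ins built around the adjectives mentioned in the video, and the trailing "Skateboard:" label left blank signals where the model should continue the pattern.

```python
# A few-shot prompt: general instructions, two labeled examples, and an
# empty "Skateboard:" label for the model to complete in the same style.
few_shot_prompt = """\
Write a one-sentence description of a product. It should contain two
adjectives that describe the product. Follow the style of the examples.

Bicycle: A sleek and durable bicycle built for everyday commuting.
Roller blades: Smooth and stylish roller blades that make every ride fun.
Skateboard:"""

print(few_shot_prompt)  # paste this into Gemini or another conversational AI tool
```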
Let's review our output. The output offers a product description of the skateboard that meets the criteria we requested and is in the same writing style and format as the examples we included in our prompt. In this case, two examples were enough to obtain useful results, but there is no definitive rule for the optimal number of examples to include in a prompt. Some LLMs can accurately reproduce patterns using only a few examples, while other LLMs need more. At the same time, if you include too many examples, an LLM's responses may become less flexible and creative, and they may reproduce the examples too closely.
Experiment with the number of examples to include to get the best results for your specific task. Now you know a prompting technique that will help you get better-quality output: few-shot prompting is an effective strategy that can help you guide an LLM to generate more useful responses. You've learned a lot about writing prompts that you can apply to workplace tasks. In this section, we discussed large language model, or LLM, output. We examined how LLMs produce their output and potential issues you might encounter in the output.
After this, we focused on a key principle of prompt engineering: creating clear and specific prompts. You learned just how important it is to specify what you want the LLM to do and to include supporting context to help it provide better output. We then went on to discover how to improve the quality of AI output through iteration; it's essential that you evaluate your output and then revise your prompt as needed. Lastly, we learned about few-shot prompting, which involves providing examples to guide the LLM. I want to offer a final tip before I go.
We focused on prompting large language models, but you can use the same general principles when you prompt other kinds of AI models too. For instance, the next time you want to use AI to generate an image, try to be as clear and specific as possible, and then iterate to get closer to the output you want. It's been great guiding you through the process of prompt engineering. I hope you continue to apply and develop these skills as you leverage conversational AI tools in the workplace.
To continue learning, I encourage you to explore the topic of using AI responsibly as part of Google AI Essentials.