AI Expert Answers Prompt Engineering Questions From Twitter | Tech Support | WIRED

WIRED
Prompt engineer Michael Taylor joins WIRED to answer your questions from Twitter about artificial intelligence.
Video Transcript:
I'm prompt engineer Michael Taylor, and this is Prompt Engineering Support. [Music]

@Marites wants to know: serious question, what is a prompt engineer / prompt engineering? One of the main things I'm doing every day as a prompt engineer is A/B testing lots of different variations of prompts. I might try asking the AI one thing, then ask it a completely different way, and see which one works best. A prompt engineer might be employed by a company to optimize the prompts they're using in their AI applications.
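The A/B testing workflow described above can be sketched in a few lines. This is a minimal illustration, not a production harness: `call_llm` and `score_response` are hypothetical stubs standing in for a real model API call and a real quality metric (human ratings, task success, and so on).

```python
import random

def call_llm(prompt: str) -> str:
    """Hypothetical stub for a real API call (e.g. to OpenAI or Anthropic)."""
    return f"response to: {prompt}"

def score_response(response: str) -> float:
    """Hypothetical stub metric; a real one would rate output quality."""
    return random.random()

def ab_test(variant_a: str, variant_b: str, trials: int = 20) -> str:
    """Run both prompt variants repeatedly and return the higher scorer."""
    scores = {variant_a: 0.0, variant_b: 0.0}
    for _ in range(trials):
        for variant in scores:
            scores[variant] += score_response(call_llm(variant))
    return max(scores, key=scores.get)

winner = ab_test(
    "Name a product that fits any foot size.",
    "You are Steve Jobs. Name a product that fits any foot size.",
)
print(winner)
```

In practice the interesting work is in the scoring function: with a meaningful metric, the same loop tells you which phrasing actually performs better.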
@AdamJonesInc is asking: does anyone else use please and thank you when communicating with ChatGPT and Perplexity? I'm hoping that I'll get better responses, or be treated slightly better when the AI models eventually take over. For saying please and thank you specifically, there is no evidence that it improves the results. But being emotional in your prompts, for example using all caps, does actually improve performance. If you add something like "this is very important for my career" to your prompt, the model will actually do a more diligent job. It has learned that from reading Reddit posts and social media: when someone says "this is very important for my career," the people who answer really do answer more diligently. One thing we saw last winter was that ChatGPT started to get a little bit lazy, and someone figured out that when it knows the date is December, ChatGPT actually does get lazier, because it has learned from us that you should work a little bit less over the holidays.
@Shuffleupus is asking: do you get better results from LLMs when you prompt them to imagine that they're an experienced astrophysicist? Why would you want them to pretend? Let's do a little experiment: write the prompt as an astrophysicist, then write the same prompt as a 5-year-old, and see the difference. I've asked it to tell me about quantum mechanics in two lines as an astrophysicist, and you can see it uses a lot of big words that a typical astrophysicist would know. We can then ask it the same thing as a 5-year-old, and now it's explaining quantum mechanics as a magic world where tiny things like atoms can be in two places at once. The overriding rule is that you should be direct and concise: "as an astrophysicist" or "you are an astrophysicist" tends to work better than adding unnecessary words like "imagine."
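The role-prompting pattern above usually goes in the system message in chat-style APIs. Here is a small sketch of that message structure; `build_messages` is a hypothetical helper, and no real API is called.

```python
def build_messages(role_description: str, question: str) -> list[dict]:
    """Build a chat-API message list with the role set in the system message."""
    return [
        {"role": "system",
         "content": f"You are {role_description}. Be direct and concise."},
        {"role": "user", "content": question},
    ]

# Same question, two different personas:
expert = build_messages("an experienced astrophysicist",
                        "Explain quantum mechanics in two lines.")
child = build_messages("a 5-year-old",
                       "Explain quantum mechanics in two lines.")
print(expert[0]["content"])
```

The direct "You are X" phrasing mirrors the advice above: state the role plainly rather than wrapping it in "imagine that you are..."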
@VBR is looking for any tips on how to improve my prompts. There are actually thousands of prompt engineering techniques, but two get the biggest results for the least amount of effort: one is giving direction, and the second is providing examples. Say I had this prompt, and I invented a product: a pair of shoes that can fit any foot size. How can I improve that prompt template? One thing I can do is give it some direction. One person famous for product naming was Steve Jobs; you could invoke his name in the prompt template and you're going to get product names in that style. Alternatively, if you prefer Elon Musk's style of naming companies, you can provide some examples of the types of names that you really like. The reason there are two hashtags in front of this is that it marks a title; it really helps ChatGPT get less confused if you put titles on the different sections of your prompt.
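Both techniques, plus the `##` section titles, can live in one template. A minimal sketch, with a hypothetical `naming_prompt` helper; the example names are illustrative Musk-style company names, not from the video.

```python
def naming_prompt(product_description: str, examples: list[str]) -> str:
    """Assemble a prompt with direction and examples under ## section titles."""
    example_lines = "\n".join(f"- {name}" for name in examples)
    return (
        "## Task\n"
        f"Suggest five product names for: {product_description}\n\n"
        "## Direction\n"
        "Name it in the style Steve Jobs might have used: short and evocative.\n\n"
        "## Examples of names I like\n"
        f"{example_lines}\n"
    )

prompt = naming_prompt(
    "a pair of shoes that fits any foot size",
    ["Tesla", "Neuralink", "The Boring Company"],
)
print(prompt)
```

The `##` headers are the same trick described above: clearly labeled sections keep the task, the direction, and the examples from blurring together.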
@PeteMandik is asking: serious question about AI artists, why does the number of fingers on a human hand seem to be particularly difficult for them? The difficulty in rendering fingers is that they're very intricate and the physics is quite difficult to understand, and these models were pretty small; they didn't have that many parameters, so they hadn't really learned how the world works yet. We also have a really strong eye for whether fingers are wrong or whether eyes are wrong; it's something that we look out for as humans. One thing you might try in a prompt is to say "make the fingers look good." That tends not to work either, because everything in a prompt is positively weighted: if you say "don't put a picture of an elephant in the room," it will actually introduce a picture of an elephant. What you need is a negative prompt. That's not always available; for example, it's not currently available in DALL·E, but it is available in Stable Diffusion. We're going to type in "oil painting hanging in a gallery" and hit Dream. You can see that some of them have a big gold frame, but the one on the right doesn't have a frame, and I actually prefer that. So how can I get it to remove the frames? If I add "frames" to the negative prompt, it's going to remove that, and now we can see that none of the paintings have frames.
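One way to see why a negative prompt works where "don't draw frames" doesn't: in diffusion samplers that use classifier-free guidance, the negative prompt's prediction becomes the baseline the sampler is pushed *away* from, rather than a phrase mixed into the positive text. A toy numerical sketch of that update, under the assumption that the sampler follows the standard guidance formula:

```python
import numpy as np

def guided_noise(eps_positive: np.ndarray,
                 eps_negative: np.ndarray,
                 guidance_scale: float = 7.5) -> np.ndarray:
    """Classifier-free guidance: start from the negative prompt's noise
    prediction and step toward the positive prompt's prediction."""
    return eps_negative + guidance_scale * (eps_positive - eps_negative)

eps_pos = np.array([1.0, 0.0])   # toy prediction for "oil painting in a gallery"
eps_neg = np.array([0.2, 0.4])   # toy prediction for the negative prompt "frames"
print(guided_noise(eps_pos, eps_neg, 2.0))
```

The subtraction is the key: whatever the negative prompt predicts gets actively steered against, which plain text in the positive prompt cannot do.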
@RobertoDigital is asking: what is the weirdest response you've gotten from ChatGPT? My favorite is that if you ask it who Tom Cruise's mother is, it knows: Mary Lee Pfeiffer. But if you ask it who Mary Lee Pfeiffer's famous son is, it doesn't know that her son is Tom Cruise, so it will make something up. The last answer I got was John Travolta. The reason this happens is that there's lots of information on the internet about Tom Cruise and who his mother is, but there's not that much information about Mary Lee Pfeiffer and who her son is. Hallucinating is when the AI makes something up that's wrong, and it's really hard to get away from hallucination because it's part of why these LLMs work. When you ask a model to be creative, creativity is really just hallucinating something that doesn't exist yet. You want it to be creative; you just don't want it to be creative with the facts.
@Schwarzchild is asking: I'm not an expert in AI, but if an LLM is trained on biased data, won't that bias come through in its responses? You're absolutely correct. AIs are trained on all of the data from the internet, and the internet is full of bias because it comes from us, and humans are biased too. But it can be pretty hard to correct for those biases by adding guardrails, because in trying to remove bias in one direction you might be adding bias in another direction. A famous example was when Google added an instruction to the prompts for their AI image generation service that it should always show diverse people in certain job roles. What happened was that people tried to make images of George Washington and it would never create a white George Washington. In trying to do the right thing and solve for one bias, they actually introduced a different bias they weren't expecting. There is a lot of work on this in the research labs: Anthropic has a whole safety research team that has figured out where the racist neuron is in Claude, which is their model, where the neuron is that represents hate speech, where the neuron is that represents dangerous activities, and they've been able to dial down those features.
@CaraJohn wants to know: how much of the conversation context does ChatGPT actually remember? If we chatted for a year with information-dense messages, would it be able to refer back to info from a year ago? When you open a new chat session with ChatGPT, it doesn't know anything about you unless you put something in your settings specifically. They do have a memory feature, but it's experimental and I don't think it's on by default. One trick I tend to use is to get all of the context for a task into one thread, ask the model to summarize it, then take that summary and start a new thread. Then I've got the summary, more condensed information, and the model gets less confused by all of the previous history it doesn't need to know about.
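The summarize-and-restart trick above can be sketched as a tiny pipeline. `summarize` here is a hypothetical stub; a real version would send the old thread back to the model with a "compress this conversation" instruction.

```python
def summarize(messages: list[str]) -> str:
    """Hypothetical stub: a real version would ask the LLM to compress
    the whole thread into a short brief."""
    return "SUMMARY: " + " | ".join(m[:40] for m in messages)

old_thread = [
    "We are naming a shoe that fits any foot size.",
    "Rejected names: UniFit, OneSize.",
    "Preferred tone: playful, two words max.",
]

# Start a fresh thread seeded only with the condensed summary.
new_thread = [summarize(old_thread), "Suggest five more names."]
print(new_thread[0])
```

The payoff is that the new thread carries only the distilled decisions, not every dead end from the old one.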
@BrandonWhite asks: does customizing your settings in ChatGPT and providing your bio/personal info help get better results? Yes, I find you get wildly different results when you put some information in the custom instructions. You have two fields: the first is "What would you like ChatGPT to know about you to provide better responses?" and the second box is "How would you like ChatGPT to respond?" I use ChatGPT a lot for programming, so I tell it what languages and frameworks I'm using, and I give it some preferences for how I like my code to be written. The second box is really for anything that annoys you when you're using ChatGPT; you can put that in the box. For example, some people put "quit yapping," and then it will give you briefer responses.
@TravisMedia asks: what makes a prompt engineer an engineer? A prompt engineer designs the system of prompts being used in an application, makes sure they're safe for deployment, and makes sure they work again and again, reliably. That's the same sort of thing a civil engineer does with a bridge: they design the bridge and make sure that when you drive over it, it's not going to collapse into the river.
@IganBlade is asking: do you think we can draw parallels between large language models and human brains? LLMs, or large language models, are actually inspired by human biology: they're what happens when you try to make artificial neural networks simulate what our biological neural networks do in the brain. So there are a lot of similarities, and a lot of the things that work in managing humans also work in managing AIs. If you've heard of transformer models, which is what LLMs are all based on, the breakthrough there was figuring out how to make the model pay attention to the right words in a sentence in order to predict the next token, or word, in the sentence. That was the really big breakthrough, made by Google and then used by OpenAI to create ChatGPT.
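The "pay attention to the right words" mechanism can be shown in a toy form. This is a bare sketch of scaled dot-product attention with NumPy, not any particular model's implementation: each token scores every other token, and those scores decide how much of each token's representation flows forward.

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    """Row-wise softmax (rows sum to 1)."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Scaled dot-product attention: score, normalize, blend."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)   # how relevant each token is to each other
    weights = softmax(scores)         # attention weights, rows sum to 1
    return weights @ V                # blend token representations by relevance

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))   # 4 tokens, 8-dimensional embeddings
out = attention(X, X, X)      # self-attention over the toy sequence
print(out.shape)
```

Real transformers add learned projections for Q, K, and V and many parallel heads, but the score-normalize-blend core is this.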
@ThisOneOptimistic wants to know: what are tokens? Let's say I started writing the sentence "LeBron James went to the..." What word could come next? An LLM has looked at all the words on the internet and calculates the probability of what the next word might be. So there's the token "Miami," which has a 14% chance of coming next, and "Lakers," which has a 13% chance of coming next. We also have the token "Los," which is just the beginning of "Los Angeles." And here we have the token "Cleveland," which only has a 4% chance of showing up, but the LLM will sometimes pick this word, and that's where it gets its creativity from: it's not always picking the highest-probability word, just a word that's quite likely. The reason they use tokens instead of words is that it's just more efficient: a token that is a little part of a word, like "Los," is more flexible and can be trained to be used in different contexts.
@EdamTestMo asks: what is the best LLM, in your opinion? For me it's Claude 3 Opus. Anthropic, who makes Claude 3, is doing a great job. I'm going to test it against ChatGPT and then Meta's Llama, which is an open-source model, and show you the difference in results. The prompt we're using is "give me a list of five product names for a shoe that fits any foot size"; we're testing the models' creativity here. You can see we have "UniFit Shoes" as one idea, "Adaptic Shoes," which is pretty creative, and "OneSize Soles," which is my personal favorite. I'll copy this prompt to Claude, and with the same prompt we get different names: "MorphFit," "AdaptToe," "OmniShoe." That one's my new favorite. Now we're going to test it on Llama 3, which is Meta's open-source model, and you can see it comes up with really different names: "FitFlex," "SizeSavvy," "AdjustAStep," "UniversalFit." It comes with some text at the beginning and then describes each name as well, which is not what I asked it to do. It's subjective, but personally I like the Anthropic Claude response best.
@GMonster7000 is asking: what is a simple task that an LLM has done that has changed your life? For me personally, it's been the programming ability I get from using ChatGPT and Anthropic's Claude. Those models are so good at writing code and explaining what that code does that I have really lost my fear of what I can build. If we pop over to Claude: I've made up a fake product which alerts you if your baby is choking, and I'm trying to build a landing page for it because my developers are busy. It's actually going through and just writing that code for me. Say I don't understand what a section is doing: I can copy it, paste it at the bottom, and ask "what does this do?" It will give me bullet points on what that specific code is doing, step by step, and that's the way you learn with programming. I find I just never get stuck when I use it. One of the coolest things I've done, a little automation or life hack I use every day, is that I set up an email address that I can email any interesting links I've found; it will send those to an AI to summarize them, and then put them all into a spreadsheet for me to look at later.
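The email-to-spreadsheet automation just described can be sketched end to end. Every function here is a hypothetical stub: a real version would hook into an inbox (for example via a forwarding webhook), call an LLM to summarize, and write to a real spreadsheet API rather than CSV text.

```python
import csv
import io

def fetch_links_from_inbox() -> list[str]:
    """Stub for reading links out of forwarded emails."""
    return ["https://example.com/article-1", "https://example.com/article-2"]

def summarize_link(url: str) -> str:
    """Stub for an LLM call that summarizes the page at the URL."""
    return f"One-line summary of {url}"

def to_spreadsheet_rows(links: list[str]) -> str:
    """Render url/summary pairs as CSV, standing in for a spreadsheet write."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["url", "summary"])
    for url in links:
        writer.writerow([url, summarize_link(url)])
    return buf.getvalue()

print(to_spreadsheet_rows(fetch_links_from_inbox()))
```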
@CanOfGoldson is asking: what is prompt chaining? If you wanted to write a blog post, you wouldn't get great results just by asking the model to write it all in one step. What I find works is to first ask it to write an outline and do some research, and then, when I'm happy with the outline, come back and fill in the rest of the article. You get much better results that are comprehensive and fit the full brief of the article you wanted to write. Not only does it make the thought process more observable, because you can see what happened at each step and which steps failed, but the LLM also gets less confused, because you don't have one huge prompt with lots of conflicting instructions it has to try to follow.

@AutomationAce is asking: how can you automate AI? That's what's called an autonomous agent: it runs in a loop and keeps prompting itself, correcting its own work, until it finally achieves the higher-level goal. Microsoft AutoGen is an open-source framework for autonomous agents that anyone can try if you know how to code. I think that's really the big difference between ChatGPT, the helpful assistant we're using day to day, and having an AI employee in your Slack that you can just tell, "make me more money for the company,"
and it will go and try different things until something works.

@MrDrov is asking: how would you prompt the LLM to improve the prompt? There's actually been a lot of really interesting research here, with techniques like the Automatic Prompt Engineer technique, where the LLM writes prompts for other LLMs. This works really well, and I actually use it all the time to optimize my prompts. Just because you're a prompt engineer doesn't necessarily mean you're immune from your work being automated as well.
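The idea of an LLM improving a prompt, in the spirit of the Automatic Prompt Engineer technique mentioned above, can be sketched as generate-candidates-then-pick-the-best. `call_llm` and `score_prompt` are hypothetical stubs; a real version would call a model and score each candidate prompt by how well its outputs perform on a test set.

```python
import random

def call_llm(prompt: str) -> str:
    """Stub: a real LLM would return a rephrased candidate prompt."""
    return f"rewritten: {prompt}"

def score_prompt(prompt: str) -> float:
    """Stub: a real metric would run the prompt and grade its outputs."""
    return random.random()

def improve_prompt(prompt: str, candidates: int = 5) -> str:
    """Ask the model for rewrites, then keep whichever prompt scores best."""
    meta = f"Rewrite this prompt to get better results: {prompt}"
    variants = [prompt] + [call_llm(meta) for _ in range(candidates)]
    return max(variants, key=score_prompt)

best = improve_prompt("Name five products for a one-size-fits-all shoe.")
print(best)
```

Note the original prompt stays in the candidate pool, so a rewrite only wins if it actually scores higher.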
@BahooPrompts is asking: how long until prompt engineering, or a future field related to it, becomes a degree? Will it be a standalone field or part of every field being taught? That's a really great question, because some people I talk to say that prompt engineering isn't going to be a job in five years; it's not even going to be something we practice, because these models are going to be so good we won't need to prompt them. I tend to disagree, because humans are already pretty intelligent and we still need prompting: we have HR teams, we have legal teams, we have management. So I think the practice of prompt engineering will always be a skill you need to do your job, but I don't necessarily think that in five years we'll be calling ourselves prompt engineers. Those are all the questions for today. Thanks for watching Prompt Engineering Support. [Music]