Hello everyone, welcome back to my Code to Care series. As you may know, I like to alternate between education topics and bias, ethics, and safety topics, so I thought I'd do an education topic today, and the topic is agentic AI. The reason I bring this up is that I think this is going to be one of the big AI buzzwords this year, this year being 2025, and I feel it's my responsibility to do some education on it, so you know what the word means and can think through what it might mean for your organizations. Also on this channel, Nikolai has given a nice little overview of agentic AI, so take a look at that video. What I want to do today is use one example, the simplest example I can think of, and walk you through what agentic AI is all about. All right, so let's begin.
First of all, I'm going to start out using the term compound AI, or compound LLMs, and then I'll switch to agentic later in the talk. Think about what an LLM actually does: you give it a prompt, which is usually multiple paragraphs of input text, instructions, context, things like that, and believe it or not, it predicts the next word. So you might give it a bunch of instructions. The example I want to use for this video is: build me a marketing plan for a new product launch, something like that, where you want the LLM to help you get a draft plan going. You put in that whole instruction, the background on your new product, maybe examples from your other product launches. You pack the prompt with all of that and send it to the LLM, and the LLM picks the next word, which might be the beginning of the first sentence. Then it feeds that word back into the LLM and asks it to predict the next word, which might be "plan" or something like that, and it does this one word at a time. That's why, when you use something like ChatGPT, you see the words coming out incrementally: that's literally how the model works, one word at a time. It's kind of amazing that it works, but that's the fundamental architecture of these models.
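To make that one-word-at-a-time loop concrete, here's a minimal sketch in Python. The predict_next_word function is a hypothetical stand-in for the model's next-word prediction, not a real API; the only point is that each new word is appended to the text and fed back in.

```python
# Conceptual sketch of autoregressive generation.
# predict_next_word(text) is a hypothetical stand-in for the model's
# next-word prediction; it is not a real library call.

def generate(prompt: str, predict_next_word, max_words: int = 200) -> str:
    text = prompt
    for _ in range(max_words):
        word = predict_next_word(text)   # the model sees everything so far
        if word == "<end>":              # the model signals it is finished
            break
        text += " " + word               # the new word is fed back in
    return text[len(prompt):].lstrip()   # return only the generated part
```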
But there's one important thing about this, if you think about it: there is no back button. These models do their work without any sort of editing, reflection, or refinement, unlike the way we write. If you think about building a plan or a presentation, you might start with an outline, then do a first draft, then work on a script and some slides, review it with others, do a second draft, review it with a larger group or do a dry run, and then produce a final draft before you give the presentation. There are usually multiple revisions. With an LLM, we don't give it that opportunity; we just have it do everything in one shot. And one of the things we're finding is that if you string LLMs together in more of a compound style, they can actually do a better job. So let me give you an example of this kind of approach to a writing task like building the plan.
An approach might be to use a group of three LLM calls, which is what I'm calling a compound LLM. The first LLM call says: write me a draft of a marketing plan for this product launch, and you pack the prompt with the background like I said, and that gives you a response. Then you take a different LLM, or the same LLM with a different prompt, and say: can you critique this marketing plan? This is the critique LLM. You give it the plan and the background material and ask it to check that everything is there and looks good, and its response is a critique of the plan itself. Then you take that critique and the original draft and send them to a third LLM and say: given this critique, can you update the draft marketing plan based on the feedback and give me another version? This actually works and will give you a much better result; you can try it yourself with ChatGPT. You can use the same LLM for every step or different LLMs, which might give you more variety and different perspectives, but either way this compound LLM approach gives you a higher-quality response.
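Here's a minimal sketch of that three-step chain in Python. The call_llm function is a placeholder for whichever chat-completion API you actually use (it is not a specific library call), and the prompts are only illustrative.

```python
# Minimal sketch of the draft -> critique -> revise chain.
# call_llm(prompt) is a placeholder for whatever chat API you use
# (OpenAI, Anthropic, a local model, etc.); it takes a prompt string
# and returns the model's text response.

def compound_marketing_plan(call_llm, product_background: str) -> str:
    # Agent 1: write the first draft.
    draft = call_llm(
        "Write a draft marketing plan for this product launch.\n\n"
        f"Background:\n{product_background}"
    )
    # Agent 2: critique the draft against the background material.
    critique = call_llm(
        "Critique this draft marketing plan. Point out anything missing, "
        "unclear, or inconsistent with the background material.\n\n"
        f"Background:\n{product_background}\n\nDraft plan:\n{draft}"
    )
    # Agent 3: revise the draft based on the critique.
    revised = call_llm(
        "Revise the draft marketing plan below based on the critique, "
        "and return the full updated plan.\n\n"
        f"Draft plan:\n{draft}\n\nCritique:\n{critique}"
    )
    return revised
```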
And you can see why, if you think of AI a little bit like the way we think: it's helpful to reflect, it's helpful to have a chance to do another version, versus doing everything without a back button. So this is a compound LLM style. You might recall a prompt engineering study Google did a year or two ago: if you ask the LLM a question and add to the prompt something like "please take a pause between each step of the answer," the LLM actually improved its accuracy when it was asked to pause. This is a fancier version of that: when you give LLMs an opportunity to reflect, to critique, to do multiple versions of something, you get a better output. So that's a compound LLM, basically, using these multiple steps. Now let's get back to agentic AI. This is an example of agentic AI, and what I want you to do, to get a feel for what agentic AI is, is take three mental leaps.
Okay, the first leap, which I'll write over here, is: don't use the word LLM, use the word agent. This AI agent is writing the draft, this agent is writing a critique, and this agent is writing the final plan. It's a small language thing, but essentially, think of these LLMs as agents, with each agent doing one piece of the task. So that's mental leap number one: if you want to sound smart, or be fully buzzword compliant, call these things agents instead of LLMs.
That's the first mental leap. The second mental leap is: don't assume that each of these steps is an LLM. Some of these agents might be something simple, like a Google search. For instance, when I present, I usually like to include some statistics to support my point. So say this is a presentation: I could write a draft presentation, critique it, and then write a final version, but I could also add in some steps. I could add a data request agent: given this plan, what sort of data would be useful to support it, or to support this presentation? That agent just returns a list of statistics that would be nice to have. Then you might have another agent that does a Google search for those statistics: it takes the string, say, "average age of people in such-and-such country," runs the searches, gets the answers back, and feeds them into another drafting step that takes the initial draft plus the extra supporting data and writes a new draft. Then you send that to the critique agent, and then to the final agent. So this is again an agent-style workflow, but this agent here (let me use a different color for fun) is not an LLM, it's just a tool. Some of these agents could be tools: do a Google search, call an API to schedule an appointment, use a calculator API to do some math. You can equip these agents with a set of tools, so some of them won't necessarily be an LLM; they'll be a tool. So that's leap number two. Leap number one: call these things agents. Leap number two: not every agent is an LLM; many will be LLMs, but some might be tools.
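As a rough sketch of what that mixed pipeline might look like in Python: call_llm and web_search are placeholders for your own model API and search tool (neither is a real library call), and the agent boundaries are just the ones described above.

```python
# Sketch of a pipeline where most "agents" are LLM calls but one is a tool.
# call_llm(prompt) and web_search(query) are placeholders for your own
# model API and your search (or calculator, scheduling, ...) tool.

def draft_with_supporting_data(call_llm, web_search, background: str) -> str:
    # Agent 1 (LLM): initial draft.
    draft = call_llm(f"Write a draft presentation.\n\nBackground:\n{background}")

    # Agent 2 (LLM): ask what statistics would strengthen the draft.
    data_needs = call_llm(
        "List, one per line, the statistics that would best support "
        f"this draft presentation:\n\n{draft}"
    )

    # Agent 3 (tool, not an LLM): look each statistic up with a search tool.
    found = [web_search(need) for need in data_needs.splitlines() if need.strip()]

    # Agent 4 (LLM): fold the found data back into a new draft.
    new_draft = call_llm(
        "Rewrite the draft, working in the supporting data below.\n\n"
        f"Draft:\n{draft}\n\nSupporting data:\n" + "\n".join(found)
    )

    # From here the critique agent and final-revision agent run as before.
    return new_draft
```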
And then the third leap (I'll just write a three right here and explain it verbally) is: don't necessarily assume that this is a predetermined path. You might have an orchestrator agent that you tell: hey, do as many draft loops of this presentation as you think are necessary, with a maximum of ten, say, to produce a final product. Or: after each draft, ask for data needs, search out those data needs with this Google Search tool, then write another draft, and do that until you have no more data needs. So you can use one of these agents as an orchestrator to step through the rest of the process. I think of it a bit like the Black Mirror episode where you, the audience, could affect the path of the episode, what happens in the next scene. Instead of a TV show that just comes at you in the order the director chose, you adjust the different steps as you jump through the workflow. You give the orchestrator the overall instructions, which agents it has available, which tools are available, and what the general flow is, and that orchestrator can orchestrate its way through the workflow instead of you specifying, hey, these are the six steps.
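Here's a minimal sketch of that orchestrator-style loop, under the same assumptions as before: call_llm and web_search are placeholders, and the stopping rule and the cap of ten loops are just the ones described above.

```python
# Sketch of an orchestrator-style loop: instead of hard-coding the steps,
# an agent decides after each draft whether more data (and another pass)
# is needed. call_llm and web_search are placeholders, as above.

def orchestrated_draft(call_llm, web_search, background: str,
                       max_loops: int = 10) -> str:
    draft = call_llm(f"Write a draft presentation.\n\nBackground:\n{background}")

    for _ in range(max_loops):
        # The orchestrator decides what happens next.
        needs = call_llm(
            "List any statistics still needed to support this draft, "
            f"one per line, or reply DONE if none are needed:\n\n{draft}"
        )
        if needs.strip().upper() == "DONE":
            break  # orchestrator says there are no more data needs

        # Tool step: search out the data needs, then redraft with them.
        found = [web_search(n) for n in needs.splitlines() if n.strip()]
        draft = call_llm(
            "Rewrite the draft, working in this supporting data.\n\n"
            f"Draft:\n{draft}\n\nData:\n" + "\n".join(found)
        )

    return draft
```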
So that's all I wanted to cover. It's a concept that starts with compounding LLMs together, maybe not a lot of them, even just three, which allows an LLM to do a much better job: the draft model, the critique model, maybe a model that checks against your organizational policies or your brand, that kind of thing. You can have these different pieces and parts, with LLMs doing parts of your task. Think of those LLMs as agents, and remember they don't all have to be LLMs: think of the agents as sometimes being an LLM but sometimes being other tools. And lastly, imagine building workflows where one or many of the agents direct what happens next, instead of you pre-thinking the whole flow in code or in your application. So, more to come on this throughout the year, I would imagine, as we find different use cases, but I wanted to introduce the topic at the beginning of the year and give you this simple example to think through, so you're not lost or intimidated and can participate in helping your organization, and yourself, think through where AI is going this year and how you might take advantage of some of these new ways of thinking. Okay, so that's it.
I hope that was interesting, and until next time, bye. Hey there, I hope you liked this video. I've added a next video at the end of this one, so take a look at that if you enjoyed this, and if something resonated with you, please drop a comment down below. I read every comment myself, and I really appreciate hearing from you. Thanks.