How to Build AI Agents with PydanticAI (Python Tutorial)
Dave Ebbelaar
🛠️ Want to get started with freelancing? Let me help: https://www.datalumina.com/data-freelancer
📚...
Video Transcript:
Pydantic just released their agent framework, and in this video I'll share everything that you need to know. I'll be walking you through this GitHub repository, which I will also make available for you, and of course we'll dive into some practical code examples, so that by the end of this video you should have all the information you need to decide whether PydanticAI is something that could fit into your stack when you are building apps with LLMs.

If you're new to the channel, welcome. My name is Dave Ebbelaar, I'm the founder of Datalumina, and I've been building custom data and AI solutions for the past five years. We also have our own AI SaaS where we integrate LLMs, and on this channel I share videos to help you do the same. To follow along, you can clone the repository (link is in the description), and in the readme file we also have additional resources that you can check out.

Before we dive into the code examples, I want to start with a little bit of context on how PydanticAI works, because otherwise the code probably doesn't make a lot of sense. I will also link the official documentation, where you can dive deeper into the core functionality and, of course, the API reference. The goal of this framework is to help you build more robust applications around the use of large language models. There are tons of frameworks out there already, but almost all of them leverage Pydantic under the hood when they're working with structured output: the Instructor library, OpenAI itself, and of course LangChain and LlamaIndex. So seeing an official framework from the creators behind Pydantic is, to me, really exciting. I'm normally very hesitant to dive into new frameworks, because to be honest I think most of the frameworks out there are way too complex and you probably don't need them, but I wanted to find out if PydanticAI offers the kind of low-level abstractions that I could actually use in my projects.

Now, why would you want to use PydanticAI? This is from the official documentation. First of all, it's built by the team behind Pydantic, and that's a strong argument, since all of these other libraries and frameworks are also leveraging Pydantic. It's model agnostic, meaning you can use different model providers; they have a couple supported already and are working on more. It offers type safety, and control flow and agent composition are done in plain, pure Python. They have structured responses, streamed responses, and a novel type-safe dependency injection system, which is really interesting. And finally, they have an integration with Logfire, which the Pydantic team is also working on behind the scenes. I haven't used it, but it's similar to what LangSmith and Langfuse can do.

At first glance, it seems very similar to what other agent frameworks provide. If we look at the core concepts, we have agents, dependencies, results, messages and chat history, testing and evals, and debugging and monitoring: all things that you could also do in LangChain, LlamaIndex, AutoGPT, CrewAI, Instructor, and many more. So the real question becomes: why would you want to use PydanticAI over, for example, LangChain, LlamaIndex, or the Instructor library? That's also one of the goals of this video, and by the end we'll have the answers. Whenever you're diving into a new framework, the first and most important thing is always trying to understand the philosophy behind it and to look at the core components and how they work together.
For this tutorial we'll be focusing on agents, dependencies, results, and messages and chat history; testing and evals, and debugging and monitoring, will be out of scope for this video. We're going to look at how to chain all of these things together in order to take our data, send it to an LLM (OpenAI in this case), and get structured responses back.

All right, let's jump into the code base and start with a quick hello-world example to show you the bare minimum of how to work with this framework. Two notes here: I'm going to be running this code in an interactive Python session (if you want to learn more about this approach, check the link in the description), and because some of the code we'll be running is async, I'm going to import nest_asyncio and call nest_asyncio.apply(). This is just to make async code work in a Jupyter-style environment; it's not something I would recommend in production, it's just for the example.

Then let's start with the most important thing when working with LLMs: picking our model. For this I'm going to import the OpenAIModel class and specify gpt-4o. There are two ways in PydanticAI to select a model. A model is always passed to an agent, so one approach is to construct the model object explicitly, but you can also specify the model as a string, for example "openai:gpt-4o", as shown in the documentation. I like having it more explicit, and also so that we can reuse it, so I'll proceed with the explicit method in this tutorial.

Okay, we've specified our model, so now let's see how to make it available to an agent. If we look at the Agent class, it is essentially a container for a system prompt, potentially tools, the type of result (in the form of structured output) that we want to get back, potential dependencies, and an LLM model. We can store all of this information in the Agent class and then start working with it, but at the bare minimum we need a model and a system prompt. Here you can see we have a basic prompt: a helpful customer support agent.

So let's run this code. We now have our basic agent, and the next step is running it. There are three ways to do this: agent.run, which is async; agent.run_sync, which is synchronous; and agent.run_stream, which is useful for streaming responses. Throughout these examples I'll be using run_sync just to keep it easy. When we run the agent we get a result back, and there is a lot of information in there; there's an entire page in the docs to learn more about this result object. It's the result of running an agent: failures are wrapped in the results and the stream results, and we also get things like the cost and the message history.
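Here's a minimal sketch of that hello-world flow. The system prompt text is my placeholder, and the result helpers (result.data, result.all_messages(), result.cost()) match the early PydanticAI releases shown in the video; newer releases have renamed some of these (for example, .data became .output), so treat the exact names as version-dependent.

```python
import nest_asyncio

nest_asyncio.apply()  # only needed to run async code inside a Jupyter-style session

from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel

model = OpenAIModel("gpt-4o")

agent = Agent(
    model=model,
    system_prompt="You are a helpful customer support agent for an online store.",
)

response = agent.run_sync("How can I track my order?")

print(response.data)            # the plain-text reply from the LLM
print(response.all_messages())  # system prompt, user prompt, and model reply
print(response.cost())          # token counts; renamed in later releases

# Continue the conversation by passing the previous messages back in:
follow_up = agent.run_sync(
    "What was my previous question?",
    message_history=response.all_messages(),
)
print(follow_up.data)
```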
With this object in place, we can for example look at the data: we call response.data and see the reply from the LLM. It says: to track your order, please check the confirmation email we sent to you. So it's making up an answer to reply to this customer. We can also look at all the messages in there: you can see we have the system prompt, which is the one we set on the agent, then the user prompt, which is what we sent to the LLM, and then the reply from the model in the form of the content. So we have the whole conversation and all of this information within this response object, and we can even look at the cost, where we can see all of the tokens. And if we wanted to continue the conversation, we could run the agent one more time, passing the message history with all of the previous messages, and then for example ask "what was my previous question?", get the data out of it, and the model answers: your previous question was about tracking your order.

Okay, so that's a very simple example of how this works, but also nothing that you can't do with the bare OpenAI API. So let's look at how we can actually integrate Pydantic here and start to work with structured output. For this we're going to create a new Pydantic model that inherits from BaseModel, and we call it ResponseModel. So instead of just asking the LLM to give us an answer, we clearly state: if you give us an answer, we want the following schema, and we want it back in a structured way, so that we can load it into this ResponseModel and Pydantic will validate it for us. Here we have the same agent, but now we also specify the result_type, which we set equal to the ResponseModel. This is similar to how you work with the Instructor library, or to getting structured output directly from OpenAI using a Pydantic model. So let's load that agent up and run it one more time.
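A sketch of that structured-output setup. The field names on ResponseModel (response, needs_escalation, follow_up_required, sentiment) are the ones shown later in the video; the exact model definition is my reconstruction.

```python
from pydantic import BaseModel, Field
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel

model = OpenAIModel("gpt-4o")

class ResponseModel(BaseModel):
    """Structured response with metadata, validated by Pydantic."""
    response: str
    needs_escalation: bool
    follow_up_required: bool
    sentiment: str = Field(description="Customer sentiment analysis")

agent2 = Agent(
    model=model,
    result_type=ResponseModel,  # force the LLM's answer into this schema
    system_prompt="You are an intelligent customer support agent. Analyze queries carefully.",
)

response = agent2.run_sync("How can I track my order?")
print(response.data.model_dump_json(indent=2))
```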
We get the response back, which is that run result again, but now if we look at response.data, instead of just text content we have an actual Pydantic model, the ResponseModel. What we can do with this is call model_dump_json on it, put some indents in there, and have a look at everything the AI came up with: the response we can send back, whether it needs escalation, whether follow-up is required, and the sentiment. So this is already a step up, but again, nothing that you cannot do with the other libraries out there.

Before we dive into example number three, real quick: I want to tell you a little bit more about how we help developers like you beyond these videos here on YouTube with Datalumina. We have all kinds of free resources and also higher-level programs for learners, students, freelancers, and businesses, so make sure to check out the links in the description.

Now let's go one step further and look at agent 3, another example where we're not only going to get structured output from the model, but we're also going to inject some dependencies, and this is really where it gets interesting. They say they have a novel type-safe dependency injection system, and while I'm sure some of the other frameworks out there also support something like this, I have never seen it implemented in such a simple and elegant way as here. This is also something that I built from scratch and use in all of our projects, because with LLMs it's not only about structuring the output you get back from the model; you also really want to validate the input, because you always need to send some data to an LLM, and that is where the dependencies come in.

In this example, I defined two Pydantic models. We have a customer, which has an ID, name, email, and also some orders, and the orders field is a list of another Pydantic model. By the way, if you're new to Pydantic, make sure to do a quick refresher or a quick tutorial so you understand it, because that is really at the core here. So a customer can have multiple orders, and they follow this Pydantic schema. Here we have agent 3, and next to specifying the model and the result_type, we also specify the deps_type, which we set equal to CustomerDetails. What we can do with this is add those details to the system prompt and essentially make them available as context for the large language model, and by setting it as a dependency and having it in a Pydantic model, we can also validate it, so that is going to be really interesting. Let's see how that works.

I'm going to quickly run this. We have a system prompt over here, which is essentially information that is known when writing the code; these are the prompts that you can structure up front. But now think about a dependency: if you consider, for example, a customer care ticketing system, you can specify all the information and prompts about how your company works, but when a new client comes in with a question, that customer's information is specific and we need to load it at runtime. So we can add dynamic system prompts based on these dependencies, and you do that with the following syntax: we reference agent 3, use the @ symbol to create a decorator, and call @agent3.system_prompt. Then we specify an async function, which you can name whatever you like; in this case we call it add_customer_name, and we add the RunContext as a parameter.
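Here's a sketch of those models, the agent, and the dynamic system prompt. The field names and the dummy schema are my reconstruction from the video, and where the video uses a to_markdown utility from the repo's utils folder, I substitute a plain JSON dump as a stand-in:

```python
from pydantic import BaseModel, Field
from pydantic_ai import Agent, RunContext
from pydantic_ai.models.openai import OpenAIModel

model = OpenAIModel("gpt-4o")

class Order(BaseModel):
    """Structure for order details."""
    order_id: str
    status: str
    items: list[str]

class CustomerDetails(BaseModel):
    """Structure for incoming customer queries."""
    customer_id: str
    name: str
    email: str
    orders: list[Order] | None = None

class ResponseModel(BaseModel):
    response: str
    needs_escalation: bool
    follow_up_required: bool
    sentiment: str = Field(description="Customer sentiment analysis")

agent3 = Agent(
    model=model,
    result_type=ResponseModel,
    deps_type=CustomerDetails,  # validated input, injected at runtime
    system_prompt="You are an intelligent customer support agent. Analyze queries carefully.",
)

@agent3.system_prompt
async def add_customer_name(ctx: RunContext[CustomerDetails]) -> str:
    # The video converts the model to markdown via a helper in the utils folder;
    # a JSON dump works as a simple stand-in here.
    return f"Customer details: {ctx.deps.model_dump_json(indent=2)}"
```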
This syntax is a little tricky at first; you have to look into it, and you can also read through the docs on agents and dependencies to see how it works. But essentially, we import RunContext from pydantic_ai (this is core functionality of the PydanticAI library), we plug in CustomerDetails, the model we specified, and then we say what we want to do with it: we return a string saying "customer details are ...". I have a function to convert a Pydantic model to markdown, because that tends to work better for the models from OpenAI; you can have a look at it, it's in the utils folder. You could, for example, also do a JSON dump, but this helper just takes the Pydantic model and converts it to markdown, and we feed it the dependencies from the context. So this is the syntax for injecting that dependency.

Let's run all of this so we can have a better look at what's going on, and let's specify a customer. First we just define the CustomerDetails model; for the agent we specify that it depends on these customer details, and now a new request comes in from this customer, John Doe. We have all of his information in there plus a dummy order, and with agent 3 we can run it, provide the user prompt, and also pass in the dependency, which we set equal to the customer. Let's run this and have a look at the responses to see what's going on. What you can see right now is that all of a sudden we have two system prompts: next to the original system prompt that we specified on the agent, we have provided the model with the customer details, and here you can see that to_markdown function in action, using the hash signs and new lines to display all of that information, including the orders. So we have now provided the AI system with that, and finally, if we look at the response, we see: hello John, you ordered the following items. John just asked "what did I order?"; it's not in the system prompt, but through that dependency we were able to plug it in.

Now the cool thing here is that since this is all using Pydantic, let's say the customer ID comes in as an integer instead of a string, even though we have specified that the customer ID should be a string. Say you are integrating with various ticketing systems or webhooks and data is coming in; if we try to run this now, we get a validation error, your system will not run, and you will be pointed at the error. So essentially we ensure that every time we run agent 3, we are very certain that we have the right information and the right context that this system needs in order to work with this specific system prompt. Let me put it back and run it one more time so we can get all the messages. We have the response, and we can also use pprint to see that we have the name, the email, the response, and the status.

What you're seeing here is really the key to building production-ready AI systems: you want to very strictly validate your incoming data, make sure you have all the right information, and then pass that to an LLM using well-thought-out system prompts.
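Continuing from the sketch above, here's a hedged reconstruction of the run-with-dependencies step, the validation failure, and the kind of control-flow branch described next; the dummy customer data is illustrative:

```python
from pydantic import ValidationError

customer = CustomerDetails(
    customer_id="1",
    name="John Doe",
    email="john.doe@example.com",
    orders=[Order(order_id="12345", status="shipped", items=["Blue Jeans", "T-Shirt"])],
)

response = agent3.run_sync("What did I order?", deps=customer)
print(response.data.response)

# Because deps are a Pydantic model, bad input fails fast at construction time:
try:
    CustomerDetails(customer_id=1, name="John Doe", email="john.doe@example.com")
except ValidationError as e:
    print(e)  # customer_id must be a string, not an int

# Structured output then drives your control flow:
if response.data.needs_escalation or response.data.follow_up_required:
    print("Route this ticket to a human agent")
else:
    print("Safe to send automatically:", response.data.response)
```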
And then you want to force that LLM to provide you with structured output that you can use within your control flow, because now you can see needs_escalation is set to false and follow_up_required is set to false. Depending on how you design your system, you might now want to directly send this message back to the customer, because there's no follow-up required and no escalation needed, so that would be a good message. But if, for example, follow-up is required, you might want to escalate it to a human agent who can send the message or look up some information, or if it's some kind of weird message, something the AI cannot help with, directly escalate it to a real human agent. As you can see, this is a simple example for customer care; it's a domain we've been doing a lot of work in, and I think it's an example a lot of people can understand, but you can extrapolate this to any kind of system you're building with LLMs where you have to control the flow of your application and get to the desired output that you want to send back to the user or store in the system. This, in my opinion, is a really good feature of PydanticAI: being able to specify these dependency details at the agent level.

All right, next let's get into example number four, which is how to use tools. Tools, in the context of agent systems, are various methods or functions that the AI can decide to use, which then provide you with structured output that you can use within your code, within your functions, to for example retrieve information. Here we're going to look at an example where we call some kind of API or database to get information on the shipping status. I have a very simple example over here; normally this would not live in your codebase but would be an external service that you call via an API, but in this case it's a simple dictionary. We're going to primarily look at order number 12345 with its current shipping information.

To work with tools, we first need to register them, and there are two ways to register tools and two types of tools. Let's start with the types: tool and tool_plain. You use tool when the tool needs to access context from the agent, and tool_plain when it doesn't; in most cases you will be working with the one that requires the agent context. Then there's registering your tools, and there are two ways to do that too: you can use the decorator, where you specify your function (the tool, the logic you need) and use the decorator to add it to the agent, or you can simply provide the agent with a list of tools. I will show both: first the list, and then in the last example, example number five, we'll do it via the decorators (see the sketch at the end of this section). So here we have our tool: a simple function that requires the context, the RunContext with CustomerDetails plugged in, and it outputs a string.
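A sketch of that list-based registration, continuing from the models above; the dictionary contents and the Tool wrapper usage are my reconstruction of what the video shows:

```python
from pydantic_ai import Agent, RunContext, Tool

# Stand-in for an external shipping API or database lookup.
shipping_info_db: dict[str, str] = {
    "12345": "Shipped on 2024-12-01",
    "67890": "Out for delivery",
}

def get_shipping_info(ctx: RunContext[CustomerDetails]) -> str:
    """Get the customer's shipping information."""
    return shipping_info_db[ctx.deps.orders[0].order_id]

agent4 = Agent(
    model=model,
    result_type=ResponseModel,
    deps_type=CustomerDetails,
    system_prompt=(
        "You are an intelligent customer support agent. "
        "Use the shipping tool to look up order status when asked."
    ),
    tools=[Tool(get_shipping_info, takes_ctx=True)],  # list-based registration
)

response = agent4.run_sync("What's the status of my last order?", deps=customer)
print(response.data.response)
```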
It does that by doing a lookup in the shipping_info_db dictionary, accessing the key that we get from the context dependencies: we take the orders, then the first order, and then its order ID. So again, remember the CustomerDetails we specified: that is going to be our context, and the lookup key is ctx.deps.orders[0].order_id.
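For completeness, here's a sketch of the decorator-based registration that the video previews as example five. The decorator names @agent.tool (receives the RunContext) and @agent.tool_plain (no context) come from the PydanticAI docs; the specific tools here are my assumption:

```python
from datetime import date

agent5 = Agent(
    model=model,
    result_type=ResponseModel,
    deps_type=CustomerDetails,
    system_prompt="You are an intelligent customer support agent.",
)

@agent5.tool
async def get_shipping_status(ctx: RunContext[CustomerDetails]) -> str:
    """Context tool: look up the customer's latest order in the dummy DB."""
    return shipping_info_db[ctx.deps.orders[0].order_id]

@agent5.tool_plain
def get_current_date() -> str:
    """Plain tool: needs no agent context."""
    return date.today().isoformat()

response = agent5.run_sync("Where is my order?", deps=customer)
print(response.data.model_dump_json(indent=2))
```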