Create a Custom AI Assistant API in 10 Mins

pixegami
Learn how to build your own AI assistant using OpenAI's Assistants API and how to access it via Python.
Video Transcript:
The Assistants API is a new service from OpenAI that makes it really easy for you to build your own custom AI agent. The AI assistant will look something like this. It'll have access to all the latest GPT models, a code interpreter, and any additional instructions or knowledge you upload to it.
In this video, we're going to build a custom AI assistant like the one you just saw. It'll help us answer questions using global economic data, which we'll provide to it in a PDF. It'll be able to reason about our questions, use Python to run calculations, and call external functions or APIs.
You'll be able to create all of this really easily right here on the OpenAI website. And once created, you'll be able to use this assistant via an API endpoint as well. That means you'll be able to call it directly from your Python or JavaScript code and then use that to build your own custom application around it.
Let's get started. To create an assistant, you need to have an OpenAI account. So make sure you have that first before you begin.
Then head on over to platform.openai.com/assistants.
Click on this create button over here. A tab will open up where you can fill in the name and the instruction or the prompt for your assistant. So go ahead and fill those out.
In this tutorial, I'm going to build an assistant that helps us answer questions by using economic data and cost of living data of different cities around the world. Next, you'll be able to choose the LLM model that you want to use for this assistant. These are priced at pay-as-you-go rates, so pick the one that works for you.
Here, I'm just going to pick GPT-4. And once you have all of that done, you can click save. And now we can try out our assistant by clicking this test button over here.
You should now see an interface like this and you can just type in any question or message for your assistant and then click run. So here I've asked my assistant what is the most livable city in the world in 2023. And the assistant wasn't able to give us a useful answer.
That's because we haven't provided the data or the knowledge for it to use to answer our questions yet. So let's see how we can fix that. If you want your assistant to be more useful than just using ChatGPT, then you probably want to add some custom data or custom knowledge to it.
This is going to be some special knowledge that is really useful for your application, but something that ChatGPT won't normally have access to by default. It could be a text file, a PDF file, or even a CSV table. For this app, I've downloaded the "2023 Global Liveability Index Report" from The Economist to use as my custom knowledge source.
It's got a bunch of data from cities all around the world and data about their infrastructure, their healthcare, culture, environment, and things like that. And I want my assistant to be able to answer questions using data from this report. If you're following along and you want to download the same report for your app, then you'll find a link to it in the video description.
Otherwise, feel free to use any of your own data. To add this to the assistant, go to the Tools section, turn on the retrieval feature, and click save. Then click to add your file, and once it's uploaded, click save again.
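If you'd rather wire this step up from code instead of the web UI, here's a rough sketch of how it looks with the SDK version used later in this video, which still takes a retrieval tool and a file_ids list (newer versions of the Assistants API have since changed this interface). The file path and assistant ID below are placeholders.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Upload the report so the assistant can use it as knowledge (the path is a placeholder).
report_file = client.files.create(
    file=open("global_liveability_index_2023.pdf", "rb"),
    purpose="assistants",
)

# Turn on retrieval and attach the file to the assistant (the ID is a placeholder).
client.beta.assistants.update(
    "asst_your_assistant_id",
    tools=[{"type": "retrieval"}],
    file_ids=[report_file.id],
)
```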
Now let's clear the page and test our assistant again to see if it's able to retrieve our data. Now this time the response is much more useful. It discovers that the most livable city in the world, according to this report, is Vienna.
And if you look here, you also see that there's like a little numbered annotation. And if you mouse over that, you can get a snippet of where it retrieved that piece of data from. So as you can see, this assistant is working really well with our data.
It's giving us all the right information from this report and is doing it through a human-friendly interface. But what if I needed my assistant to fetch additional information or call other functions or APIs? For example, let's say I wanted to ask it a question like this.
If I had a function or an API available to help me calculate this, I can integrate it with my assistant using custom functions. Custom functions are external functions that you want your assistant to be able to call. You define them with a JSON schema that describes what the inputs to the function are.
Whenever your OpenAI assistant wants to use one, it will return you a JSON object conforming to that schema. You're then supposed to call the function with that input yourself. So for example, say we want a function to get the cost of living in a city.
You could add a function schema like this. You specify the name of the function, a description of what the function does, and then all the available parameters that you want OpenAI to fill out for you when it wants to call this function. Let's pop back to our assistant window and add that over here.
So in this tools tab, click to add a function. And here you have a couple of examples that you can browse, but I'm just going to paste the function code that I showed you earlier. So we'll add that and then we'll click save.
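Roughly, the schema I'm pasting has this shape. Treat the function name, description, and parameters here as illustrative placeholders rather than the exact values from my screen; it's written as a Python dict, but it mirrors the JSON you'd paste into the "Add function" dialog.

```python
# The JSON you paste into the function dialog has this shape
# (the function name and fields here are made up for this example).
cost_of_living_function = {
    "name": "get_cost_of_living",
    "description": "Get the average monthly cost of living (in USD) for a given city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {
                "type": "string",
                "description": "The city to look up, e.g. 'Vienna' or 'Melbourne'.",
            }
        },
        "required": ["city"],
    },
}
```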
With that done, let's try it out. Here I've asked it what is the top city that speaks English as a national language, and it's told me that this is Melbourne in 2023. And then I followed up by asking what is the cost of living there.
And now it's using my cost of living function and it's provided the input as a payload here. And it did this by looking at the function schema. Now for testing purposes, you're supposed to run this function yourself.
You could just put a sample response or you could take the function and calculate it yourself and then put the response into this text field and just click submit. And once you provide it with the response, the assistant will continue running as normal using that information. In a production app, you'll probably just want to write some code to call this custom function with this JSON input yourself.
I don't think that OpenAI is able to do this for you automatically just yet. But assuming that you can get this result successfully, you're in business. You've now created a custom function interface that your OpenAI assistant knows how to use.
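To give you a rough idea of what that looks like with the API flow we'll set up later in this video, here's a sketch of handling a run that's waiting on a function call. The get_cost_of_living helper is hypothetical, and the client, thread, and run objects are the ones from the SDK code coming up.

```python
import json


def handle_required_action(client, thread_id, run):
    """If a run is waiting on our custom function, call it ourselves and submit the result."""
    tool_outputs = []
    for tool_call in run.required_action.submit_tool_outputs.tool_calls:
        if tool_call.function.name == "get_cost_of_living":
            args = json.loads(tool_call.function.arguments)
            # get_cost_of_living is your own function or API call (hypothetical here).
            result = get_cost_of_living(args["city"])
            tool_outputs.append({"tool_call_id": tool_call.id, "output": json.dumps(result)})

    # Hand the outputs back so the assistant can continue the run with that information.
    return client.beta.threads.runs.submit_tool_outputs(
        thread_id=thread_id,
        run_id=run.id,
        tool_outputs=tool_outputs,
    )
```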
Now there's one more tool available to us that I want to show you, and this is the code interpreter. So go back to your tools section and you can turn it on right here. This will now give our AI assistant the ability to run calculations and even plot graphs by writing and running its own Python code.
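As an aside, if you're configuring your assistant from code rather than the playground, the same SDK can toggle this tool too. Here's a rough sketch, with a placeholder assistant ID, assuming the same version of the API used in this video.

```python
from openai import OpenAI

client = OpenAI()

# Add the code interpreter alongside retrieval (the assistant ID is a placeholder).
client.beta.assistants.update(
    "asst_your_assistant_id",
    tools=[{"type": "retrieval"}, {"type": "code_interpreter"}],
)
```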
So now that I've enabled this, let's ask it to plot some of this data as a graph. Now it might take a while, but you should see a graph like this when it's done. And it's generated this graph from the data in our PDF right here.
So it's pulled that out and then plotted it as this heatmap. You can also click open this code interpreter tab to see the code that it's actually written to produce this graph. So this is really useful if you have data that you just want to be able to talk to via this chat interface and have it visualize different pieces of the data or different relationships of the data for you.
As you see, once it's produced the graph, it will just show up directly in the chat window. After you've built and tested your assistant in the OpenAI Playground, you might want to take it one step further and use it via an API endpoint. That way you'll be able to use it as part of your own custom application or even build your own UI or SaaS product around it.
If you're serious about using this, then I recommend you go to the documentation page yourself and read more about how to use the Assistants API in detail. It's got all the instructions you need here and a bunch of useful examples. Otherwise, if you just want to see how to quickly use it, then let me show you really quickly.
You're first going to need an OpenAI account and you're also going to need to have your API key set as an environment variable. So if you haven't done that yet, make sure you sign up and get that API key first. Go back to the assistant that you want to use and copy your assistant ID, which you should find right here under the name of the assistant.
Then we'll pop over to our code editor and create a new Python file and we're going to store this assistant ID. We're going to need that later when we want to call the assistant. Also make sure that you have the latest version of the Python OpenAI SDK.
You can run this command to upgrade it if you're not sure. Currently I'm using version 1.3.7.
In your Python file, import the SDK and create a new client. If your OpenAI key is already configured in your environment variable, you'll be able to pick that up right away.
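Roughly, that setup looks like this; the assistant ID is a placeholder for the one you copied earlier.

```python
# To upgrade the SDK: pip install --upgrade openai   (I'm on 1.3.7 here)
from openai import OpenAI

ASSISTANT_ID = "asst_your_assistant_id"  # placeholder -- paste the ID you copied

# The client picks up OPENAI_API_KEY from your environment automatically.
client = OpenAI()
```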
Otherwise, you can pass it into the constructor as well. Before you can interact with an assistant, you need to create a thread. This is the code you can use to create a new thread.
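Something like this, where the question is just an example:

```python
# Create a new thread that starts with a single message from the user.
thread = client.beta.threads.create(
    messages=[
        {
            "role": "user",
            "content": "What is the most livable city in the world in 2023?",
        }
    ]
)
```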
It's going to start with this single message from the user. Every thread is unique and asynchronous. You can think of them like opening up a new chat window with your assistant.
Once you have this thread, you can submit it to your assistant to actually run it. This is like clicking the run button that we saw earlier on the website. As a result, you'll get this run object which has an ID.
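Using the client, thread, and assistant ID from the earlier snippets, that looks roughly like this:

```python
# Submit the thread to the assistant -- the equivalent of clicking "Run" in the playground.
run = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id=ASSISTANT_ID,
)
print(run.id)  # the run has an ID, but no results yet
```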
It doesn't contain the results of the run though. The run is asynchronous, so you have to wait for it until it's finished. To wait for it, we can do that using a loop.
There's probably a better way to do this, but this is pretty easy to understand. So let's start with this for now. We're just going to keep checking the status of the run every second until it is completed.
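A simple version of that loop, reusing the run object from the previous snippet, might look like this:

```python
import time

# Keep checking the run once a second until it's no longer queued or in progress.
while run.status in ("queued", "in_progress"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

print(run.status)  # ideally "completed"; it can also end up "requires_action" or "failed"
```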
Once you know that the run is completed, you can then use this code to list all of the messages currently in the thread. That's going to give you a list of messages in the reverse order. So the message at index 0 is actually the last message that was added to the thread.
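For example, continuing with the same thread object:

```python
# List the messages in the thread; the newest message is at index 0.
messages = client.beta.threads.messages.list(thread_id=thread.id)
latest_message = messages.data[0]

# The assistant's reply is in the first content block as text.
print(latest_message.content[0].text.value)
```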
In this case, that latest message is the response from the AI that we care about, so that's what we print out. So now I'm in my code editor with all of the code that we just walked through earlier.
Let's go ahead and run that. You can see here it's created this run with a unique run ID, and now we're polling the run for its progress. When the run's completed, we can get this response from our AI.
So here's the response, accessible to us via the API. I hope this serves as a good starting point for you. Once you have this result, you can do anything you want with it.
You can put it into a FastAPI server or maybe use it as part of a Lambda function. It's really up to you. Now if you want to go a step further and learn how to build your own custom UI using Python, then I also recommend you check out this Streamlit video tutorial next.
It gives you a really easy way to build interactive apps with Python, perfect for something like a chatbot, and you'll also be able to deploy it and share it really easily. Otherwise, I hope you enjoyed this video. See you next time.