HuggingFace Langchain | Run 1,000s of FREE AI Models Locally

Tech With Tim
Video Transcript:
Today I'm going to show you how to access some of the best models that exist, completely for free and locally on your own computer. We're going to do that in just a few very simple lines of code using Hugging Face and LangChain. I'll show you how to use various Hugging Face models for free inside of a Python application, and then we'll connect them to LangChain so we can build something even more interesting. With that said, let's get into the video.

First, some information that's important to understand. We're going to be using something called Transformers. This is a free, open-source package that you can install with Python using pip, if you're familiar with that, and it allows you to utilize tons of different models that come from Hugging Face. Hugging Face is a platform, you can see it right here, that has all kinds of open-source models available. Some of them you do need to pay for, or you need specific access, but most of them are completely free and you can just download them and use them on your own computer. What the Transformers package does is make it extremely easy to access these free Hugging Face models, download them, and run them on your machine. You'll see here that in just a few lines of code you can run pretty much any of these models, assuming you have the correct hardware, and if you wanted to go further you could even fine-tune them and add a bunch of advanced configurations.

We're going to use Transformers in combination with something called LangChain. LangChain is another Python package that makes it a lot easier to work with LLMs. If you want to use one of these Transformer models, you can connect it to LangChain and then add things like memory, or chain multiple models together, and I'll show you the basics of that in this video.

So that's the information; now let's get all of this set up. First things first, we're going to make an account on Hugging Face. This is because a lot of these models require that you accept some terms or their license agreement before you're able to actually download and use them. So go to Hugging Face (I'll leave the link in the description), make a new account from the button in the top right-hand corner, then go to Access Tokens. We'll leave that page open and come back to it in a second.

To get started, we need to set up our environment to be able to use these different Python packages. This does take a second, so just bear with me; once the environment is configured, the code itself is very easy to write. You'll notice that I'm using PyCharm. That's the IDE I'm going to use for this video, and I actually have a long-term partnership with them, so if you'd like to use PyCharm as well, which I'd definitely recommend, you can click the link down below for a three-month free extended trial of PyCharm Professional Edition. There's a Community Edition, which is completely free, and then there's the Professional Edition, which has a lot of other features. Specifically, if you're into data science, there are a lot of features here that can help you out: for example, it has a direct integration with Hugging Face, and it works very well with TensorFlow, PyTorch, Conda, Git, Jupyter notebooks, databases, all kinds of stuff. You'll see some of PyCharm's great features in this video. Anyway, you can use whatever IDE you want, but if you do want to check it out, again, I have those links below.

Okay, so first things first, we want to create that virtual environment. I'm going to open up my terminal here and execute a command that will create this environment for me. You do need Python installed in order for this to work, and what the virtual environment does is create an isolated area where we can keep the dependencies specifically for this project. To do that, type python -m venv venv. I'm running on Windows, so I'm using the python command, but if you're on macOS or Linux you can use the python3 command in case you get any errors. This uses the venv module to create a new virtual environment called venv. You've seen that in my editor I just opened a new folder, which I called HF tutorial, and you'll notice a new directory was created called venv; that's my virtual environment.

The next step is to activate the virtual environment so that it's used when we install our various dependencies. To do this manually from the command line on Windows, run venv\Scripts\activate (the name of your virtual environment, which should be venv if you followed along, then Scripts with a capital S, then activate). If you run this, you should see the name of your virtual environment appear as a prefix in your terminal. If you're on macOS or Linux, the command is different: run source venv/bin/activate (this time it's bin, lowercase). That will activate the virtual environment for you. If you're working inside PyCharm and want to use the correct interpreter, you can press the interpreter button down here, then go to Add New Interpreter, Add Local Interpreter, press Existing, and select the interpreter in your current directory; you can see it at HF tutorial/venv/Scripts/python.exe. Press OK and it will be selected, you'll see it pop up down here, and now all of your autocomplete will work with those packages. When you want to run your Python code, simply type python and then the name of the file, for example python main.py, hit enter, and as long as you're in the virtual environment all of this will work.

Okay, next step: we're going to make a new file inside of our directory called requirements.txt.
This is where we're going to put the various requirements we need in order to work with the Transformers and LangChain libraries. Inside of it, we're going to type transformers (if we spell that correctly), then langchain, and then langchain-huggingface, which gives us the Hugging Face integration for LangChain. Those are the three main packages you need. Now that you have the requirements.txt file created, from your virtual environment run the command pip install -r requirements.txt. This reads the requirements.txt file and installs all of those packages into your virtual environment. I'm going to press enter; you can see these are all cached because I've installed them previously, and then they get installed. If you're on macOS or Linux, again, you can try the command pip3 in case pip doesn't work for you.

Okay, so all of that's been installed, and the next step is to simply get our Hugging Face token. We need the token because a lot of the models require that you accept a license agreement; they're free to use, but you have to essentially check a box, and that's connected to your Hugging Face account. If you try to pull certain models, it will give you an error, and that's because you haven't added the token. So from your user access tokens (again, go to Hugging Face, create an account, press on your profile, press Access Tokens) simply create a new one. I can just call it "python 2" or something, give it Read mode, and press Create Token. Copy this token (obviously don't leak it to anyone like I am right now), and then type the following command: huggingface-cli login. Note that we're not pasting the token into the command itself. This command will work as long as you've installed Transformers in your virtual environment. I'm going to hit enter, and you'll notice it simply asks for my token, so I'm going to paste the token in, hit enter, and then say yes for my Git credential. When you paste it, you won't see the token; just paste and hit enter to go to the next line. Okay, so now the token has been added; you can see the currently active token is the one we just named, and now we'll be able to pull those models and accept the terms and conditions. So we're almost done here; that is pretty much the environment setup.
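Incidentally, if you'd rather authenticate from Python instead of the CLI, the huggingface_hub package (installed alongside transformers) provides a login() helper. A minimal sketch, assuming you've exported your token in an environment variable, here called HF_TOKEN:

```python
import os

from huggingface_hub import login

# HF_TOKEN is an assumed variable name: export it in your shell
# beforehand so the token never ends up hard-coded in source control.
login(token=os.environ["HF_TOKEN"])
```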
Now what we're going to do is make a new file. I'm going to go New File and call it main.py, and inside of it we're going to write the code that will allow us to start using some Transformer models. Just bear with me while I write some basic code, and then we'll explain it after.

First thing we're going to do is say from transformers import pipeline. Now, pipeline is a simplified way of running various models; if you want to do this in a more advanced way, there are ways to do that, but I'm going to keep it simple for this video. After that, I'm going to define my model. I'm going to say model = (you can see I'm getting some autocomplete here in PyCharm, and I'm actually not going to use it), then pipeline, and then I'm going to paste in the following: the task, summarization, and then model= followed by the model name.

Believe it or not, this is actually as easy as it is to run a model from Transformers, i.e. from Hugging Face, inside of Python. The pipeline function automatically sets everything up for me, and all I have to do is specify the task (there are various tasks like summarization, text classification, and text generation) and the model I want to use for that task. I'll show you how to find models in just one second, but this is one we can use for summarizing text. We specify the model, and then all we have to do is say response = model(...) and pass in, in this case, the text to summarize, and it will literally just summarize the text for us using this machine learning model. The first step is that it's going to download the model, which will take a second; once it's downloaded, it will summarize the text when we call the model function, and then we can print out the response.

So believe it or not, that's literally all you need to do. There are some more setup steps and things I'll talk about in a second, but for now I just want to run this code and make sure it works. You can run it manually from your terminal with python main.py, as long as you're in your virtual environment.
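For reference, here's a minimal sketch of what main.py looks like at this point. The exact model name isn't legible in this transcript, so facebook/bart-large-cnn (a common summarization model on Hugging Face) stands in as an assumption; use whichever summarization model you picked:

```python
from transformers import pipeline

# pipeline() downloads the model on first run and caches it locally.
# The model name below is an assumed stand-in, not necessarily the
# one used in the video.
model = pipeline("summarization", model="facebook/bart-large-cnn")

# Pass the text to summarize; the result is a list of dicts,
# e.g. [{"summary_text": "..."}].
response = model("text to summarize")
print(response)
```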
Because I'm in PyCharm, though, I'm just going to use the Run button here, and it runs it for me in my terminal; let's see if we get any errors. Okay, you can see this did indeed work, and it gave me this summary_text. Now, it doesn't really make a lot of sense, because I passed something that doesn't make sense to the model; it was only three words, so it didn't really know how to interpret it, and it gave me kind of a random summary that isn't really relevant. If we had a larger text, it would actually summarize it properly, so don't worry about that; this is just a quick example. For you, this probably took a while to run because you needed to download the model first. Once the model is downloaded, it's stored on your machine, and you can use it very quickly after that.

Now, you'll also notice that we're getting some warnings (I'll show you how to mute those in a second), and that it says the device is using the CPU. This is the main thing I want to focus on: how do we get this to use your GPU? Obviously, if you have a GPU, you want to use it, because it's going to be hundreds if not thousands of times faster. So let me show you how we do that, and we'll also talk about how to get different Hugging Face models. Just bear with me, there's a lot of stuff to cover.

First, let's look at how we can run these models using our GPU. In order to do that, you're going to need an NVIDIA GPU, or at least that's what I'm going to recommend, because that's what works the best. For your NVIDIA GPU to be used, you need to download and install CUDA. I'm just going to go to the CUDA website (I'll link it in the description as well): search for "CUDA Toolkit" on Google and download it. Note that it's only available for Linux and Windows. Once CUDA is downloaded and installed, you'll be able to move on to the next step. So again, if you want this to work with your GPU, download CUDA; the link will be in the description.

Once CUDA is installed, restart any terminal instance you have open (close your IDE and reopen it), and type the following command to verify that it's installed: nvcc --version. You should see some output telling you which version of CUDA you have.
In my case, I have 12.6; you probably have 12.8 or something more recent if you're downloading it today, as you watch this video. Now that CUDA is installed, we can move on to the next step, which is to install PyTorch built specifically for GPU. I'm going to copy in a command, which again I will leave in the description (sorry, let me just get out of this, because for some reason when I copied it, it automatically ran, which I didn't want). You can see that it says pip install torch torchvision torchaudio, and then specifies an index URL that ends in a number like cu126. If you downloaded CUDA 12.8, change this to cu128; in my case it was 12.6, so I'm using cu126. What this does is install PyTorch for GPU in your virtual environment, and then you'll be good to go to use your GPU. So again, this command will be in the description; simply hit enter, install it in your virtual environment, and you're set.

Okay, that was successfully installed, so I can go ahead and close the terminal. To verify that this is working, we're going to paste the following code into our Python program. All we need to do is import torch and then simply check whether our GPU is available and what the name of that GPU is, by printing torch.cuda.is_available() and torch.cuda.get_device_name().
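The transcript cuts off mid-line here, but based on the check just described, a minimal sketch of the verification looks like this. The device=0 argument on pipeline() at the end is my assumption about the natural next step (it's the standard way to pin a transformers pipeline to the first CUDA GPU), not something shown in the video:

```python
import torch

from transformers import pipeline

# True only if the CUDA build of PyTorch can see an NVIDIA GPU.
print(torch.cuda.is_available())

if torch.cuda.is_available():
    # Name of the first GPU, e.g. "NVIDIA GeForce RTX 4060".
    print(torch.cuda.get_device_name(0))

    # Assumed next step: device=0 runs the pipeline on the first CUDA
    # GPU (the default, device=-1, runs on CPU). Model is a stand-in.
    model = pipeline("summarization",
                     model="facebook/bart-large-cnn",
                     device=0)
```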