How to Build a No-Code RAG System (Pinecone + Make.com)

The AI Automators
👉 Get all of our make.com templates here (including this one): https://www.theaiautomators.com/?utm...
Video Transcript:
Hey everyone, Daniel here from The AI Automators. Today I've got a great tutorial for you where I show you how to load your very own knowledge base of documents into your make.com automations, so that when you get Claude or ChatGPT to generate articles for you, they're highly accurate and highly personalized based on what's in those documents. If you're into building AI solutions for clients, this is a powerful technique to learn, as it can really open the door to bigger clients and bigger budgets. In this tutorial we'll be loading all of the information from this e-commerce store into Pinecone, which is a vector database, and then when we're generating content we'll first retrieve highly semantically relevant information to dramatically improve the accuracy of the text. This tutorial is totally no-code, so even though it sounds complicated, anyone will be able to do it, and these make.com blueprints will be included in our automation community, where you can quickly download them and import them into your make.com account, so you'll be up and running in a few minutes.
The technical name for this technique is retrieval-augmented generation, or RAG. In other words, we are searching for highly relevant information that we can use to augment prompts when we're generating new content. This RAG-style system is one of the most common use cases for businesses implementing AI, as they typically have lots of documents that need to be included when generating outputs, but those documents aren't publicly accessible. Before we begin, if you're new to this channel, my name is Daniel Walsh; my brother Alan and I run The AI Automators, an online community where we help you automate your business like a pro.

Before we dive in, let's have a quick look at the end result. This is the online store that we're generating an article for, and I've loaded all of the information from this online store into our vector database. Then, with a simple prompt such as "generate an article based off a 20% off sale", it's able to output this highly accurate article that has images, product descriptions, and accurate specifications for the products. It even has accurate shipping and delivery times, plus information on returns and gift cards. That highly accurate article was generated using this RAG-based system. Let's get started.

Generally speaking, there are two stages to a RAG system. The first stage is to load your knowledge base into a vector database. When you're gathering data, this could mean crawling a website, or uploading PDFs, Word documents, Excel, CSV, or JSON files. Once you gather the data, you send it into an embeddings model; in this system we'll be using OpenAI's text-embedding-3-small model. Then, when you have these vector embeddings, or numerical representations of the data in the knowledge base, you can upload them, or upsert them, into the vector database. When I say upsert, I mean that if it's fresh data it will be inserted into the vector database, whereas if it's data that's already in the database and needs to be updated, it will be updated as part of this flow. At this stage you have a vector database that represents your knowledge base, so you can query it to find relevant results.

The second stage of a RAG system is to generate content based on a query. Step one, we need to understand the query; in this video the query will be "write an article about an upcoming sale for this online shop". Step two, we generate an embedding of that query, again using that OpenAI embedding model. Step three, we send the embedding of that query into the vector database and retrieve the most relevant knowledge for that query. Then we include that relevant knowledge in the prompt when we generate the content, and that could be through any large language model: Anthropic's Claude, OpenAI's GPT-4o or o1, or open-source models like Llama 3 or Mistral.
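If it helps to see those two stages outside of make.com, here is a minimal Python sketch of the same pipeline. The embed helper, vector_db object, and llm callable are hypothetical placeholders standing in for the embedding model, the Pinecone index, and the chat model; the real equivalents are built visually as Make modules later in the tutorial.

```python
# Minimal sketch of the two RAG stages described above.
# "embed", "vector_db", and "llm" are hypothetical stand-ins for the
# OpenAI embeddings model, the Pinecone index, and the chat model.

def ingest(documents, embed, vector_db):
    """Stage 1: embed each document and upsert it into the vector database."""
    for doc in documents:
        vector = embed(doc["text"])                       # numerical representation
        vector_db.upsert(id=doc["url"], values=vector,    # insert or update
                         metadata={"content": doc["text"]})

def generate(query, embed, vector_db, llm):
    """Stage 2: embed the query, retrieve relevant knowledge, augment the prompt."""
    query_vector = embed(query)
    matches = vector_db.query(vector=query_vector, top_k=4)   # most relevant records
    context = "\n\n".join(m["metadata"]["content"] for m in matches)
    prompt = f"{query}\n\nBase the article on the following information:\n{context}"
    return llm(prompt)
```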
To really hit home what this RAG system can achieve, I'm going to go through three different prompts. The first one is a prompt without context: here I'm asking an LLM to write an article about an upcoming 20% off sale for a shop called Luma, featuring a couple of products. If we were to send this prompt into a model, that model would have absolutely no idea what shop this is or what the products are, and it would absolutely hallucinate an answer and just try to pad out an article. An improvement on that is what we call contextual prompting, or at least manual contextual prompting: we're asking the same question, but we then say "base the article on the following information", and we manually paste in information about the shop, information about the products, and so on. This is a brilliant prompt engineering technique that dramatically improves the accuracy of outputs from a large language model. The problem is that it's manual. Some of the more recent models have much larger context windows; Llama 3.2, for example, can handle up to 128,000 tokens, roughly 100 pages of text, and this context window length will most likely increase as new models come on stream, which means you can stuff lots more information into each prompt to generate accurate outputs. The problem with that is long prompts cost money, and you may end up with less accuracy: with so much information in the prompt, there's a lot of noise for the LLM to work through to figure out what to output. Sometimes it's better to be quite refined and focused about the information you provide to the large language model, to give it the best chance of producing an accurate output.

That's where RAG comes in. With RAG it's still contextual prompting, except it's dynamic. It takes this prompt, creates an embedding of it, sends that to the vector database to retrieve the most relevant results for this text, and then dynamically enters those results into the prompt at the bottom here. There are many benefits to this approach, the main one being that you don't need to keep manually pasting in information relevant to the prompt, and if you create a RAG system that continually updates, like the one we're going to build today, then the information that's retrieved is always up to date.

A quick note on OpenAI Assistants versus the type of system we're going to go through today: I created an AI agent video where I used OpenAI Assistants and built a Make automation to continually update the vector store and the files within that Assistant, so check out the card above if you're interested. It is worth discussing the differences, though. RAG with OpenAI Assistants is very easy to set up, so if you're at the beginner end of the scale I would start there; using Pinecone and Make to create a RAG system is definitely on the more complicated side. With OpenAI Assistants you are very much locked in to using OpenAI's embedding models and their large language models, whereas with the Pinecone approach you can use any embedding model and any large language model. The concept of using a vector database can even be implemented locally, not with Pinecone and Make, which are both cloud-based platforms, but with something like FAISS, a local vector database, and n8n, which can be self-hosted.

I've created a dummy e-commerce store, which you can see here. It has about 200 products along with a host of informational articles, and what we're going to do is load all of the information on this website into Pinecone, our vector database. To demonstrate, I created a Google Sheet which contains all of the static pages, and I've also brought in 30 different products from the store. I'm showing you a Google Sheet here, but this data could really be in any format: it could be scraped from a website, it could be a CSV or an Excel file, it could be JSON files. The important thing is that you can get it into some sort of digestible format so that we can process it.
Next up, we need to set up an account on Pinecone, the vector database. When you go to pinecone.io you'll be able to create an account, or log in if you already have one. You can see their pricing here: they have a free plan, which is what I use, and that gives you access to five different vector indexes, with the various thresholds and usage limits shown for that free plan. They also have a pay-as-you-go plan if you need higher limits. Once you sign up you'll land in a dashboard like this, and your Pinecone account is ready to be used.

Next, we're going to go to make.com and create a new scenario.
The first module we'll create is a Google Sheets module, because we want to load all of the information on this sheet and upload it into the Pinecone database. Click on Google Sheets, and for the moment we're going to use "Get Range Values": just type in "get" and there it is. You'll then need to select the sheet, so choose the file, and then choose the sheet name. I've created the first tab, called Knowledge Base, and I'm going to load the range from A1 all the way through to, let's say, M51. Again, this will be specific to however you're loading your documents and information, but it will work for this approach. Excellent, so we'll press OK. If I save this and run it, you'll see that it has output a very large number of bundles, 50 different bundles, so for the sake of testing I'll limit this to two or three rows. I'll change the range from A2 to, say, M4, save, and run again; now we just have three bundles. We'll get the workflow up and running and then expand it out to the full data set.

Next, we need to create what's called an embedding. If I jump into one of my other indexes in Pinecone and open up a record, you can see I've already uploaded a product from this e-commerce store, and if you look down below there's a field called text. If I copy that out and show it to you, that's the text associated with the product: the product name, a description, a link to the product, categories, price, whether it's in stock or not, product images, and so on. That's all of the information associated with that product. But while that data is stored in Pinecone, it's not what's used to find the most relevant results. Pinecone is a vector database, so it needs numbers, as you can see here. These decimals, or vectors, essentially represent the meaning of the text. That sounds pretty strange, but it's effectively how large language models work: they embed plain text like this into numbers, and all of that is fed into the algorithm to figure out the statistically most likely next word given a certain context. So what we need are these decimals, or vectors, and to generate them we need an embedding model. If you go to OpenAI's platform, for example, you can see they have a number of different embedding models that generate these kinds of vectors, and today we're going to use the text-embedding-3-small model. We'll feed all of the information from our knowledge base into that model to generate the numbers that we can then upload to Pinecone.

Back in make.com, if we click on the plus icon and type in OpenAI, you might look for an embeddings module, but if you search for embeddings you're not going to find one; they haven't actually created a make.com module for embeddings.
Instead, what we need to do is click on "Make an API Call". We press that, type in /v1/embeddings, and POST to that endpoint. Then, at the very bottom under Body, we need to put in some text. If we jump back to the OpenAI platform we can see the API reference for this endpoint: there's Embeddings, we're going to be POSTing to it, and this is the body it expects. So we copy that out and bring it in here; let me make some space so you can actually see it. When you're hitting arbitrary endpoints from Make, it's always best to remove the extra whitespace. In this example, the text we're creating vector representations of is "the food was delicious and the waiter...", and rather than the ada-002 model we'll use text-embedding-3-small. We can leave the encoding format as float, a floating-point number, which is a decimal. Press OK, leave the text as that dummy text for a second, then right-click and run this module only. You'll see that we've passed "the food was delicious and the waiter..." and in the response we get a body, and under data we get all of the embeddings we're looking for. Let me zoom in: these are the numbers that represent the text we just passed in, and this is what we need to save in Pinecone along with the original text; there's the input, "the food was delicious".
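Outside of Make, that "Make an API Call" step is just an HTTP POST to OpenAI's /v1/embeddings endpoint. Here's a minimal Python sketch of the same request, assuming your API key is in the OPENAI_API_KEY environment variable; the dummy text mirrors the example used above.

```python
import os
import requests

# Same call the Make module performs: POST https://api.openai.com/v1/embeddings
response = requests.post(
    "https://api.openai.com/v1/embeddings",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "text-embedding-3-small",   # 1536-dimensional embeddings
        "input": "The food was delicious and the waiter...",
        "encoding_format": "float",
    },
    timeout=30,
)
response.raise_for_status()

embedding = response.json()["data"][0]["embedding"]  # list of 1536 floats
print(len(embedding), embedding[:5])
```

Note that the json= parameter serializes and escapes the request body for you, which is exactly the problem the next step has to work around inside Make.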
Next, let's remove that dummy text and put in the text we're retrieving from the Google Sheet. Click out of that, back into here, delete the dummy text, and you'll see the popup with the various variables from the Google Sheets module; for this we'll pick, say, Description, and press OK. We'll save that, and actually we'll limit the range to a single row while we get the scenario up and running, so A2 to M2, save, and Run Once. OK, we've encountered an error, so let's have a look at it. The problem is that it can't parse the JSON body, and the reason is that the text isn't escaped properly. If we dive into the description you'll see there are various problems with it: there are JSON-unsafe characters in there, so we need a way to escape all of this text so that it's JSON-friendly and can be passed in a call to an endpoint like this OpenAI one. This is a major limitation of make.com at the moment: there is no native way to do this when you're using dynamic text and dynamic structures, so there's a bit of a workaround that I'll show you here, and I'll drop it into the description below so you can use it in your own automations.

We come in here, right-click, and add a module: Set a Variable, or rather Set Multiple Variables, because I have a feeling we might need another one. We'll name this one "JSON safe string", and what we need to add is this expression. It looks pretty horrific, and it really is a workaround, but what it's doing is replacing or escaping all of the JSON-unsafe characters so they can be processed in the request. The value in black here, which references 7.3, needs to be deleted, and you then drop in the description in its place. Press OK, and before running let's put a filter here so the flow can't go past this stage: we set a condition of 1 equals 0, which is never going to be true. Save and Run Once, so that only these two modules run and not the API call. If you look at the first module and click Description you'll see the initial description from the Google Sheet, but when you click this one you'll see the new description where all of the characters have been escaped, so it can now be handled by a web service endpoint. That's the JSON safe string. Now let's remove the filter, come back in here, and instead of passing the description to the endpoint, pass this JSON safe string. Click OK, save, auto-align, and run. Because the description has now been cleaned so that it can be passed to the endpoint, this succeeds, and if we open it up we can see what we passed, and then in the body, under data, the embeddings. There we have it: the vector representation, the numeric representation, of this text, a large array of vectors that we can now import into Pinecone to represent this information.

Before we go any further, I'll create another variable to represent all of the information associated with a row on this sheet, so that we can clean that variable and pass it to the endpoint. I know I created a Set Multiple Variables module, but I'm just going to create a new Set a Variable module, call it "full record", and work through and populate the values for each column. OK, there we have it. Back in the embeddings call, instead of using the description I'll use that new string I just created, full record. Let's leave the filter in and run it to see how it looks. Excellent: we now have the title, page title, page link, and some of the other headings where there isn't actually any information. We'll remove that filter again, and now we know we can generate a vector for any record on our Google Sheet.
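For reference, the whole "JSON safe string" workaround is doing what any JSON serializer does automatically. A quick Python sketch of the idea; the sample description and the replace() chain are illustrative only, not copied from the sheet or from the Make expression.

```python
import json

description = 'Soft "foam" yoga brick\nLightweight & durable'  # raw text with JSON-unsafe characters

# What the Make workaround approximates: escaping backslashes, quotes, newlines, etc. by hand.
manually_escaped = (
    description.replace("\\", "\\\\").replace('"', '\\"').replace("\n", "\\n")
)

# What a JSON serializer does for you: json.dumps() returns a fully escaped string literal.
json_safe = json.dumps(description)

print(manually_escaped)
print(json_safe)  # safe to drop into the body of the /v1/embeddings request
```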
Next, we need to push that vector into Pinecone, into an index. Click on plus and search for Pinecone. First we're going to use a "Get a Vector" module, because we want to check whether the vector already exists: if it doesn't, we'll create it, and if it does, we'll update it. That's called upserting; you either update or insert depending on the context. I already have a connection created here; if you don't, you'll just need to click Add and type in an index name and an API key. To get those, go to Indexes in Pinecone and create an index; we'll call this one make-automation. When it comes to dimensions, click "set up by model": we're using text-embedding-3-small in this case, so I'll click that and it loads 1536 into the dimensions field. Everything else I'll leave as is, and then I'll click Create Index. That gives me my index name, so when I go back into Make I can drop it in there. For the API key, go to API Keys and copy one that already exists or create a new one, then drop that API key into the connection; I'll just copy what I already have and paste it in.
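If you ever want to script the same index setup instead of clicking through the dashboard, the current Pinecone Python SDK can do it. This is a sketch assuming a serverless index; the cloud and region values are placeholders, so pick whatever your account uses.

```python
from pinecone import Pinecone, ServerlessSpec

pc = Pinecone(api_key="YOUR_PINECONE_API_KEY")  # same key you paste into the Make connection

# 1536 dimensions to match text-embedding-3-small; cosine is a common default metric.
pc.create_index(
    name="make-automation",
    dimension=1536,
    metric="cosine",
    spec=ServerlessSpec(cloud="aws", region="us-east-1"),  # placeholder cloud/region
)

index = pc.Index("make-automation")
print(index.describe_index_stats())
```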
One thing to note: the index name field in Make actually needs to be based on what it says down here, so it's not just the index name, it's based on the index's URL. Back in Pinecone, make-automation, that's the URL, so copy it, remove the https:// at the start, remove the pinecone.io piece at the end of the URL, maybe the trailing dot too, press Save, and that should be it. I know you saw the API keys there, so I'll just refresh them after this video is finished.

OK, so now we have a connection to Pinecone. What we'll use as a key to check whether a record already exists is the URL. If we come back in here, we have Page Link as our URL, and the idea is that we'll look for, say, this gift guide URL in Pinecone; if it exists we update that vector, and if it doesn't exist we create it. So under Vector ID, go back to that first Google Sheets module and use Page Link as the URL, then press OK; we don't need to set a namespace. Next, click Add Another Module, choose Pinecone again, and this time use "Upsert a Vector", which either inserts or updates. Set the connection you just created, and for Vector ID again use that Page Link URL. For Values, click Add Item, click the empty field to load the variables, and in the OpenAI module go to Data, then Embeddings, and click Embeddings; that loads the entire array of vectors into that value. A key step, which I missed at first, is to press Map: we're not passing a single value here, we're passing an array of vectors, so we map that array onto this field. Click OK, press Save, and Run Once. It has retrieved that single row from Google Sheets, created the embeddings, and upserted them into Pinecone. Let's have a quick look; I don't even need to refresh, it has refreshed automatically, and there it is at the bottom. There you have it: all of those vectors, those floating-point numbers, that came back from the embedding module.

The only issue is that we don't have any metadata, so we have no idea what information is actually associated with that vector. This is a really cool part of Pinecone: you can add metadata to the vectors, a feature that's totally missing from OpenAI's own vector store. So let's cancel out of that, delete that vector for the minute, and go back into the Upsert a Vector module. Under Metadata, add an item called "content", of type string, and load in the value we sent to OpenAI to create the embeddings in the first place, which is the full record. We could send in the JSON safe string or the full record; it doesn't actually make a difference here, because once we put in the full record, Make will make it JSON-safe anyway. I'll add another item called "title", for the title of the page, which is nice to have as a separate field. Click OK, save, and run it again. Now you can see the vector values, the full content for that row, and the title: the numerical representations of that content, plus the metadata.
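Here's roughly what that Get a Vector / Upsert a Vector pair looks like in the Pinecone Python SDK, sticking with the page URL as the vector ID and the full record as metadata. page_link, full_record, and embedding are placeholders for the values coming out of the earlier modules.

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_PINECONE_API_KEY")
index = pc.Index("make-automation")

page_link = "https://example.com/gift-guide"   # placeholder: the Page Link column
full_record = "Title: Gift Guide | ..."         # placeholder: the concatenated row text
embedding = [0.01] * 1536                        # placeholder: the real floats from /v1/embeddings

# Upsert = insert if the ID is new, update if it already exists.
index.upsert(vectors=[{
    "id": page_link,
    "values": embedding,
    "metadata": {
        "content": full_record,   # the plain text we want back at query time
        "title": "Gift Guide",
    },
}])
```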
There's one other thing missing that would be great to have if we're trying to identify whether something has changed: a unique hash of the content. Let's say you want to keep this knowledge base up to date every day or every week, and you have hundreds or thousands of pages. One option is to just load up your sheet every time, and with that upsert step it's going to update every record, hundreds or thousands of times, even though nothing has changed. An alternative approach is to create a unique hash of the content; then, if nothing has changed when we carry out a comparison, we simply skip the upsert, so it only runs when something has actually changed.

To do that, back in make.com, click on Upsert a Vector and create another metadata item called "unique hash", of type string. We're going to use a built-in function here, the SHA-256 algorithm, which you can see there, and pass the full record into it; back to the start, find the full record, and this will make sense in a minute. Press OK and run it again. That has run, and the unique hash should magically appear; there it is, beginning with 5f8d. If I run it again now, say a week later, trying to update that vector, you'll see it runs straight into Upsert a Vector, and if I refresh the page it still has the same unique hash, 5f8d. Nothing has changed on that page, yet it updated the record anyway, even though it didn't need to.

So what we can do is add a check on this filter. Come in here, stretch it out a bit, and add a filter condition; the label can be "vector doesn't exist or page has changed". The first condition: from the Get a Vector module, the vector ID does not exist. Here we're searching for a vector with that URL, and if no vector exists for that URL, it hasn't been uploaded in the first place, which is a perfectly valid reason to insert it. Then we add an OR condition: the unique hash stored in Pinecone is not equal to a new unique hash that we generate now, SHA-256 of the full record. In that case we also want to update it, because if the unique hash on the Pinecone record is different from the one we just generated from the text, the page has changed. Press OK, save, and run it again. Now you'll see it doesn't pass the filter, because the vector ID, the /about-us URL, does exist, and the unique hash is the same because the content of the page hasn't changed. If we make any modification to this record and run again, it now flows through, because something has changed, and if we check Pinecone, that 5f8d hash is now different, beginning with E7ED, and you can see the changed content of the page with whatever text I just dropped in. So the unique hash is directly tied to all of the text on that page.

Great, that's the scenario, so let's rename it to "Upserting vectors into Pinecone" and let it loose: delete the test vector we just created, change the range to A2:M51, and we'll upsert all of the records on this Google Sheet into our Pinecone index.
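The same skip-if-unchanged logic, sketched in Python against the index from before. The fetch and metadata access assume the current Pinecone SDK; page_link, full_record, and embedding are again placeholders, and the metadata key is written as unique_hash here (the Make module above calls it "unique hash").

```python
import hashlib

def upsert_if_changed(index, page_link, full_record, embedding, title):
    """Only upsert when the vector is missing or the page content has changed."""
    unique_hash = hashlib.sha256(full_record.encode("utf-8")).hexdigest()

    existing = index.fetch(ids=[page_link]).vectors.get(page_link)
    if existing is not None and existing.metadata.get("unique_hash") == unique_hash:
        return False  # nothing changed, skip the upsert

    index.upsert(vectors=[{
        "id": page_link,
        "values": embedding,
        "metadata": {"content": full_record, "title": title, "unique_hash": unique_hash},
    }])
    return True
```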
The next scenario we create will be to retrieve relevant vectors when we generate outputs. So with the range set to A2:M51 and an empty Pinecone index, we click Run Once and it bulk-processes every record; you can see it cycling through the rows, and if we come in here and refresh, the vectors are being uploaded into the index. Excellent, it's finished: 50 records uploaded into Pinecone, the record count is 50, and that's exactly what we have in our Google Sheet.

Next, let's create a new scenario where we can retrieve these vectors. Back into Scenarios, New Scenario. We're going to use the exact same Google Sheet, but with a new tab, which you can see here, called Articles. I want to generate articles within this tab based on this title: "I want you to write an article about an upcoming sale for our leisure company Luma. The sale is 20% off, and I want information on delivery and returns policies to be included in the article." Actually, I'll even extend it, because we have product information as well: "Make sure to feature the Sprite Foam Yoga Brick and Summit Watch, as they are the focus of the campaign." Of course, this could include style guides, tone of voice, and so on, but it will do for the moment. So that's the article title, or description, and I want the generated article to populate over here.

Back in make.com, you could use a Watch New Rows trigger here; I'm going to use the same Get Range Values module just to demonstrate, but you could definitely watch new rows and have this scheduled to auto-generate articles. Back to that same spreadsheet, we'll use the Articles tab and put in the range, which is probably A2 to B2. Perfect, press OK, save, and run the scenario, and you can see we're getting the full description.

Now, if we were to generate an article off the back of this without any of the information in the knowledge base, we can demonstrate that here. Create an OpenAI module, choose GPT-4o, and add a couple of messages: a system message, "write an article based on the provided text", and a user message, which is whatever is in that article description field in the Google Sheet. Set the max tokens, press OK, and run it. This isn't really going to work: ChatGPT will generate an article, but it doesn't know anything about the products, the store, or its policies, so it's going to be generic or, even worse, hallucinated. Here's the result, and yes, it has featured the products, but it knows nothing about them; it's just hallucinating information about the products, the returns policies, and the delivery details.

So now we need to ground this response with information from our Pinecone vector database. The first thing we'll do is make an LLM call to digest the article description; there's a lot of information there, and we need to distil it into a search term for the vector database. Essentially we need "delivery", "returns policy", and the names of the products; that's what we want to extract from this description.
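That extraction step is just a chat completion asked to reply in JSON. Here's a minimal sketch of what we're about to build in Make, using the OpenAI Python client; the prompt wording is paraphrased from the one built in the module, and the search_term key is an assumed field name.

```python
import json
from openai import OpenAI

client = OpenAI()  # uses the OPENAI_API_KEY environment variable

article_description = (
    "Write an article about an upcoming 20% off sale for our leisure company Luma. "
    "Include delivery and returns information and feature the Sprite Foam Yoga Brick "
    "and Summit Watch."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": (
            "Your task will be to write an article based on a given description. "
            "Before writing it, we will query our knowledge base for relevant content. "
            "Propose a search term for that knowledge base and respond in JSON as "
            '{"search_term": "..."}.'
        )},
        {"role": "user", "content": article_description},
    ],
    response_format={"type": "json_object"},  # forces a parseable JSON reply
)

search_term = json.loads(response.choices[0].message.content)["search_term"]
print(search_term)
```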
So let's right-click here and add a module, an OpenAI Create a Completion module. We'll choose GPT-4o again and set a system message: your task will be to write an article based on a given description; before writing it, we're going to go to our knowledge base to fetch relevant content that will help ground the article in facts, so please propose a search term to search our knowledge base. We ask it to respond in JSON, so that we get just the search term back as a variable we can use in the next module. Then add a user message, which is the article description, set the max tokens, choose the JSON response format, and also parse that JSON response. Press OK, and let's do the 1-equals-0 filter trick again, because we don't need to generate the article yet; we still need to fetch everything from Pinecone. Save and run, and you can see the response in the bundle: the result is Luma Leisurewear, delivery and return policy, and the names of the products. Excellent, that perfectly summarizes exactly what we need to search the knowledge base for, based on this description.

Next we need to create an embedding of this search term. We go back to that embedding module and generate the vectors, or numbers, that represent this information; then we can go to our Pinecone vector database and say, here are these numbers, give me all of the vectors that are similar to them. If they're similar, or close to these numbers within the algorithm, it means they are semantically relevant, or highly related, and that's essentially how vector databases and vector search work. So we add a module, again the Make an API Call option (you can copy it from the previous scenario), with /v1/embeddings as the endpoint, POST as the method, and in the body we send the result from the search-term module as the input text, with the text-embedding-3-small model. Press OK, save, and run: it creates the search term and then the vector embeddings for that search term. If you zoom in, these are the numbers that semantically represent the search term, so we now have something to take to Pinecone to get highly relevant results back.

Click on the dots, add a module, go to Pinecone, and use the Query Vectors module; drag it over there. For Vector, add an item and again turn on the Map feature, because we want to send the array of vectors we just got back from the last module, so go to Data, Embeddings, and click that. Then, if you scroll down to Include Metadata, we definitely want to say yes, because we want the actual text back; we don't need a whole load of numbers we can't do anything with, we need the text related to those numbers. Under Limit we might set, say, four; that's the number of results we'll get back from the vector database. Click OK, save, move this filter again, press OK, and then we can run it.
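The Query Vectors step maps onto a single call in the Pinecone SDK. A sketch, reusing the index from earlier and assuming search_embedding is the vector returned for the search term; I use top_k=10 here to match the limit used for the final run.

```python
# search_embedding: the 1536-float vector produced for the search term above.
results = index.query(
    vector=search_embedding,
    top_k=10,               # number of matches to return
    include_metadata=True,  # bring back the stored text, not just IDs and scores
)

for match in results.matches:
    # match.id is the page URL, match.score the similarity, match.metadata the stored text.
    print(match.score, match.id, match.metadata["title"])
```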
There we have it: we've got results back (I've updated the limit to 10 rows), and we're getting the Sprite Foam Yoga Brick and the Summit Watch, which are the focus of the article. We're also looking for information on delivery and returns policies, and we're getting Events and Community, the homepage, Customer Service, which will have some of that, and the FAQs, which will also have some of it. So within those top 10 results we're hitting everything we need to hit. As you can see, the vector database is definitely returning relevant results, but in a lot of cases the results at the top might not actually be the most relevant, so it's common to have what's called a reranking phase after you get results back from a vector database, where, say, these 10 results are re-ranked against the search term using a different model to put the absolutely most relevant results at the top. I'm not going to do that here, just to avoid overcomplicating things, but you could definitely add a reranking module at this point.

The next module we need is an Array Aggregator, because what we're getting back is a number of bundles, and if I continued without aggregating them we would end up generating 10 articles. Click Add a Module, type in "array", and there's the Array Aggregator; drag it over. What we want to aggregate are the bundles from the query module, so choose that module and aggregate both the vector ID, which is the URL, and the metadata, which contains the content of the pages. Press OK, and you can see it's greyed out; that's a little loop that runs and reduces the bundles down into an array. Then we need to transform that array into JSON so we can feed it into ChatGPT, or Claude for example, to generate the final article. Right-click again and add a module; Transform to JSON is what you're looking for, and we transform this array. Press OK, put the filter back in for now, and run. That has completed, and you can see we now have a full JSON string, which is essentially everything that came back from the vector database, and that's what we can send into ChatGPT to augment the generation of this new content.

Let's remove this filter; we're in the home straight. This was the module that generates the article, but now we can say: "Write an article based on the provided text, while using the following information to ground the article in facts about the online store", and then drop in the relevant knowledge base articles, which is that JSON string. You can also say "only use information that is contained in the article description provided by the user or the knowledge base below". Press OK, and finally add one more module, back to Google Sheets, to update a row; actually, we can use Update a Cell. We have the row number from the first module, so choose the spreadsheet again, choose the Articles tab, map the row number from the first module, and we know the column is B, so it will be B2 for example, and the value will be the result from the article module: our full article. I think we're ready to run this flow end to end now, so let's click Run.
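Put together, the final grounded prompt amounts to something like this sketch, reusing the client, article_description, and query results from the earlier sketches; the exact prompt wording in the video differs slightly.

```python
import json

# knowledge_json: the aggregated Pinecone matches, serialized to JSON as in the previous step.
knowledge_json = json.dumps([
    {"id": m.id, "metadata": dict(m.metadata)} for m in results.matches
])

completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": (
            "Write an article based on the provided text, using the following information "
            "to ground the article in facts about the online store. Only use information "
            "contained in the article description or the knowledge base below.\n\n"
            "Relevant knowledge base articles:\n" + knowledge_json
        )},
        {"role": "user", "content": article_description},
    ],
)

article = completion.choices[0].message.content
print(article)
```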
The flow goes to Google Sheets and gets the article description; embeddings of those search keywords are generated using the embeddings module and sent to Pinecone to query for similar, relevant vectors; the content related to those vectors is aggregated into a string, which is used in the article generation prompt to ground the LLM so it produces accurate, personalized content based on the description; and the result is saved back to the Google Sheet.

So let's go back into the Google Sheet, where we now have our generated article, and have a look at it. Excellent: we have information about our featured product, and we even have the image, which is great if this is going to be used on the web. While we're reviewing it, let's open these up. The Sprite Foam Yoga Brick, normally priced at $5, it says; let's have a look, and yes, that's correct, 6 inches by 9 inches. I'm just fact-checking some of this, and there's the 6 by 9, so it's very much grounded in the information on the product page. Originally priced at $54 with a 20% discount, which is accurate; there's the $54, and the 20% discount is the whole purpose of this article. Then standard shipping, 5 to 7 days, is $16; a quick look at our delivery page shows 5 to 7 days, $16, which is perfect, and two to three business days for priority matches two to three business days for priority. That is absolutely ideal. It also mentions the gift card, with a link for more information. So that is a brilliantly written article in the sense that it's perfectly grounded in the information from the online store. Of course, the style could be changed, you could supply a style guide for the article, you could use markup and auto-post it to WordPress, or generate social media posts off the back of it, but what you're seeing is how to really ground an article in company-specific knowledge, using Pinecone and vector embeddings to produce something genuinely useful. As I mentioned, a lot of companies are looking for this kind of RAG-style system, because it's not enough to get the generic text that large language models produce; they need company-specific text based on what's in their knowledge base. If you'd like access to these make.com blueprints, they're included in our automation community, linked above.