eager to be part of the rapidly growing AI field? Then you are in the right place. Today we are exploring the forefront of technology with generative AI, a field that's transforming how we create and interact with digital content. We'll start by breaking down the basics, including what generative AI is, how generative AI models function, and what makes them so powerful. From there we'll look at the advantages of generative AI and explore what the future holds for this transformative technology. But it's not just about the tech; we'll also examine the ethical considerations to ensure you understand
the broader implications. But wait, there is more: next we'll introduce you to large language models, that is, LLMs, featuring popular ones like GPT, Claude, and Gemini. And that's not all: you will get hands-on with prompting techniques and learn how to create your very own LLM app for Android; it's easier than it sounds. We'll also explore top AI tools like ChatGPT and GitHub Copilot and show you various demos on how to use these tools. To cap it all off, we'll explore advanced concepts like LangChain and RAG with compelling case studies to wrap things up.
By the end of the video you will have the skills needed to build advanced AI applications. Continue watching to excel in generative AI, and check out the timestamps in the description to find your favorite parts. Let me help you understand what generative AI is with a very simple example. Transport yourself back to your childhood: you had lots and lots of toys to play with, and you would keep those toys in one box. Now also imagine that if you wanted a toy that was different, you could not get it in the market. But what if
I tell you that this box is a magical box, and if you input your understanding of what you want in your new toy, with instructions, it can create a new toy for you which is not available in the market? This toy can be a bear with unicorn features and wings. This magical box generates a toy which is unique to you, and this magical box is nothing but generative AI. Generative AI is actually not magic; it's a fast and rapidly evolving artificial intelligence system which creates, generates, and transforms content
that can be text, video, audio, images, etc., based on your input. So if you want to understand it technically: generative AI, or GenAI, functions by employing a neural network to analyze data patterns and generate new content based on those patterns. Neural networks are nothing but a mimicry, or replication, of your biological neurons: just as your brain's activity drives what you do, a neural network mimics that process to analyze data patterns and generate new content for you. Let's now quickly see
the difference between discriminative and generative AI. Suppose you have a dataset of different images of dogs and cats. You provide this as input to your discriminative AI, which acts like a judge and classifies all these images into cats and dogs. This is discriminative AI: it classifies. Now let's understand generative AI. You have a similar set of cats and dogs, but now your generative AI is acting like an artist: it creates a new species of dog for you.
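To see this judge-versus-artist idea in a few lines of code, here is a minimal, hypothetical Python sketch. Everything in it is invented for illustration: real systems learn from images, not a single made-up weight number, and the threshold rule is a deliberately crude stand-in for a trained classifier.

```python
import random

# Toy "dataset": each animal is (weight_kg, label). All numbers invented.
data = [(4.0, "cat"), (5.0, "cat"), (3.5, "cat"),
        (20.0, "dog"), (25.0, "dog"), (30.0, "dog")]

# Discriminative AI ("the judge"): learns a boundary and classifies.
threshold = sum(w for w, _ in data) / len(data)  # crude decision boundary

def classify(weight):
    """Assign an existing label to an input, like a judge."""
    return "dog" if weight > threshold else "cat"

# Generative AI ("the artist"): learns each class's statistics and
# samples a NEW example that was never in the dataset.
def generate(label):
    """Invent a new example of the requested class, like an artist."""
    weights = [w for w, l in data if l == label]
    mean = sum(weights) / len(weights)
    return mean + random.uniform(-1.0, 1.0)  # a new, unseen "animal"

print(classify(4.2))    # the judge picks an existing class
print(generate("dog"))  # the artist produces a brand-new dog-like value
```

The point of the sketch is only the contrast: `classify` can never output anything outside its fixed label set, while `generate` produces values that were never in the data.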
That's why generative AI is nothing but an AI system that transforms, creates, and generates content based on your instructions, like an artist. Now that you have understood what discriminative AI is, what generative AI is, and the difference between the two, let's understand why generative AI, or GenAI, is trending. GenAI has impacted various fields, be the input text, audio, or video, and across various domains like data management, tech, healthcare, and entertainment. It has creative applications such as DALL·E and ChatGPT, where you can input what
you want and get output from it. For example, if you want to create an image of something you think of or perceive as a concept, you give a prompt to your generative AI model and it'll create that image for you. So your input is text but your output is an image; that's why it's trending. It does not depend on the form of input: traditional AI is dependent on the form of input you give, and the same form would be your output, whereas GenAI works on your inputs and your instructions. That's why it's trending. It is impacting a lot of fields, be it the creative field, the research field, or business. Professionals are
using tools like ChatGPT to create or generate code so that they can create something new; researchers are developing newer and newer large language models, based on which we can create new generative models and do new tasks each and every day. That's why generative AI is evolving rapidly, and why it's close to magic for everyone. Now that you have understood why it is trending, let's understand how it works. We give an input to generative models (GenAI works on generative models); the input can be
text, audio, video, any format. Those generative models are pre-trained on data and then fine-tuned to do the task that you want. It can be text summarization, sentiment analysis, image generation, or audio generation for your YouTube channel, or analyzing your customer feedback if you are a brand or a marketing firm; it can create code, whatever you want. You give a prompt explaining what you want, and the fine-tuned model performs that task for you. So this is, in a nutshell, how a
generative AI model works. So now let's see the different types of generative AI. First is the generative adversarial network (GAN): a type of AI where two models, one generating the content and one judging it, work together to produce realistic new data. Second is the variational autoencoder (VAE): an AI that learns to recreate data and generate new, similar data. Third is the transformer: an AI which learns to produce sequences using context. Fourth is the diffusion model, which generates data by refining a noisy starting point until it looks realistic.
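To get a feel for the adversarial idea behind GANs, here is a heavily simplified toy in Python. A one-parameter "generator" proposes numbers, a hard-coded "judge" measures how far they are from the real data, and the generator nudges itself to be more convincing. This is only an analogy, not a real GAN: a real GAN learns the judge as a second neural network, and all the numbers here are invented.

```python
import random

random.seed(0)
real_data = [10.0] * 100            # the "real" distribution: all tens

mu = 0.0                            # the generator's only parameter
for step in range(200):
    fake = mu + random.gauss(0, 1)  # generator produces a sample
    # "Judge": scores how real the sample looks (closer to the real
    # mean = more convincing). A true GAN learns this judge as well.
    error = fake - sum(real_data) / len(real_data)
    mu -= 0.05 * error              # generator adjusts to fool the judge

print(round(mu, 1))                 # ends up close to 10.0
```

After the loop, the generator's parameter has drifted toward the real data's statistics, which is the core adversarial intuition: two models pushing against each other until the fakes look real.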
Now that you have understood the different types of generative AI, let's quickly walk through its applications. First is content generation: it creates whatever text or code you want. Next is customer support and engagement: if you are a brand or firm, it helps you with that. Then data analysis and data science: it helps with visualization and with analyzing any data; whether you are a brand firm or a technology firm, it will help you analyze your data, create new automated tasks, and surface new insights for you to take
forward. Then there is code generation and software development. We have research and information retrieval as well, where it helps researchers and professionals retrieve and extract the information they require from various data sources. Then we have machine translation: if you do not understand a language and you're watching or reading something in a different language, you can use generative models to translate text or audio into the language you require. Then we have sentiment analysis, which takes feedback or any text you have and tells you whether the sentiment is positive, negative, or neutral, so that you can analyze it and make decisive decisions.
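The sentiment-analysis idea just described can be sketched with a deliberately tiny, rule-based toy in Python. A real generative model infers sentiment from context; this version just counts hand-picked keywords, and both word lists are invented for illustration.

```python
# A deliberately tiny, rule-based sketch of sentiment analysis.
# Real systems infer sentiment from context; this toy only counts
# hand-picked keywords (the word lists below are invented).
POSITIVE = {"great", "love", "excellent", "happy", "good"}
NEGATIVE = {"bad", "terrible", "hate", "poor", "awful"}

def sentiment(text):
    """Label text positive, negative, or neutral by keyword counts."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this product it is great"))   # positive
print(sentiment("terrible quality really bad"))       # negative
```

Even this crude version shows the shape of the task: text in, one of three labels out, which is exactly what a brand would feed into its decision-making.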
Other domains here include healthcare and transport; everywhere, generative AI is helping each and every domain, each applying this technology change in its own way. Step into the fascinating world of generative AI, a technology that has transformed industries and creativity alike. We'll break down what generative AI is, explore the advantages it brings, and discuss its potential for the future. Additionally, we will explain how generative models work
and address the key ethical considerations that come with harnessing this powerful technology. So let's get started. Let's now understand the advantages of generative AI. Generative AI offers new benefits across various industries. It enhances creativity by allowing tools like DALL·E to create unique artwork and MuseNet to compose original music, thus expanding the creative possibilities for artists. In fashion and design, AI tools like Runway ML significantly increase efficiency by speeding up the creation process, saving both time and cost. Personalization is another key advantage, as AI can generate tailored content for marketing, improving user engagement and conversion
rates. In the field of drug discovery, AI helps accelerate the development of new treatments by generating novel molecular structures. Additionally, AI's ability to handle large-scale content generation, such as product design or media, allows businesses to adapt quickly to market demands. Overall, generative AI is driving innovation, creativity, and efficiency across multiple sectors. Now let's talk about the future of generative AI, which holds exciting possibilities. We can expect AI to increasingly collaborate with artists and creators, pushing the boundaries of creativity by co-authoring works and composing music. As AI technology advances, it will offer even more personalized
experience such as tayor educational materials that adapt to individual learning styles integration with augmented reality that is AR and virtual reality that is VR will enhances this environments creating more immersive and interactive experiences ethical consideration will be a major focus with efforts directed toward ensuring responsibilities use and addressing issues like data privacy and authenticity furthermore generative AI will drive Innovation across various Industries including health care and finance reshaping how we approach and solve complex [Music] problems let's understand about the ethical consideration in generative AI because as generative AI technology evolves several ethical consideration must be
addressed one major concern in the potential for AI to create realistic but misleading content such as deep fix which can contribute to misinformation there is also the risk of peruta biases present in the training data which can impact fairness and accuracy ownership of AI generated cont poses another challenge as it is important to establish clear guidelines for intellectual properties rights additionally AI system might inadvertently reveal sensitive information from their training data necessi strong privacy measures finally it is crucial to establish standards to ensure that generative AI is used ethically and responsibly avoiding misuse and harmful
applications. Now that we have covered the essentials of generative AI, let me ask you a brief question: which aspect of generative AI are you most curious about, and why? Let me know your responses in the comment section. Next we'll explore the exciting realm of large language models, that is, LLMs. We'll begin with an introduction to LLMs and take a closer look at some of the most popular ones, including GPT, Claude 3.5 Sonnet, and Gemini. We'll also delve into OpenAI's GPT API and guide you through the process of creating an LLM app for Android. Ready
to harness the power of LLMs? Let's begin. What exactly is an LLM? Imagine you have a super smart friend who has read every book, article, and blog post in the world. This friend can chat with you about almost anything, help you with your homework, write stories, and even tell you jokes. That's pretty much what an LLM does. In technical terms, an LLM is a type of artificial intelligence that has been trained on vast amounts of data. It can understand and generate human-like text based on the patterns it has learned from all that reading. This is
part of a broader field called generative AI, which focuses on creating content that resembles human output. Let's now break down the basics: large, language, and model. Large just means that the model has been trained with a massive amount of data; we are talking about billions of words from books, websites, articles, and more. Model refers to a set of instructions or algorithms that helps the AI understand and generate language; think of it like a recipe that guides how to make a cake, in this case guiding the AI on how to create meaningful text. Let's
talk about language: this is all about words, sentences, and how we communicate. When you text your friends, write an email, or post on social media, you are using language. So now that you understand what each word in LLM stands for, let's quickly see the technical definition: LLMs are language models made up of neural networks with billions of parameters that are trained by self-supervised learning on vast amounts of unlabeled text. Now let's dive into how LLMs work, and let's simplify this with an example. Imagine you type "the sky is ___". The
LLM might predict the next word to be "blue", because it has seen that phrase so many times during its training. Large language models like GPT-4 operate based on complex neural networks trained on vast amounts of text data. Here is a simplified explanation of how they work, in two phases: the training phase, followed by the inference phase. The training phase has four steps. Step one, data collection: gathering diverse text data from various sources. Step two, pre-processing: cleaning and tokenizing text data into numerical representations. Step three, model architecture:
designing the neural network structure, typically a transformer. Step four, training: adjusting model parameters by predicting the next word in sentences. Through this training phase, the LLM is trained on vast amounts of data so it can make further predictions. Now coming to the next phase, the inference phase, which also has four steps. Step one, input processing: tokenizing and converting the input text into embeddings. Step two, generating output: using the model to predict and generate the next words. Step three, sampling: selecting words from the predicted probability distribution. Step
four, last but not least, post-processing: converting generated tokens back to readable text. After the training phase, in the inference phase the LLM recognizes patterns, samples content, and gives predictions. Apart from these steps, there are three key concepts we should know to demystify LLMs. First, the attention mechanism: this allows the model to focus on relevant parts of the input text, improving its understanding of context and meaning. Second, embeddings: numerical representations of words or tokens that capture their meaning and relationships. Third, transformers: the architecture that uses self-attention to process input data in parallel, making it efficient and powerful.
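The two phases above can be imitated with a deliberately tiny toy in Python: "training" just counts which word follows which, and "inference" tokenizes a prompt, predicts the most likely next token, and returns readable text. A real LLM replaces the counting with a transformer over embeddings; everything here, including the three-sentence corpus, is a simplified stand-in.

```python
from collections import Counter, defaultdict

# --- "Training phase" (toy): collect data, tokenize, learn statistics ---
corpus = "the sky is blue . the sky is blue . the sky is clear ."
tokens = corpus.split()                 # step 2: trivial tokenization
next_words = defaultdict(Counter)
for a, b in zip(tokens, tokens[1:]):    # step 4: learn from word pairs
    next_words[a][b] += 1               # (a real LLM adjusts network weights)

# --- "Inference phase" (toy) ---
def predict_next(word):
    """Steps 2-4: predict, 'sample' greedily, return readable text."""
    counts = next_words[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("is"))               # "blue": seen twice vs once for "clear"
```

Notice how the prediction follows the training statistics: "is blue" appeared twice and "is clear" once, so the model predicts "blue", exactly like the sky example earlier.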
Let's see some examples of LLMs in action. Example one, chatting. Your prompt can be "What is the capital of Japan?" The LLM replies, "The capital of Japan is Tokyo." The LLM knows this because it has read a lot of information about countries and capitals. Example two, writing. Your prompt can be "Can you help me write a story?" The LLM says, "Sure: once upon a time, in a land far away, there was a brave knight who…" The LLM can generate creative text
based on what it has learned from reading stories. Example three, answering questions. Your prompt can be "Explain photosynthesis in simple terms." The LLM replies, "Photosynthesis is a process by which plants use sunlight to make food from carbon dioxide and water." It simplifies complex topics using the patterns it has learned from educational content. There are different types of LLMs, each designed for specific tasks and applications. Now let's explore these types. First are base models: they are trained on a wide range of data and can perform general language understanding and generation tasks. They aren't
specialized for any particular type of instruction but are versatile; for example, GPT-3, which is known for its versatility and ability to generate coherent and contextually relevant text. Another type is instruction-based models: they are fine-tuned to follow specific instructions better than base models, and can understand and execute tasks based on detailed prompts given by the user. For example, T5 (Text-to-Text Transfer Transformer) converts all NLP problems into a text-to-text format, making it highly versatile for tasks like translation, summarization, and question answering. Another example is InstructGPT, which is tailored to follow
specific user instructions better and provide more useful and safe responses. So these were the types of LLMs. However, there are paid and open-source LLMs in the market as well; based on your usage and budget, you can explore them. Some paid LLMs are GPT-4o, Microsoft Azure OpenAI Service, etc., and open-source ones are GPT-Neo, BERT, etc. Let's see how LLMs are revolutionizing the world of AI. In education, they can help explain difficult concepts, answer questions, and even tutor students. In content creation, they assist writers, marketers, and creators by generating ideas, drafting content, and editing.
In coding, they assist professionals by generating code for their specific problems. In customer support, many companies use LLMs to power their chatbots, providing quick and accurate responses to customer inquiries. Apart from all these fields, LLMs are being utilized in every field and domain. But it's not all smooth sailing; there are challenges too. Let's see one such challenge, which is bias: since LLMs learn from human-written text, they can pick up and reproduce biases present in the data. Another one is misinformation: they can sometimes provide incorrect or misleading information, especially if the data they were
trained on was flawed. However, keeping the challenges aside, the future of LLMs is exciting. As they continue to improve, we can expect them to become even more accurate, reliable, and versatile. They will be better at understanding context, handling complex tasks, and even learning from smaller amounts of data. So there you have it: large language models, LLMs, are like super smart friends powered by tons of data and advanced algorithms. They can chat, write, and help us in countless ways, and as they continue to evolve, they are set to become even more integral in our daily lives. As
you know, large language models are advanced AI systems designed to understand and generate human-like text. Let's talk about some of the most popular LLMs. First is GPT-4 from OpenAI: it excels in generating coherent and relevant text, and is used in chatbots, virtual assistants, and content creation. Then BERT, created by Google, understands word context by analyzing surrounding words, making it effective for question answering and sentiment analysis. T5, also from Google, handles diverse tasks such as translation and summarization by treating them as text-to-text problems. After that, Claude, developed by Anthropic, focuses on
providing ethical and safe responses, making it suitable for sensitive applications. Next is LLaMA from Meta, which offers high performance in various language tasks, proving useful for research and text generation. Claude 3.5 Sonnet is an advanced version of Claude emphasizing improvements in safety and effectiveness. GPT-4o mini is a smaller variant of GPT-4o designed for efficiency in various applications. Gemini from Google is known for its advanced capabilities in handling complex language tasks. These LLMs are central to advancing natural language processing and driving innovation across many fields, and as these models continue to evolve, they will
play an increasingly crucial role in shaping the future of AI and its applications. Let's go through an introduction to the OpenAI GPT API and how it works. First of all, what is OpenAI? It's a company that caters to working with chatbots, generative AI applications, different kinds of models, LLMs, etc.; basically, it deals with the complete artificial intelligence domain, which is booming nowadays. OpenAI has a platform where you can generate API keys and integrate them into your applications. What features does the API cater for? Text generation, completion, and conversation capabilities. Talking about text generation: it can
produce new text beyond your imagination from one small request. Say "I want a poem on so-and-so"; it will give you a complete poem that is not plagiarized. It has been trained to that level; it can write poetry because it has lots and lots of data behind how it deals with such requests, and on the analysis side there come classification, summarization, and many other machine learning, data science, and
artificial intelligence capabilities giving you the answer. Next, completion: if you give a prompt in an incomplete way, it will try to complete it; if you type with a spelling mistake, it will correct your spelling and ask you back, "Was this your idea? Did you want to search for so-and-so?" It will question you back in an interactive, conversational way. That's what we do with ChatGPT: it's a conversation, where it answers us with the help of the data it has already been trained on, with the conversation context carried forward every time you talk to it.
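The completion-and-correction behaviour just described can be imitated in miniature with Python's standard library: given a misspelled word, suggest the closest known word, like a "did you mean…?" prompt. This is only an analogy; the real API uses a trained language model, not string similarity, and the vocabulary below is invented.

```python
import difflib

# A tiny vocabulary standing in for the model's knowledge (invented).
vocabulary = ["hello", "help", "world", "generate", "complete", "summarize"]

def suggest(word):
    """Return the closest known word, like 'did you mean ...?'"""
    matches = difflib.get_close_matches(word.lower(), vocabulary, n=1)
    return matches[0] if matches else None

print(suggest("helo"))       # suggests "hello"
print(suggest("sumarize"))   # suggests "summarize"
```

`difflib.get_close_matches` ranks candidates by string similarity, which is a rough stand-in for how a model ranks likely intended words by probability.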
That's how the model works. So these are the few features we have. Next comes fine-tuning and customization for specific tasks. Say you are building a certain module integrated into your application and you're using the OpenAI platform: you can build your own model that caters to your own set of questions. For example, chatbots on shopping websites or jewellery shop websites will ask you what you want; there are lots of chatbots which will address, and handle, a certain percentage of
customer care services without human intervention. A very good example of this is Swiggy: you can pick from a set of questions which is already present, like "Where is my order?" or "The delivery guy is not moving." When you select "My order is getting delayed," it will give an answer which is already there, such as the current status. If you're still not convinced by the bot's answer, you can go for an agent talk: you can talk to a
human being, who will interact, call the delivery guy, ask what the situation is, and update you. Before routing your chat directly to the agent, they'll try to solve it with the help of the bot. That means we are trying to reduce the work put upon humans; we are using the technology to address the repetitive queries instead. This is a good example of the feature currently in use in the apps we use in our day-to-day life. How do we get started with this API? We
have to just log on to the OpenAI website and create your account (sign up), or if you already have an account, sign in, then generate an API key and keep it. Why do you have to generate an API key? Because after your free credit (around $5 worth) is used up, you have to pay in order to continue your API usage. If you want to know more about OpenAI and the GPT API, you can just go to the official API documentation and read more about it. Now let's
understand how to generate an OpenAI API key. For that, go to Google and type "OpenAI login". Once you click on login, if you are already logged in, you will get two options: one is to go to ChatGPT, and the other is for the API. You click on API. Once you click on the API, this is how your OpenAI platform looks. You can see a menu towards your left stating "API keys"; if you click on that, it will launch the API keys page. Before that, I would like to tell
you, I was talking about the GPT models, right? These are the models available for now: GPT-3.5 Turbo 0125, 1106, and 16k. You can select a model and work with it. Let's come back to API keys; I'll click on "API keys". This is how the API key generation page looks, and if you want to create a new API key, click on "Create new secret key". You can name that particular key; I'm naming it "demo". You can also set restrictions if you have to control certain things:
it can be read-only, restricted, or all, just like the share options you have in Google Drive for your Google content. So create a secret key, and it will generate and display the secret key; you can copy and paste it into a notepad so that you can use it again and again. It takes a certain time to generate the key; once it is done, it will be displayed and you will also have an option called "Copy". Here we are: it states the API key is generated and you can copy the key.
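Once you have the key copied, here is a minimal sketch of how it might be used from Python with only the standard library. The endpoint and headers follow OpenAI's chat completions API; the model name and prompt are just examples, the key is read from an environment variable rather than pasted into code, and the request is only actually sent if that variable is set.

```python
import json
import os
import urllib.request

def build_chat_request(api_key, prompt, model="gpt-3.5-turbo"):
    """Assemble an HTTPS request for OpenAI's chat completions endpoint."""
    body = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # the secret key you copied
        },
        method="POST",
    )

api_key = os.environ.get("OPENAI_API_KEY")  # never hard-code the key
if api_key:                                 # only send if a key is configured
    req = build_chat_request(api_key, "Say hello in one sentence.")
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)
        print(reply["choices"][0]["message"]["content"])
```

Keeping the key in an environment variable matches the advice above about keeping the secret key discreet: it never appears in your source code or version control.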
You can press "Done". See, you have to save the secret key somewhere, because it won't be viewable again due to security reasons; that's why you have to keep it discreet and noted in a separate notepad. If you want it back, you have to create a new API key; you cannot copy the complete API key again. The created API key is listed in the list here. Again, you have options to edit the key (you can just change the name and permissions, nothing else; you don't have access to copy
the complete key again) and you can also delete the existing key. This is how your API key page looks in the OpenAI platform; I hope you are clear on how to generate this, save it in a safe place, and use it for your coding. Today we're going to explore a truly transformative AI tool for developers of all levels: Claude Sonnet 3.5, your new AI-powered coding companion. Imagine having a super smart friend who can handle all the tedious parts of coding for you, allowing you to focus on being creative and solving interesting problems. That's exactly what
Claude Sonnet 3.5 does. What exactly is Claude Sonnet 3.5? Claude Sonnet 3.5, which came out in June 2024, is the newest and most advanced AI tool from a company called Anthropic. This AI assistant is made to help people who write computer code, called developers, by making their work easier and faster. Claude Sonnet is like a super smart friend who can understand what you say in everyday language: instead of typing complex code, you can just tell Claude what you need in plain English and it will write the code for you. For example, if
you need a program to add numbers, you can simply say "I need a program to add numbers" and Claude will create it for you. This new tool is not just about making things easier; it works faster and has better safety features to protect your work and data. Claude 3.5 Sonnet helps everyone, from beginners to experts, to write code in a way that is more natural and less complicated. So Claude 3.5 Sonnet changes how we write code by making it simpler, safer, and more accessible for everyone, no matter how experienced they are. All right, let's
talk about the different versions of Claude Sonnet 3.5. Imagine you are picking a tool that fits your needs perfectly, whether you are just starting to code or you're already a pro. First up, we have a free version. This version is fantastic for beginners and hobbyists; it's like having a helpful friend who can write basic code and fix simple bugs for you. You can access this free version on claude.ai or even on the Claude iOS app; it's a great way to see how AI can make coding easier. Now, if you're looking for more power and features,
there is a paid version. This version is for those who need to tackle more complex projects: with the paid version, you can generate more advanced code, get better debugging help, and integrate with more development tools. If you are really into coding, this might be what you need. There are also special plans like Claude Pro and the Team plan; these give you a lot more power to work with, including higher rate limits, which means you can do a lot more coding without hitting any limits. And that's not all: Claude Sonnet 3.5 is also available
through some major platforms like the Anthropic API, Amazon Bedrock, and Google Cloud's Vertex AI, so if you're using any of these services, you can easily integrate Claude Sonnet into your workflow. We discussed the Claude Sonnet paid and free versions, right? There are different versions of Claude, as you can see on the graph here, which plots intelligence benchmark score against cost per million tokens. We will see briefly how each of these versions performs and what capabilities it has.
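As a quick aside before the version walkthrough: here is a hedged sketch of what calling Claude through the Anthropic API mentioned above might look like in Python, using only the standard library. The endpoint, headers, and model name reflect Anthropic's Messages API; treat the specifics (model id, version string, prompt) as examples to verify against the official documentation, and note the request is only sent if an API key is configured.

```python
import json
import os
import urllib.request

def build_claude_request(api_key, prompt, model="claude-3-5-sonnet-20240620"):
    """Assemble an HTTPS request for Anthropic's Messages API."""
    body = {
        "model": model,
        "max_tokens": 256,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        "https://api.anthropic.com/v1/messages",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "X-Api-Key": api_key,               # Anthropic uses x-api-key
            "Anthropic-Version": "2023-06-01",  # required version header
        },
        method="POST",
    )

api_key = os.environ.get("ANTHROPIC_API_KEY")   # keep the key out of code
if api_key:                                     # only send if a key is set
    req = build_claude_request(api_key, "Write a Python hello world.")
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)
        print(reply["content"][0]["text"])      # Claude's answer text
```

Note the header scheme differs from OpenAI's: Anthropic uses an `x-api-key` header plus a required `anthropic-version` header instead of a `Bearer` token.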
The first one you can see is Claude 3 Haiku, which is on the lower end of both intelligence and cost; this is the basic version of Claude, where you can do some simple tasks, and you can say it is for beginners. Then you can see Claude 3 Sonnet, which is intermediate on both the intelligence benchmark and the cost per million tokens; Claude 3 Sonnet was used for performing more complex tasks. Here you can see that
you have Claude 3 Opus, which is on the higher end of both intelligence and cost; this was the version used for performing complex tasks as well as professional work. Now you can see that we have Claude 3.5 Sonnet, which is on the higher end of the intelligence benchmark while intermediate on cost per million tokens, so we can say that Claude Sonnet 3.5 is one of the most advanced versions of Claude: you will be able to
perform complex tasks, and it will help you with all your debugging; you just have to write your queries in natural language and it will give you the outcome. So this is about the versions of Claude Sonnet. Now coming to the features of Claude 3.5 Sonnet: let us see what Anthropic's latest AI breakthrough is, which is actually making waves in the artificial intelligence community. We will see the new features and enhancements that Claude 3.5 Sonnet has. So the
first one being discussed here is industry-leading performance. You can say that Claude 3.5 Sonnet sets a new benchmark for AI performance, and you can also see that it is outperforming its predecessors and its competitors. Who are the competitors? You have OpenAI's GPT-4o and Google's Gemini 1.5 Pro, so you can say that Claude Sonnet 3.5 actually gives you better performance than other models like GPT-4o or Google's Gemini 1.5 Pro. It says that
these advancements are significant, far exceeding the capabilities of Claude 3 Opus, which has higher intelligence as well as higher cost, as we just discussed. The next feature is enhanced speed: this speed is twice that of Claude 3 Opus, which, as we just saw in the graph, is on the higher end of both intelligence and cost. The increased processing speed facilitates handling complex tasks; as we said, Claude 3.5 Sonnet is capable of handling complex tasks and multi-step workflows more effectively,
which will help open new possibilities for real-time AI applications in areas such as finance and healthcare. Indeed a good feature: you can say that it provides increased and improved efficiency with its enhanced speed. The next one is advanced coding capabilities. Claude 3.5 Sonnet stands out for its advanced coding capabilities; as we said, you just give something in natural language. For example, you might want to write a program for printing "Hello, world", which is the common example everyone takes in any
programming language when just starting off. It can also handle complex tasks: maybe you're trying to build a website, maybe you're trying to integrate your website's front end and back end, or you have a full-fledged database, or something related to e-commerce, or a task appropriate for the project you're working on; it will give you these advanced coding capabilities. So what does the internal
evaluation tells that it has solved 64% of coding problems that is an improvement over 38% solved by your Cloud 3 oppus right this makes it a powerful tool for software development and code maintenance so 64% of coding problems is being solved by your clot 3.5 Sonet which makes it a remarkable tool for your software development and code maintenance right its ability to independently write edit and execute code coupled with sophisticated reasoning we will also see a simple demo on the logical reasoning which your CLA 3.5 Sonet would provide for some of our um prompts you
can say okay and uh allows it to handle complex coding tasks and code based migrations efficiently now coming to the next feature that is your super visual reasoning right so what does your clot 3.5 Sonet does with your visual reasoning it is trying to interpret your charts graphs and complex diagrams right it accurately transcribe text from imperfect images which are crucial for Industries like retail Logistics and financial services one of the cool feature of your clot 3.5 Sonet is superior visual reasoning wherein it is trying to extract the information of the visual data even when
the image quality is poor such a cool feature right so let's move on to the next one that is your Innovative interaction and artifacts artifacts is one of the new feature which was introduced in your clot 3.5 Sonet so what exactly is this artifact this artifact transforms your cloud from an conversational AI into a collaborative work environment when users generate content like cod snippits text documents or web designs these artifacts appear in a dedicated window allowing realtime editing and integration into the projects so we will have a slight demo on what exactly are these artifact
new features which CLA 3.5 Sonet does in a little while right so let's move on to the next feature that is your cost effective accessibility as we saw that CLA 3.5 Sonet is higher in your intelligence as well as it's in the intermediat for your cost right so your Cloud 3.5 Sonet is accessible for free on cloud Ai and Cloud IOS app we have already discuss that right and you have this limits for pro and team planner subscribers right so what's the next one that is commitment to security and privacy this is actually the important
Security and privacy matter a great deal here, because when you rely on an AI platform to get your tasks done, you naturally want your data kept secure and private, not exposed. Anthropic has prioritized security and privacy with Claude 3.5 Sonnet: it maintains an ASL-2 rating, and external experts, including the UK's AI Safety Institute, have evaluated its safety mechanisms. So can we trust Claude 3.5 Sonnet blindly? Watch out for which tasks you hand over, and see to it that you still don't put personal information into your prompts. That's it for security and privacy.

Next, Claude 3.5 Sonnet is part of a growing AI family, a broader model lineup: as we saw, there are versions like Claude 3 Haiku, Claude 3 Opus, and Claude 3 Sonnet, and 3.5 Sonnet is now the higher-end version of Claude.

The next feature is enterprise-focused design. It helps you handle complex workflows and integrates seamlessly with existing business applications, and its contextual understanding and interpretation make it ideal for tasks like customer support, market analysis, and data interpretation. Cool, right? Claude 3.5 Sonnet doesn't only help beginners and individual professionals; it also focuses on enterprise tasks such as customer support, market analysis, and data interpretation.

The next feature is user-driven development: Anthropic values user feedback as a crucial component of Claude 3.5 Sonnet's development, which is great; they are valuing feedback from users across the globe. Overall, 3.5 Sonnet redefines AI capabilities with enhanced intelligence, speed, and advanced features. We discussed those advanced features, including Artifacts, one of the most recent additions to Claude 3.5 Sonnet, and we will see a small demo of how Artifacts work in a while.
Now, moving on to the advantages of Claude 3.5 Sonnet: why is it a game changer? First, superior performance and cost efficiency. We already saw its different features: cost-effectiveness, visual reasoning, the capability to handle complex tasks, and enterprise-focused design. Based on these features, what are the advantages? Claude has advanced NLP capabilities combined with cost-effective pricing; intelligence at a lower cost is what Claude 3.5 Sonnet provides. Its ability to grasp nuance and humor and to generate high-quality, natural content makes it a versatile tool across various applications.

Next, advanced coding proficiency. This is one of the significant advantages: it solves coding problems, fixes bugs, and adds functionality to open-source codebases with ease, which makes it particularly effective for updating legacy applications and migrating codebases, a robust solution for developers. Basically, Claude 3.5 Sonnet's advanced coding capability helps not only beginners but also people working on complex tasks across different software stacks and programming languages.

The next advantage is enhanced vision capabilities. We saw that one of Claude 3.5 Sonnet's features is visual reasoning: even when the image quality is low, it tries to extract the exact information from the image, so the improved 3.5 version of Claude Sonnet can interpret and analyze your visual data.

We also saw the innovative Artifacts feature, and I said we would see a demo shortly. Claude Artifacts are designed to make your interactions with AI outputs more dynamic and versatile: whether you're working with code, data visualizations, or text, Artifacts allow you to directly manipulate and enhance outputs within the Claude interface. Claude 3.5 Sonnet offers limited free access, but for regular use a Pro subscription is recommended, which provides expanded access to features like Artifacts. You can also access Claude 3.5 Sonnet through Google Cloud's Vertex AI and Amazon Bedrock with usage-based pricing.

We have been hearing for a while now that Artifacts is one of the new features in Claude 3.5 Sonnet, so let's try it. As you can see, I have logged in to claude.ai; you can sign up with an email address and password, or with your Google account. To explore the Artifacts feature, I go to my profile, then Feature Preview. Clicking Feature Preview shows an Artifacts preview and invites feedback on upcoming enhancements to the platform. Here is what Artifacts does: ask Claude to generate content like code snippets, text documents, or website designs, and Claude will create an artifact that appears in a dedicated window alongside your conversation.
Artifacts is off initially, so let me enable it and close this window. Now let me try a simple example, a Tic-Tac-Toe game. My prompt: "Create a Tic-Tac-Toe game for two players and provide me with a sample output, in Python." Clicking go, we get a nice explanation of how the Tic-Tac-Toe game works, and this is Artifacts in action: the code appears in its own window alongside the AI conversation. The output says it will create a simple two-player Tic-Tac-Toe game in Python, which is exactly the requirement I gave. Click to open the code; you can also download it here. The implementation is as follows: a print_board function displays the current state of the board, a check_winner function tests for a winner, among other functions, and it explains how to run the game: save this code in a Python file. A quick tip: you can go to the online editor Replit, open a Python project, copy this code in, and check whether the game works. Since my prompt also asked for a simple sample of how the game might look, it shows example output for player X and player O move by move.

So is that all Claude Artifacts does, or does it have more capabilities, as promised? Let's write another prompt: "I want a presentation for a class where I am teaching this game, so provide me with a presentation starting from an agenda and introduction, followed by a summary." Let's see what it gives: it creates a presentation in the artifact window, laying out exactly what has to be conveyed when teaching this Tic-Tac-Toe game in a classroom. So what did it give me? The presentation outline provides a structured approach to teaching the Tic-Tac-Toe implementation in Python: it covers game basics, the Python implementation, a detailed code walkthrough, and suggestions for further enhancements. The key points of the presentation start with an introduction and basic strategies, then implementation, a code walkthrough, a demo, possible enhancements (which I never even asked for), further learning and creativity, and a question-and-answer session to address questions and encourage discussion. Looking at what it added: I wanted an agenda, and it gives me one, plus an introduction with a brief history of the game, its popularity and educational value, why we are implementing it in Python, and so on. A perfect piece of material that I could take straight into a classroom for a discussion of the Tic-Tac-Toe game in Python. We also saw that Claude can interpret images; I'm not going to demo that here, so check it out yourself and try interpreting charts and graphs with the Artifacts feature.
Claude 3.5 Sonnet is incredibly versatile and can be used in a variety of scenarios. One standout feature is its natural language processing capability: you don't need to learn any special commands or syntax, just type your instructions in plain English and Claude understands, which makes coding accessible to everyone regardless of their level of expertise. Imagine never getting stuck on syntax or having to write boilerplate code again: with Claude 3.5 Sonnet you simply describe what you want to achieve and it writes the code for you, whether it's a simple function or a complex algorithm. Debugging can be one of the most frustrating parts of coding, but with Claude 3.5 Sonnet it's like having an expert looking over your shoulder: describe the issue you're facing and it will help you find and fix the bugs quickly and efficiently. Claude 3.5 Sonnet also integrates with popular development environments like Visual Studio Code, so you can use it without disrupting your existing workflow; it's all about making your coding experience as smooth and efficient as possible. Whether you need to write a "Hello, World" program or a basic calculator, Claude can do it in seconds, which is perfect for beginners or for quickly prototyping ideas. Tired of writing the same code over and over? Claude can handle repetitive tasks, freeing up your time for more complex and interesting challenges. Learning a new programming language or concept? Claude can explain new concepts and write sample code, like having a tutor available 24/7.

Let's jump into a demonstration using Replit, a powerful online coding platform. First, visit replit.com and sign up or log in. Once you are in, create a new Python Repl, give it a name like "claude-sonnet-demo", and click Create Repl. By default, Replit creates a main.py file, which we will use to try out Claude's capabilities. I'm writing a prompt in simple English: "Write a function to calculate and print the factorial of a number." I'm skipping the basics like a "Hello, World" example and going straight to calculating and printing the factorial of a number. Let's see what Claude 3.5 Sonnet gives us.
Wow, it gives us the function: factorial. Notice how carefully it handles the edge cases: if n is less than zero, it returns a statement that the factorial is not defined for negative numbers, so a user who supplies a negative number gets that message, and if n is 0 or 1, it returns 1. Then it calls the function that calculates and prints the factorial: the number given is 5, so it computes the result with this function and prints the factorial of 5, along with a brief explanation of exactly how the function works.
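The code Claude produces varies between runs; a version consistent with the behavior described above (negative-number message, base case of 1, then printing the factorial of 5) could look like this:

```python
def factorial(n):
    """Compute n! recursively, guarding against negative input."""
    if n < 0:
        return "Factorial is not defined for negative numbers"
    elif n in (0, 1):
        return 1
    else:
        return n * factorial(n - 1)

number = 5
print(f"The factorial of {number} is {factorial(number)}")  # The factorial of 5 is 120
```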
Now let's try another example. We also said Claude 3.5 Sonnet has debugging capabilities, so we will give it incorrect code as input and ask it to explain why the code is incorrect and provide a corrected version. This is my prompt: I have a function meant to add a and b, but it returns a - b; you can see I'm doing a subtraction where it should be a + b. So I tell Claude to let me know what is wrong with this code and also to provide the corrected code. Will it debug this for us? It says it's happy to identify the issues and share the results. It pinpoints that the problem is in the add function: the function is named add but it is actually performing subtraction instead of addition, exactly the bug I planted. It gives a broken-down, step-by-step account of what went wrong, the complete details of the debugging, and the corrected code, which you can try out in Python yourself.
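Reconstructed from the transcript, the buggy input and the kind of fix Claude returns look like this (the `add_fixed` name is mine, just to keep both versions side by side):

```python
# The deliberately broken function given to Claude:
def add(a, b):
    return a - b   # bug: named "add" but performs subtraction

# The corrected version Claude suggests:
def add_fixed(a, b):
    return a + b

print(add(2, 3))        # prints -1 (the bug in action)
print(add_fixed(2, 3))  # prints 5
```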
It's a great capability you can try out on complex tasks, or as a beginner. That was about the coding capabilities; I have only tried some simple prompts, so go ahead with your own use cases and explore Claude further. We also said that Claude 3.5 Sonnet is capable of logical reasoning, so let's try some logical-reasoning prompts and see how Claude answers. This is my prompt: "You are in a room with three light switches, each controlling one of three light bulbs in another room. You can't see the bulbs from the room with the switches. You can flip the switches as many times as you want, but you can only go into the room with the bulbs once. How can you determine which switch controls which bulb?" A very good reasoning use case. Let's see what Claude provides: it gives a complete explanation of the reasoning behind determining which switch controls which bulb, with a step-by-step approach. First, consider what we can manipulate (the switches) and what information we can gather, noting that light bulbs produce not only light but also heat when they are on. Here is the strategy it gives to solve the puzzle: turn on switch one and leave it on for several minutes; after this time, turn off switch one and immediately turn on switch two; leave switch three off the entire time; now enter the room with the bulbs. You can then deduce which switch controls which bulb: the lit bulb is on switch two; the bulb that is off but warm is on switch one, because it had time to warm up; and the cold, dark bulb is on switch three. A great explanation, and you can see exactly how it reasons its way to a determinate answer.
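The deduction above can be sketched as a tiny simulation: each bulb is modeled as a (lit, warm) pair reflecting the state after Claude's strategy, and the mapping falls out of the two observable properties. This is my own illustration of the puzzle's logic, not anything Claude generated.

```python
def run_strategy():
    """Deduce the switch-to-bulb mapping after the strategy:
    switch 1 on for minutes then off, switch 2 just turned on, switch 3 never on."""
    bulbs = {1: (False, True),   # bulb wired to switch 1: off now, but still warm
             2: (True, True),    # bulb wired to switch 2: lit
             3: (False, False)}  # bulb wired to switch 3: cold and dark
    mapping = {}
    for bulb, (lit, warm) in bulbs.items():
        if lit:
            mapping[bulb] = 2    # a lit bulb must be on switch 2
        elif warm:
            mapping[bulb] = 1    # off but warm -> it was on long enough: switch 1
        else:
            mapping[bulb] = 3    # cold and dark -> never switched on: switch 3
    return mapping

print(run_strategy())  # {1: 1, 2: 2, 3: 3}
```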
Let's try another logical-reasoning question and see what Claude 3.5 Sonnet gives. My prompt: "I'm thinking of a number. If you add seven to it, the result is three times the original number. What is the number?" Let's see what Claude says: let's call the number we are looking for x, and translate the given information into a mathematical equation, x + 7 = 3x, three times the original number, exactly as the question stated. Solving the equation, it subtracts x from both sides to get 2x = 7 and simplifies to x = 3.5, and therefore the number is 3.5. Good logical reasoning.
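Claude's answer is easy to verify numerically; a one-line check in plain Python (no assumptions beyond the puzzle itself) confirms that 3.5 satisfies the equation:

```python
x = 3.5  # Claude's answer to "add 7 and you get three times the number"
assert x + 7 == 3 * x, "the equation x + 7 = 3x should hold"
print("x + 7 =", x + 7, "and 3x =", 3 * x)  # both sides are 10.5
```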
We also said Claude Sonnet has automation capabilities, so let's look at a small example and give it a prompt for automating a task. My prompt: "Automate file handling by writing a function to read from one file and write its contents to another file, handling errors as well." Let's check what it gives us. The function is copy_file_content: it takes two parameters, a source file and a destination file, and uses a try/except block to handle errors. Inside the try block it opens the source file in read mode using a with statement, which ensures the file is properly closed after reading, as you would obviously want; then come the exception-handling cases, and that is how it performs the copy. A few things it notes: the function reads the entire file into memory at once, so for very large files you might want to modify it to read and write in chunks, and it overwrites the destination file if it already exists; you can read through the rest of what it gives and explore further.

So I'll do one thing: I'll copy this code. There are two external files involved, source.txt and destination.txt, and copy_file_content writes from the source to the destination. I copy the function into my Replit editor: I create a Repl, and since it's Python code I name it something like "automating-code"; Replit creates the project, with main.py as the main Python file, and I simply paste the code there. Running it gives an error that source.txt or destination.txt was not found, obviously, because we haven't created any files yet. So I create a new file called source.txt, write "hello from Claude Sonnet 3.5" in it, and save it, then create another, empty file called destination.txt. Running the code again, it reports that the content was successfully copied from source.txt to destination.txt. Let's open destination.txt and check.
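The transcript doesn't show Claude's exact code, but a sketch consistent with its description (a `copy_file_content` function, `with` statements, a whole-file read, and error handling) might look like this:

```python
def copy_file_content(source_file, destination_file):
    """Copy the entire contents of source_file into destination_file.

    Reads the whole file into memory at once, so very large files would
    be better handled in chunks. Overwrites the destination if it exists.
    """
    try:
        with open(source_file, "r") as src:      # closed automatically on exit
            content = src.read()
        with open(destination_file, "w") as dst:
            dst.write(content)
        print(f"Content successfully copied from {source_file} to {destination_file}")
    except FileNotFoundError as e:
        print(f"Error: file not found - {e.filename}")
    except IOError as e:
        print(f"Error while copying: {e}")

# Demo matching the Replit walkthrough above:
with open("source.txt", "w") as f:
    f.write("hello from Claude Sonnet 3.5")
copy_file_content("source.txt", "destination.txt")
```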
Indeed, "hello from Claude Sonnet 3.5" has been copied into destination.txt. Now let's look at the performance metrics of Claude 3.5 Sonnet compared with other models: Claude 3 Opus, GPT-4o, Gemini 1.5 Pro, and Llama 400B. On graduate-level reasoning (zero-shot chain of thought) it scores 59.4%, higher than GPT-4o. On undergraduate-level knowledge (five-shot) it scores 88.7%, versus 86.8% for Claude 3 Opus, 85.9% for Gemini, and 86.1% for Llama, all lower than Claude 3.5 Sonnet; nice performance. On code, it scores 92.0%, against 84.9% for Claude 3 Opus and 90.2% for GPT-4o, with Gemini and Llama further back. On multilingual math it scores 91.6%, again higher than GPT-4o and Gemini. On reasoning over text, where we just saw some good example results, it scores 87.1% (three-shot chain of thought), which is a little lower than other platforms like GPT-4o and Llama, so do try a few more reasoning prompts yourself. On mixed evaluations, Claude 3.5 Sonnet is again at the higher end with 93.1%, versus 85.3% for Llama. On math problem solving it scores 71.1%, which is somewhat lower than GPT-4o, but it still gives good mixed-evaluation and code-evaluation performance, fair enough, and on grade-school math it scores 96.4%. So you can see that Claude 3.5 Sonnet has the higher performance metrics overall when compared with Claude 3 Opus, GPT-4o, Gemini 1.5 Pro, and Llama 400B. I was mentioning CoT, chain of thought, so why not ask Claude itself what CoT is?
I type a prompt asking what CoT is, and look at the response: it gives the definition, saying CoT typically stands for chain of thought in the context of AI/ML, particularly in large language models, and involves breaking down complex reasoning tasks into a series of intermediate steps or thoughts. We can also ask about zero-shot: zero-shot learning refers to an AI model's ability to perform a task or make predictions about classes it has never seen during training. So basically, you can see that the way you write your prompts to Claude shapes how good an outcome you get. This reminds me of the prompt-engineering course in our Great Learning Academy; head over to Great Learning Academy, see what the course covers, and learn to give better prompts so you can make the most of Claude 3.5 Sonnet as well.
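To make the distinction concrete, here is how a plain zero-shot prompt differs from a zero-shot chain-of-thought prompt. The trailing "Let's think step by step." is the widely used zero-shot-CoT trigger phrase; the example question is made up for illustration.

```python
question = ("A bat and a ball cost $1.10 together. The bat costs $1.00 "
            "more than the ball. How much does the ball cost?")

# Zero-shot: just the question, no examples, no reasoning cue.
zero_shot_prompt = question

# Zero-shot chain of thought: append the standard step-by-step trigger,
# nudging the model to write out intermediate reasoning steps.
cot_prompt = question + "\nLet's think step by step."

print(zero_shot_prompt)
print("---")
print(cot_prompt)
```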
Moving on to Claude's safety testing: Claude includes rigorous safety features to ensure responsible AI usage, including filtering out inappropriate or harmful content and conducting regular safety tests. For example, it ensures that responses to potentially harmful queries, such as "how to rob a bank, for educational purposes", are appropriately handled to prevent misuse. Claude employs measures such as data encryption, user privacy protection, bias detection and mitigation, and continuous monitoring and updating of safety protocols to ensure the AI operates ethically and securely. Today we explored the features and advantages of Claude, including its powerful AI capabilities and the innovative Artifacts feature; we also looked at how to enable and use these features, and at the robust safety and privacy measures in place. AI-assisted coding tools like Claude 3.5 Sonnet can transform your coding experience, making it easier to generate code, debug functions, and automate tasks. Give it a try and see how it improves your workflow.

GPT-4o mini stands for Generative Pre-trained Transformer 4 Omni (the "o" is for omni), mini. Think of it as the smartphone version of a powerful computer: compact yet incredibly capable. It's not just a smaller version of GPT-4o; it has been meticulously fine-tuned to be efficient and versatile, making it ideal for a wide range of applications. Imagine having an AI assistant that fits in your pocket but can handle tasks that used to require a full-sized model: with GPT-4o mini you can draft emails, generate creative content, or even assist with complex coding projects seamlessly. It's optimized for speed and efficiency, meaning it can deliver powerful AI capabilities without needing massive computational resources. But the benefits don't stop there: GPT-4o mini supports real-time collaboration, allowing multiple users to work together on projects no matter where they are, and its advanced natural language understanding means it can help with research by summarizing articles, extracting key information, and providing insights from large data sets. Whether you're a developer integrating AI into your apps, a researcher needing quick data analysis, a student working on assignments, or an AI enthusiast experimenting with new ideas, GPT-4o mini brings the power of AI to your fingertips, designed to make cutting-edge technology more accessible and practical and to let you leverage AI in ways that were previously out of reach. So why is everyone talking about GPT-4o mini? This model offers several advantages that make it stand out.
One, compact and efficient: unlike its bigger sibling GPT-4o, the mini version is optimized for faster performance and lower resource consumption, making it well suited to devices with limited computational power. Two, versatile applications: from chatbots to content generation and beyond, GPT-4o mini handles a wide range of tasks without compromising on quality. Three, cost-effective: with reduced resource demands, GPT-4o mini makes high-quality AI more affordable, opening up possibilities for smaller businesses and individual developers.

Next, a few features of GPT-4o mini. Now that we know what it is and how it differs from GPT-4o, let's highlight some of them: streamlined performance, delivering high-quality AI outputs with optimized processing power; user-friendly integration, easy to fit into existing systems and workflows; scalable solutions, suitable for projects of all sizes, from individual tasks to large-scale implementations; robust capabilities, supporting a wide array of functions including text generation and language translation; enhanced accessibility, available to a broader audience thanks to its efficiency and lower cost; real-time applications, ideal for chatbots and other interactive uses needing quick responses; and a customizable framework, easily tailored to specific use cases and preferences.

Now the key differences between GPT-4o and GPT-4o mini. Size and speed: GPT-4o mini is designed to be smaller and faster, processing information more quickly, which makes it ideal for real-time applications. Resource efficiency: while GPT-4o requires substantial computational resources, GPT-4o mini is optimized to run on less powerful hardware without a significant loss in performance. Accessibility: with GPT-4o mini, more users can leverage GPT-4o-class technology on a wider range of devices, from desktops to mobile. Customization: GPT-4o mini allows easier customization and integration into specific workflows, making it adaptable to various needs.

Ready to explore GPT-4o mini? Head over to the official OpenAI website, click Get Started, and you will be on your way to integrating this tool into your projects. Note that GPT-4o and its variants, including GPT-4o mini, are generally not available for free: OpenAI offers access to its models through subscription plans or API usage, which typically come with a cost. Some limited free access may be available through certain platforms or promotions, but sustained, full-featured access usually requires payment.
Now let's explore the OpenAI platform. First, the Playground: this is the main area where you interact with and test different GPT models; you input text and see how the model responds. Next, Chat: this section is specifically for setting up and testing conversational models like ChatGPT; you provide system instructions and see how the model responds in a conversational format, and a dropdown (for example, "gpt-3.5-turbo") lets you choose the specific model version you want to use. Assistants lets you create and manage different assistant personas or configurations: you define how the assistant should behave, what it should know, and its specific purpose. Text to Speech converts written text into spoken words, useful for creating voice responses or narration. Completions lets you generate text completions from a given prompt, useful for content-generation tasks where you provide a starting sentence and the model completes it. The Forum is a community space where users discuss topics related to OpenAI models, share insights, ask questions, and get support from both the community and OpenAI staff; forums are valuable resources for learning best practices, troubleshooting issues, and connecting with other users working on similar projects.

Next, the dashboard. Fine-tuning allows you to customize a GPT model by training it on your own data set, making it more accurate and tailored to your requirements. Batches lets you process multiple prompts at once, useful for handling large data sets or running many tests simultaneously. Storage is where you manage your saved data, prompts, and responses, keeping your work organized and accessible. Usage tracks how much of the service you have used, such as the number of requests made or the amount of data processed, which is important for managing your subscription and staying within usage limits. API keys are used to authenticate your applications with the OpenAI API, allowing your software to securely access and use the models.

Next, the Docs tab: here you have the exact model names along with their descriptions, so you will find GPT-4o and GPT-4o mini with their context windows, maximum output tokens, and training-data cutoffs, plus descriptions of other models such as GPT-4 Turbo and GPT-3.5 Turbo; you can read through all of them. Finally, the API Reference tab introduces the API, which you can call over HTTP from any language: how to install the client library, how to issue commands, and how authentication with API keys works. These references will be helpful when making API requests from your applications.
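To make the API-reference material concrete, here is a minimal sketch of the JSON body a chat-completions request carries. The model name and messages are illustrative values only, and in a real call you would POST this payload to the API endpoint with your API key in the Authorization header; no network call is made here.

```python
import json

# Illustrative chat-completions request body; "gpt-4o-mini" and the
# messages are example values, and your own API key is assumed.
request_body = {
    "model": "gpt-4o-mini",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is Power BI?"},
    ],
    "max_tokens": 200,
}

# The API expects this serialized as JSON in an HTTPS POST:
payload = json.dumps(request_body)
print(payload[:60] + "...")
```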
and what are the Authentication API keys and many more so you can just refer these things as well so these will be helpful in giving the API requests in your prompts so now let us go back to the playground and try giving the prompts in the completions section so here if I ask any question or if I write or type anything it should actually continue or complete the sentence and give the results so let me try doing it in the free account so we'll see what is powerbi so when I type something and click on
submit if you have the paid version you will be getting the result since I'm using the free version here it is telling me that I have reached the usage limit so it is directing me to the billing settings so here you can see the billing section of your account so for free trial I have zero credits which means it's free it's a free account that is why I'm not able to use this platform so you have your payment details where you can go and do the payment it is uh some dollars to use your uh
GPT 4 and 40 mini so these are the things so here you go you can do the payment and get your subscription and try to use GPT 4 mini and explore more features by yourself in a world where technology is constantly evolving GPT 40 mini stands out as a game changer making Advanced AI more accessible and practical than ever before whether you are drafting emails collaborating on projects or diving into complex research GPT 40 mini is your versatile efficient and Powerful AI assistant it's like having the brain power of a super computer right in your
pocket ready to tackle any task you throw at it embrace the future of AI with GPT 40 mini and unlock Endless Possibilities in your everyday life in Gemini we have different versions broadly classified into three elements the first one is Flash Then followed by Nano and then Ultra or Pro let's see what are these classification mean to us first when it comes to flash it was developed in order to execute the queries or the prompts in a very quicker way if you go to Nano why this was a variation in order to execute Ute or
use Google Gemini in a smaller devices like phones or tabs Nano version was discovered then we have Ultra and pro when you come to this version it is having capability to execute biggest prompts and also give you a good detailed analysis according to your requirement compared to Nano and Flash generally you could see Nano has a very limited number of tokens compared to flash in alra Pro so this is the variations in Broad you can have in Google Gemini then let's explore versions one by one you could see uh Gemini started with version one AKA
Bard they launched a product called B in September 2023 with the tokens of 1 million it was generally used in order to have initial conversation uh AI model and also also text generation it was specifically given by the Google as a product as a competition to open AI that is chat GPT the applications of Gemini 1 is just to have a conversation with a chat bot and also basic content generation then it also served a general purpose AI task as well after Gemini one that is B we called Gemini advanced again it is B plus
It arrived in late 2023, again with 1 million tokens, enhanced performance compared to Gemini 1, and more refined responses; improved chatbots and more complex content creation could be handled in Gemini Advanced, that is Bard Plus. After all these, as I mentioned, we got the broad classification: Nano, Pro, and Ultra. So let's understand Gemini Nano 1.0. It was released in early 2024, but with half a million tokens, meaning it has fewer tokens than the other versions. Why? Because it is meant to be a lightweight model: you can run Gemini Nano 1.0 on devices like mobiles or IoT devices. Its applications are mobile apps, IoT devices, and embedded AI; it was specifically designed to work as a lightweight model. Then comes Gemini Pro 1.0, again in early 2024, with 2 million tokens. It was used for professional-grade AI and multimodal capabilities, and was generally preferred for professional content creation and advanced research. Next, Gemini Ultra 1.0, also early 2024 with 2 million tokens: a high-performance model used for intensive tasks; complex problem solving and high-demand AI applications were the uses of Gemini Ultra 1.0. Gemini 1.5 Pro, again early 2024 with 2 million tokens, is generally used for advanced multimodal reasoning and a long context window; AI-driven content creation is its application, with data analysis and advanced coding features available in 1.5 Pro, which is currently the most used, and we'll learn more about 1.5 Pro in this video, so stay tuned. Then we have 1.5 Flash, again 2 million tokens, but Flash, as I told you, is about speed: low-latency processing, generally used for mobile applications and smaller devices (not restricted to mobile), real-time applications, IoT integrations, and summarization and extraction. All these are applications of Gemini 1.5 Flash. I hope we are clear by now on the variants of Gemini. Now let's see the future: what is Google preparing for the next versions, Gemini 2 and 2.5? We don't have much information on when they will be released or how many tokens they will have, but the expected key features amount to an advanced version of 1.5 Flash: advanced multimodal processing, higher accuracy in image and video understanding, and cross-domain AI interaction. Gemini 2 would be used for video and multimedia editing tools; if it comes, editing would become comparatively easy. With 2.5 you would have advanced predictive analytics, heavy data processing and analysis, and multilingual communication support, so customer support would become very easy if 2.5 comes, because of the multilingual service. Now let's
switch back to our Google AI Studio and start with our first prompt. Click on create new prompt, and if you want to change the name, click edit, say "demo one", and click on save. Now the interface is ready to take your prompts, but there is one option here: system instruction. Why is this used? It sets the tone of the prompt, the style of the responses we get, and the instructions we give to the model; we are setting a context before we start communicating with the model. I'm telling the system that the responses we receive should be poetic. As a sample I'll just give "hi" as the prompt and run. It has taken two tokens just to reply to "hi", and it answers in a poetic manner: "a whisper soft, a greeting brief, hi you say, a single leaf". If I say "how are you", it will again answer in a poetic manner. You can see the token count moving as it processes; tokens are the units, the building blocks the model uses to process the statements we give to the system. For just one "how are you" it gives four lines of answer, because I have set the context that the answers should be poetic. This is general prompting, which you can do by setting the system instruction in Google AI Studio. Now let's move on to new tuned model. Before that, it asks whether you want to save; click okay and it will save as demo one. Once you select new tuned model, this is where you land: you have the option to create a structured prompt. The prompt we gave earlier was a normal prompt; the second form of prompt is a structured prompt. I click on structured prompt, and again, if you want to change the name you can; I'm using the name demo 2 and saving it. You can see there are two sections here: first input, then output, and later "test your prompt". You first train the system, then you test it; that is the concept. You are doing this in a structured manner rather than just giving a random prompt. Now let's set a context or style of instruction before starting the structured prompting: an optional tone or system instruction which is saved across the session. I'm giving the overall context: I'm working as a news reporter and I need normal news to be displayed in a more excited manner, as breaking news. By doing this I'm setting a context, and I'll save it for now. Now let's start giving inputs and their specified outputs to the model; we're training the model. This time: "Olympics is held in Paris". Another input: "the player from Turkey won silver medal in shooting". It will try to suggest certain completions; we'll leave those as they are. Now let's type the responses as well: "did you know the Olympics is held in Paris, exciting right", a normal sentence I'm using, and then "wow, a player from Turkey won silver medal in shooting, isn't it exciting". So I've typed certain inputs and outputs; that is training the model. Out of 500 samples I've used two, and you can give a third one as well: "India lost medals in badminton", with the output "oh my God, India missed the chance to bag medals in badminton". It is just exaggerating, giving an excited output for a normal sentence; this is how I'm training it. Then we test the inputs: I'll say "India won gold in wrestling" and click on run to get the output. It gives an exaggeration: "boom, India takes gold in wrestling, this is huge, what a performance". We first trained this model, and now it is trained across the samples, giving the output in the excited manner we asked for.
this is the structured prompting. Now, what do we do? We delete the outputs and give the same input, "India won gold in wrestling", and run again. This time it will follow the system instruction we gave at the top, not the trained samples, because there are no samples left to compare against. It responds with a big paragraph: "breaking news, India claims gold in wrestling", trying to give proper news coverage. So this is how structured prompting works.
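The system-instruction-plus-examples setup above maps directly onto the request body the Gemini API accepts over REST. As a rough sketch (the field names system_instruction, contents, and parts follow the public Gemini REST API, but treat the exact shape as an assumption and check the current docs before relying on it), the few-shot examples become alternating user/model turns:

```python
import json

def build_gemini_request(system_instruction, examples, new_input):
    """Assemble a generateContent-style request body: a system
    instruction plus few-shot input/output pairs as alternating
    user/model turns, followed by the new input to complete."""
    contents = []
    for inp, out in examples:
        contents.append({"role": "user", "parts": [{"text": inp}]})
        contents.append({"role": "model", "parts": [{"text": out}]})
    contents.append({"role": "user", "parts": [{"text": new_input}]})
    return {
        "system_instruction": {"parts": [{"text": system_instruction}]},
        "contents": contents,
    }

examples = [
    ("Olympics is held in Paris",
     "Did you know the Olympics is held in Paris? Exciting, right?"),
    ("The player from Turkey won silver medal in shooting",
     "Wow, a player from Turkey won silver in shooting. Isn't it exciting?"),
]
body = build_gemini_request(
    "You are a news reporter. Rewrite plain sentences as excited breaking news.",
    examples,
    "India won gold in wrestling",
)
# POST this body to the generateContent endpoint with your API key to get the reply
print(json.dumps(body, indent=2))
```

This is the same idea AI Studio applies behind the scenes: the more input/output pairs you append, the more strongly the model imitates their style.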
Now we have understood the normal prompt and the structured prompt; let's click on new tuned model (again, click okay if you want to save first). In the tuned model screen you can import either a CSV file or a Google Sheet, and they generally recommend 100 to 500 examples in that sheet. If you want the model tuning guide, you can have a look at the linked document. If you click on select data source, the structured prompt demo 2 which I created recently is available here, and you also have the news headlines and summarization samples. I prefer taking news headlines. Once you do that you can see the prefix columns: select the input and output columns, and then you can create the sample tuned model. You can give a description if you want; we are not going to touch the advanced settings, how many tuning epochs, what the learning rate multiplier and batch size should be, and so on, so let them stay as they are set. You have two model options here, Gemini 1.0 Pro 001 and 1.5 Flash 001 tuning; we have generally been working with Pro, so I'm using Pro again. Click on the tune button, and you will find the sample tuned model available in your library. Once you click on the sample tuned model, on your left you can see the tuned model results. It takes some time to load, and you have to wait 5 to 6 minutes; the reason is that I selected Pro and it took a while to run the epochs. The graph is trending the right way, but the value is 1.2, whereas at least four to five is what we would consider good here. Still, this is what model tuning looks like in Google AI Studio. You can see there is less than 1 minute left to finish this process, and all the information is available: the batch size, the epochs, the learning rate, the number of examples it was tuned on, and the loss graph is also shown. That is the overall view of model tuning in Google AI Studio. Coming back to the main interface, there is a plus option; if you click on plus you have various options to choose from: my drive, upload to drive, record audio, sample images, and sample video. Let's check out a sample video. There are predefined videos here; I select one of them, add the prompt "what is this video all about", and click on run. It will analyze the multimedia, the video, and give us the details of what this 10-minute video deals with. It takes a certain time, and if you look at the token count it is huge because of the 10 minutes of video. The response is two lines giving the gist of what the video is all about. You can also upload videos from your local system and understand what is happening in them; it has the capability to analyze video as well. You can also change the model to compare response times: I'll give the same kind of 10-minute video with the prompt "what is in the video, explain", click on run, change the model, and check how it performs. While executing the second video we switched the model to 1.5 Flash; let's see the difference. The first video took around 30 seconds, here we got 40 seconds, and it depends on the content and the file size of the video; the more content you have, the better it is to use Flash. You can do the same thing with sample images: if you click on image, you have a turtle here, and if I ask "what is this", let's see how much time it takes. In Flash it took around 28 seconds to say this is a picture of a sea turtle. So this is how you load your images, videos, and multimedia and analyze them; you first upload to the drive, or take the content directly from the drive. This is how prompting works in Google AI Studio: you choose from the available options, that is, from
your drive, the sample videos, or recorded audio. Let's get started with our demonstration: we will be building an Android application that generates email templates for sick leave and casual leave using GPT. Since we are going to create an email generator app, first let me tell you which software is required. Because we are implementing an Android app, we need the Android Studio editor, so go to the Android Studio website at developer.android.com/studio. When you open the website you can see a link called download Android Studio; once you click on download it will ask you to accept the terms and conditions, so read them, tick "I have agreed to the terms and conditions", and click on download. You can see the file being downloaded. Once you download the file, run the Android Studio installer for your version and execute the steps it shows. Since I have already installed Android Studio, I'm not going to repeat the steps; just follow carefully what it shows on screen and install the IDE onto your laptop. The next thing we will do is create an API key. For this I'm using the OpenAI platform, and I have already logged in. Whether you have a free or a paid account is fine; the free version has usage limits on your API keys, but you can still use them and have your apps generated. Open platform.openai.com; this is the page you see when the website loads. Go to your user profile, and there you can see API keys. Since we are creating a new API key, click create new API key; the name field is optional, so I will just give it "app", to say that I'm creating an API key for this email template generator app. When I click on create secret key, the key is generated, and we have given it permission to read and write API resources. This secret API key needs to be kept very confidential, because data breaches can happen, so you just have to copy it and keep it very safely. I'm opening a notepad here and saving the generated key so that I can use it in my program when implementing this email generator app. As you can see, my Android Studio is installed, and we are opening Android Studio here. Now we will create a project for the email generator app using LLMs. The first step: go to File, then New, then New Project, click on Empty Activity, and go to next. Give a name for your app; I'm giving a name like "email generator app". You can see the package name is com.example.emailgeneratorapp, this is my save location, the SDK is the default SDK, and the build configuration is Kotlin DSL, which is the recommended version for this app. I'm not changing any of the default configurations, so I just click on finish; this takes some time and a new project is created. Now the project is open and we have two folders, app and Gradle Scripts. We will see one by one what these folders are and how they will help us implement this app. Open Gradle Scripts: here are configuration files such as build.gradle.kts and libs.versions.toml; all these files are required for your build configuration. The only thing we will change is the build.gradle.kts file. This build.gradle file lets you add the configurations and the required dependencies which help generate the app, and it also helps you test and debug, and analyze how the configurations are set and how the dependencies are used. So the only change here is adding the dependencies required for our app. You can see I have added the AndroidX, JUnit, and Espresso dependencies, all chosen based on our app. Once you have modified your Gradle file, you need to sync your project, which lets it work properly with those dependencies. I'm syncing the project now; this will take a while, but you can see whether the sync is successful or not. If the sync is successful, it means the dependencies were added successfully; if not, you need to check the errors, debug what sort of errors are there, figure out what the problems are, and fix them. You shouldn't really have errors here, though, because we are only adding the dependencies.
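For orientation, the module-level dependencies block being described might look roughly like the sketch below. This is illustrative only: the exact artifact coordinates and version numbers are assumptions (in a real project they would typically come from libs.versions.toml), so match them to your own version catalog.

```kotlin
// Module-level build.gradle.kts -- illustrative sketch, versions are assumptions
dependencies {
    implementation("androidx.core:core-ktx:1.13.1")
    implementation("androidx.appcompat:appcompat:1.7.0")
    implementation("com.google.android.material:material:1.12.0")
    // An HTTP client such as OkHttp is needed for the calls to the OpenAI API
    implementation("com.squareup.okhttp3:okhttp:4.12.0")
    testImplementation("junit:junit:4.13.2")
    androidTestImplementation("androidx.test.ext:junit:1.2.1")
    androidTestImplementation("androidx.test.espresso:espresso-core:3.6.1")
}
```

After editing this block, the Gradle sync described above is what actually downloads and wires in these libraries.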
Now you can see that the sync is successful; if it were not, it would show the errors in the problems terminal here. There are some warnings, but we can leave those as they are, because they are only asking us to update to newer versions, and we are using the versions that are apt for our email generator app. That's all about the Gradle Scripts. Now moving to the app folder, our main folder, which is where we add the functionality of the app: the input text fields, the buttons, and the interfacing between your front end and your back end with the help of your API key. In app we have the manifest folder, the kotlin+java folder, and the res folder. We will explore one by one what exactly these are and what changes we are going to incorporate in these files.
In your manifest folder you have the AndroidManifest.xml file. This file provides critical information to the Android system and declares the permissions your app needs. We will not modify much here; we will just add connectivity with one line, the internet access permission. The only thing I added is the uses-permission element saying that the app needs the INTERNET permission, since we are interacting with APIs. That is the single one-line change in this file. Next we move on to the kotlin+java folder; this is our main package, com.example.emailgeneratorapp, so let's explore it. This is our MainActivity.kt file, the main entry point of the app, where we will define the user interface, provide the API key we just generated, and add the functionality. First we will create the user interface, then add the functionality for calling the API and handling requests and responses in MainActivity.kt. So what do we do to set up the layout for the app? Go to the res folder; you can see there are a few folders here, drawable, mipmap, and values XML folders. We will create a layout folder: right-click on the res folder, go to Android Resource Directory, set the resource type to layout, and click okay. You can see the folder has been created. Now we just have to create a file: right-click on it, go to New, then Layout Resource File; it asks for a name, so give "activity_main", and you can see the file has been created. This gives the layout of how the app will look. I'm going to switch from the constraint layout view to the XML view, because I need to make some modifications to add the text fields and buttons.
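A minimal sketch of such an activity_main.xml is shown below. The structure (one EditText, two Buttons, one TextView in a vertical LinearLayout) matches what this walkthrough describes, but the specific IDs, hints, and labels are hypothetical placeholders, not the video's exact file.

```xml
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical"
    android:padding="16dp">

    <EditText
        android:id="@+id/promptInput"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:hint="Describe your leave reason" />

    <Button
        android:id="@+id/sickLeaveButton"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:text="Generate sick leave email" />

    <Button
        android:id="@+id/casualLeaveButton"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:text="Generate casual leave email" />

    <TextView
        android:id="@+id/responseView"
        android:layout_width="match_parent"
        android:layout_height="wrap_content" />
</LinearLayout>
```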
So we'll add those elements. You can see that I have created a linear layout containing an EditText, then two buttons, one for generating a sick leave email and the other for generating a casual leave email, and then a TextView which will display the response of the generated sick leave or casual leave email. What's the next thing we need to do? We need to modify the colors, strings, and themes XML files and check the user interactions: we will modify colors.xml and strings.xml, and also add some styles. I'm opening colors.xml and refining it based on the color formats I want for the app; strings.xml manages your text resources and colors.xml manages your color resources. Now we will create styles.xml: again right-click, go to New, then Values Resource File, give the file name "styles", and click okay. A file opens; this file helps you manage and theme your text. Now that we have created the layout, we will add the API key we created into our main activity file so the interface can talk to the API. This is our MainActivity.kt file. Here you can see I created a private function to send the query, using the URL https://api.openai.com, which is the API we created the key for. The model we are using is gpt-3.5-turbo, the prompt is the query, and the maximum tokens we have defined is 150. Then we define the request body and what to do with the body on response. It's a simple file where you use your API key, send a request, and retrieve a response. You can see we created a variable called apiKey, and you can paste in the API key we generated earlier on the OpenAI platform. Now we will execute the files and see whether the app runs on the Android mobile or not.
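To make the request concrete, the pieces the app's send-query function assembles (endpoint, auth header, and JSON body with gpt-3.5-turbo and max_tokens 150) can be sketched as below. The endpoint path and field names follow OpenAI's chat completions API; reading the key from an environment variable, rather than hardcoding it, is the safer habit the key-handling advice above points toward.

```python
import json
import os

def build_email_request(query):
    """Build the HTTP pieces for a chat-completions call like the
    one the app's send-query function makes."""
    api_key = os.environ.get("OPENAI_API_KEY", "sk-...")  # never hardcode real keys
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": query}],
        "max_tokens": 150,
    }
    url = "https://api.openai.com/v1/chat/completions"
    return url, headers, json.dumps(body)

url, headers, payload = build_email_request(
    "Write a sick leave email: I have fever and need 2 days rest"
)
print(url)
```

POSTing that payload with those headers returns a JSON response whose generated text the app then shows in its TextView.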
So, to do that, we will first check whether there are any errors: go to Build and first clean the project; it takes some time. Okay, we have cleaned the project; now we rebuild it. You can see the Gradle build has finished, and you can check here whether there are any errors. To run this application we will use an Android mobile, so let me tell you the settings you need in order to use your physical device to run this app. Go to Settings, then About phone, then Software information, and there you can see something called build number. Tap the build number repeatedly; it tells you how many more taps are needed for the developer options to open, and then it asks for my PIN, so I enter the phone's password. You can see a message pop up saying developer options have been enabled. How do we check? Go back, and after the About phone page you now have Developer options. Click on Developer options, and you can see they have been turned on. Since we need to sync whatever we have built in the Android IDE to this mobile, enable USB debugging; it asks whether USB debugging is intended for development, and you just say okay. Then you will be able to access the device from Android Studio, so I click on okay and allow. Now we click on the Run tab; the project is built and launched onto the Android mobile, and you can see the app is successfully installed. Let's check it out. I'll give a prompt that I have fever and need 2 days rest, and when I click on generate casual leave email, it creates a casual leave email. So this is how easy it is to build your LLM app; a simple app is what we have created here. We have covered large language models and explored some of the most popular ones such as GPT, Claude, and Gemini. To learn more about LLMs, be sure to subscribe to our channel for exciting videos and updates. Next up we'll take a closer look at some of the most popular generative AI tools that are transforming industries. Are you ready to discover these game changers? Let's get started. Generative AI is here to spark your creativity and turbocharge your projects. Whether you are an artist, a writer, or just someone looking to jazz up your content, these tools are like your new BFF in the creative world. Now let's talk about tools, the gadgets that make generative AI feel like magic, which we will be exploring in this video. First is ChatGPT: developed by OpenAI, it is a state-of-the-art language model renowned for its conversational abilities and versatile applications. Second, GitHub Copilot: powered by OpenAI Codex, GitHub Copilot is an AI pair programmer that assists developers by generating code suggestions in real time. Next, Claude: an AI assistant specialized in prompt engineering, Claude offers advanced capabilities for generating text based on user prompts. Last but not least, Google Bard, now known as Gemini, is Google's innovative AI platform, which seamlessly works across text, code, audio, image, and video, with capabilities like multimodality. Throughout this video we will dive into each of these tools, uncovering their functionalities, applications, and the limitless possibilities they offer in the world of generative AI.
Now let's understand what prompt engineering is. Prompt engineering is as much an art as it is a science; that's a nice line, and we'll come back to it once we understand what prompt engineering actually means. Prompt engineering is made up of two words: prompt and engineering. A prompt is simply a detailed set of guidelines or instructions given to an LLM or generative model to perform a task. Engineering means iteratively developing a task-specific prompt so that the generative model or LLM outputs the perfect, or near-perfect, outcome you are aiming for. So prompt engineering is an iterative process. How? First comes the idea: you know what you want. Then you design a prompt, you feed it to your model, your ChatGPT, and you get results. Then you test the results and collect feedback. Once you have the results, you might not be satisfied with them; this is where you have a conversation with your model and push it toward the output you actually want. For example, take a code generation task. A specific problem statement can be solved through various methods: say you want to check whether a string is a palindrome. That can be done by brute force, or by a recursive method. When you give ChatGPT the prompt, it will often first give you a brute-force solution; you have to keep instructing it, prompt after prompt, until it gives you an optimized version of your code. That is an example for code generation. Similarly, suppose you are doing content creation, and the content is directed at a specific target audience. When you give ChatGPT a prompt, the first results might not be satisfying; you have to tell your generative model iteratively what you want and give it feedback. For example, if you have not received a good output, you might tell ChatGPT: "This is not what I want; please regenerate the output based on my target audience and the feedback provided." That's where testing and feedback come into the picture. You have a conversation with your generative model, in this case ChatGPT, and give it constant feedback: this part is good, this part is bad, this specific part should be changed. All of this is an iterative process; that's why we say prompt engineering is an iterative process built on the detailed set of instructions you provide for a specific task.
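The palindrome example is a good illustration: a first prompt typically yields the straightforward version, and follow-up prompts ("avoid building a reversed copy", "stop at the first mismatch") push the model toward something tighter. A minimal sketch of both versions (the function names are my own, not from the video):

```python
def is_palindrome_brute(s: str) -> bool:
    # Brute force: build the reversed string and compare.
    return s == s[::-1]

def is_palindrome_optimized(s: str) -> bool:
    # Two-pointer check: no reversed copy, stops at the first mismatch.
    left, right = 0, len(s) - 1
    while left < right:
        if s[left] != s[right]:
            return False
        left += 1
        right -= 1
    return True
```

Both return the same answers; the second avoids allocating a reversed copy and can exit early, which is exactly the kind of refinement you would ask ChatGPT for in a follow-up prompt.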
Now let's go back a bit. When we started, we said prompt engineering is as much an art as it is a science. How is it an art? It is an art because when you conceive an idea and write a prompt, your creativity comes into the picture. The science part comes in when you give your generative model or LLM, in this case ChatGPT, the prompt and it produces the result: in the back end there are generative models, different Transformer architectures, with billions of parameters playing a role to give you that result. That is the science part of it, which is why prompt engineering is a combination of both art and science. Now that you understand what prompt engineering is, let's look at the vital element of prompt engineering: the prompt itself, without which a generative model like ChatGPT will not work. A prompt majorly consists of two parts: parameters and structure.
When you design a prompt, you have to think about both its structure and its parameters so that you get an optimized result. Structure is something we'll talk more about shortly; let me explain parameters here. I have selected only a few; there are more parameters you can look up. Here we highlight three: temperature, top-p, and max length. Temperature controls the randomness of the output; it ranges from 0 to 1. If you want your model to be creative and generate creative outcomes, set the temperature around 0.7 to 0.8; for tasks such as code generation, where you don't need creativity, you can set it to 0. Top-p is related to temperature in that it also governs creativity. Top-p stands for top probability: a generative model produces many candidate continuations for your prompt, and top-p decides how much of the probability mass it samples from when choosing the output. The higher the top-p, the more diverse and creative the answers: keep it low for factual output, high for diverse output. Last is max length, which manages the response length; this parameter also helps control the cost of your generative model, since usage is typically billed per token.
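Temperature is easy to see in isolation. Under the hood, a model turns scores (logits) into a probability distribution, and temperature rescales those scores before the softmax: low temperature sharpens the distribution toward the top token, high temperature flattens it. A small self-contained sketch (the logits are made-up numbers, not from any real model; "temperature 0" in real APIs effectively means picking the top token greedily):

```python
import math

def softmax_with_temperature(logits, temperature):
    # Divide logits by temperature, then apply a numerically stable softmax.
    scaled = [x / temperature for x in logits]
    m = max(scaled)                      # subtract max to avoid overflow
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                        # hypothetical scores for 3 tokens
cool = softmax_with_temperature(logits, 0.2)    # low temperature: near-deterministic
warm = softmax_with_temperature(logits, 1.5)    # high temperature: more uniform
print(cool[0], warm[0])  # the top token's probability shrinks as temperature rises
```

With temperature 0.2 the top token takes almost all the probability mass; at 1.5 the mass spreads out across the alternatives, which is why higher temperatures feel more "creative".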
So now you have understood what prompt engineering is, what a prompt is, and the two main components of a prompt: parameters and structure. Now let's look at the components of a good prompt. By a good prompt we mean one where ChatGPT, your generative model, gives you a good output and you need fewer rounds of feedback. The first two components are context and instruction, and they are interchangeable in order: if you want to put the instruction first and then the context in your prompt, you can; if you want the context first and then the instruction, you can do that too. Context is the additional information you provide to your model. Instruction is the specific task you want the model to perform. For example, if you want to summarize some text, "summarize this text" is your instruction; the context is why you want the summary and what outcome you want to achieve. Going back to the Tesla example: the instruction is to summarize, and the context is that you are working from a business report and want to know the profit from 2020 to 2023. That is the context you need to set: why you want this and why it is needed. Next is the input data. For the Tesla example, the input data is the business report or article you provide to ChatGPT. Finally, the output indicator specifies what you want as output and in what form: for example, in CSV format, in a tabular format, or as a graph. You have to state what kind or format of output you want. These are the four components of a good prompt.
If your prompt contains these four things, you are enabling your generative model, your ChatGPT, to give you better results. Now let's see an example: sentiment analysis. First we set up a context. The prompt says "act as an analyst": you are asking ChatGPT to act like an analyst working for an OTT platform, performing sentiment analysis on feedback provided by consumers. Your consumers are the users watching your movies and web series, and you have to do sentiment analysis on the feedback they provide. That is the context you give ChatGPT, so it knows to think from an analyst's perspective and perform sentiment analysis. Second is the instruction: you tell ChatGPT to classify the feedback as neutral, negative, or positive, and you explain what each label means. Positive means your consumer is a promoter, negative means they are a detractor, and neutral means they are neither; they are neutral toward your content. That is the instruction: what the generative model needs to do, in this case classification. Next come your input data and output indicator, but before that, note one more mark of a good prompt: giving examples. If you give your generative model, here ChatGPT, examples, it learns from them and applies that pattern to your input data to produce the output. Here we have given two example feedbacks. The first is "I think the series was okay": it takes "okay" as neutral, so the sentiment is neutral. The second is "The acting of each character in the series was awesome": it takes "awesome" as positive, so the sentiment is positive. Now your input data is "The storyline for the series was repetitive and abysmal", and your output indicator is "Sentiment:", asking ChatGPT what the sentiment of this feedback is. So the full prompt has context (act as an analyst and do sentiment analysis), an instruction (classify the feedback into positive, negative, and neutral), examples, input data, and an output indicator. The output for this particular feedback is negative; if you copy and paste this into ChatGPT, it returns negative for this sentence.
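Put together, the prompt described above looks something like this; the exact wording below is my reconstruction of the five components, not a verbatim copy from the video:

```python
def build_sentiment_prompt(new_feedback: str) -> str:
    # Context + instruction + few-shot examples + input data + output indicator.
    return (
        "Act as an analyst working for an OTT platform. Perform sentiment "
        "analysis on feedback provided by consumers.\n"
        "Classify the feedback as positive, negative, or neutral.\n\n"
        "Feedback: I think the series was okay.\n"
        "Sentiment: neutral\n\n"
        "Feedback: The acting of each character in the series was awesome.\n"
        "Sentiment: positive\n\n"
        f"Feedback: {new_feedback}\n"
        "Sentiment:"
    )

prompt = build_sentiment_prompt(
    "The storyline for the series was repetitive and abysmal."
)
print(prompt)  # paste this into ChatGPT; the expected classification is negative
```

The two labeled examples are what make this a few-shot prompt: the model picks up the "Feedback/Sentiment" pattern and completes the final "Sentiment:" line for the new input.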
So this is how you should write your prompt; these are the components of a good prompt, so that the output ChatGPT generates is exactly what you need to work with. Now that you have seen the components of a good prompt, let's look at a checklist you can keep in mind while designing an effective prompt for better results. First, define the goal: tell ChatGPT exactly what you want it to do. Next, detail out the format, meaning the output format you want ChatGPT to provide. As I mentioned previously, it can be a table, a paragraph, a list, a CSV, anything you want, and you can even ask ChatGPT to present items in priority order if you are working on that kind of content. Next, create a role. If you remember the prompt we discussed while learning about the components of a good prompt, the context said "act as an analyst"; that is creating a role, so that ChatGPT processes your request from the perspective of the role you have assigned. Next, clarify who the audience is: you are telling ChatGPT to generate the output based on the demographics of the audience you are catering to. For content, that can be beginner, intermediate, or advanced; for code, it can be foundational, functional, or expert. If you are a teacher using ChatGPT, you can specify "I want to teach fractions from a sixth-grade learner's perspective"; or if you want to make someone understand a particular concept, you might say "I want a 10-year-old to understand what a prompt is." When you put that kind of clarity about your target audience into your prompt, it generates tailored outputs for you. Next, give context: provide every possible piece of additional information in your prompt so the purpose of your request is clear to ChatGPT and the response it generates is what you want. Next, give examples. As we saw while covering the components, giving examples makes ChatGPT understand what you want; it learns from them to produce accurate results. Next, specify the style, such as the communication style: if you're working with a brand, state how the brand communicates, formal or informal, so the response suits it. Next, define the scope. When you outline the scope with specifications, beyond giving context and examples, ChatGPT operates within those parameters; for example, the temperature or top-p you want, as discussed earlier. Finally, apply restrictions: constraints such as the output length or token limit you want. Those constraints create the right boundaries for ChatGPT to produce relevant responses. All these checklist pointers revolve around the components of a good prompt we have already seen, but as a checklist you can keep them in mind while designing your prompt. This is how you can write a good prompt.
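The checklist above can be turned into a small prompt builder so none of the pieces get forgotten; the field names here are my own labels for the checklist items, not an official template:

```python
def build_prompt(goal, role=None, audience=None, context=None,
                 examples=None, output_format=None, constraints=None):
    """Assemble a prompt from the checklist items; skip any that are None."""
    parts = []
    if role:
        parts.append(f"Act as {role}.")
    if context:
        parts.append(f"Context: {context}")
    parts.append(f"Task: {goal}")
    if audience:
        parts.append(f"Audience: {audience}")
    if examples:
        parts.extend(f"Example: {ex}" for ex in examples)
    if output_format:
        parts.append(f"Output format: {output_format}")
    if constraints:
        parts.append(f"Constraints: {constraints}")
    return "\n".join(parts)

print(build_prompt(
    goal="Summarize this business report's profit from 2020 to 2023.",
    role="a financial analyst",
    audience="beginner-level readers",
    output_format="a short table",
    constraints="under 150 words",
))
```

The ordering here puts the role and context first, but as noted earlier, context and instruction are interchangeable; reorder the `parts` list to taste.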
Imagine having a chat with an AI that feels like talking to a real person: that's what ChatGPT does. ChatGPT stands for Chat Generative Pre-trained Transformer. Developed by OpenAI, it is an artificial intelligence model based on the Transformer architecture, which is particularly good at processing and generating human-like text. It's a version of what's known as a language model, because it can predict and generate sequences of words based on the input it receives. Those are some of the buzzwords you have likely heard around generative AI: large language models, LLMs, and GPT. Now let us see how to give a prompt to ChatGPT.
I will start by telling it that I want to build a game app, rock-paper-scissors. Let my back end be Python and my front end be HTML, CSS, and JavaScript, and I want to execute this on an online editor, namely Replit. This is our instruction and constraint, because here I'm telling ChatGPT that I want to develop a rock-paper-scissors game with Python as the back end and HTML, CSS, and JavaScript as the front end. We are setting the context here: this is all about what we want to develop. Next: I want this for a target audience of college students, and it should suit early professionals as well, so let it be simple and easy to execute. This is where I set the audience, and the overall context is in place. Then: let the creation be very interactive, with simple explanations and a step-by-step process that helps me understand and create this easily. This is the overall context we are setting; I want it to be interactive so that it tells me what to do step by step. Then: please help me connect both the front-end and back-end code easily, and explain everything from scratch. This again is an instruction. So this is the entire prompt I want to give to my ChatGPT; I will go to ChatGPT and paste it in. If you want to learn how to write these prompts in detail, you can take up the Prompt Engineering course on our Great Learning platform. Now let us execute this prompt.
When I give this prompt, this is what ChatGPT gives me back: "Sure, let's create an interactive rock-paper-scissors app using Flask for the back end and HTML, CSS, and JavaScript for the front end," followed by a step-by-step guide. It tells me to sign in or log in to replit.com. I will show you a demo of how the prompt plays out in a moment, but first let's go through the basic setup. For our front end we are using HTML to structure the game, CSS to style it, and JavaScript to add interactivity: HTML creates the layout for the rock-paper-scissors game, CSS makes it look appealing, and JavaScript handles the game logic, such as detecting choices and revealing winners. On the back end we will use Flask, a lightweight Python web framework; Flask will serve our front-end files and handle the game logic on the server side. This setup allows us to build a fully functional web app with Python. We are going to use Replit, an online code editor, to write, run, and test our code; Replit also makes it easy to collaborate and share your projects with others. Now I will walk you through the steps of building the rock-paper-scissors game: we will set up the front-end files, move on to the back end with Flask, and finally connect everything together. So I go to Google and log in to my Replit account; I've already logged in, but you can create an account of your own. Next, ChatGPT tells me to create a new project: click the Create button and choose Flask as the template. So I click the create Repl option, select Flask as the template, and give the project a title: let's call it rock-paper-scissors game app. Once you click Create, it opens a new page where we will paste our code. Our next step is to set up the back end.
That is Python with Flask. We open main.py and copy in the Python code ChatGPT produced: click Copy code there, go to Replit, and paste it into main.py. After this, step three is to set up the front end. For that we first create a templates folder: go back to main.py, use the New folder option, and name the folder templates. Inside this folder, ChatGPT says, we create a file named index.html, so I copy that name, create the file, and paste in the HTML code from ChatGPT. Next it tells me to create a static folder, so I go back to main.py, create a new folder named static, and inside it create two files: one is style.css and the other is script.js. Now I add the CSS code, copying it from ChatGPT into the CSS file, and then the JavaScript code the same way, pasting it into the JavaScript file. Step four is simply running the application on Replit, but before we run it, let's look at the gist of what these files are. The Python file holds the actual game logic: it plays rock-paper-scissors, taking the input from the user. The HTML file structures the page; your entire web page is laid out with this HTML. The CSS styles the whole page, and the JavaScript interactively connects everything in the code you have developed. Once you understand that, we move on to step four: running the application on Replit. To run, we just click the Run button at the top of Replit, and then we can access the game. So I click Run, and you can see we get an output: it shows rock, paper, scissors. If I click Rock, it says you chose rock but the computer also chose rock, so it is a tie.
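The server-side game logic ChatGPT generates for main.py boils down to one decision function; this is a minimal sketch of that logic, not the exact code from the video:

```python
import random

CHOICES = ("rock", "paper", "scissors")
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def play_round(player, computer=None):
    """Decide one round; the computer picks randomly if no choice is given."""
    if computer is None:
        computer = random.choice(CHOICES)
    if player == computer:
        result = "tie"
    elif BEATS[player] == computer:
        result = "win"     # the player's choice beats the computer's
    else:
        result = "lose"
    return {"player": player, "computer": computer, "result": result}

print(play_round("rock", "rock"))    # a tie, as in the demo
print(play_round("paper", "rock"))   # paper covers rock: the player wins
```

In the Flask version, a route would receive the player's choice from the JavaScript front end and return a dictionary like this as JSON for the page to display.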
When I click Paper, it says you chose paper but the computer chose rock, so you win. This is how you get the output for a single player, with the computer as the other player. Now we will refine our code using ChatGPT for two players. I simply give the prompt: "rock-paper-scissors should be a two-person game." When I give this prompt, it refines the code and gives it back to me, and we simply copy and paste the new Python, HTML, CSS, and JavaScript code into the same files we created. Copy the Python code, go to Replit, select all in the Python file, and paste it. Next copy the HTML code, go back to Replit, and paste it into the HTML file. Then it gives the CSS code: copy it, go to your CSS file, and paste it. Finally, go back to ChatGPT, copy the JavaScript code, and paste it into the JavaScript file in Replit. After doing all that, let me run the project again. When you click the Run button this time, you get a new output for two different players. Player one, let me choose rock; it says it is waiting for the other player. Player two, let me choose paper; it says paper wins, so player two is the winner. Like this you can test any number of options, and it will tell you who wins each particular game. This is how you can develop the app using ChatGPT. As we reach the end of this part, I hope you now have a clear understanding of how to develop a simple game app using ChatGPT prompts and Python. The key takeaway is to break the task down into manageable steps and leverage the power of ChatGPT to guide you through each stage. The technology has rapidly evolved and has made statistical and data analysis very convenient.
How? Using the most popular generative AI tool, ChatGPT. So let's quickly apply statistics to a healthcare dataset using ChatGPT-4o. First we upload the dataset. Once the healthcare dataset is uploaded, we give a prompt. What prompt? You can see it here: "Based on the shared dataset, first read the dataset, perform data cleaning if it is required using Python, and display the output as well, with the code." When I run this, let's see what output we get. As you can see, I asked it to first read the dataset, so we get a summary of the data, then the potential issues it found with the data, and then the data cleaning steps. ChatGPT-4o says we have to convert the date of admission and discharge date to datetime format and correct the capitalization in the name column; these are all basic data cleaning steps. Once it does all this, it shows the cleaning steps and the cleaned dataset, but we are not getting the code. So I ask: "provide me the code with the output." When I give this prompt, you can see we get the code, and based on this code we get the output, including the data description.
We also get a sample of the negative billing amounts, and here is the complete cleaned dataset, which you can see on the screen. Now let's proceed with the next step: "Perform feature engineering if it is required and display the output as well." We don't only want the output; we also want ChatGPT-4o to show the code properly, so let me add one more point: "display the code and its output as well." Let's run this prompt and see what we get. ChatGPT-4o doesn't just give the code; it gives an explanation as well: feature engineering involves creating new features, and techniques like one-hot encoding and handling missing values are part of it. You can see the code for the complete feature engineering, and once it runs, we get the output too. Once we have the output, we will apply the statistical tests. At the end it says the feature-engineered dataset has been displayed for your review; if you need further modification or analysis, please let it know. Now we give the last prompt: "Please provide the statistical analysis. It should include the correlation and covariance between the features, and the statistical tests should include the t-test, chi-square, and ANOVA. Give the code separately and provide the output as well for each code."
As you can see, I've given the prompt to provide the statistical analysis, which should include the correlation and covariance between features and the statistical tests: t-test, chi-square, and ANOVA. When I enter this prompt, you can see the output: it provides the correlation code and the resulting correlation matrix, and then it shows the covariance code, obtaining the covariance matrix using the cov function. If the output stops partway, we can ask ChatGPT again to display the remaining steps from the previous prompt; it will read that prompt and continue with the further steps. So for covariance we now get the code and its output; as you can see, it's a lot, and it stops again, so I ask it to proceed further. It then starts showing the various statistical tests: the t-test, the chi-square test, and ANOVA. First is the t-test. It explains what the t-test means and why it is used, and then gives the code: we are performing a t-test on the billing amount for males versus females. At the end you can see the output, the t-statistic and the p-value, which indicate that there is no significant difference in the billing amount between males and females, because the p-value is greater than 0.05.
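The t-test code ChatGPT generates typically calls a library routine, but the statistic itself is simple enough to compute by hand. A dependency-free sketch of Welch's t-statistic (the two billing samples below are made-up numbers for illustration, not values from the video's dataset):

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's t-statistic for two independent samples (unequal variances)."""
    na, nb = len(sample_a), len(sample_b)
    mean_a = sum(sample_a) / na
    mean_b = sum(sample_b) / nb
    # Unbiased sample variances (divide by n - 1).
    var_a = sum((x - mean_a) ** 2 for x in sample_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in sample_b) / (nb - 1)
    return (mean_a - mean_b) / math.sqrt(var_a / na + var_b / nb)

billing_males = [120.0, 150.0, 130.0, 160.0, 140.0]    # hypothetical amounts
billing_females = [125.0, 155.0, 135.0, 150.0, 145.0]
t = welch_t(billing_males, billing_females)
print(round(t, 3))  # a t-statistic near 0 suggests no real difference in means
```

The p-value then comes from comparing this statistic against the t-distribution, which is what the library call does for you; a p-value above 0.05 means the observed difference is plausibly just noise.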
After that it shows the chi-square test. A chi-square test is used to determine whether there is a significant association between two categorical variables; here we test the association between medical condition and test result. ChatGPT-4o doesn't only show the code and the explanation of the chi-square test; it shows the output as well: the chi-square value and the p-value, 0.146 and 0.703 respectively. What do we infer from this? That there is no significant association between the medical condition (cancer) and the test result, because the p-value is greater than 0.05. Now let's move to the ANOVA test. Why is it used? It is used to determine whether there are significant differences between the means of three or more groups, so we are going to compare the billing amount among different admission types. You can see the code: urgent billing, emergency billing, and elective billing are pulled from the data by filtering on the admission type column against the billing amount. Once we run it, we get the F-statistic and the p-value, which indicate that there is, again, no significant difference in billing among different admission types, because the p-value is again greater than 0.05. You can also see a summary of the whole statistical analysis. Doing statistics with ChatGPT-4o is very easy to use and easy to understand as well.
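The chi-square test is also easy to follow by hand for a small contingency table. A dependency-free sketch for a 2x2 table (the counts below are invented for illustration; the video's test ran on the real healthcare dataset):

```python
def chi_square_statistic(table):
    """Chi-square statistic for a 2D contingency table of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count if the two variables were independent.
            expected = row_totals[i] * col_totals[j] / grand_total
            chi2 += (observed - expected) ** 2 / expected
    return chi2

# Hypothetical counts: rows = condition (cancer / no cancer),
# columns = test result (positive / negative).
table = [[30, 20], [25, 25]]
print(round(chi_square_statistic(table), 3))
# Compare this statistic against the chi-square distribution
# (1 degree of freedom for a 2x2 table) to obtain the p-value.
```

If the observed counts match the independence-based expected counts exactly, the statistic is 0; the further apart they are, the larger the statistic and the smaller the p-value.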
If you really want to understand more, you can give further prompts to ChatGPT-4o and get the relevant outcome. Next we'll be using HTML, CSS, and JavaScript as our basic programming languages; the coding playground of choice will be an online editor, OneCompiler. With ChatGPT-4o by our side, creating our dream website is as simple as describing what we envision. Whether it's a portfolio, a blog, or an interactive application, GPT-4o will guide us through the process, ensuring we achieve our goals efficiently and creatively. Let's roll. First let's open ChatGPT and select 4o. Now let's look at our prompt. "I want to build a portfolio website. Let our front end be HTML, CSS, and JavaScript. I want to execute this on an online editor, namely OneCompiler." This is our instruction and constraint for the GPT. "I want this for a target audience who have zero coding knowledge or are in their learning phase, like college students and early professionals, so let it be simple and easy for them to execute and understand." This sets the target audience for the GPT. "Let the creation of this be very responsive and creative, with simple explanations and a step-by-step process which helps them understand and create this easily." This sets the context for our ChatGPT. "Please explain everything from scratch." Here we are instructing ChatGPT how it has to give us the steps. Great; now that we understand our prompt, let's execute it on ChatGPT-4o and see the results. Wow, we got the results fast and in a well-structured manner. For more information on ChatGPT-4o, check out our video on ChatGPT-4o versus 4 versus 3.5. Now let's quickly execute this to see how our website looks. We just copied three different blocks of code, and executing them is very easy on the online OneCompiler platform: we paste the relevant code into each section. First goes the HTML, then comes the CSS for styling, and as the third part we paste the JavaScript code into the JavaScript container. Toward your extreme right you have a preview window; once you click the Run button, it executes the complete code you provided and shows how your website will look. This is the basic structure of your portfolio website; you can enhance the design, color pattern, and responsiveness, and add a few actions as per your requirements as well. Thanks for joining us on this journey as we built a portfolio website with ChatGPT-4o. Remember, we covered creating a portfolio website using GPT-4o and added customized designs and basic functionality. Now let us understand what GitHub is, because I'm going to talk about GitHub Copilot, so you should first understand GitHub. Many of you will be familiar with GitHub and might be using it on a day-to-day basis.
Developers and software coders will certainly be using it, but to understand GitHub Copilot you need to understand exactly what GitHub is, and before understanding GitHub you need to understand what Git is, because GitHub is built on the foundation of Git concepts. Git is sometimes expanded as "Global Information Tracker". Git was developed as a version control system, and it has a distributed architecture. What does a distributed version control system mean? Suppose you have developed a software project; that was the initial version. Then you made some updates, some functionality was added or removed, and the next version of the software was created. The older version should still be stored somewhere and remain workable while the newer version is put in place. Where can you do all this? Git provides that platform. It's a distributed version control system where you can keep your code: store it, manage it, modify it, update it, and release new versions of it. Git was created in 2005 by Linus Torvalds, the creator of the Linux operating system kernel. Now I'll talk about some functionalities of Git; these carry forward into GitHub, which is why it's important to understand them here. First, Git has a distributed architecture. What does that mean? In a centralized architecture, your software code is stored in one place and you access it there day to day. In a distributed architecture, the repository is not confined to one server.
machine or a laptop is able to access it or keep it a copy of it on their own laptop as well okay so that's your distributed architecture provided by git second it helps you run a parallel version now suppose that two three developers are working on the same piece of code know or same software or project and they're using git to store that software or a project all three developers can run or do the modification of that code and the local machine and keep on doing the updations parallely so the same code will have three
different versions on three different developers computer parall it is maintaining all these versions so that the three developers can work independently on their machine for the same piece of code next it maintains history so all the metadata related to uh you know modification or updation of a code like who updated the code what was the changes is done when was it done all these metadata is being is also being maintained on G platform that's what it is that maintains history next is collaboration and conflict resolution so how is this you know doing conflict resolution so
Next is collaboration and conflict resolution. How does Git do conflict resolution? Take the same example of three developers accessing and updating the code. Suppose all three modify the exact same piece of code: which version should be stored back in the original copy? Resolving this conflict is also handled by Git, which has well-defined merge logic to take care of conflict resolution. Coming to performance, Git is fast and secure. It is fast because each person works on a copy on their own machine, and because it can handle even large code bases efficiently—even very large code files. It is also open source, meaning it is freely available: not just GitHub, Git itself is freely accessible, and you can manage your projects with it without any subscription charges. Then there is integration: Git can be integrated with various IDEs and with various online platforms like GitHub. This is where GitHub comes into the picture. GitHub is an online, web-based platform for Git, so all the functionalities I just listed for Git apply to GitHub as well, and on top of that, as a web-based platform, GitHub offers some additional features for version control and collaboration.
Now, coming to what GitHub Copilot is: you have an idea of Git and GitHub, so let's talk about GitHub Copilot. GitHub is a web-based version control platform that already lets you store your projects—your software code—and access it freely, so associating Copilot with GitHub made sense, because a lot of training data is openly available there to train the model. As I said, GitHub Copilot is an AI developer tool: an artificial-intelligence-based tool that programmers, software coders, and developers use to write code and create software. It is based on a generative AI model, which I spoke about earlier, and it is developed by GitHub, OpenAI, and Microsoft; it was initially created by GitHub and OpenAI, but as Microsoft is an investor, its name is associated with the founders as well. Talking about some features of GitHub Copilot, the first is that it provides sharp, intelligent code suggestions: the moment you start writing code, it suggests the next line or the next few words. For example, suppose you are writing a for loop. If you write just the for keyword, it will automatically give you the parentheses, the curly brackets, and what has to be written inside them. I have taken this example from Java, but if it is Python code, or any other programming language, it will suggest accordingly.
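As a quick illustrative sketch (my own example, not captured Copilot output), this is the kind of complete loop that a bare `for` keyword typically gets expanded into:

```java
// A minimal Java sketch of the kind of completion Copilot gives after
// you type just "for" — a hypothetical example, not captured output.
public class LoopDemo {
    public static int sumUpTo(int n) {
        int sum = 0;
        // Copilot typically fills in the brackets and the classic
        // initialization / condition / increment structure for you.
        for (int i = 1; i <= n; i++) {
            sum += i;
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumUpTo(5)); // prints 15
    }
}
```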
The next feature is that it auto-completes code. Why do software developers love GitHub Copilot for this? Speaking of Java—I am a Java programmer—there are so many small syntax details to keep in mind: semicolons, curly brackets, round brackets, where each goes, whether a semicolon goes at the end of a for loop or after the curly brackets or is not required at all. I'm sure other programming languages have similar syntax details to learn, but if you're using GitHub Copilot you don't need to memorize them, or Google them every time you write a piece of code, because it will automatically complete your code with the right syntax for the language you are developing in. It also helps you code faster: as I said, if you write just the first few keywords of a program, it suggests the next few lines, and you just press Tab or Enter and the code is written in a fraction of a second. It understands file types, too. If you create a file with a .py extension, it understands it is a Python code file; with a .java extension, it understands it is a Java code file, and similarly for other programming languages, so it suggests the next lines of your program based on the file type you saved the file with. Next is cloud code understanding: it knows not just basic programming concepts but cloud concepts as well, so if you are developing software that uses any cloud-based platform, it will suggest the next lines of code accordingly. It also has database understanding—MySQL, Oracle SQL, PostgreSQL, MongoDB—and depending on which database you are using, it will help you write queries in that database's query language. Then there are IDE integrations: it can be integrated with a lot of integrated development environments. Last but not least, it gives you high-quality suggestions: because GitHub Copilot is based on a generative AI model trained on a lot of openly available, open-source code, it provides the best suggestion it can for the particular piece of code or program you are trying to write. There are disadvantages as well, which I'll talk about in a little while, but that covers the features of GitHub Copilot and what you can do with it. You'll get a clearer understanding once we see the demo, the hands-on session, of how exactly it auto-completes the code you are trying to write—how it seems to know what is in your thoughts. Let me be very clear: it will not read your mind, but from the keywords you have already written, it will help you write the complete software. Now let us talk about the languages and IDEs that are supported by GitHub Copilot.
Before getting started with the hands-on sessions, you need to know which programming languages you can use with GitHub Copilot. The list is long, though not limited to these: Python, JavaScript, TypeScript, Java, C, C++, Go, Ruby, PHP, and many others, and more may be added in the future. The first language GitHub Copilot was developed and tested against was JavaScript, so you'll see later that with JavaScript you can utilize all the features of GitHub Copilot—there are different versions and sub-tools of Copilot, and JavaScript gets the fullest support—but the other languages are well supported too. You can go through the list and use GitHub Copilot with whichever language you want to work in. Talking about the IDEs supported by GitHub Copilot, the official announcement on the GitHub Copilot website mentions that it integrates directly into your editor, including Neovim, JetBrains IDEs, Visual Studio, and Visual Studio Code. These are the four integrated development environments on which you can use GitHub Copilot. I will be using Visual Studio Code to demonstrate the code and the SQL queries. Why did I select it? There is no single strong reason, but GitHub Copilot was initially developed and tested on Visual Studio Code, and the first always gains a preference; because Copilot was developed with this tool initially, it is fully and seamlessly supported by Visual Studio Code, which is why I'll be using it for the hands-on demonstration. Let's also talk about the advantages and disadvantages. I have spoken a lot about the advantages already.
It helps you write code faster because you no longer have to keep remembering every small syntax detail or keyword of the programming language: GitHub Copilot can suggest the next keyword, point out the syntax you are missing, and even point out why your code is giving an error. Using all these features, you are able to write code really quickly. Even if you just write a comment, in any programming language, describing the application you are trying to develop, it will give you a basic version of the code. Suppose you want a program for linear search: just write the comment "program for linear search" and within a fraction of a second it will give you the code snippet for a linear search. If you want to create a GUI—say a graphical user interface with two text boxes, two buttons, and two labels—just write that as a comment and it will give you the basic, or even complete, code for the GUI you want to develop. It might need some customization depending on exactly how you want the GUI to look, but it will at least help you write the skeleton. There are disadvantages to this as well, though. First, as I said, GitHub Copilot is not going to read your mind—it is not inside your brain—so you have to give it a proper prompt: write the comment properly, stating exactly what you want GitHub Copilot to help you with.
Second, because it is trained on freely available programs, it may sometimes give you incorrect results. Suppose I ask for a program to search an element in an array, and GitHub Copilot suggests logic that amounts to a linear search, but I wanted binary search: there is an ambiguity. Or suppose, as in the example just now, you wanted a GUI with two text boxes and two buttons, and it provided the text boxes but not the labels—each text box should be associated with a label saying what has to be entered into it. These kinds of results may be produced by GitHub Copilot; they are not exactly incorrect, but they are incorrect with reference to what you actually wanted to write the code for. So you have to be a reasonably knowledgeable programmer: you need enough understanding of the programming language to check and review whether the code suggestions given by GitHub Copilot are right or wrong. That is the main disadvantage, but as I said, if you have that knowledge, you will not be affected by it, and you can convert this disadvantage into an advantage. Now let us see some hands-on sessions covering different aspects of GitHub Copilot, starting from the installation, then running Java programs, and also executing MySQL queries. We'll see everything.
Let's get started with how to install GitHub Copilot, and in the process I'll also show you how to integrate it with Visual Studio Code. Go to Google and search for "GitHub Copilot". The very first link, "GitHub Copilot · Your AI pair programmer", is the one to click, and the page that opens has all the details about GitHub Copilot. You can see an animated example showing how Copilot helps: given the comment "determine whether the sentiment of text is positive; use a web service", it generates the corresponding code in a fraction of a second, and you can see the same for Go, Python, Ruby, or whatever language you are comfortable working in or want to develop software with. Coming to the different plans available for GitHub Copilot: there is a one-month free trial—to give it a try, click "Start a free trial", which is what I'm going to do here. Beyond that there are two plans, one for individuals and one for businesses, so depending on whether your GitHub account is an individual or a corporate account, you can choose the appropriate plan, billed per month or per year. You can go ahead and buy one; trust me, you will not be disappointed if you take the individual plan, and once I show you how it helps in writing software, you'll have a clearer idea of whether to subscribe. You can also go through the rest of the page: further down it describes how Copilot is helping developers around the world—research has found GitHub Copilot helps developers code faster, focus on solving bigger problems, stay in the flow longer, and feel more fulfilled with their work. That is research data the page puts up, and below it there are more examples: you can see it write the code to draw a scatter plot in JavaScript—in the blink of an eye the complete code is written—and the same for a scatter plot in Python, within seconds; similarly for programs involving memoization, or fetching tweets, all written in a fraction of a second. So that is an example of how Copilot helps you write code better. Now, coming to how to install it.
As I said, click on the GitHub Copilot link—the first result when you search "GitHub Copilot" on Google—and then click "Start a free trial". This takes you to the GitHub login page. In my case, you can see I am already logged in, so my account shows here; if you are not logged in, or you are using it for the first time, or maybe you have logged out, it will ask you to log in. You can also sign in beforehand from the site itself—since I have already signed in, it shows my profile icon, but otherwise it will show you the option to sign in or sign up. Going back and clicking "Start a free trial": since I have not yet taken any plan, and this account has no plan associated with it, it is the free trial version I am using to demonstrate. If you go to "Billing and plans" and open "Plans and usage", you can see it says my trial is due by September 8th, because I recently started the free trial; if you want to buy a plan right away, you can upgrade from here and see what benefits are and are not included. So you can see GitHub Copilot is now enabled on my GitHub account. Next, how to link it with Visual Studio Code. I'm going to use the Visual Studio Code IDE; you can use any of the other IDEs I mentioned that GitHub Copilot supports, but I'm using VS Code because it was the platform and IDE on which the GitHub Copilot concept was initially developed. For that, you should have Visual Studio Code installed on your laptop or system. If not, you can again take Google's help: just type "Visual Studio Code", click the very first link to reach the Visual Studio Code page, go to the Download option, and it will show builds for Windows, Mac, Red Hat, or whatever your operating system is—download from there. If you click on Windows, the download starts and you'll see "Thanks for downloading Visual Studio Code for Windows". I have actually already downloaded and installed the IDE, so I don't need to install it again, but it's very simple: once the download finishes, run the installer, click "I accept the terms and services", click Next a few times, and it installs in maybe two to three minutes at most. Now the point is how to connect Visual Studio Code with GitHub Copilot.
Let me show you that. As I said, I have already installed Visual Studio Code, and here it is. Once you install and open VS Code, you will see a screen something like this. I have already created a project folder and a file, which is why they show up; if you have opened VS Code for the first time, these options won't be there, and you will have to create a folder and a file separately—you can create a new file from here, or open a folder and then create a file inside it. But first, how to integrate GitHub Copilot with VS Code: go to the Extensions option, which is also the marketplace, and search the extensions marketplace for "GitHub Copilot". Now, GitHub Copilot has various versions—GitHub Copilot, GitHub Copilot Labs, GitHub Copilot Nightly—with different features, some used for different kinds of software development activities, and you can pick depending on your use, but here I'm going to show you the basic version, GitHub Copilot itself. Once you search, click the very first option, the one with the small icon. I have already installed it for a faster demonstration, so I see "Uninstall", "Reload required", and "Disable"; if you have not installed it, there will be only one option, "Install". For the first time, click that: it installs right there within moments, without taking you to any other page. Then a prompt pops up asking you to authorize VS Code, via GitHub, to use GitHub Copilot—authorization from GitHub is required. If it doesn't pop up automatically, open the notification center, where there will be a notification asking you to authorize GitHub Copilot for use in VS Code. Click that notification; it takes you back to your GitHub account and pops up a message saying the editor VS Code is trying to access your GitHub account. You have to allow it: click the "Authorize" button, and that's it—the link is made. Once the linking is complete, you can see this small icon: the GitHub Copilot icon. When it is active, the icon appears normally; if it is not installed properly or is inactive, the icon has a slash across it, meaning it is inactive. If you want to deactivate it because you don't want to use it for a while, click the icon and then "Disable globally", which disables it for all programming languages you use in this VS Code; if you are using a particular language, say Java, it will also ask whether you want to disable it for Java only. So if you want it off for some time, or you find it distracting, you can disable it, but it is good to use. That's done, so just close that.
If you want to run Java programs here in VS Code, you have to have the JDK installed; make sure the JDK is set up, and if not, install the JDK and the Java extension from the same Extensions marketplace, in the same way I explained for GitHub Copilot, so that Java programs also run properly. Now, once you have installed Java and the GitHub Copilot plugin, you can go ahead and create a folder here, or open a folder if you've already created one on your system, and then create a new file in it. I'll give it a name, say FirstProgram.java, and hit Enter; select the folder—the VS Code folder is already selected, and I already created a Java project folder—and it creates the file FirstProgram.java. You can see the "J" icon here, so it is recognized as a Java file. That was all about installing GitHub Copilot, installing VS Code, and integrating GitHub Copilot with VS Code. Now that we have installed GitHub Copilot and VS Code and integrated them, it's time to start writing programs. The programming language I'll be using here is Java. Let me demonstrate how to write a program quickly using GitHub Copilot. So we have created the file FirstProgram.java.
If you want to create a new file, click "New File", give it the extension you want—here I'm using Java, so I gave the .java extension—select the particular folder where you want to create the file, and hit Enter; the file is created. So FirstProgram.java is the file I have created, and you can see my GitHub Copilot is active—it is offering to deactivate, which means it is currently activated. To check how GitHub Copilot helps, suppose I'm going to write a hello world program. I have just written the class keyword here, and it automatically suggests that I want to write a hello world program—because the very first thing you learn when you start programming, without any other input, is how to print the words "Hello World". So I've written "class", and after just half of it, you can see it showing me the suggestion for the code: it completes the line, suggesting that after "class" I should write "FirstProgram", as that is the class name, and then all the syntax—public static void main(String[] args) and System.out.println("Hello World")—is suggested as well. It is giving me the suggestion for the entire hello world program; I just press Tab, the cursor moves to the end of the suggestion, and the program for printing hello world is complete.
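For reference, the generated program described above amounts to this (reconstructed from the demo's description, not a verbatim capture of Copilot's output):

```java
// The hello world program Copilot completes from just the "class"
// keyword, as described in the demo — reconstructed, not captured.
public class FirstProgram {
    public static void main(String[] args) {
        System.out.println("Hello World");
    }
}
```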
You can see how quick that was. Now if I want to print something else, I can—maybe "Good Morning"—and if I execute it by clicking the run icon, you can see the program ran and executed successfully, and "Good Morning" is the output. Now I don't want to keep writing this hello world program; let's go to the next level. Suppose I want to write a program for printing a pattern—there are different pattern programs that are frequently
asked in interviews as well. Suppose I want to write a program; I'll put the request in a comment. You can write it without comments too—it makes no difference, except that you would have to delete that line after the code is written, otherwise it will give you an error. So let's see an example: in the comment, write "program to search an element in an array" and hit Enter. You can see it showing me what my next line of code should be; press Tab and then Enter, and the next line's suggestion is given as well. In fact, not just the next line—it has given me the entire code for a program to search an element in an array. You can see it asks for the number of elements in the array first, creates the array, asks you to enter the elements, and then asks which element is to be searched; once found, it prints the position, else it prints a message that the element was not found. Let me execute it and see—this program is a linear search. Running it: enter the number of elements, say five; then the elements, 23 54 67 89 90; then the element to be searched, say 90. You can see "element found at position five", because 90 was the last of the numbers I entered—counting 1, 2, 3, 4, 5—so position five is correct.
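The linear-search program described in the demo can be sketched as follows; this is a reconstruction of what Copilot generated, not the exact code, and the helper method name is mine:

```java
import java.util.Scanner;

// A sketch of the program Copilot generated from the comment
// "program to search an element in an array" — reconstructed from
// the demo, not the exact generated code.
public class SearchElement {
    // Returns the 1-based position of key in arr, or -1 if absent.
    static int linearSearch(int[] arr, int key) {
        for (int i = 0; i < arr.length; i++) {
            if (arr[i] == key) {
                return i + 1;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        System.out.print("Enter the number of elements in the array: ");
        int n = sc.nextInt();
        int[] arr = new int[n];
        System.out.println("Enter the elements of the array:");
        for (int i = 0; i < n; i++) {
            arr[i] = sc.nextInt();
        }
        System.out.print("Enter the element to be searched: ");
        int pos = linearSearch(arr, sc.nextInt());
        System.out.println(pos > 0 ? "Element found at position " + pos
                                   : "Element not found");
    }
}
```

With the demo's input (23 54 67 89 90, searching 90), this prints "Element found at position 5", matching the output described above.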
Now, which logic it uses is a different matter. Since I didn't mention any logic to be used—I just asked to search an element in an array—it used a plain linear search. But if I want a specific algorithm, say binary search, I can delete the code, rewrite the comment, and it will again give me suggestions; just hit Tab and Enter to accept them. You can see it now uses binary search: we find the middle element, at index (first + last) / 2, and compare the target against it; if the target is smaller than the middle element, we continue with the elements before the middle, and if it is larger, we go to the other half of the array, always comparing with the current middle element. That is binary search.
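The binary-search logic just described can be sketched like this (my own reconstruction, not Copilot's exact output; note that binary search assumes the array is sorted, which the demo's input 23 54 67 89 90 happens to be):

```java
// A sketch of binary search as described above: repeatedly compare
// the target with the middle element and halve the search range.
// Reconstructed example — not the exact code Copilot produced.
public class BinarySearchDemo {
    // Returns the 1-based position of key in the sorted array arr,
    // or -1 if the key is not present.
    static int binarySearch(int[] arr, int key) {
        int first = 0, last = arr.length - 1;
        while (first <= last) {
            int mid = (first + last) / 2;  // middle element: (first + last) / 2
            if (arr[mid] == key) {
                return mid + 1;
            } else if (key < arr[mid]) {
                last = mid - 1;            // search the left half
            } else {
                first = mid + 1;           // search the right half
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] arr = {23, 54, 67, 89, 90};  // must already be sorted
        System.out.println(binarySearch(arr, 90)); // prints 5
    }
}
```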
You can see the correct output—the correct code suggestion—is given by GitHub Copilot, and see how quick it was. Now I'll show you a program to create a GUI: in a comment, write "create a login form GUI with two text boxes and a button", after deleting all the earlier code. Notice that it even gives me a suggestion for the comment itself—do you want to add some functionality to what you are writing? You can see how intelligent the suggestions are: "when the user clicks the button, the program should check the username and password". If you want that added, accept it; if not, you can just leave it. It then proposes the success condition; suppose I don't want the username to be "admin" but "admin1", and I don't want the password to be "123456" but "12345"—I can change that in the comment as well. Then just press Enter; there was one suggested comment I didn't want accepted, so I simply pressed Enter past it, and now it starts writing the program. It shows me that I should import all the necessary packages needed to create a GUI form, and as I press Tab and Enter it again shows the entire code, which I accept suggestion by suggestion—though you should make sure it's correct. You can see it extends JFrame and implements the ActionListener interface, creating a text field, two labels, a password field, and two buttons—one Login and one Exit button—and initializing all of these in the constructor of the class. With initialization done, it creates a JPanel, and all the text boxes, buttons, and the password field have to be added to the panel, so it adds them with panel.add.
the panel here panel. add then you have to implement action listener on these two buttons because it's going to perform some action so you can just check that you know the code is actually correct and here is what action is performed that if the button which login button is clicked and the username is this password is this so it is fetching the text from here and if it is equal to admin one so you see here in the commit I've changed that I don't want admin I want admin one so if even if you make
certain changes according to what you want to write a program it will take that as an input and password I wanted 1 2 3 4 5 and then it should say that login successful else login failed let's run this code and see so yes you can see here as there is no uh you know length breadth and you know width defined for these buttons and text boxes it's showing me like this but you can add some piece of code if you want here for that also the suggestions will be provided so usern name if I
write admin only and I write password as 1 2 3 4 5 and I write login it is going to say login failed because the username should be admin one now if I login it says login successful so you can see how easily you have created a login page GUI right within a fraction of seconds so this was all the help you can see a program example which can be provided or helped with by GitHub co-pilot okay now I'll show you a small not just you know these basic programs but also the concepts of the
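As a rough sketch of the kind of login form Copilot generated in this demo (the class name LoginForm and the helper isValidLogin are my own names, since the generated code isn't reproduced verbatim; only the admin1/12345 credentials come from the walkthrough):

```java
import java.awt.event.ActionEvent;
import javax.swing.*;

// Minimal sketch of a Copilot-style Swing login form.
// The credential check is factored into a static method so the logic
// can be tested without opening a window; "admin1"/"12345" match the demo.
public class LoginForm extends JFrame {
    private final JTextField userField = new JTextField(15);
    private final JPasswordField passField = new JPasswordField(15);
    private final JLabel status = new JLabel(" ");

    public LoginForm() {
        super("Login");
        JPanel panel = new JPanel();
        panel.add(new JLabel("Username:"));
        panel.add(userField);
        panel.add(new JLabel("Password:"));
        panel.add(passField);
        JButton login = new JButton("Login");
        JButton exit = new JButton("Exit");
        panel.add(login);
        panel.add(exit);
        panel.add(status);
        // The action listener runs when the Login button is clicked.
        login.addActionListener((ActionEvent e) -> {
            String user = userField.getText();
            String pass = new String(passField.getPassword());
            status.setText(isValidLogin(user, pass) ? "Login successful" : "Login failed");
        });
        exit.addActionListener(e -> dispose());
        add(panel);
        pack();
    }

    // Returns true only for the demo credentials admin1 / 12345.
    public static boolean isValidLogin(String user, String pass) {
        return "admin1".equals(user) && "12345".equals(pass);
    }

    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> new LoginForm().setVisible(true));
    }
}
```

The check is kept in a static method so it can be exercised without a display; constructing the JFrame itself requires a graphical environment.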
Now, moving beyond these basic programs, let's look at a concept from the core Java programming language: multithreading. Suppose I want to create a thread class, say class FirstProgram extends Thread. You can see it gives me the suggestion, and if I press Tab I accept it, and the basic layout is ready: there should be a run() method, and start() should be called on an object of that class. So the basic layout of multithreading is ready. Now if I want to make certain changes, I can write my own functionality inside the run() method, depending on what I want, and call it similarly. If you want to add one more run() method, or override run() just to learn or see the concepts of overriding, you can do that as well; you can see it has written one more run() method for me. So GitHub Copilot is helpful not only to people who are already writing programs.
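The multithreading layout described above (extend Thread, override run(), call start() on an object of the class) can be sketched like this; the class and label names are illustrative, not the exact code Copilot produced:

```java
// Basic layout of Java multithreading: extend Thread and override run().
public class FirstProgram extends Thread {
    private final String label;

    public FirstProgram(String label) {
        this.label = label;
    }

    // run() contains the work the thread performs; replace with your own logic.
    @Override
    public void run() {
        for (int i = 1; i <= 3; i++) {
            System.out.println(label + " step " + i);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        FirstProgram t1 = new FirstProgram("thread-1");
        FirstProgram t2 = new FirstProgram("thread-2");
        t1.start(); // start() schedules the thread; the JVM then calls run()
        t2.start();
        t1.join();  // wait for both threads to finish before exiting
        t2.join();
    }
}
```

Calling start() rather than run() directly is what actually creates a new thread; calling run() would just execute the body on the current thread.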
It is also useful for those who are still learning: as you can see, the very basic concept of multithreading in Java can be learned using GitHub Copilot. So that was all about how Copilot helps developers write programs in Java. Now let us see how we can use GitHub Copilot with MySQL in Visual Studio Code. We just saw a Java program; moving ahead to MySQL, what you should do first is install the plug-in
related to MySQL in your VS Code, which takes only a few simple steps: go to the Marketplace, that is, the Extensions view, search for MySQL, and click the extension with the database icon to install it. As I have already installed it, it shows an Uninstall option; otherwise it will show an Install option. Just click it, and it installs without asking you to accept, reject, or click next. Once MySQL support is installed, these icons appear on the left side (database, NoSQL); they will not be visible until you have installed the extension. Now click the database icon. Right now I have already connected it to a database, so it is not asking me to connect, but the first time, when you are
integrating MySQL with VS Code, a popup window appears where you have to fill in the connection name and password. What are that connection name and password? Let me open MySQL Workbench, which was already installed on my laptop; make sure you have MySQL Workbench installed and logged in. You can see two connections here: the local MySQL instance, and a connection I created under my own name. You can create a new connection from here, and when you click on one you will be prompted to enter a password; keep whatever password you want. Now, back in Visual Studio Code, when you click this database panel for the first time, it will ask you to connect VS Code to that database connection. Since I have already connected, the popup is not displaying, but I will tell you which details to enter: the connection name and the password. Apart from that, you need not change or alter any other field; just click Connect. Once you click Connect, all the databases you have created in that connection in MySQL Workbench will be shown here. If you want to create a new database, click the plus icon. If you want to select a database, suppose I select this employees database: these are the tables I already have in it (you can see customers, employees, offices), and I can use any of them. If I want to see what the customer table is all about, I can click on the dots and it will show me. You can see the employees table has a few entries here, and these are the columns; similarly you can go to customers and see that as well. It takes some time to load, and once it loads you can see customer name, customer number, contact last name, and all the other details which have already been created. If you want to create new data, you can do that too. But our main motive here is to see how GitHub Copilot helps us run MySQL queries, so go to the query option. I have already created these two files; that's why it's showing
their names here. If you want to create a new file, just press Ctrl+Shift+P, and in the list that appears you can click Notebook to create or open one. Opening a notebook creates a new one if none exists; I clicked on that, so you can see this notebook is created here, and you can write your queries in it. The other way is to click the query option and then the plus icon: a new dialog box appears where you write the query file name, for example the database you want to use, or any name you like; this just creates the file. Now you can start writing the queries here. The first query must specify which database I want to use, so I am going to use the classicmodels database here.
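As a sketch, the first queries in this walkthrough look roughly like this; they assume the classicmodels sample database that the demo connects to:

```sql
-- Select the database to run queries against
USE classicmodels;

-- Fetch every row from the employees table
SELECT * FROM employees;
```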
You can see that it automatically gives me a suggestion, and this suggestion is from GitHub Copilot. If I want to use it, well and good; if I don't, I can write my own query. Suppose I write a simple query to select everything from a table: SELECT * FROM. It shows me some output, but I don't want to display everything from the customers table, I want it from the employees table. If I write just that, it offers an option like SELECT * FROM employees with a WHERE clause on the last name, but I don't want that either; I just want the plain select. Now you can execute it, and you can see the output: all the data in the employees table is displayed. Similarly, let me write a more complex query here, for example a query to join two tables. If I write SELECT customerName, you can see it is already suggesting customer number and customer name, so I click Tab, but I do not want to select everything; FROM customers is fine. Now I have to join two tables. It is not giving me the suggestion automatically here; if I had written a comment saying I want to join these two tables, it would definitely have given the suggestion, but I want to show you, line by line, how GitHub Copilot helps on the basis of the prompt or whatever text you have written. So let's join the two tables: as soon as I write INNER, you can see it now suggests an INNER JOIN ... ON clause for the customers and orders tables and combines them automatically; if I want to combine some other table, I can write that as well. Let me press Enter and execute. It says the customerNumber field is ambiguous; you can see how clear the suggestion is, even for the error. It means customerNumber should either be qualified with the table I want it displayed from, or I can just remove it for the time being. Let's execute the query: you can see it displays only the customer name. If I had written customers.customerNumber, that would have been displayed as well. Now suppose I want a query calculating the sum of all the orders for each customer; you can write any query here depending on your requirement. I just wrote that as a simple English statement, and it gave me the result: SELECT customerName, SUM(quantityOrdered * priceEach) AS total FROM customers INNER JOIN orders ON customers.customerNumber = orders.customerNumber, and it inner joins again with orderdetails, because details of the order such as the price we are calculating live in the orderdetails table. Now if you want to execute, you can, but you have to delete the comment line first, because otherwise it will throw an error. If I execute this, you can see the sum named total displayed for each customer: the quantity ordered and the total price. So you can use aggregate functions and inner joins, and without spending much of your time you are able to write and execute queries. Complex concepts like subqueries can also be handled with ease; if you want to create complex queries like a subquery, GitHub Copilot will help you with that too. I will give you an example. Suppose I want to create a subquery, so I write SELECT * FROM employees WHERE employeeNumber IN (SELECT, and now you can see it automatically completes it: it understood that I want to write a subquery. The subquery it wrote by default selects reportsTo from employees where reportsTo is not null, something like a self-join: it fetches the details of employees whose numbers appear in other employees' reportsTo field, that is, the people someone in the employee set reports to. Tab and Enter, and then you can execute it, and you can see it executes the subquery successfully.
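Pulling the pieces of this walkthrough together, the join, aggregate, and subquery patterns look roughly like this; this is a sketch against the classicmodels sample schema, so treat the exact column names as assumptions:

```sql
-- Inner join: qualify customerNumber to avoid the "ambiguous column" error
SELECT c.customerName, c.customerNumber
FROM customers c
INNER JOIN orders o ON c.customerNumber = o.customerNumber;

-- Aggregate: total order value per customer
SELECT c.customerName,
       SUM(od.quantityOrdered * od.priceEach) AS total
FROM customers c
INNER JOIN orders o ON c.customerNumber = o.customerNumber
INNER JOIN orderdetails od ON o.orderNumber = od.orderNumber
GROUP BY c.customerName;

-- Subquery: employees whose number appears in someone's reportsTo column
SELECT * FROM employees
WHERE employeeNumber IN (
    SELECT reportsTo FROM employees WHERE reportsTo IS NOT NULL
);
```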
So, as I said, you need to know the basic concepts of whichever programming or query language you are using, and depending on what you want to write, on the prompts or the text you have already written, GitHub Copilot will suggest the next line of code or the complete program. We saw various examples of MySQL queries as well as a Java program. Now let's talk about Claude. What is Claude? It is an AI conversational model. Conversational means you can interact and communicate with the tool: you send it certain prompts or statements, and it responds on the basis of what you have given it. AI stands for artificial intelligence, and the concept of Claude is based on AI models, that is, the neural networks and deep learning models of artificial intelligence. Claude is developed by a company named Anthropic, which was founded by ex-employees of OpenAI, the company that developed ChatGPT. That is one of the major reasons Claude is said to be ChatGPT's biggest competitor, and there are certain features where it is better than ChatGPT. I will talk about the differences between the two as well, because ChatGPT is the most talked-about AI tool when it comes to content and AI tools, and Claude is one of its biggest competitors, so it is important to compare them; I will do that in a bit. So, as I said, Claude is developed by Anthropic. What are its different capabilities? What can Claude do? It has natural language understanding, meaning you can communicate with Claude in your own language, whether that is English, Hindi, or whatever your language is. Initially Claude was only
available in the UK and the US, but now it is available in many countries, and it supports 19 languages. It helps with summarization and search, meaning you can perform tasks like this: suppose you have given Claude a big text, say a 20- or 30-page research paper, or a document containing 10 pages of information that you don't have time to read. You can provide that document to Claude and ask it to give you a summary of that big text in just a few lines. You can also search for information or data with Claude. Another capability is creative writing: suppose you are a final-year student and you want to write a complete report; you can have Claude do that for you. Next is coding. If you are a developer or a programmer and want to write a program for a given problem statement, you can have Claude do that. Although it cannot give you a downloadable script file, and it will not run the output then and there in the Claude interface, it will write the code for you; you can copy that code, paste it onto your machine for whichever programming language you are using, then run it and make the code execute. So it will
help you write code if you are a developer. Next is the pricing plan, which is an important criterion for using Claude. It is freely available, although there is a Pro plan. The Pro plan exists because there are certain limitations in the freely available version of Claude: the number of prompts per hour is restricted, and if you want to use the Claude 2 API, you have to take the Pro plan or the business plan. So those parts are restricted, but the majority of the tasks Claude can do are available to you for free, and that is one of its important aspects. The second major aspect is that it supports 75,000 words as a prompt, so you can provide six or seven pages of prompt, which is restricted in other AI tools; in ChatGPT it is restricted, while here in Claude you can provide as long a prompt as you can design. So those are the capabilities of Claude. I hope you got a basic idea of what Claude is: a conversational model, like ChatGPT itself, that you can interact with by sending prompts, information, or statements and getting back responses; it can do the tasks I just mentioned, and it is developed by the company named Anthropic. Now, what is the benefit of using Claude? The benefits of using Claude are not
very different from the benefits of using any AI tool or AI assistant; they are somewhat the same. The first is time saving: Claude is definitely going to save you time when writing any kind of content or extracting information from a document, a book, or the internet. One aspect of Claude is that it can search the internet for you, meaning if you want certain information, say what happened in the month of October in the year 2021, it will help provide that. And the biggest aspect of Claude is that it has access to data up to 2023: even recent data, like what happened in July, August, or January of 2023, it can fetch, so the data here is up to date through 2023. Next is improved efficiency: the work you are doing is definitely more efficient if you are doing it with Claude. The example I gave was writing an email; I will also show you how efficient and nice an email it writes. You might take a long time even to think in that way and write such a nice email, and that is how it improves efficiency. Then there is enhanced user experience: you do not always have to interact with Claude by typing. Using the keyboard to provide prompts is one option, but there are other options too, like voice control, where you can just speak and interact with Claude; that is how it improves or enhances the user experience. Next is data-driven insights: Claude will also help you extract information based on data. As I said, if you want to know what happened in the month of July 2022, it will provide that as well. We will see examples of how these benefits are leveraged when we get to the hands-on part of Claude. Now, to understand the technology behind Claude: as Claude is an AI tool, an AI assistant, so
definitely the technology behind it is the same as the technology behind any AI tool: machine learning and natural language processing. Machine learning, as I said, is a concept where you make the machine learn on the basis of available data: there is training data on which a machine is trained to behave in a certain way, and once it is trained and efficient enough, it is allowed to be used on fresh data. Natural language processing is needed because, as I said, Claude is an AI assistant tool that helps you interact in your human language. NLP is a very broad term, and behind it there can be concepts like syntactic understanding, contextual understanding, and semantic analysis of text; these smaller domains together comprise what lies behind natural language processing. So let's also talk about how Claude differs from other AI tools like ChatGPT. As I said, it is important to talk about this here, because ChatGPT's biggest competitor
is Claude. I will cover the key differences here. ChatGPT is good when you want to upload video and audio files: you can work on audio and video files in ChatGPT in one of its features called Advanced Data Analysis, but Advanced Data Analysis is available in ChatGPT only if you take the Plus plan; it is available in the GPT-4 version. In Claude, almost every capability is available for free, except the API: you can use the Claude API only in its Pro version. Another aspect: ChatGPT has a constraint on the length of the prompt, and Claude doesn't. If you paste a very long prompt into the Claude chat box, it simply turns it into a text document and gives it to the Claude tool, but in ChatGPT you will get an error message saying that such a long message or prompt is not supported. Normally, when we interact with ChatGPT, such long prompts are not required, but the limitation is there: the limit in ChatGPT is 12,000 words, while in Claude it is 75,000. The other difference is that Claude is able to search the internet for you and extract data from it, but ChatGPT is not able to do that, and its data is available only up to the year 2021, since the model is trained only through 2021; in Claude, even recent data, up to 2023, can be extracted. So those are the major differences between ChatGPT and Claude. When it comes to programming, as a developer, there is no major difference between the two. The only difference is that with the ChatGPT Plus plan you will be able to create complete software then and there; it provides you the output, and it will also help you refine your code better. Claude will not be able to do those tasks in quite the same way, but it will still write you a program or develop complete software for you and provide you the code, then and there, in any programming language. So that is all about understanding the technology behind Claude and the differences between the two biggest competitors among AI tools, ChatGPT and Claude. Now let's talk about how we can use prompt engineering concepts with Claude. Prompt engineering is a very wide domain; there is a lot to be understood there, but here I will only talk about the aspects that will help you write effective prompts
when you are interacting with Claude. So what is a prompt? In very simple terms, it is the input that you provide to any AI tool: the prompt is the input that guides AI responses. You provide those prompts, those inputs, to the AI tool, and it gives you a response on the basis of what kind of input or prompt you have provided. The AI tool is not able to understand what is in your head or what you are thinking; it will only understand what you have put into words, in the form of a prompt. So it is very important to write a very good prompt if you want to interact effectively with Claude. Now, what are the basics of creating a prompt? The first is clear and specific language. You have to be very clear about what you are extracting, what you want, and the language should be very specific; there should not be ambiguous phrasing like "I want information about some product, some smartphones". It has to be very specific. You have to define the context and then the expectations, meaning a very clear input and output should be given. The context should be defined: suppose you want an email written; is that email for an organization, for applying for a job as a student, or to a teacher? You have to specify the context clearly, and also your expectations. If you want the email written, do you want it to be a formal letter or an informal email? Do you want it to be a very lengthy one, specifying every single detail, or a very crisp and concise email? Your expectations have to be clearly specified, in the form of the prompt, to Claude. Ambiguous prompts can lead to unexpected results; as I said, there should not be ambiguous terms like "I want some information about some smartphones under the range of 20,000". Not "some" this and "some" that; it should be specific, like: this company's
smartphones, and I want information with respect to their storage space, size, and camera features; that is the way. Now let me give you some tips for clarity in your prompts. The first is specific input and output: you should specify clearly what the input is and what output you are expecting. You have to use explicit language. For example, suppose you have to ask for a summary of a book. To ask that, you have to specify which book you are referring to, who the author of that book is, and how you want the summary: in how many words, how many points, how many pages? If you want to summarize the book in two pages, or in just one line, or in just 10 points, that should be specified. Also, why do you need the summary of the book? Are you a student, a teacher, or a developer? Why do you want that book summarized? What is the purpose behind it? That context should be set properly, and you should properly use the adjectives you want reflected in your output: I want the summary to be concise, I want the summary to be effective, I want it to include certain keywords, say SEO-optimized keywords, reflected in the summary. All these points, whatever you want, should be effectively captured in the form of prompts. So I hope it is clear how you can apply this small slice of prompt engineering, which I just talked about, when you are interacting with Claude.
Okay, so once again, what are the tips for effective prompts? The first is: be concise. As I said, you have to mention everything: the input, the context, the output, and certain adjectives if you want your output to have those features. But at the same time you have to be concise; you do not have to write a whole story around it. Be very concise about what you expect from Claude. You have to use correct grammar and syntax. Even if there are small spelling mistakes, Claude is able to figure out or understand what you are trying to say, but if you are using any syntax and trying to get information from Claude, then make sure the syntax is correct, especially when it comes to programming. Avoid leading or biased language: for example, "what are the key features of this product?" You are talking about key features, but "this product" is not referring to anything, so which product exactly are you talking about? That should be specified, along with the context you are looking at: are you looking to buy that product, to teach someone about it, or to give a review of it? That context should be set. Those are the prompt engineering concepts you have to keep in mind while interacting with Claude. Now let's get started with the hands-on part of Claude, meaning I will demonstrate how you can make use of Claude for different sets of tasks. But before that,
you have to install Claude, or rather understand how to make use of it. Let me tell you here that you do not need to install anything on your laptop or computer: it is a web-based interface, and you just have to log in to the account and use Claude. Go to Google and type Claude, and you can see the very first link from Anthropic; there is the symbol of Anthropic, the company behind Claude. Click on that link, and you can see it takes you to a page that says "Talk to Claude", where you have to enter your email address and continue with email, or you can continue with Google if you are already logged in. You can also see "Claude for Business" here, and "Constitutional AI" is another thing mentioned. Claude for Business is mainly for using the Claude 2 APIs, if you want to make use of those; that is available in the Pro plan of Claude 2, while the free version is available from here. So you can continue with email or continue with Google. I have already logged in, so I will not use "continue with Google"; I will use my account, and yes, it has taken me to my dashboard. This is the welcome screen you will get once you log in, but if you are a new user, I will show you what to do in that case as well. I will log out from here. You can enter an email address here; suppose I enter an email address and continue with email, then a login code is sent to your email ID. The login code was sent to my email ID, which is logged in on my phone, so I will enter that. After that you will get a page where you have to enter your full name, so I am writing my name here; you have to agree to the terms, and you have to be at least 18 years old to use this. Then click on Continue. You also have to enter your
phone number and get verified there. The verification code was sent to my mobile, so enter that, verify the code, and then you are all set to get started. Next, a few pieces of information are shown to you: it may display incorrect or harmful information, and it is not intended to give professional advice, including legal, financial, and medical advice; do not rely on Claude without doing your own independent research. This is what I was talking about regarding ethical considerations: one of the limitations here is to not use Claude for any kind of medical, legal, or financial advice, because there are chances that Claude gives you incorrect results, as it is trained on freely available data from the internet. Then there are a few things related to the privacy policy which you have to accept, and there are certain notifications from Claude, such as: we may change the usage limits, functionality, or policies as we learn more about how people use Claude. So there is already a per-hour usage limit; you can only use so many prompts per hour. There are like and dislike buttons for the responses generated by Claude, so you can like or dislike them or send feedback. Then click on Finish, and here you are. You can start a new chat here, and all the chats you have with Claude will appear over here. As of now this is a newly created account, so there is no history of prompts or usage of Claude, and that is why nothing is listed, although some "try these" examples are given, like "summarize this PDF document", "help me practice my Spanish vocabulary", "explain how this Python game works". These kinds of single-line prompts can also be given to Claude to get a response. You can also see the Claude Pro feature: "unlock more with Claude Pro". If you click on that, it takes you to the billing side and tells you all the benefits: you can level up your Claude usage, get priority access during high-traffic periods, switch between different versions of Claude, and get early access to new features. Claude's earlier versions were 1.3 and 1.2, then version 2, and there were certain more versions before that as well; if you want to switch between versions, you can do that when you have the Pro plan. If you go to "subscribe now", it takes you to the billing aspects, and you can see that Claude Pro is $20 per month
plus taxes. Whether you need it depends mainly on this: if you are making very heavy use of Claude, needing a lot of interaction in an hour, with many more prompts to be given, then you can go for the Pro plan; if a limited number of prompts per hour is enough for you, it is okay to use the free version of Claude itself. The second aspect: if you want to use the Claude 2 API, you have to take this plan; otherwise you can go with the free version of Claude. So I am not going to subscribe here; I will cancel it and go back to our page. You can see there is an attachment button here, and it says you can add content and load files, a maximum of five files, so a multiple-selection facility is there in Claude, which, as far as I remember, is not there in ChatGPT, where multiple files cannot be selected together; you have to select them one by one. You can select a maximum of five files, 10 MB each, and it accepts PDF, text, CSV; as you can see, only document-type files, text or code documents, can be uploaded, not audio or video files. Click "start a new chat" and get started with Claude by Anthropic: this is how you set up Claude and get started. Before going ahead and interacting with Claude,
let me tell you what tasks we are going to do using Claude: we'll generate a course structure, summarize a document, create a report format, write an email, do market research, generate minutes of meeting (an MoM), and write some software code in programming languages like Python and Java. So let's go to the Claude interface and get started. The first prompt I'm going to give Claude is to generate a course structure for a course or subject, database management systems. Suppose I am a teacher or faculty member who is going to teach database management systems to second-year BTech students, and I want to cover the subject in 50 lectures. All of these are my requirements, and I have to put them to Claude as input, in the form of a prompt. So let's write the prompt: generate an elaborate course structure for the subject database management systems for a second-year BTech student. That was my first requirement; what else? The course structure should cover all aspects of the subject so that a learner can work efficiently on databases in the real world; that is the main motive, the main reason I'm teaching the subject as an instructor. What else? I also want to complete this course in 50 lectures, so: the course should be completed in 50 lectures. You can see the prompt here; it has to be elaborate yet at the same time concise and effective. Let's give this prompt and see. You can see that the speed at which Claude generates the response is not very slow; you don't have to wait much. For example, if you have used Google Bard, the speed of response generation there is slow and you have to wait for some time. This is comparable to ChatGPT's paid version, although the free version of ChatGPT is faster than this. See how nicely it has generated a course structure for database management systems. It says here's a proposed structure: introduction, two lectures; then it has divided up the number of lectures each topic should take, like data modeling, SQL, database design and so on. Even the advanced concepts are covered, because I asked that the course be enough for a person to work on databases in the real world, so it has taken all the advanced concepts into consideration. The programming aspect of databases, like how you connect to them, is also important when working in the real world, so all of that is there too: query processing and optimization and so on. So Claude has clearly understood the context I set up for it and has given me the response accordingly. That is how you generate responses; that was the first example. The next one is summarizing a document. For that I'll upload a research article I downloaded from the internet, freely available; the document is a research paper on digital image processing. I'll upload it there; the size constraint applies, so it
should not be more than 10 MB. So this is the paper. I'm not giving any prompt or input; let's see what Claude does with it. You can see: "here is an edited and formatted version of the research paper"; it is research on digital image processing technology and its applications by such-and-such authors, and it is providing me the complete details of the paper. Let it do its task; in the meantime I'm writing the next prompt, that I want it to summarize this document. We can use polite words; it is more interactive that way if you use words like please and kindly. So I'll use: "please provide a summary for the document uploaded above in just 10 points". So far it has read out the complete paper, not in full but in a concise, summarized way, picking a few aspects under each subheading, I feel. What I want is the summary, so: provide a summary for the document uploaded above in just 10 points. Let's see what response is generated. Here's a 10-point summary of the research paper that was uploaded. This could be any document; I just randomly picked a research paper. You can see the 10 points it has given, covering everything that is in the paper. And if you want to summarize the document in just one line, you can see that in a single line too it can summarize that whole seven-to-eight-page PDF document. So this is its summary output; this is how you can summarize any document. Next, let's see whether an MoM can be generated or not. For that I have a meeting transcript; I'll upload it here and say: "can you please generate an MoM for the attached meeting transcript". Writing an MoM is a real task; whenever there's a meeting, the people who are working might relate to this. If you have a meeting transcript, or a roughly written document of the meeting as a whole, you can give it to Claude and ask it to create an MoM. You can see the date, time and location; a proper MoM format has been followed: who the attendees are, the agenda, the key discussion points, and the action items. That is how a proper MoM format should look, and that's what it has followed. So we saw a few of the ways we can interact with Claude; we'll explore some
more benefits of Claude and some more prompts, and see how Claude responds to them. Now let's see some more ways you can interact with Claude. We have seen three or four aspects; now we'll talk about making use of Claude to write an email for us. The context is that I have to write an email applying for two days of sick leave. You also have to tell Claude that you're working as, say, a developer in XYZ company, or any IT company, and that you want to apply for sick leave for 2 days because you're down with a cold and cough. Let's give this prompt and see how good an email Claude writes. All the inputs are given; let's see how it generates the email. You can see how effective and efficient a sick-leave application email was written by Claude. To write such an email yourself you might need at least 5 minutes, but it did it in less than a minute. There's the subject line; then "Dear [manager name], I'm writing to inform you that I'm on sick leave from this date to this date; I've been suffering from a cold and cough and my condition has worsened". See how nicely it has framed the medical situation: "I plan to use this time to rest properly so that I can recover". Even nice finishing lines for the email are given: "please let me know if you need any additional information about this leave application; I apologize for the short notice and any inconvenience", and then "Regards, [your name]". So this is a very effective and efficient email written by Claude. As I have already mentioned, if you work in content writing, or for creative writing, you can use Claude. Next, suppose you want to generate a report format, for example: "can you please provide me a report format; the report is for the final-year project of an engineering college". So you want a report format for your final-year project at an engineering college. This is the prompt; although it is very short, you can elaborate it further according to your requirements. Let's see whether Claude is able to generate a format with this prompt. Yes, you can see: there should be a title page with the names; that's correct, that is how we prepare it: the title, student name, enrolment number. I took this example because most of you will be able to relate to it; most of you would have prepared a report for your final-year project, and those still in BTech will prepare one. Then there should be a certificate, then the acknowledgements, then abstract, table of contents, list of figures, list of tables, literature review. You can see even pointers like the subsystems under system design and implementation, what should come under results and analysis, future work, references, appendices; all that is included. So this is a perfect report format. Similarly you can generate a report format for any kind of report you are looking to prepare, be it a financial report, medical
report, or an SRS document when it comes to software development, or even the format of a design document if you want; all these report formats can be generated using Claude. Now let's see the last prompt we are going to give Claude, which is for doing market research. We'll do market research on a product, the iPhone 15, and also try to find out its current usage patterns. So I'll give the prompt: "can you do market research on the product iPhone 15; also give me the latest usage patterns for the iPhone 15". This is the response generated by Claude. It has given me the market research, with even recent data collected and included in the response: key markets like these, worldwide. Since I did not specify that I want the market research only for a particular country, India or anywhere else, it has given me worldwide figures, and also the usage patterns I asked for. You can see that usage of the weather and maps apps has spiked, and gaming adoption is rising as the A16 Bionic enables immersive graphics and experiences; all that has been considered for the latest iPhone model, the iPhone 15. It has also summarized that strong performance, aided by an aspirational brand image and key feature additions like satellite connectivity, is the advantage, although pricing and competition remain the disadvantages. So this is how Claude generates responses based on the prompt you have given. Now lastly, let's see how
Claude generates software code for you. Say I ask: "can you write a Python program for a tic-tac-toe game". It is going to write you a program, but it will not provide any downloadable link or anything of that sort, which was available in the ChatGPT-4 version; it will simply write you correct code. And if you give the same prompt to other AI tools (by other AI tools I mean ChatGPT), they will generate almost the same code, meaning that Claude also generates fairly optimized code for you. Sometimes it might not; in that case you can give it a further prompt to optimize the code. You can make the conversation interactive; here I have not, except when we uploaded the PDF document and summarized it first in 10 points, then in one line, and so on. But if you want certain additions, or certain things removed from the response Claude has generated, you can do that as well; it is a chat feature, and you can interact with Claude like you would with any other human. So you can see it has generated a program for a tic-tac-toe game; it's still generating. Here is the two-player game: one player is you, and the other it assumes to be the computer. You can just copy the code from here and see whether it runs or not: copy it and paste it into an IDE. Since this is a Python game, it's Python code, so you can paste it into any Python IDE and execute it. Similarly, if you want a program written in the Java programming language, you can give a prompt accordingly. So that is how we can interact with Claude: we covered how to give prompts for different tasks, like generating a course structure, summarizing a document, creating a report format, doing market research, generating an MoM, and writing software code in Python. Now let's talk about
the Claude 2 API. The Claude 2 API is available only for business users; it's not freely available. You have to take the pricing plan, the Pro plan of Claude 2, and only then can you leverage the Claude 2 API. Nevertheless, I'm going to talk through the steps required: what you have to follow, and how to write a program that makes use of the API. The Claude 2 API is offered to business customers on a pricing plan, as I already said; you can use it through a web API as well as Python and TypeScript clients. I'll demonstrate a program showing how you can write it and leverage the Claude 2 API from Python. But before that: even if you are not a business customer of Claude 2, you may be able to use it, though only selected customers are granted that benefit. This is the link provided; if you click on it, you will reach the Anthropic Early Access web page. It says: "Claude API access: thank you for your interest in Anthropic's language model. Our API is currently being offered to a limited set of users; we hope to expand access in the future." For that you have to fill out the form mentioned below, and the email you fill in should be your company or organization email address. Fill in the details and submit the form; once you submit it, they will send a link to the email ID you mentioned to access the Claude API, and that will be access to the console. Once you click on the link sent to your email, you'll have access to the Claude console, and there you can make use of the Claude API. Although, when I applied here, it has been about a month and I have not got a response back from Anthropic; maybe the quota of users to whom access is provided is already filled, or I have no idea what the reason is, but the access has not yet been granted. If you are in immediate need of the Claude 2 API, you can take the business-customer pricing plan and make use of it directly. Once this is done, as I said, you'll get an email; click on the link and go to the Claude API console. What steps have to be followed after that? Let's understand. Once you have reached the Claude API console, you have to generate the API key, which you can do by going to Settings. I'm not able to show the hands-on here, but I'll make the steps very clear so that if you are a business customer you can follow them. In the user settings you can generate the API key; that API key is what you're going to use in your program later on, so make sure you generate it. Then you can write the program, and the program will have
these five steps. The first is to import all the required modules. You can see the code here; I'll go through the steps and explain the code side by side. From the anthropic module you import Anthropic, HUMAN_PROMPT and AI_PROMPT: HUMAN_PROMPT marks the human's prompt and AI_PROMPT marks where the AI's response should begin. You also have to import the os module. After importing all the required modules, the second step is to initialize the client using the API key. You create an Anthropic client; that client is an object, or you can say a variable, which you'll use throughout the program to generate responses. Make sure you have generated the API key, because that is what is used here. Let's see it in the program: in this line of code you use the generated key. Inside the double quotes I have written ANTHROPIC_API_KEY; you don't literally type that text, you use the name under which your key is stored once you have been given Claude 2 API access. Then you create the client. It's a variable, so I can give it any name; I've called it anthropic_client, equal to Anthropic (the class name, you can say), with api_key equal to os.environ — it's not "environment" or anything; the spelling is exactly e-n-v-i-r-o-n — and in the square brackets you provide the key name. That is how you create an Anthropic client. The third step is to generate a response by providing a model name, the maximum tokens and the prompt; the maximum tokens is an input you provide along with the prompt, as is the model name. In the program you can see completion is a variable, or object: anthropic_client.completions.create, and in the round brackets you provide the model name, the maximum tokens and the prompt. The model is claude-2; this is fixed, you cannot change it, the model will always be claude-2. The maximum tokens you can set to whatever you want; 300 is the value I have given here. Now the prompt: make sure these keyword names stay the same; they are not to be changed. The prompt is provided as HUMAN_PROMPT, then the prompt text you want to give as input to get the response from Claude, then AI_PROMPT. The actual input prompt I have written here is "how to write an email using a Gmail account", a one-line prompt; it is not a very effective prompt, but I've put it there just as an example. The fourth step: your prompt, as we just saw in the code, always consists of the human prompt and the AI prompt. And the fifth step is to print whatever response you get, by printing from the completion object of the previous step. These are the steps to follow when using the Claude 2 API in Python, and this is the program: it is very small, just a few lines of code, and with it you'll be able to use the Claude 2 API from Python. I have not demonstrated the exact hands-on session here because, as I said, my Early Access request has not yet been fulfilled by Anthropic, but it may come through for you.
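The five steps above can be sketched as a short Python program. This is a hedged reconstruction, not the exact code shown on screen: it assumes the legacy Text Completions interface of Anthropic's `anthropic` SDK, where the token-budget parameter is named `max_tokens_to_sample` (the transcript just says "maximum tokens"), and `ANTHROPIC_API_KEY` is the environment-variable name I have assumed for storing the key.

```python
# Sketch of the five steps, assuming the legacy Text Completions
# interface of the `anthropic` SDK. ANTHROPIC_API_KEY is an assumed
# environment-variable name; use whatever name you stored your key under.
import os

try:
    # Step 1: import the required names from the SDK
    from anthropic import Anthropic, HUMAN_PROMPT, AI_PROMPT
    HAVE_SDK = True
except ImportError:
    # SDK not installed: fall back to the documented prompt markers so the
    # prompt-building logic can still be demonstrated.
    HUMAN_PROMPT, AI_PROMPT = "\n\nHuman:", "\n\nAssistant:"
    HAVE_SDK = False


def build_prompt(user_text: str) -> str:
    # Step 4: the completion prompt wraps your text between the human
    # marker and the AI marker, as described in the walkthrough.
    return f"{HUMAN_PROMPT} {user_text}{AI_PROMPT}"


def ask_claude(user_text: str, max_tokens: int = 300) -> str:
    # Step 2: initialise the client with the API key generated in Settings
    client = Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
    # Step 3: request a completion, naming the model and the token budget
    completion = client.completions.create(
        model="claude-2",                 # fixed model name
        max_tokens_to_sample=max_tokens,  # 300 in the walkthrough
        prompt=build_prompt(user_text),
    )
    # Step 5: the generated text is in the `completion` attribute
    return completion.completion


if __name__ == "__main__":
    if HAVE_SDK and "ANTHROPIC_API_KEY" in os.environ:
        print(ask_claude("How to write an email using a Gmail account"))
    else:
        print(build_prompt("How to write an email using a Gmail account"))
```

The try/except around the import is only there so the prompt-building part can be run even before the SDK and key are set up; with access granted, the `ask_claude` call is what actually hits the API.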
you can write the same program there and check how you can use the Claude 2 API. Now we will understand how to integrate Python and Google Gemini 1.5 Pro. There is no manual work needed here, but there are a few steps to follow; let's explore how this integration happens. Let's quickly switch to the Google AI Studio interface. The first step is to click on Documentation; it opens this Google AI for Developers interface, where you have the Gemini API, Gemma, Google AI Edge, and tools. You have to choose the Google Gemini API, but by default this is where it launches. If you want to get an API key you can go here; otherwise, I'm going back and clicking on Quickstart. Once I click there, you have a quick way to choose which language you are working with: Python (the default), Node.js, Go, Dart (that is, Flutter, for Android and iOS development), Swift, and web. So you have a ready-made option to work with Python with the help of Google AI Studio. I'm clicking on Run; once you do this, you will see this interface. What is this all about? It is a predefined integration between Google AI Studio and the Python interface we generally use, Google Colab. Here you can just run pip install google-generativeai, and it installs the complete package of libraries needed to work with generative AI. Once it completes, it shows that the generative AI packages have been loaded into your Colab; it gives us a check mark. Then there is one more option here, to set an API key. If you just run this cell as-is, you will get an error. What is that error? You need an API key and have to supply it. And remember, API keys must always be kept secure; you should not share your API key. So let's resolve this error. You have to click on the key option available; it says add new secret. If you don't have a Gemini API key, click on Create Gemini API key; it lands you on the Get API key interface of Google AI Studio. Click Create API key; it shows a safety-settings reminder; read the terms and click Got it. Then press "Create API key in a new project" if you are creating the key for a new project, or else you can search for an old project; mine only has "my first project", so let's choose a new project. I'm generating the API key for a new project; it's loading, and it takes some time to generate. Once it is generated, we copy the API key, close this, and go back to Add new secret in the Colab notebook. Click Add and give the name exactly as mentioned in the code, in capitals: GOOGLE_API_KEY; it should match the format given in the code. I have given the name, and the value is what you copied from the Get API key interface of Google AI Studio. Then remember to switch on the toggle button; once it shows as checked, run the code cell once again, and now it establishes the connection. See, it shows a check mark, which means it is loaded: the API key is now connected to your Python Google Colab from AI Studio, and our work is done. If an error still comes even after putting in the API key, just go back and check whether the toggle is on; every time you use it, this toggle has to be switched on. Next, if you want to see the model details, you can run and initialize the generative model; I am initializing Gemini Pro. Everything is pre-written; this is just a sample. It generates text; the content says "write a story about a magic backpack", and since it has already been run, you have a complete story here. If you want to write your own prompts, you can do that; there's no real need to rerun it, but I'm still running it, and it generates the text response for whatever prompt you have given; here is the generated text, a small story. What's next? If you want to learn more, there's a Python tutorial integrated into the platform itself. I hope the integration of Python and Google Gemini 1.5 Pro now looks easy and understandable. Next, let's understand building a simple chatbot. Before doing this demo, let's get some basic
knowledge of what a chatbot is, the types of chatbots, and the major applications of chatbots across domains. First, what is a chatbot? A chatbot is software built with artificial intelligence that can mimic human conversation, or answer on behalf of a human, in order to address queries. Nowadays these chatbots are integrated into nearly every piece of software: open any website, be it a travel booking website, a food delivery website, or any shopping website for jewellery, clothes, anything for that matter, and a small chatbot will pop up and ask whether we need any help. How does it work? It has predefined training data, so if we ask something generic that the bot can handle, it will resolve that requirement. How is it helpful? It provides support while minimizing human intervention. Take the example of Swiggy: if your order is delayed or some other problem happens, the first approach they follow is to resolve the issue with standard questions that already have predefined answers; beyond that, if you want to talk to an agent or a customer support person, you can choose to talk with an agent. In the first place they are trying to reduce the human resources, the human intervention, needed to resolve a customer's issues with the service or product. This is the approach we follow nowadays, and one we have become accustomed to when interacting with bots. Next let's explore the types of chatbots. First is rule-based: a rule-based chatbot follows straightforward, quick rules to produce the answers we want, as simple as that. Next is AI-powered, a more advanced version of rule-based: it has machine learning support in the back end and can address a wide range of queries. Next comes task-oriented: a task-oriented bot performs a certain set task, doing the same task monotonously again and again, so the boredom we would feel as human beings is avoided when a chatbot handles a routine, repeated task; task-oriented bots help that way. Next comes conversational: as mentioned before, we generally converse with customer support for a product or service, and that is where a conversational chatbot comes in. I hope we now have a broader picture of the types of chatbots. Next, the applications of chatbots. Chatbots are everywhere. E-commerce and online delivery apps: Amazon, Myntra and the like; take any of the e-commerce platforms and you'll find chatbots. Healthcare: hospital applications have this support. Finance: in banking apps, when you log in to net banking you get the option of a chatbot, or if you go to a bank's website to open an account, the chatbot pops up and asks what you are here for and how it can help; that is how it tries to resolve issues or address requirements. Education: here we are using it in all the projects, trying to create, understand and be aware of the technology that goes into building a chatbot. Entertainment has a broad perspective: be it gaming or storytelling with visualizations and graphics, chatbots can be used everywhere. I hope this small introduction to what a chatbot is, the types of chatbots, and the applications of chatbots is clear. Now let's move on to the creation aspect, the demo. Before starting, let's do some groundwork: how do we design this? We'll keep
it very simple, with a small, countable number of lines of code, because it is a very basic chatbot; we are not integrating too many things. We are using Google Gemini, and we integrate the API we get in order to make the code conversational, that's it. Designing a conversational chatbot involves interaction between the user and the computer: it should have a greeting, it should respond to a greeting, and you should also provide a certain exit command, which is very important. You have to ensure the chatbot does not end randomly: if you type "exit", it exits the code cleanly. Keep it organized; it should not throw an error or stop in the middle of running, and it should never get stuck in an error. We have to ensure a smooth takeoff and a smooth landing; that's how we have to plan. Let's understand the code and work through it together. First I will show you code that is not conversational but serves the purpose; then we will improve that code and make it a conversational chatbot. Let's quickly go back to our Google Colab. Here we are in an empty Colab notebook; I have named it build_chatbot.ipynb, a Python notebook. Let's start with our first lines of code. We have to import the libraries we need, so I use the import keyword to import google.generativeai, and since I don't want to keep typing the whole name google.generativeai, I give it the short name genai. What else should I import? The os module: we set up the environment with the help of the os library. Then comes the crucial part: our API key. You all know how to get your API key; a quick revision: go to Google AI Studio, click on Get API key, and it lands on that interface; there you can create an API key or use an existing one, then paste the key once we start coding. Back on the coding platform, we first set the environment: os.environ, where I create an API key entry, say API_KEY, with the key pasted inside the quotes as its value. I have got my API key and I am pasting it right here, so our os.environ entry is set; the API key is set. Next we create a model: a model variable equal to a call on the genai module we imported, genai.GenerativeModel, and inside the brackets a string in quotes naming which Gemini model we are using: gemini-1.5-pro, with the word "latest" added to give the complete detail. It then takes the latest version present in Google AI Studio, which is why we mention it that way. Next we create response, the variable that deals with model interaction: response = model.generate_content, generating content for the question given by the user, with "who are you" in double quotes. Then we print the response using the print function: print response.text. That is the whole simple code; let's run it. Once you run this code you get an answer; let's see the answer from Gemini 1.5 Pro. It says: I'm a large language model trained by Google; think of me as a computer program that is really good at understanding and generating human-like text, and so on and so forth.
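The cells just walked through can be assembled into one single-shot sketch. Two caveats: this assumes the google-generativeai package (installed earlier with pip) and an API key from Google AI Studio, and I have added an explicit genai.configure call, which the SDK generally needs in order to pick up a key stored under a custom name like API_KEY; the network call is guarded so the file still parses and runs when the key or package is absent.

```python
# Assembled sketch of the single-shot code described above.
# Assumptions: `pip install google-generativeai` has been run, and the
# API key from Google AI Studio's Get API key page is stored in API_KEY.
import os


def ask_gemini(prompt: str) -> str:
    """Send one prompt to Gemini 1.5 Pro and return the text reply."""
    import google.generativeai as genai  # lazy import: package may be absent
    genai.configure(api_key=os.environ["API_KEY"])  # key set earlier via os.environ
    model = genai.GenerativeModel("gemini-1.5-pro-latest")
    response = model.generate_content(prompt)
    return response.text


if __name__ == "__main__":
    # os.environ["API_KEY"] = "paste-your-key-here"  # from Google AI Studio
    if "API_KEY" in os.environ:
        print(ask_gemini("Who are you?"))
```

With the key in place, calling ask_gemini("Who are you?") reproduces the answer described in the video.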
It gives the full details and ends by asking, "What can I help you with today?" But if you want to talk to it again, you have to edit the prompt inside the double quotes and rerun, which does not feel like a chatbot at all. So what we do now is give it a chatbot feel: we make it conversational by extending the same code. Before extending it, look at one more output: I changed the prompt from "Who are you?" to "How are you?", and it replied, "As an AI language model I don't have feelings the way humans do," and so on. It means it is working, the conversation is happening, but not in chatbot mode. Let's quickly optimize and extend the code to make it conversational.
Here the conversational code is ready; let's compare it with the original and see what the new additions are. The first lines are the same: we import google.generativeai as genai, import os, and configure the API key into the OS environment exactly as we did in the initial code. We also select the same generative model. What turns the code we executed earlier into a chatbot is a new function called chat_with_bot. Inside it we add instructions so that if the user types "exit" or "quit", the chatbot prints a goodbye message and breaks out of the loop. Next we add an error-handling method using try and except: if the user's input is something the chatbot cannot process, we print a standard statement, "Sorry, there was an error processing your request," along with the actual error.
Finally we add the main section with the chat loop: the main function calls chat_with_bot, and the loop continues until the user's input is "exit" or "quit". If something goes wrong mid-conversation, the user lands in the friendly error message rather than a crash, which gives a much better feel to the chatbot we created. I hope the extensions we made are clear. Now let me run the code.
Let's run it and look at the output. The first line printed is the one we wrote: "Chatbot is ready to chat. Type 'exit' if you want to end the conversation." That tells the user up front how to end the conversation whenever they want. I type "hi" and press Enter, and the chatbot answers, "Hi there! What can I do for you today?" I say "how are you", and once I send it, the reply comes in a nicely conversational way: "As a large language model I don't have feelings or experiences," and so on, and again it asks how it can help.
Next I give it the prompt "What is data science?" Let's wait and watch whether we get a proper definition. And there is the answer: a very descriptive one covering what data science is, its key ingredients, its "recipe", and its applications. The wording is not always perfectly on point, but it is certainly giving us useful answers, and it even finishes with a summary of what data science is in a nutshell. I'll scroll through the complete response here. Then I type "exit", and the chatbot replies with a goodbye, because we coded it so that on "exit" or "quit"
the chatbot should say goodbye. So it is working just fine. I hope this simple chatbot is easy to understand; you can build and integrate it yourself with the help of the API key you generated.
Now let's start with the first topic: an overview of Python. When you hear the name Python, you already know its many applications. First and foremost, it is a high-level programming language, and it is unique even among high-level languages because its statements read almost like English, which makes it very easy for a beginner to learn. Now, why do we use Python in generative AI? It is not really about generative AI itself: Python already has a well-supported set of libraries that have been in use for years across domains like data science, machine learning, natural language processing, and deep learning. Artificial intelligence and generative AI are simply grabbing the libraries we already have in Python. Other programming languages are used too, but I would say Python is a versatile language that makes life easy for the people working in this
technological domain. After this overview of Python, let's quickly hop to the next topic, which is the core concept we have to learn: an introduction to generative AI applications. Generative AI refers to algorithms that enable machines to produce content that is not only new and original but also reflects the data the model was trained on, and a model is always trained according to the requirement at hand. Generative AI deals with several families of models: GANs (generative adversarial networks), VAEs (variational autoencoders), and transformer-based models such as ChatGPT.
What do we do with these generative AI applications? It is very important to train the algorithm and keep it updated: the more you interact with it, the more it gets trained; it works as simply as that. Generative AI also helps you generate your own models and train them however you want, just like the simple example of how scientists train robots: each robot does its own different work because the requirements it caters to are different, and the models are trained accordingly with the help of generative AI. Yes, it involves a lot of other technologies, deep learning, neural networks, and so on, but generative AI is a base for them too.
What is its significance? First, a creativity boost: it enhances creative processes by providing very good new content ideas and new ways to approach a problem. Second, efficiency: it gives human beings a helping hand to be more efficient; the better you use it, the more productive you will be. Third, it automates content creation, and saving time matters because time is a very important resource. Fourth, personalization: it generates personalized content according to your requirement, that is, according to the prompts you give to something like ChatGPT. It caters to many different applications, but this is the overall picture of generative AI applications.
Now let's talk about the next concept: development environment setup. How do we do it, and what is it all about? You have to have a platform to work on, just as you need a foundation before you can build a building. So
let's learn how to build that foundation. It consists of a few steps to set up the environment, and it does not involve any exotic software. It's very simple: go to the official Python website and download the latest version, which for now is 3.12, and install it on your local system. We will execute things via the command prompt: open it, then navigate to the location where Python is installed. I hope everyone has at least a basic idea of shell commands like cd (change directory) and mkdir (make directory); those two commands are mostly all we use in this session, so I am not going deep into advanced Unix or Linux. If you are on Windows, you can use Command Prompt or PowerShell; on macOS or Linux you use the terminal. That is the platform you need to work with.
After navigating to the location where Python is installed, you can install all the libraries using the pip command: pip install is the basic command, and you change the library name as needed. First we will talk about NumPy. NumPy is very well known in the data science domain because it caters to mathematical calculations, works with high-level data structures, and gives you complete access to arithmetic and logic functionality; since data science deals with lots of data and numbers, we use NumPy for exactly that. Then we talk about Flask: when you hear the word Flask, it is a Python library that is a web-based framework; you can create web applications using the Flask framework, and that is its major purpose. The next one is Streamlit: this library deals with visualizing the models you create. Then you have torch, torchvision, and torchaudio: basically these libraries cater to computer vision models; you can work with model creation, view the model, and also add multimedia to the model created, so you use the torch family for computer vision projects and models. Next you have Transformers: this library helps with classification, text summarization, and many other aspects, again dealing with data. We use all of these libraries across machine learning, artificial intelligence, NLP, deep learning, and computer vision; these are the applications where each library
will be used. Using pip install, we install these libraries one at a time, not in one mass installation, and every library prints an output message stating that it has been installed. Still, don't just trust the message: verifying the installation is very important, because if an installation silently crashed and you never checked, you never know when it will affect your project, so it is better to verify once you install. Verification is very simple too: open the command prompt and type python --version, and it returns the installed Python version, so you know exactly which version you are working with. Next you verify the installation of the libraries: open the Python interactive shell by typing python, and import every library you have installed. If a library imports properly without any error, it is installed properly. That is the overall development environment setup idea you have to have, and which
you have to create in order to write code, build applications, or work on a project. Now we are in the command prompt. In the theory we learned about the various Python libraries, NumPy, Flask, Streamlit, torch, Transformers, so let's install those same libraries with the help of the command prompt. As you can see, it starts in a general path; it is my personal path, in the C drive on my laptop. I fetch the location where Python is installed, use the cd (change directory) command, paste that location, and press Enter; the command prompt then moves into that folder.
Let's start with the first command: pip install numpy. Since I have already been working with Python on many projects, just wait and watch the output we get. I click Enter, it takes a moment to analyze what it is being asked to install, and the output is "Requirement already satisfied". That means NumPy is already installed on this system because we have used it before. You will also notice a warning message: if you want to upgrade pip, it suggests the newer version. I am currently using pip 21.2.3 and it suggests upgrading to 24.1.2, and the command to do that is mentioned right there, so you can use it if you like. So that is the message when the library already exists.
How will it display for a fresh installation? Let's try the other libraries as well. pip install flask: I press Enter and wait for the results, and again it states Flask is already present, requirement satisfied, with the same version warning. I have almost all the libraries installed already, but let me check the next one: streamlit. You can see how it is downloading the Streamlit library. When you try to install a library that is not in your current local Python system, this is how it starts loading; when it already exists, you get the message we saw for NumPy and Flask. A fresh download like Streamlit takes at least five to ten minutes to complete depending on your system configuration. Likewise you can install all the libraries you require into your system. So I have given you the two cases: how a library downloads when it is not in your system, and the "Requirement already satisfied" message when it is already installed. That is how you install your libraries in Python.
Now, to verify whether a particular library is installed, go to the Python interpreter: from the same location, type python and press Enter, which takes you into the Python interface where you can execute code. Then try import numpy. When you give this instruction to the Python prompt inside the command prompt, it imports the NumPy library. If NumPy is present, the import will not throw any error; it will look just like this, which indicates NumPy is there in your Python library folder. This is how you verify installed libraries before using them; otherwise your code will throw an error if the library is not installed.
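The same import check can also be scripted instead of typed one library at a time. This is just a convenience sketch; the library list is the one used in this session, so adjust it to match your own setup:

```python
import importlib

def check_library(name):
    """Return True if the library can be imported, False otherwise."""
    try:
        importlib.import_module(name)
        return True
    except ImportError:
        return False

# The libraries installed in this session; edit to match your own list.
for name in ["numpy", "flask", "streamlit", "torch", "transformers"]:
    if check_library(name):
        print(f"{name}: installed")
    else:
        print(f"{name}: missing - run: pip install {name}")
```

Anything reported missing can then be installed with the pip command shown next to it.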
So make sure you install the libraries before using them in your code. That was a simple demonstration of how you install libraries and verify, with the help of the command prompt, that they are present in your Python setup.
Now let's talk about the Flask ChatGPT app. The main agenda is to integrate OpenAI into our own application; let's understand more about how this application works and also look at a demo. The basic setup we need is as simple as what was mentioned before: you have to have Python installed on your system, along with all the libraries mentioned; that is the baseline for every demonstration carried out in the session hereafter. The components we need are Flask for the web framework, as I mentioned, we use the Flask library of Python, and the OpenAI GPT API for generating responses. The logic is simple: we take an OpenAI API key, put it in our Python code, and execute it. First we will check what this code looks like and go through the detailed steps in a Google Colab notebook. I am not executing it in Colab; I am executing it in the command prompt, but for better clarity I am using the online coding platform, Google Colab, since it gives a good interactive separation between the text and the code. Let's check out the code and understand everything it does to create a ChatGPT app using the Flask library in Python.
Now here we are in Google Colab. The first step is to set up the environment, and as I told you, we will activate a Python virtual environment. What does python -m venv venv mean? Let's understand it piece by piece. python invokes the Python interpreter already installed on your system. The -m option tells Python to run the venv module as a script; that module exists to create virtual environments, which is why we invoke it. The final venv is the name of the directory where the virtual environment will be created, and it is not mandatory to keep that name as-is: you could change it to abcd or call it virtual_environment. With -m you must use venv, that part is mandatory, but the second name is optional; the naming convention can be changed according to your requirements.
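Incidentally, `python -m venv venv` just runs the standard-library venv module, so the same thing can be done from Python itself, which makes the two parts of the command easy to see:

```python
import os
import venv

# Equivalent to running `python -m venv venv` at the command line.
# "venv" here is just the target directory name - rename it freely.
target_dir = "venv"
venv.create(target_dir, with_pip=False)  # with_pip=True also installs pip

# Every virtual environment gets a pyvenv.cfg file at its root.
print(os.path.exists(os.path.join(target_dir, "pyvenv.cfg")))
```

The command-line form installs pip into the environment by default, which is what `with_pip=True` reproduces here.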
We create the Flask application. As I told you, we are using only two shell commands since we are working in the command prompt. First is mkdir (make directory): we create a folder, there is nothing more to it than creating a new folder, named GPT_chat_app. Then we cd (change directory) to that location. And remember: these things should be created under the folder where you installed your Python software, so navigate to that Python location first and then create the new folder, or there will be execution problems and path issues.
After creating the folder, we have to create the code files. The first is the main Python code file, which I am naming app.py, for application.py. Again, this is not mandatory; you can name it as you like, but you have to remember the exact file name, including capitalization, while executing it, because it is case sensitive. In the file we first import the Flask elements: the Flask class, then request, jsonify, and render_template. We also import requests and time; these libraries, available in Python, let our code talk to the API and pace the conversation between our questions and the ChatGPT-style system we are creating. Then we initialize the Flask application with the standard initialization syntax.
Next we provide an OpenAI API key. This is a secret key you should not share with anybody else, or else they can use it and you will have to pay the bill, so you had better keep your API keys very discreet. The one shown here is a random sample key; or you can just put a placeholder in your code, "enter your API key here", which is far better than giving people your original key. Once the key is in place, you define the route for the homepage where the user interacts. Obviously you cannot show this backend code to the user; you
have to have a front end you have a front end that is called index.html there comes the second code file first one is having ap. py core python file next it has to be integrated to the front end that is index.html right we will render the particular HTML template that is why we'll be using render template Library okay we have usage of these libraries everywhere then we'll Define the root for chat end point which accepts the post request post request is nothing but what message you put to the GPT and what it has to respond
back and this complete thing will happen with the help of Json then you get the response from gpt3 and remember there's lots of GPT models which one you're using you have to have the knowledge about it you have GPT 3.5 turbo 16 K you also have just GPT 3.5 turbo you have many kinds of model the current model which we use you have to mention it here and you also have to carry the input given by the user from the front end to the back end with the help of this messages the data should be
transferred from the front end to back end what is the maximum limit of the response is 150 letters characters it's not words okay it is very minimal if you want to make it more obviously you can make it 300 again it is according to the requirement you are curing for then you have to attempt to get a response from API which tries in case if it fails now comes the word of error handling the code which is not having error handling capacity is not a worthy code simple as that if something goes wrong first you
have to let the system give you the message that something is wrong not directly land to a error page it should be interactive and it should tell the user whatever you have entered is wrong or something has happened what has happened this particular responses should always be there for example I'm giving you possibilities it's not that every coder will be knowing everything what they have to do right but still there are few standard error handling techniques when you work you have status code 200 when this particular 200 status code comes 44 error comes 429 comes
How do you handle each one, and what response do you give for each error? If the status code that comes back is not a 200 success, rather than sending the user to some random wrong page, we return a message: "An error occurred while processing the response from OpenAI." That means your GPT connection is not working properly and it is not able to respond, so instead of pushing the user to an error page, you send that error message and tell them what happened. Next you have 429: our request to OpenAI failed with that status, so we reattempt; the backoff retry count is two, meaning we try two more times, sleeping between attempts so the system backs off rather than hammering an endpoint that cannot answer. A 429 also covers the case where your OpenAI account is out of quota: the limit is exceeded, you have to top up your billing, and the message will say, "You have exceeded your current quota, please check your plan and billing details." That is how you give error messages to the user so they understand something has happened that needs to be addressed, because nobody is going to open the back end and debug the error; the front end itself has to show what is happening. This is a sample of error handling; if any error other than the two we listed comes up, a 404, an internet connectivity problem, anything else, you can return the status code directly along with a standard default message: "An error occurred while communicating with OpenAI." If you don't know what else to do, at least tell the user there is an error.
Then comes running the Flask application. This is the main section we create for this code, and execution starts from it. Now comes the second part: the front end we discussed earlier.
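A sketch of what that app.py might look like, assembled from the description above. The key is a placeholder read from an environment variable, the retry count and 150-token limit follow the values mentioned, and the `/chat` endpoint name is an assumption for illustration:

```python
import os
import time

import requests
from flask import Flask, jsonify, render_template, request

app = Flask(__name__)

# Placeholder: export OPENAI_API_KEY instead of hard-coding the secret.
OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY", "enter-your-api-key-here")
API_URL = "https://api.openai.com/v1/chat/completions"

def build_payload(user_message, max_tokens=150):
    """Request body for the chat endpoint; the limit is in tokens, not words."""
    return {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": user_message}],
        "max_tokens": max_tokens,
    }

@app.route("/")
def index():
    return render_template("index.html")  # the front-end file described next

@app.route("/chat", methods=["POST"])
def chat():
    user_message = request.json.get("message", "")
    headers = {"Authorization": f"Bearer {OPENAI_API_KEY}"}
    for attempt in range(2):  # two back-off retry attempts on a 429
        resp = requests.post(API_URL, headers=headers, json=build_payload(user_message))
        if resp.status_code == 200:
            reply = resp.json()["choices"][0]["message"]["content"]
            return jsonify({"reply": reply})
        if resp.status_code == 429:  # rate limit or quota exceeded
            time.sleep(2 ** attempt)
            continue
        break  # any other status: fall through to the default error
    return jsonify({"error": "An error occurred while communicating with OpenAI"}), 500

# Launch from the project folder with:  flask --app app run --port 5000
```

The `flask --app app run` form avoids embedding a server-start call in the module; adding an `if __name__ == "__main__": app.run()` guard and running `python app.py`, as the video does, works equally well.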
Let's take a quick overview of its complete contents: index.html. People who know HTML will recognize the pieces: the !DOCTYPE html declaration, an html tag with lang set to English, a head with the usual meta charset, a style block for the page, and then the body. In the body you have a chat box, a text box, and a button, a simple layout: the user puts their input in the text box and clicks the Send button so that it interacts with ChatGPT, and to display ChatGPT's message you have a label or another text area. A simple piece of JavaScript provides the interactivity: it fetches the information from the input, sends it to GPT, takes the response from GPT, and puts it back on the front end for the user to view. That is the simple, fundamental function that happens in the script section.
How do you run this? You just have to be in the location of the folder you created, GPT_chat_app; I'll go back and give you an overview of that location, which is where the command prompt should be pointing. Then you execute python app.py, and you will be able to open a browser at the address where it loads. What is this address, and why port 5000 and not 4000, you might ask? As you all know, http://127.0.0.1 always deals with local hosting, and with local hosting you have separate ports for each kind of service you run; 5000 is the port conventionally allocated on a local system for Python's Flask library, so any Flask web-framework code you execute will launch in the browser at this particular address.
I hope you have had a complete, detailed view of how this application works. A quick recap: you have to install Python and the necessary libraries; then create the main Python code file, app.py; then the client-side front-end interface, index.html. Again, this
is up to your requirement; these are the common names we keep, which is why I used them. You create a simple HTML interface to interact with ChatGPT, then run the application and check the output. Now that we have understood the back-end code, the front end, and every other aspect, let's execute the code and check the output. Before the demonstration of the Flask ChatGPT app, let's understand the folder structure. I am here in the location where my Python is installed; since we have already been working here, you can see many folders. As per our steps we already created GPT_chat_app, and if you go into that folder you find two elements: templates and app. What is templates? It holds the index file we discussed, the HTML file; app is the main program. There is also a cache folder that the interpreter generates automatically once you run the program, whether or not the execution is successful; that is why it is present here, but if you are executing for the first time, it will not be there yet.
Now let's hop to the command prompt and check how we work with this. Here we are in the command prompt, and are we in the location of our file? No, I am actually wrong: we are in the Python location, so we have to enter the folder we created. We change directory to that folder, GPT_chat_app, the folder name we generated. Now we are in that folder; how do we execute the steps I mentioned? You type python and then app.py, or whatever name you gave your main Python file, and click Enter. Once you press Enter and app.py executes, this is how the output looks. Are we on the right output screen? No, it is just indicating that the app is running and that we have to go to http://127.0.0.1:
5000, the port number. Let's quickly hop over to that location in any browser you like. Once you go to the same address where the app was launched, you will find the interface. Now you have to communicate with GPT: I type "hi" and click the Send button, and it says, "Hello! How can I assist you today?" The next question I ask is "how are you": I send it, and the reply is "I'm just a computer program, so I don't have any feelings, but thanks for asking. How can I help you?" So that is how it tries to interact with a human being. Then I say, "Where do you live?", click the Send button, and instead of an answer I get: "You have exceeded your current quota, please check your plan and billing details." It will only throw this error when your limits really are exceeded; that is when it shows this message, doing exactly the error handling I already mentioned.
So how did we get this message by only the third exchange? You might be having that doubt. The thing is, the OpenAI API key is not free for everything: you only have access to about $5 worth of conversation on a key, and that is how much you can converse; after it exceeds $5 it asks you to select a plan and complete the billing, and the payment has to be made to continue. So this is just a simple example. You can enhance it, buy a paid version, and start building projects; if you own a small-scale business, for instance, a private chatbot on your website lets customers interact without an actual agent, so you need not staff that customer service yourself, you can use the bot. That is the simple idea, and this is how the execution looks. By now we have understood how ChatGPT behaves when served through Flask,
how to execute it, what code it requires, and how the outputs look. Now that we have understood, and also seen the demonstration of, a ChatGPT app created with the help of the Flask library in Python, let's check out the next topic: using the same Flask, how do you build a text-to-image application? The simple idea is that text-to-image generation involves creating images from textual descriptions using AI models. You give a description; here we are not focusing on elaborate descriptions, we are trying to get an image for the word we give, as I said, cat, dog, any animal, or whatever you want to fetch. What is the significance of this kind of application? It enhances creativity and design processes and is useful in various fields like advertising, entertainment, and virtual environments. Say you want to get an image: you can give a description such as "a cat sitting on a mat" or "a dog sitting on a bed" and get images in a certain style, or ask for a sketched cat image or a drawing of a cow; you give a certain description to the AI and it generates the output back for you.
How do we implement this text-to-image app? First, we build a web application that converts text descriptions into images; again, for the web framework it is about Flask, then we use OpenAI, plus HTML and CSS for the front end, which is very basic but mandatory. What are the prerequisites? First, Python installed on your system; next, the required libraries, Flask and openai; and you have to have an API key from OpenAI. Those are the basic requirements to start off with the development of this application. Now let's understand the code for this app, its purpose, and everything we use here; later we will execute it. I hope we are clear, so let's quickly hop over to Google Colab and understand more about this application. Here we are in Colab. First step: if Python is not installed, install it; if it already exists, ignore this, simple as that. Again create a virtual environment; we already had the description of
each and every element of this particular code statement then we activate the virtual environment by using this particular code here next comes installation of flask and open AI it's very important to install the libraries which is necessary for your coding first place we'll be using pip command to install flask open AI it's a simple statement here the code line you can just execute the same then we have to create the project directory again nothing but the new folder it is named as flask uncore textor 2core image if you want to put some other name it's
left to you you have to go to the folder which you have created then only you can start creating your python code file and HTML code file first first thing is main python application code file which is again named as app.py it gives you a proper signification it will not mix with the previous one because the folder is different so again we have to import the necessary libraries we initializing the flask application you have to have a open API key you can replace this your open API key into your original open API key how do
we do that then you have rooting which has to go that is index HTML in this HTML file you have all the designs related front and related that has been fetched and you will be rooted with the help of function call generate image as a post method we'll be using Json post method means the response which you get from chat GPT right either it might might be your user input also access a post and also the response will be also post use open API to generate image based according to the prompt which is been received
the size should be only this much and the number of images generated at once should be one only then the prompt which is given by the user will be pushed to open API it will get a response and then that particular image will be displayed if you want a detailed explanation of what is every line means I have it for you you can just read it what are the different elements we use and why do we use right next you have to create this HTML interface that is the front end again you have to have
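The control flow of that generate-image route can be sketched in plain Python. This is a stdlib-only sketch, not the actual app code: the real app wraps this in a Flask `@app.route` handler and calls the OpenAI images API, but here `generate_image_url` is a stub so the flow can run without an API key. The function and field names are illustrative assumptions.

```python
# Sketch of the POST /generate-image handler logic: read the prompt from the
# JSON body, call the image model (stubbed here), return a JSON-style result.

def generate_image_url(prompt: str, size: str = "256x256", n: int = 1) -> dict:
    """Stub standing in for the OpenAI image-generation call."""
    if not prompt.strip():
        raise ValueError("prompt must not be empty")
    # Real code would call the OpenAI images API with prompt, n, and size.
    return {"data": [{"url": f"https://example.invalid/img{i}.png"} for i in range(n)]}

def handle_generate_image(request_json: dict) -> dict:
    """Mimics the route: fixed size, exactly one image per request."""
    prompt = request_json.get("prompt", "")
    try:
        result = generate_image_url(prompt, size="256x256", n=1)
        return {"status": 200, "image_url": result["data"][0]["url"]}
    except ValueError as exc:
        return {"status": 400, "error": str(exc)}

print(handle_generate_image({"prompt": "a cat sitting on a mat"})["status"])
```

The same shape — validate input, call the model once with a fixed size, hand the URL back to the front end — is what the Flask version implements.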
If you want a detailed explanation of what every line means, it is provided — you can read what each element is and why we use it. Next you create the HTML interface, the front end: a text box where you type your prompt and a button that submits it to the model, which then shows the output on the same screen. A simple text box and a button are all you need; if you want more styling, you are most welcome to add CSS files. This is the general basic setup for the front end. There is also a script with a function called generateImage: it fetches the prompt from the user, sends it to the OpenAI API, and once the response arrives, pushes it back to the front end. That script is just the interaction code between the front end and the back end, written in JavaScript inside the HTML file. To run the code, type `python app.py` — and you must be in the same folder you created at the start; if you navigate to
some other location on your command prompt and try to run it, it will not execute. Let's have a quick recap. First we install Python; next the libraries, Flask and OpenAI; then we create the Flask application, app.py, and add the text-to-image functionality — whatever text is given, a related image is returned — with a route for it; and we build a simple HTML interface as the connection between the user and the system. Then you access and run the code: start the Flask server and go to the same address as before, the local IP ending with port 5000 — that will be your application's address; it runs there and you can try it out. Now let's quickly check how this works in our demo. Here we are in the Python location on the local system, about to execute the Flask text-to-image app. If you open that folder, you will
find the familiar folder structure: the main Python code, plus index.html inside the templates folder. Once the code has been executed, a __pycache__ of compiled files is created — that is why those files appear here. Now go to the command prompt, change into this location, and run the app with the `python app.py` command. So I copy the folder path completely, open the command prompt, and change the directory to the copied location; now we are inside the Flask text-to-image application folder. We run `python app.py` straight away and press Enter — the Flask server is now running and active. Next, go to the address shown, with the 5000 port number. Once you reach that location you can see the basic HTML design we made, and now it's our turn to type a description and generate an image. I'll give just one line of description — you can give a more detailed description so that the model returns exactly the image you require. I'll try "mountain with skylight" — let's mention a color too, that will be good, so "mountain with green skylight". That's my description of the image I need, and I generate it. And this is the image we got back: lights in green on the sky, with mountains right there — the description was taken care of. The more precise the description you give, the more precise the image you get as output. That's how a text-to-image application works with Flask and OpenAI in Python, which also helps generate images for digital content creators or anyone working in a creative field. Having learned about generative AI tools and LLMs, let us know: have you tried any gen-AI tools yet? If so, what did you find most interesting? Share your thoughts in the comments below.
Up next, we level up our skills with LangChain, a powerful framework for building advanced AI applications. We'll explore LangChain for generative AI and even dive into RAG using LangChain. Ready to turn your knowledge into action? Let's get started by transforming theory into practice, learning about Python and LangChain. Let's quickly check the agenda for this session: first an introduction to LangChain; then the basic environment setup we need to build applications using LangChain in Python, with OpenAI integration; then the core concepts we have to understand, followed by the components of LangChain; and finally a LangChain case study where we see how it all works in practice. Why wait — here we are at the first topic, introduction to LangChain. What is LangChain? LangChain is a framework, or library, designed to streamline the development and deployment of applications that utilize language models — that is the exact definition. Why do we often call it a library? Generally, installing libraries is done with a pip command, and we also use pip to install LangChain into the Python environment — so in everyday speech it is a library, no doubt. But it is also rightly called a framework: it provides a set of tools and components that make language
models work in an efficient, stable, and scalable manner — it is like a playground where you build applications out of different kinds of LLMs. What are the key features of LangChain? First, LLM wrappers: interfaces around large language models that LangChain ships ready-made — you can use the prebuilt ones or create new wrappers of your own. Then there are prompts and prompt templates. A prompt is the basic tool a user employs to communicate with a GPT-style LLM: a plain-English statement of your requirement that you put into the system. That's how you work with ChatGPT — you state your requirement in detail, and you get the output accordingly. The prompt acts as the language between the model and the user, as simple as that.
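The prompt-and-template idea can be shown in a few lines. This is a stdlib sketch using `str.format` — LangChain's own `PromptTemplate` class follows the same pattern of a reusable prompt with named slots, but the template text and slot names below are made-up examples.

```python
# A reusable prompt skeleton with named slots, filled in per request.
TEMPLATE = (
    "You are a helpful assistant.\n"
    "Write a {length} explanation of {topic} for a {audience}."
)

def render_prompt(topic: str, audience: str, length: str = "short") -> str:
    """Fill the template's slots to produce the final prompt text."""
    return TEMPLATE.format(topic=topic, audience=audience, length=length)

print(render_prompt("generative AI", "beginner"))
```

The benefit is reusability: one template, many concrete prompts, all with consistent clarity and structure.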
Prompt templates, then, are reusable prompt skeletons. They behave like the suggestions you see in modern tools: as you type into a GPT interface or a coding platform, the system proposes the next word or next line — Visual Studio does this with code, if you have observed — because a template already exists or can be auto-generated when required. Next you have chains. Before the LangChain meaning, think of an ordinary chain: interlinked elements, each one connected to the next. Likewise, when you build a complete project module by module, you need to link those modules to make one whole project — chains are what link the modules together. That's the use of a chain. Then there are embeddings and vector stores: if you want to bring content in from outside the world of LangChain, Python, or OpenAI, you embed it, and the vector store is where that embedded content is kept.
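The module-linking idea behind chains can be sketched without any library at all. Below is a stdlib illustration of the three chain styles discussed in this section — simple (one step), sequential (steps in strict order), and conditional (a step that runs only if a predicate passes); the "modules" here are toy string functions, not real LLM calls, which is an assumption for demonstration only.

```python
# Chains pass the output of one module into the next.

def sequential_chain(steps, value):
    """Run each step in order, feeding each output into the next step."""
    for step in steps:
        value = step(value)
    return value

def conditional_chain(predicate, then_step, else_step, value):
    """Run then_step only if the condition holds, otherwise else_step."""
    return then_step(value) if predicate(value) else else_step(value)

# Toy "modules": clean the text, then uppercase it.
clean = lambda s: s.strip()
shout = lambda s: s.upper()

print(sequential_chain([clean, shout], "  hello lang chain  "))  # HELLO LANG CHAIN
print(conditional_chain(lambda s: len(s) > 3, shout, clean, "hi "))
```

LangChain's chain classes wrap LLM calls in exactly this way, so each small module stays simple and the chain composes them into the whole project.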
A vector store, in other words, is the internal storage where that content lives — that's simply how we address it. Those are the key features of LangChain; hopefully you now have an overview and understanding of them. Next, why do we need LangChain? To overcome certain challenges. When you work with LLMs directly, integration issues can come up, and scalability is difficult — you serve 20 users today and then need 50, or 200, 300, and so on. By providing a modular, user-friendly framework, LangChain gives you an open window to explain and demonstrate your ideas and requirements, build a model, and get the job done. It has applications across domains, and its main aim is to reduce human effort: it brings advanced processing capabilities to customer support, intelligent automation, and content recommendation systems — whatever model you build for your
requirement, LangChain gives it advanced support and processing capabilities. Now, what are the use cases — where do we use LangChain? It is used in practically every domain you can name; here are a few. First, customer support: a chatbot built with the LangChain framework makes customer support run smoothly without human intervention, and only when the chatbot cannot solve an issue does a human step in — minutes of time are saved, human resources are saved. Next, content generation: if you are a blogger or content writer and want ideas on how to elaborate your content or create a post, you can simply prompt "I need a blog on Python", take the result as your inspiration, modify it, and shape it to your requirement. Then intelligent automation: it's not only chat assistants — if you want to build a basic robot or automated agent, you can use this framework to give it a kind of thinking ability through the language models available in LangChain. Semantic search: if you work in the SEO or search-engine domain and want to be proficient there, LangChain supports semantic search; it is equally useful for people working purely with data, whether on the content-creative side or the technical side. Finally, personalized recommendation: say you want something for yourself — an automated mail assistant that surfaces the tasks you have for the day, or any other personal prompting system — you can create a personalized recommendation model with the help of LangChain. Those are
the main use cases and applications of LangChain. Now let's look at the software and environment requirements for LangChain — are they heavy or simple? We already have Python in our system, and a basic Python installation is all you need: LangChain supports Python 3.7 and later, and from there you can import the library and work across it. What about libraries and dependencies? They are very common ones: PyTorch and TensorFlow, Transformers, NumPy and pandas for data manipulation and analysis, and Flask or FastAPI for deploying web services. TensorFlow covers training and inference, and Transformers provides pre-trained language models. Not every one of these libraries will be used in every project — it's a broad set you draw on as needed. As for the development environment — where you develop, in which interface — Google Colab or Jupyter Notebook are the preferred IDEs, but it is not restricted to these: any IDE that supports Python will support LangChain as well, because they are integrated. Now let's move on to the development environment setup — we've touched on it already, but let's see in detail which commands we use. First the prerequisites, as mentioned: a Python IDE and Python 3.7 or a later version, and the installer we use is always pip, the Python package installer.
We generally speak in terms of the pip command. To install LangChain, just open your command prompt and run `pip install langchain` — but before that, ensure you are in a directory where Python is installed and on the path; then the install will work. Then verify the installation — very simple: go to the Python interface and type `import langchain`. If you get no error, nothing — just a plain prompt back — that means it has been installed properly. If you want explicit confirmation, add a print statement such as `print("langchain is successfully installed")`: if the import succeeded, the next line prints that message, and if it is not installed, execution never reaches the next line — which makes it clear the install did not go through. These are the verification methods you can try to know whether it is installed properly or not.
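The same check can be scripted so it works even when the package is absent. This sketch uses the standard library's `importlib.util.find_spec`, which returns `None` when a package cannot be imported — so the script reports the status instead of crashing on a failed import.

```python
# Check whether packages are importable before trying to use them.
import importlib.util

def is_installed(package: str) -> bool:
    """True if `import package` would succeed for a top-level package."""
    return importlib.util.find_spec(package) is not None

for pkg in ("langchain", "definitely_not_a_real_package"):
    status = "successfully installed" if is_installed(pkg) else "not installed"
    print(f"{pkg}: {status}")
```

This is handy in setup scripts: you can print a friendly "run pip install langchain" hint instead of letting an ImportError stop the program.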
Now let's quickly run this in a command prompt and see how the installation happens. As I said, first I navigate to the Python location installed on our local system, then type `pip install langchain` and press Enter. Once I do, pip analyzes the environment — "Requirement already satisfied" — which means I have already installed LangChain; that is why there is no download progress. It confirms everywhere that not only LangChain but also its related packages are installed, and it also suggests a newer pip version is available with the upgrade command you can run if you like. Then it returns to the prompt. To check the install, launch `python` to enter the Python interface — here it is 3.10, and as mentioned, anything from 3.7 onward supports this library. The verification method is the one I described: type `import langchain`, and when it moves to the next line without complaint, the import worked. To confirm, you can also type `print("langchain is successfully installed")` — if that executes, the interpreter has obviously gotten past the import statement; had `import langchain` failed, it would have shown an error by then. So that's how you verify LangChain is installed properly: run the install (if it is already installed you get the "requirement satisfied" output), then enter the Python prompt and give the import commands. We had a successful demonstration of both what installation looks like and how to verify with the import command — the normal command prompt to install, the Python prompt to verify. That's all about the development environment setup.
Now here we are on the LangChain core concepts topic. First, understand that LangChain is built around the concept of modularity — we use this library to work in modules and then compose them into a bigger project. It always lets users work with language models easily by creating and managing chains and agents, which can maintain state using memory. So the core concepts are chains, agents, and memory; let's understand each in turn. First, chains: chains are sequences of operations or components that process inputs and generate outputs, often using language models — sequential operations on the components inside the language-model pipeline, generating output step by step. What types of chains can you find? It's easy: a simple chain is just a single statement; a sequential chain must run its steps strictly one after the other — that is its rule; and with a conditional chain, an if-case comes into the picture: only if a given condition is satisfied does execution proceed to the next chain or agent. So you can put a condition on your chain, make a chain execute mandatorily in sequence, or keep a simple chain that merely ties your small modules together. That's chains. Now, agents: agents are entities that make decisions based on inputs, interact with chains, and utilize memory for maintaining state.
We need someone to carry the communication between the linked modules: a chain links module A to module B, but who travels from A to B? You need an agent — like a salesman who goes from one location to another, from one module to another. The agent is as simple as that: it establishes communication between the chains, between the small modules you have connected. What types of agents are there? Reactive, proactive, and interactive. A reactive agent responds only when you prompt it — for every action there is a reaction, and that's how it works. A proactive agent always keeps an eye on what's happening: which module communicated with which, the log history, the session history — when users logged in and out, what happened, how long the session lasted. It proactively follows the user through the project or content. So a reactive agent reacts only when provoked, while a proactive agent notes every action you take with the system without any provocation. Interactive agents are just like ChatGPT: you interact with the system through prompts, code, or any other medium. Those are the types of agents. Now let's talk about memory: memory in LangChain allows agents to maintain state across interactions — it's not that you interact once and everything is forgotten; the state has to be kept.
Even on/off is a state, and yes/no is a state — and keeping state is what enables more contextual awareness and coherent responses. There are two kinds, short-term and long-term memory, much like RAM and ROM as the names suggest. Short-term memory lasts only for that particular session or thread — once it ends, it is done. Long-term memory gets logged on the side of your project: it records when things happened and what happened, and you can reuse those details across multiple contexts as required.
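The short-term/long-term split can be sketched in a small class. This is a stdlib illustration of the idea only — LangChain's memory classes are richer — and the class and method names below are invented for the example.

```python
# Short-term memory lives for one session; long-term memory is the log
# that survives session resets.

class ConversationMemory:
    def __init__(self):
        self.short_term = []   # cleared when the session/thread ends
        self.long_term = []    # kept across sessions, like a project log

    def add(self, role: str, text: str):
        entry = (role, text)
        self.short_term.append(entry)
        self.long_term.append(entry)

    def end_session(self):
        """The thread is gone, but the log remains."""
        self.short_term.clear()

mem = ConversationMemory()
mem.add("user", "hello")
mem.add("assistant", "hi there")
mem.end_session()
print(len(mem.short_term), len(mem.long_term))  # 0 2
```

After `end_session`, the short-term buffer is empty but the long-term log still holds both turns — exactly the RAM-versus-ROM behavior described above.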
Short-term memory, by contrast, is used on the spot: you use it there, finish the work, close it, and you're done — you cannot go back to find what happened. Memory is one of the most important core concepts, used together with chains and agents — and that is all of LangChain's core concepts. Now let's learn about LangChain's components, which involve a few major aspects. The first is prompts — we are all familiar with prompts nowadays because of ChatGPT.
Prompts are essential for guiding the behavior of language models. From a layman's perspective, a prompt is the interaction between the model (or bot) and the user — the language we use to communicate. Prompts define the instructions or questions presented to the model: we ask it certain questions and it gives us answers. There are also options to upload a file and include a statement in the prompt such as "analyze the uploaded spreadsheet and give me such-and-such output" — the model has the ability to analyze it as well; we are evolving with language models a lot, and quickly. When designing prompts, three things matter: clarity, context, and specificity. You cannot put the machine into confusion with muddled instructions. First be clear about what you want, your requirement; then give the context — "this is the situation, I need a requirement like so-and-so"; and be specific — "I want exactly A, B, C, D; I don't want E, F, G, H". A prompt with all three aspects is a complete prompt. It is also important to make the machine understand — that's our basic job: having created the model doesn't mean it works by itself, we have to train it to work properly. Best practices: keep prompts very simple and avoid complex instructions that even a human could not process — those only confuse the model; the more properly you interact with the model, the better it gets. Test and iterate: continuously test your prompts and refine them based on the outputs you receive. And use templates for consistency and reusability: if a prompt is ready-made and used by multiple people, or your work is monotonous, you can just use a template for the prompting
as well. So prompts help LangChain — or any language model — work well. Next you have models: LangChain supports various models, each suited to different tasks. GPT comes first — the original GPT has evolved a lot, and there are variants like GPT-3.5 Turbo 16k among many available models. There are Hugging Face models, for example BERT and GPT-2 — again, these have evolved too; they're just examples. And there are custom models: models integrated from other libraries, or custom builds integrated into your own website or personal tooling. LangChain will always support different types of models — it is not restricted to "I'll only support GPT" or "I'll only support BERT"; it has an open forum to support many languages and models. Now the third component: tools. LangChain offers various tools to enhance the functionality of language models: text processing tools, data augmentation tools, and evaluation tools. In text processing you
find tokenization, stemming, and lemmatization tools. For data augmentation, there are tools to generate synthetic data for learning. Evaluation is very important too: metrics and evaluation tools let you assess model performance — how it is working, the results, the analytics, as you might call it. You don't just generate a model, keep it aside, and start using it: to know its progress, its drawbacks, and its faults, and to improve it, you have to evaluate it in a proper way.
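What those text-processing tools do can be shown with a tiny stdlib sketch: a whitespace/punctuation tokenizer and a deliberately naive suffix-stripping "stemmer". Real pipelines would use a proper NLP library — this only illustrates the operations, and the suffix rules are a simplifying assumption, not a real stemming algorithm.

```python
# Minimal tokenizer + naive stemmer, for illustration only.
import re

def tokenize(text: str) -> list[str]:
    """Lowercase and split into alphanumeric tokens."""
    return re.findall(r"[a-z0-9]+", text.lower())

def naive_stem(token: str) -> str:
    """Strip a few common suffixes when the remaining stem is long enough."""
    for suffix in ("ing", "ed", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

tokens = tokenize("Generating images, generated stories")
print([naive_stem(t) for t in tokens])
```

Notice how "generating" and "generated" both reduce to the same stem — that normalization is exactly why tokenization and stemming help language-model pipelines compare and index text.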
It's just like software development: you always follow development with software testing, and just as there are types of testing, evaluation has its metrics — LangChain supports evaluation metrics to help you improve model performance. Next, the fourth component: data loaders, which facilitate the loading and processing of data for language models. Recall the earlier example of uploading a file to the chat model and asking it to analyze the spreadsheet — file loading is essential for that, and data loaders are the tool we use. Two loader types are covered here: the CSV loader, which loads data from CSV files, and the JSON loader, which loads data from JSON files — in LangChain specifically, the data-loader component supports these kinds of files. So that covers the components of LangChain; hopefully they are clear.
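The loader idea can be sketched with the standard library's `csv` and `json` modules: read structured files into Python records the way LangChain's loaders turn rows into documents. The sample data below is invented for the example.

```python
# Minimal CSV and JSON loaders built on the stdlib.
import csv
import io
import json

def load_csv(text: str) -> list[dict]:
    """Each CSV row becomes a dict keyed by the header row."""
    return list(csv.DictReader(io.StringIO(text)))

def load_json(text: str):
    """Parse a JSON document into Python objects."""
    return json.loads(text)

sample_csv = "name,setting\nAlice,Enchanted Forest\nBob,Space Station\n"
rows = load_csv(sample_csv)
print(rows[0]["name"], "-", rows[1]["setting"])
```

Once the data is in plain records like these, it can be fed to a language model as context — which is all a data loader really does.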
To recap the components once more: data loaders, tools, models, and prompts — those are the major components to understand. What's next? The LangChain case study. We'll walk through a simple case study in which you generate your own story with the help of OpenAI, Python, and the LangChain library or framework, together with the API key you generate. I'll explain the code we use, the installation we have to make, and how everything fits together in Google Colab, and then we will execute it at the command prompt. Why Google Colab? It's simply an easy place to teach from; we'll use the command prompt for the run itself. How to install all the required libraries, where to get and save the files, and how to lay out the folder structure are all explained — so let's quickly hop over to Google Colab. Here we are. What is the case study? First thing: a personalized story generator.
That means it takes certain inputs from the user — the character names, the setting, and the theme of the story — and generates a unique story every time. Why am I stuck with GPT-3.5? GPT-4 and beyond exist, but on the OpenAI API platform used here the option is still GPT-3.5 itself — the 16k / 1105 turbo variants of GPT-3.5, among other model codes — so for now it is 3.5 on the API platform; don't confuse this with ChatGPT 4 or 4o. The steps to create this project are very simple: set up the environment, collect the user inputs, generate the story using the AI model, and display the generated story. For the environment, we have to install OpenAI and LangChain — related to OpenAI and to Python — so `pip install openai langchain` covers both libraries
we have to install. Then, to collect input from the user, this project uses two different Python files — that is the differentiation: in the previous demonstration we had one Python file and one front-end file. What do these do? The first one is user_input.py: it welcomes the user and takes the input. You can see "Welcome to the personalized story generator" — please enter the main character's name, then the setting of the story, then the theme of the story (for example adventure, mystery, horror, anything like that) — and it returns the character, setting, and theme to the other file, story_generator.py. So you take input with one Python file and hand the collected input to the other. In story_generator.py we import the chain, prompt, and text-model pieces from LangChain, plus the user-input file: get_user_inputs supplies the collected character name, setting, and theme, which you take as input into the story-generator file. A function there creates the story according to the inputs given by the user: you have a prompt, you work with the text model — GPT-3.5 Turbo here — and here comes your OpenAI API key: you have to put your own secret key in place of the placeholder. Then the main block executes and gives you the generated story, printed as "Generated story:" followed by a paragraph of story text. That's how the main block runs, and those are the commands and code we use — every line has a self-explanatory comment you can read and understand, and if you don't follow it here, no worries: this learning material and code are always provided, so you can go back and rework through it. Then, to display the story, you have to execute the file.
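The two-file layout described above can be condensed into one runnable sketch. Here `get_user_inputs` plays the role of user_input.py and `generate_story` plays the role of story_generator.py; the real project calls the GPT-3.5 chat API through LangChain with an API key, so `call_model` below is a stub — an assumption that lets the flow run without a key.

```python
# Sketch of the personalized story generator's two-file flow.

def get_user_inputs(character: str, setting: str, theme: str) -> dict:
    """user_input.py equivalent: collect and return the three inputs."""
    return {"character": character, "setting": setting, "theme": theme}

def call_model(prompt: str) -> str:
    """Stub standing in for the GPT-3.5-turbo call."""
    return f"[story generated from prompt: {prompt}]"

def generate_story(inputs: dict) -> str:
    """story_generator.py equivalent: build the prompt, call the model."""
    prompt = (
        f"Write a {inputs['theme']} story about {inputs['character']} "
        f"set in {inputs['setting']}."
    )
    return call_model(prompt)

if __name__ == "__main__":
    inputs = get_user_inputs("Alice", "Enchanted Forest", "adventure")
    print("Generated story:", generate_story(inputs))
```

Swapping the stub for a real LLM call turns this into the project itself: the prompt assembled from the three inputs is exactly what gets sent to the model.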
How do you execute it? With Python, of course: `python story_generator.py` — that's how it runs, simple as that. A quick recap: first the environment setup — Python on your system plus the two libraries, openai and langchain, and your own API key, which is secret; then the main script, story_generator.py, and the helper script, user_input.py; then the story-generation functionality — you use OpenAI's chat model, give it the character name, setting, theme, etc., it develops the story, and executing the script gives you the finished story back. As simple as that — hopefully it's very clear. Now let's see the demonstration: the output and how it works. We are executing the personalized story generator we have already discussed, which uses the LangChain library.
is: you have two different Python files, story_generator and user_input, as already explained. user_input takes the input from the user, and story_generator is the main app. You should not execute python user_input.py; you have to execute story_generator.py, and that is how you get the output. One nice thing about this kind of execution is that all the output appears on the command prompt itself, so we don't need to navigate anywhere else for it. What is this __pycache__ folder? If you click on it after running your code, you will see it holds the compiled Python files that are generated automatically; that is why it is there. Now let's quickly hop over to the command prompt and execute this code file. To do that, first copy the location of your app's folder. On the command prompt, change the directory by pasting the location we copied and pressing Enter; we are now in the folder called personalized story generator. What do we have to do next?
We'll execute python story_generator.py. Once you press Enter, it starts executing: "Welcome to the personalized story generator." I type the name of the main character, Alice; then it asks for the setting where the story should happen, and I say an enchanted forest, giving GPT a location to visualize. I press Enter and it asks for the theme: it could be mystery, adventure, horror, whatever you like. I enter adventure, press Enter, and the story is generated.
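For reference, the two-file setup demonstrated here might look roughly like the sketch below. The file names, the GPT-3.5 Turbo model, and the prompts follow the walkthrough, but the demo's exact source code is not shown line by line, so treat this as an illustrative reconstruction, not the video's actual file:

```python
# story_generator.py -- illustrative sketch of the demo's main script
import os

def get_user_inputs():
    """Collect the story parameters (the role user_input.py plays in the demo)."""
    name = input("Enter the main character's name: ")
    setting = input("Enter the setting of the story: ")
    theme = input("Enter the theme (mystery/adventure/horror): ")
    return name, setting, theme

def build_prompt(name, setting, theme):
    """Combine the user's choices into a single prompt for the chat model."""
    return (f"Write a short story about a character named {name}, "
            f"set in {setting}, with a {theme} theme.")

def generate_story(name, setting, theme):
    """Send the prompt to GPT-3.5 Turbo and return the generated story.

    The openai import is kept inside the function so the prompt-building
    logic above works even without the package or an API key set up.
    """
    from openai import OpenAI
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])  # your secret key
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": build_prompt(name, setting, theme)}],
    )
    return response.choices[0].message.content

def main():
    print("Welcome to the Personalized Story Generator!")
    name, setting, theme = get_user_inputs()
    print("Generated story:", generate_story(name, setting, theme))

# Run from the folder containing this file with:
#   python story_generator.py
# (after adding: if __name__ == "__main__": main())
```

Note that the demo routes its model call through LangChain; the direct openai client above is a simplification of the same flow.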
You can read the generated story by pausing the screen; it includes the main character, the setting, and the theme of the story, and gives you a complete paragraph you can use for your requirement. That is how a story generator works using LangChain, and you can create many more applications; this is just one basic example. Hope this is clear for you. Now, since you know what a large language model is, we are going to discuss the limitations of large language
models, and why we will switch to RAG. Before that, we should understand the limitations of LLMs. The first limitation is computational resources: training the model and deploying it with an LLM requires high computational cost. Next is data dependency: LLMs use vast amounts of data, so they are heavily dependent on it. After data dependency comes performance issues: LLMs sometimes produce factually incorrect output, which is referred to as hallucination. After performance issues, the next limitation is ethical and social concerns. As you know, LLMs use very large amounts of data, and that data might contain sensitive information, so we have to be very careful about the privacy of the data and about the ethics and legal questions around the data we use. After that comes explainability, and last is generalization limits. These are all the limitations of large language models, and because of them we will move to RAG, which is used to overcome those limitations. Let's understand retrieval-augmented generation, that is, RAG. First we'll discuss the definition, then why we use RAG, its purpose, and last we're going to understand how it
works. But before we start, I'll tell you the meaning of "retrieval": the R basically works like a brain that searches for all the information. Then there is the "generation" part: generation is the creative part of RAG, which works on the data that was found through retrieval. The last part is the knowledge base; once we finish the definition, the purpose, and how it works, we'll cover the knowledge base as well. The knowledge base is essentially where all the information gathered for the retrieval and generation steps comes from. So first, the definition of RAG: RAG combines language generation with information retrieval to enhance text generation. It basically makes the output more accurate and reliable, using external data sources to provide more accurate and relevant responses, as I said before. Next comes the purpose of RAG.
The main goal is clear: if we are working with data, training on it, and deploying a model, we need accuracy, quality, and reliability, and RAG gives us all three. The purpose of RAG is to improve the quality and accuracy of language model output, because, as I discussed under the limitations of LLMs, there is a performance issue: the output we get through an LLM is sometimes factually incorrect, yet the model presents it very confidently, which is referred to as hallucination. RAG is used to overcome that limitation: it improves quality and accuracy, so the output we get is better, more accurate, and more reliable. RAG also helps models provide up-to-date information and contextually rich responses. In brief, the output we get through RAG is very accurate, very reliable, and, at last, very up to date. The next point is how RAG works. RAG retrieves relevant information from a database or the internet before generating text; the database could be SQL or any other store. This retrieved information is then used to produce a more informed response, with better output and up-to-date information. That's how RAG works. Next, you can see on the screen the benefits
and examples of RAG. First, the benefits. RAG improves factual accuracy; this is the first and biggest benefit, because factual accuracy is something we sometimes don't get from LLM-generated text. It allows the model to provide answers based on the latest available data: whatever information is available, it tries to provide the accurate, up-to-date outcome from it. After that come the use cases of RAG. Common applications include chatbots, virtual assistants, and content creation tools. You already know chatbots, and virtual assistants respond to whatever query you ask; in all these applications, RAG gives us proper accuracy. RAG is useful in any scenario where accurate and current information is crucial. After that, let's discuss the examples of RAG. As you can see on the screen, Google's search-augmented models and OpenAI models using retrieval plugins are well-known examples of RAG systems, and these systems set the benchmark for integrating retrieval with generation. In OpenAI's case, models like ChatGPT benefit from this retrieval because the accuracy of the output is far better with RAG. So Google search-augmented models and OpenAI models nowadays use RAG to get proper outcomes in a very accurate manner. Let's dive deeper into RAG. The next concept is the basic
terminologies of RAG. Let's understand the basic concepts one by one. First is the retrieval component. As you can see on the screen, it searches for relevant information; it works like a brain hunting for information in a database or another source such as the internet. This component ensures the model has access to up-to-date data: the key point is that whatever it searches for should be current, because the final outcome can only be accurate if it is built on up-to-date, contextually relevant data. It should also be relevant to what we are actually searching for, not something else. Next is the generation component. The generation component uses the retrieved data, the information found by the retrieval component, to generate a response based on that up-to-date information. This ensures the generated text is both coherent and factually accurate: generation takes the information from the retrieval component, starts working on it, and tries to produce an accurate, reliable outcome. Now the next component
you can see here is very important: the knowledge base. These three are the essential parts of RAG. The knowledge base is the source of the information the retrieval component draws from. As you can see on the screen, it can include databases or other sources from which the retrieval component retrieves the info: it could be a website or a database like SQL, basically any structured collection of information from which we can get proper, up-to-date data. Wherever we retrieve the data from is the knowledge base. Once we retrieve the important, useful, up-to-date information, it goes to the generation component, which starts using that information to generate a response that will be very accurate and reliable. The next one is
contextual understanding. As you can see on the screen, this allows the model to interpret the user's query accurately, which helps in retrieving the most relevant information for generating the response. It understands the context behind the retrieved information and the query: only if the model interprets the query properly will the generated response be accurate and reliable. That is contextual understanding. Next comes relevance scoring. Relevance scoring ranks the retrieved information based on its importance to the query, ensuring that the most pertinent information is used in the generation process; basically, it gives you assurance that the information being used is the most relevant for generating the response. After that comes integration of retrieval and generation, the final part, where the two get merged. Seamless integration of the retrieval and generation components is crucial for RAG, because it allows real-time information retrieval and response generation. When the retrieval step has gathered the proper information and the generation step begins, that seamless hand-off is the integration of retrieval and generation. Now we will look at the benefits of using RAG with LLMs, and why we pair them. As we have already talked about the limitations of LLMs, to overcome those limitations we use
RAG, because RAG is more beneficial with respect to hallucination in LLMs, which we have discussed, along with the other limitations we face while using LLMs. Let's go through the benefits of using RAG with LLMs step by step. First is enhanced accuracy. As you know, while using an LLM we have the limitation that accuracy is not always up to the mark. RAG works on this by retrieving information that is up to date, and if the information is up to date, you will of course get a correct response in the end. It ensures the generated text is reliable and contextually correct: it should not be that we are working today and getting yesterday's answer. RAG always tries to fetch the exact, up-to-date information we have today, and
based on that, the response we get will obviously be more reliable and correct. Next is contextual relevance. By retrieving relevant information, RAG ensures that responses are contextually appropriate and relatable, which leads to more meaningful and relevant interactions with the user. It tries to cut down the irrelevant information and fetch only the relatable data, so the user gets a proper response. Next comes handling diverse queries. RAG allows the model to handle a wide range of topics by accessing a vast knowledge base, and it enhances the model's ability to answer questions beyond its pre-trained knowledge: even beyond what the model was trained on, it tries to give accurate and proper answers based on the user's query. Next is reduced hallucination, a very important point and a big reason we focus so much on RAG with LLMs nowadays. It helps minimize hallucination, where the model generates incorrect or nonsensical information. While using large language models we often face this problem: the model confidently gives you incorrect information, and the response built on it is of course inaccurate. RAG tries to minimize that hallucination; as you can see here, the retrieval of accurate data supports more credible and coherent text generation. So, in the end, the response we generate will
be more correct, accurate, reliable, relevant, and up to date as well. The next benefit is scalability. As you can see on the screen, RAG can scale to include new information without retraining the model: you do not need to train the model again and again. If you are working with some information and more up-to-date information arrives later, RAG can include that new information without any retraining. It also makes it easy to maintain and update the model's knowledge base: when new data is stored in the knowledge base, the system picks it up, which keeps maintenance and updates simple. The next one is versatile applications. RAG is beneficial for various applications such as customer support, content creation, and virtual assistants; it has many applications where we can use it. Its ability to provide precise and relevant information enhances the effectiveness of these applications, and the point is that we use RAG here precisely because these applications depend on accuracy the most, and because of RAG they provide more effective outcomes. These
are all the benefits of using RAG with large language models; we'll also understand this with a simple example later. Next come the key components of retrieval. The first is the retrieval component overview: the retrieval component is responsible for finding relevant information to support the query response. If a user asks a query, it starts working, trying to find relevant, related information based on the query so that generation can provide an accurate response. It does not rely only on the pre-trained knowledge base; it also uses the internet to get relevant information based on the user query, accessing external data sources, databases, or the web for up-to-date info. After that comes the knowledge base, again a very important component: it includes the collection of all the relevant information, documents, and articles we gather through other sources, and it serves as the primary source of information for the retrieval process. After that is the efficient search mechanism. Here you can see that the retrieval component first understands the query context in order to search effectively; it involves parsing the query to identify key terms and concepts. If you ask a query, it makes the search process more efficient by fetching the important key terms out of your query and working on them to give a correct and accurate response. The next one is the information retrieval algorithm: as you can see here, algorithms like BM25 or TF-IDF are used to find and rank relevant documents.
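As a rough illustration of how such scoring works (a toy version, not the tuned implementations real retrievers use), here is a minimal TF-IDF-style ranker over a tiny, made-up document set:

```python
import math

def tf_idf_scores(query, documents):
    """Score each document against the query with a simple TF-IDF sum.

    tf  = how often a query term appears in a document
    idf = log(N / number of documents containing the term), so terms
          that appear in every document contribute almost nothing.
    """
    n_docs = len(documents)
    tokenized = [doc.lower().split() for doc in documents]
    scores = []
    for tokens in tokenized:
        score = 0.0
        for term in query.lower().split():
            tf = tokens.count(term)                      # term frequency
            df = sum(1 for t in tokenized if term in t)  # document frequency
            if tf and df:
                score += tf * math.log(n_docs / df)
        scores.append(score)
    return scores

docs = [
    "new AI research advances were reported this year",
    "a recipe for tomato soup",
    "AI research on quantum computing and more quantum computing research",
]
scores = tf_idf_scores("quantum computing research", docs)
best = max(range(len(docs)), key=lambda i: scores[i])  # highest-ranked document
```

BM25 refines this same idea with term-frequency saturation and document-length normalization, which is why production retrievers prefer it over raw TF-IDF.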
As you already know, these are the documents most related to the user's query. The algorithms score documents based on their relevance to the query terms, and based on that scoring we can say we are getting an accurate response, because only the most relatable material produces an accurate final answer. Next comes relevance scoring: retrieved documents are scored to determine their importance to the query and prioritized by that ranking, with the most important in first place, and so on. Higher-scoring documents are prioritized for generating the response and move on to the generation stage. After that, again, you can see the efficient search mechanism, now as advanced search: this mechanism ensures quick and accurate retrieval of the information, and that efficiency is crucial for real-time applications like chatbots and virtual assistants. Next come the key components of generation. Let's talk first about the generation component overview: the generation component uses the retrieved information to create a response that is highly relevant to the query, integrating the retrieved data into a meaningful and contextually relevant reply. Whatever information the retrieval process gathered from different sources, generation integrates based on the ranking. After that, you can see contextual integration: the model integrates context from the query and the retrieved information, which ensures the response is not only accurate but
also very appropriate to the user's query. Then comes the language model itself: the generation component uses various advanced language models like GPT-3 and GPT-4, which have been pre-trained on vast data to generate human-like text. If you ask a query, you can only understand the response if it comes back as human-like text; you could not make sense of it in some encoded form. So it generates human-like text to be easy for the user to understand, and it can do so because these models were already pre-trained on vast amounts of data. Next comes coherence and fluency: the generated text is designed to be coherent and fluent, resembling natural human conversation, and this quality is critical for user satisfaction and engagement. For example, suppose you have asked the model a query and want to refine the result: the response is not inaccurate, but you want something more. You can simply continue the conversation in a natural, human way, refining the response based on your query, and you will get a more accurate outcome. That quality is critical for user satisfaction and engagement. Next
is dynamic content generation. The generation process is dynamic, allowing the model to create diverse and adaptive responses. It can handle various types of queries, from factual questions to conversational prompts; conversational means you keep asking follow-ups based on the responses you get, refining your query, and the model keeps giving you better versions of the response. This is a very important component of generation. The next is response optimization: the model continuously optimizes its responses based on feedback and new data. Feedback means that if you want something more from a response, a feedback loop kicks in, and if something new comes up, the model optimizes the response using that new data. This adaptive learning improves not only the accuracy but also the relevance of future responses, because the output is continuously optimized based on updated information and the feedback we give on generated responses. Now, having discussed the benefits of using RAG with LLMs, let's look at RAG's effectiveness through a simple example. First, a query example: consider a user asking, "What are the latest advancements in AI?" A standard language model might provide a general answer based only on its training data, because that is generally how a standard language
model provides its answer. Next is the retrieval process: the RAG model first retrieves recent articles and research papers on AI advancements. This is the up-to-date information at work; RAG always uses current information, which ensures the response includes the latest information available. It does not work only on the trained data: if there is up-to-date information regarding your query, it tries to fetch it without retraining on the data. Then comes the generation process: using the retrieved data, the model generates a detailed and up-to-date response, because the retrieved data is already current, so once generation starts it gives you a current response in a detailed manner. This response is more accurate and informative compared to a standard model's answer, the reason being that it does not only work on the training data; if there is something new related to the query you are asking about, it works on that too before giving the final response, so the accuracy is
higher here, and the answer more informative. Next is comparative effectiveness. A standard model might say, "AI is improving rapidly with new technology." In contrast, a RAG model could say, "Recent advancements in AI include developments in quantum computing and ethical AI frameworks, as reported in the latest research papers from 2024." You can see the difference between the response we get through the standard model and the response we could get through the RAG model. Next comes user satisfaction, and the name explains itself: the detailed, current response provided by RAG enhances user satisfaction. Comparing the two replies above, from the standard model and the RAG model, the second is clearly more satisfying; we properly understand what the recent advancements in AI are. That satisfaction should be there for the users: they receive precise, relevant, correct, reliable, up-to-date information, which improves their overall experience. That is why we work on large language models, and on RAG in particular: to give users a satisfying experience. Next come the practical implications. As you can see on the screen, this example highlights how RAG can be more effective in real-world applications and how helpful RAG is
in terms of a better user experience; it demonstrates the practical benefits of integrating retrieval with generation for better AI interaction. Now for the next part: the workflow of RAG, how RAG works, step by step. The first step is the user query: the process starts when a user inputs a query or prompt into the system. This query can be a question or a request for information; for example, if I want to know about the latest advancements in AI, I can ask as a request for that information, or provide any text input requiring a response. After the user query comes the retrieval stage, where RAG starts doing its work: the model searches for relevant information based on our query from a predefined knowledge base or the internet. It does not depend only on the predefined knowledge base; it also tries to retrieve information from the internet.
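To make the retrieval stage, and the ranking and generation stages that follow it, concrete, here is a toy end-to-end sketch. The hard-coded document list stands in for a real knowledge base or web search, the word-overlap score stands in for a proper ranking algorithm like BM25, and the final string stands in for an actual LLM call, so this only illustrates the shape of the pipeline:

```python
def retrieve(query, knowledge_base):
    """Retrieval stage: score every document by word overlap with the query."""
    q_terms = set(query.lower().split())
    scored = [(len(q_terms & set(doc.lower().split())), doc)
              for doc in knowledge_base]
    # Ranking stage: most relevant documents first, drop zero-overlap ones.
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored if score > 0]

def generate(query, ranked_docs):
    """Generation stage (stub): a real system would prompt an LLM with the
    query plus the top-ranked documents injected as context."""
    context = " ".join(ranked_docs[:2])  # keep only the top-ranked context
    return f"Answer to '{query}' based on: {context}"

knowledge_base = [
    "2024 papers report advances in quantum computing for AI",
    "tomato soup is best served warm",
    "ethical AI frameworks were a major 2024 research theme",
]
response = generate("latest AI research advances",
                    retrieve("latest AI research advances", knowledge_base))
```

In a real deployment, retrieve would query a vector store or search API, generate would call a chat model with the retrieved passages in the prompt, and the feedback loop described later in this workflow would refine both steps based on user reactions.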
It tries to collect all the related, relevant information, and once it has retrieved the most relevant information, articles, and data for the query, it moves to the next step: ranking the retrieved information. We have already discussed this briefly: the retrieved information is ranked based on its relevance and importance to the query, that is, how closely it relates to what the user asked, and the most relevant information is prioritized for the next stage. This step ensures the most important data is prioritized, because if we do not rank and prioritize the most prominent, most relevant data, the accuracy might vary; the ranking process exists to promote the most important items out of everything retrieved. After that comes the generation stage, which we have also discussed briefly: the model uses the top-ranked retrieved information to generate the response for the user. It involves creating coherent and contextually appropriate text that addresses the user's query accurately and clearly. It works from the ranked, retrieved information, the top-prioritized content, which is
most relevant to the query; based on that top-ranked information, it starts generating a detailed, accurate response that is directly relevant to what was asked. That is the generation stage. Next is response delivery: the generated response, built from the information retrieved and ranked in the earlier stages, is delivered back to the user. We have worked on your query, retrieved the information, prioritized the most relevant pieces, and generated a detailed response; now it is time to send the final response back to the user. The response aims to be accurate, relevant, and informative, in a detailed manner that enhances user satisfaction. We have already compared the standard model and the RAG model, so you can now see the difference between
both: how we get the final response from the standard model versus through the RAG model. Because of RAG, the response we get at the end is more satisfactory. Next is the feedback loop: as you can see, user feedback can be used to refine and improve the retrieval and generation process. This matters because even a model that tries to give you an accurate response can miss something. So what does the feedback loop do? When the generated response is delivered, the user gives feedback on where the model can improve or be more refined. It's like giving a prompt, getting an outcome that isn't sufficient, and asking again: "I want more on this." That follow-up works as a refinement of your query. It improves the retrieval and generation process, and this loop continuously enhances the model's performance, so you get a better version of the final response. That is the feedback loop. After this we'll look at the key components of RAG, for retrieval and for generation, and then discuss the various applications of RAG.
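The feedback loop described above can be sketched in a toy, dependency-free way. Everything here, including the corpus, the keyword-overlap scoring, and the 1.2/0.8 boost factors, is an illustrative assumption, not how any real RAG library implements feedback:

```python
# Toy sketch of the RAG feedback loop: keyword-overlap retrieval whose
# per-document weights are nudged whenever the user rates an answer.
docs = [
    "RAG combines retrieval with generation.",
    "Feedback loops refine retrieval over time.",
    "LLMs can hallucinate without grounding.",
]
weights = [1.0] * len(docs)  # learned preference per document

def retrieve(query):
    """Return the index of the document with the best weighted overlap."""
    q = set(query.lower().split())
    scores = [w * len(q & set(d.lower().split())) for w, d in zip(weights, docs)]
    return max(range(len(docs)), key=lambda i: scores[i])

def feedback(doc_index, helpful):
    """User feedback boosts or dampens the weight of the retrieved document."""
    weights[doc_index] *= 1.2 if helpful else 0.8

best = retrieve("how does feedback refine retrieval")
feedback(best, helpful=True)  # the user found the answer useful
print(docs[best])
```

Over many queries, documents that keep producing helpful answers accumulate weight and are retrieved more readily, which is the "continuous enhancement" idea in miniature.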
The first application is customer support: RAG enhances customer support by providing accurate and timely responses to user queries, using real-time data retrieval to answer diverse questions and improve customer satisfaction. The next application is virtual assistants, such as Siri and Alexa. These assistants use RAG to deliver relevant information quickly, so whenever you ask a question you get a fast and correct response, and RAG enables them to handle a wide range of tasks, from setting reminders to answering complex questions. So RAG is not only useful for fetching relevant, up-to-date information; it helps with many other tasks too, especially for virtual assistants. Next comes content creation: RAG aids in generating high-quality content for blogs, articles, and social media. Based on the information in your query ("I want it in this manner, I want it in that manner"), it tries to give you a correct, accurate, and detailed response, which helps you create a blog post, article, or other social media content, and it retrieves relevant data to ensure the content is accurate and up to date. The next application is medical diagnosis: in healthcare, RAG assists doctors by retrieving and summarizing the latest medical research, which supports accurate and informed decision-making in patient care. If a doctor is not aware of the latest advances in medical research, that limits patient care; with complete information about current research, the doctor can treat patients based on the latest advancements in the field. After that, RAG is helpful as an educational tool: in education it provides students with precise and detailed explanations, and it helps create personalized learning experiences by retrieving study material relevant to each student, so the material they work through is accurate and up to date. Next comes legal research: legal professionals use RAG to quickly find and summarize pertinent legal documents, which speeds up the research process and improves the accuracy of legal advice. Normally you have to read through a lot of legal documents, which takes a lot of time; with a summarized form of those documents you can research more effectively and give better advice quickly, so RAG is very helpful in legal research as well. The next one is text summarization. Now,
before we understand how RAG helps there, let's be clear about what text summarization is. Put simply: if you have a very lengthy, text-heavy paragraph and you just want a brief of it, summarization condenses that whole paragraph into something precise, focusing on the key words that matter most for your understanding. Formally, text summarization is the process of condensing a long document into a shorter version while retaining the key information, and RAG enhances this by retrieving the most relevant parts of the text to include in the summary. So what RAG does is read the complete large document, pick out the important keywords, and summarize it into a much shorter version, which reduces the time spent reading the whole document and saves you time. Next is automatic summarization tools: RAG-powered tools can automatically summarize articles, reports, and books, giving you a summarized version of any of them very quickly; as I said before, this saves time and effort for users who need quick insights from lengthy text. Next is improving comprehension: summarized text helps readers understand the main points without reading the entire document. Reading the whole document takes time before you can reach any conclusion; with the summarized main points you can work quickly and reach a conclusion much faster, which is particularly useful in academic and professional settings where time is limited. After that, keeping summaries relevant: by retrieving up-to-date information, RAG ensures the summary includes the latest developments. This is very important in fields such as news and research, where information is constantly evolving. Next, customizable summaries: users can customize the level of detail in the summary according to their needs. Suppose I share a large document that is written for beginners, professionals, and advanced readers alike, but I am a beginner and just want the main points at my level; RAG can generate a brief overview for me, or a more detailed summary for a professional, based on our preferences. After that, integration with other apps: text summarization can be integrated into other applications such as email clients and document editors. This flexibility enhances productivity by providing quick insights directly within the tools users already work in, with no need to switch to a separate tool. That kind of flexibility is exactly what you want in a tool, and a benefit we get through RAG is that it integrates easily with other apps.
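The extractive idea above can be sketched without any libraries. The frequency-based scoring below is a deliberate simplification (a real RAG summarizer would rank passages by retrieval relevance), and the num_sentences parameter mirrors the "customizable summaries" point:

```python
from collections import Counter

def summarize(text, num_sentences=1):
    """Extractive summary: keep the sentences with the highest
    keyword-frequency score. num_sentences lets the caller pick
    how brief or detailed the summary should be."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    freq = Counter(w for s in sentences for w in s.lower().split())
    ranked = sorted(sentences,
                    key=lambda s: -sum(freq[w] for w in s.lower().split()))
    chosen = set(ranked[:num_sentences])
    # keep the chosen sentences in their original order
    return ". ".join(s for s in sentences if s in chosen) + "."

doc = ("RAG retrieves relevant passages. "
       "RAG then generates a grounded summary. "
       "Weather was nice yesterday.")
print(summarize(doc, num_sentences=2))
```

The off-topic sentence scores lowest and is dropped, while raising num_sentences yields a more detailed summary, which is the customization knob in miniature.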
Now, next is advanced question-answering systems; how is RAG helpful here? First, an introduction: advanced QA systems utilize sophisticated algorithms to understand and answer complex queries. Put simply, when you ask a very complex query, you should get a proper answer; an advanced QA system understands that complex query and answers it, going beyond simple keyword matching to provide accurate and contextually relevant responses. After that, integration of retrieval and generation: such a system combines a retrieval mechanism that gathers relevant information from databases with a generation model that synthesizes coherent and accurate answers from the retrieved data. In other words, when you retrieve relevant information based on the user's query and the system generates an accurate, up-to-date, relevant response from it, that is the integration of the retrieval and generation parts. After this, the role of LLMs in QA systems: large language models like GPT-4 enhance QA systems by providing context-aware responses. An LLM can handle a wide range of topics and generate detailed answers based on its vast amount of training data; GPT-4 is trained on a very large corpus, so whatever query you give through a prompt, it can respond across many different topics with detailed answers drawn from that data. Next is real-time data access: advanced QA systems can access and retrieve real-time data, ensuring that answers stay up to date. Whatever query you ask, the system retrieves current information and generates a response that is up to date. This capability is very important for applications that require current information, for example news, or financial data such as stock-market prices, which must always be current. After that come personalized responses: QA systems are not only for real-time data access; they can also provide personalized answers based on user history and preferences, and personalization enhances the user experience by making interactions more relevant and engaging.
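The retrieve-then-generate shape of such a QA system can be sketched in a few lines. The knowledge base, the overlap scoring, and the response template below are all illustrative stand-ins, not how a production system indexes its data:

```python
# Toy advanced-QA sketch: retrieve the stored fact that best matches the
# question, then "generate" an answer grounded in that fact.
knowledge_base = {
    "capital of france": "Paris is the capital of France.",
    "speed of light": "Light travels at about 299,792 km per second.",
}

def answer(question):
    q_words = set(question.lower().split())
    # retrieval step: rank stored topics by keyword overlap with the question
    topic = max(knowledge_base, key=lambda k: len(q_words & set(k.split())))
    # generation step: wrap the retrieved fact in a response
    return "Based on the retrieved data: " + knowledge_base[topic]

print(answer("what is the capital of France?"))
```

A real system replaces the keyword overlap with semantic search and the template with an LLM, but the two-stage structure is the same.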
Finally, applications across domains: these systems are widely used in customer support (which we already covered), virtual assistants, healthcare, and education, where they help automate responses, reduce user workload, and improve efficiency across different sectors. So that is all about the various applications of RAG: customer support, virtual assistants, and many more. After this we'll dig deeper into RAG and a term that is crucial to understand, so let's discuss hallucination in RAG systems. Now we're going
to understand the most crucial part, which is hallucination in RAG systems: how does RAG help with it? But before that, let's understand hallucination in LLMs. What is hallucination? Hallucination in an LLM occurs when the model generates text that is factually incorrect; the response may seem plausible but is not based on accurate information. In short, the model generates a response, but the response is wrong, so we lose accuracy; that is known as hallucination, and it is an issue we see most in large language models. Next, the causes of hallucination: why does it happen? Hallucination can occur due to limitations of the training data or the model's inability to retrieve relevant information, and it often occurs when the model attempts to answer questions beyond its training scope. Remember that RAG does not rely only on trained data; it also tries to retrieve up-to-date information without retraining the model, and when that grounding is missing, hallucination can occur. An example of hallucination: an LLM might generate a false historical event or an incorrect scientific fact; for instance, stating that Albert Einstein was born in 1900 is a clear hallucination. The LLM may provide such information very confidently, yet it can still be incorrect. Now, the impact of hallucination on users: hallucination can lead to misinformation (it misleads you away from accurate information) and reduces the reliability of AI systems. If you keep getting inaccurate responses, your reliance on the AI system automatically drops, and users may lose trust in the technology if they frequently encounter inaccurate information. After that, detection challenges: detecting hallucination can be hard because the generated text often appears coherent and plausible, so it requires verification against reliable sources to identify inaccuracies. If you get an inaccurate response, you have to rework it again and again, and even then you may not be sure whether the final response is correct. Finally, the need for addressing it: mitigating hallucination is crucial for maintaining the credibility and usefulness of AI systems. Only once we get an accurate response can we proceed to the next step; everything stalls if the response is inaccurate or not relevant to the query we asked. So addressing hallucination means ensuring accuracy, which enhances user trust and
the overall effectiveness of the LLM as well. Now, how does RAG help mitigate hallucination? First, integration of retrieval: as you can see on the screen, RAG mitigates hallucination by integrating a retrieval component that accesses accurate information. RAG always retrieves relevant, accurate, up-to-date information and integrates it, which ensures the generated responses are based on current facts. After that, enhanced accuracy: accuracy improves automatically, because by retrieving relevant documents or data, RAG provides a factual basis for generating text, reducing the likelihood of the model producing inaccurate or fabricated information. Next, contextual relevance: RAG ensures the retrieved information is contextually relevant and matches the user's query, which helps the model generate responses that are both accurate and appropriate. After that, real-time data access: RAG does not only work with a predefined knowledge base; it also fetches current information, so the system's knowledge stays up to date. This is particularly useful for questions about recent events or rapidly changing fields; a news channel cannot serve only old, historical information, and financial data such as stock-market prices must always be current. Next, the feedback mechanism: user feedback on response accuracy can be used to improve the retrieval and generation process, and continuous learning from that feedback helps refine the system and reduce hallucination over time. Finally, continuous improvement: RAG systems are designed to improve continuously by incorporating new, up-to-date data and refining the retrieval algorithms. If you use the feedback loop, the system keeps refining its retrieval and the information it works with, so you get accurate outcomes at the end, and this ongoing enhancement further reduces hallucinations over time.
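The grounding idea behind this mitigation can be illustrated with a tiny check that flags generated sentences whose content words never appear in the retrieved context. The stopword list and the word-level matching are crude assumptions for illustration; real systems use entailment models or citation checks:

```python
# Toy grounding check: flag generated sentences whose content words do not
# appear in the retrieved context, a crude stand-in for how RAG ties
# generation to retrieved evidence.
STOPWORDS = {"the", "a", "in", "was", "is", "of"}

def grounded(sentence, context):
    """A sentence counts as grounded if every content word occurs in the context."""
    ctx = set(context.lower().split())
    words = [w.strip(".,") for w in sentence.lower().split()]
    return all(w in ctx for w in words if w not in STOPWORDS)

context = "albert einstein was born in 1879 in ulm"
print(grounded("Einstein was born in 1879.", context))  # supported claim
print(grounded("Einstein was born in 1900.", context))  # hallucinated year
```

This echoes the earlier Einstein example: the fabricated year 1900 has no support in the retrieved context, so the check rejects it.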
That is how RAG helps mitigate hallucination. Next, before we implement RAG using LangChain, let's understand the steps involved. The first step is to install the required libraries: begin by installing LangChain and the other necessary libraries using pip, for example with the command pip install langchain transformers faiss-cpu. You need all the required libraries installed before you can move further. The next step is setting up the environment: import the required libraries, set up the environment for LangChain and RAG, and initialize the necessary configuration for the retrieval and generation components. Next is preparing the data: load and pre-process your dataset to create a knowledge base. The knowledge base stores information gathered from various databases and sources, and it is what the generation process draws on for responses, so ensure the data is in a format suitable for retrieval, such as documents or text passages. After that, initialize the retrieval component: based on the query, this component searches for relevant information, so set it up using LangChain's built-in functions and configure the retrieval settings to define how data will be accessed and ranked. Then configure the generation component: after retrieval comes generation, so initialize this component with a pre-trained language model such as GPT-3 or GPT-4 and set it up to generate text based on the retrieved information; only then will you get responses grounded in what was retrieved. Next, integrate the retriever and generator: combine both components to create the final RAG system, ensuring seamless integration so that the retrieved data informs the text-generation process. Last comes test and refine: test the RAG system with sample queries to be certain it is working correctly, and refine the retrieval and generation parameters based on the results for optimal performance. Now let's walk through a simple example demonstrating this complete process.
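Those steps can also be sketched end to end without LangChain, as a toy pipeline. The passages, the keyword scoring, and the answer template below are illustrative stand-ins, not LangChain's actual API:

```python
# Step 3: prepare data, a tiny knowledge base of text passages
# (steps 1 and 2, install and import, need nothing in this toy version).
passages = [
    "LangChain provides building blocks for RAG pipelines.",
    "FAISS performs fast similarity search over dense vectors.",
    "CTransformers runs GGML models such as Llama 2 locally.",
]

# Step 4: initialize the retrieval component (keyword overlap here;
# a real system would rank by embedding similarity).
def retrieve(query, k=1):
    q = set(query.lower().split())
    return sorted(passages, key=lambda p: -len(q & set(p.lower().split())))[:k]

# Step 5: configure the generation component (a template standing in
# for a real language model such as GPT-4).
def generate(query, context):
    return "Q: " + query + "\nContext: " + context + "\nA: see the context above."

# Step 6: integrate retriever and generator into one RAG chain.
def rag(query):
    return generate(query, " ".join(retrieve(query)))

# Step 7: test the system with a sample query.
print(rag("what does FAISS do?"))
```

Each function maps onto one of the numbered steps, so when we swap in the real components later, the overall shape of the chain stays the same.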
To recap: the first step is to install LangChain; second, import the libraries; third, load and pre-process the data; fourth, initialize the retrieval component; then configure the generation component; after that, create the RAG chain; and at last, test the system with a sample query. Now that we have seen how to implement RAG with LangChain, let's understand the future of RAG. The upcoming trends and innovations in RAG are, first, integration with real-time data: future RAG systems will increasingly integrate with real-time data, because data keeps changing and must keep being refreshed; this will enhance the relevance and timeliness of generated responses, making them more useful for dynamic applications. Next, improved retrieval algorithms: advances in retrieval algorithms will allow faster and more accurate information access, and the more the retrieval algorithms improve, the better the final results will be; enhancing the algorithms will improve the quality of the retrieved data, so responses stay up to date while accuracy is further enhanced. Then multimodal RAG systems: combining text with other data types like images, audio, and video will enable more contextually rich responses, expanding the applications where RAG can be used. After that, enhanced security and privacy: innovation will focus on improving data security and user privacy in RAG systems, because when we use vast amounts of data, some of it may be sensitive and must be handled carefully; secure data-handling practices will be critical as these systems handle sensitive information. After that, scalability improvements: RAG systems will become more scalable, handling larger datasets and more complex queries, which will enable their use in broader and more demanding applications; some applications we discussed are not yet used at that level, but with improved scalability RAG can serve those more demanding applications as well. Next we'll talk about the potential impacts. First, advancement in AI research: RAG will drive significant advances, pushing the boundaries of what is possible with AI, as researchers explore new architectures and new methods to improve
the retrieval and generation. After that, enhanced decision-making: if businesses and organizations use RAG, they can enhance their decision-making process too; access to accurate, up-to-date information will improve strategic planning and operations, helping decisions be made correctly and accurately overall. Next, increased accessibility of information: RAG systems will make information more accessible to a broader audience, including domains where we cannot apply RAG yet, and users will benefit from accurate, relevant information tailored to their needs regardless of their background. After that, revolutionizing customer service: RAG will lead to higher customer satisfaction and more efficient support systems. Then improved healthcare solutions: in medicine too it will be more helpful and beneficial, not only surfacing the latest medical research but also pointing toward what comes next; this will improve diagnostic accuracy and patient care by leveraging the latest medical advancements. After that, economic growth: this is also an impact of RAG, as its widespread adoption will contribute to economic growth; businesses will become more efficient and innovative, and it saves time,
so efficiency and innovation will obviously increase, leading to new market opportunities and job creation. Now we're going to see how to use RAG with an LLM, hands-on. First we install some necessary libraries; let's go through them and their purposes step by step. The first is installed with pip install langchain-community; it is used for building various applications with language models. Next is pip install ctransformers, which provides bindings for transformer models implemented in C, and is what we'll use to load a GGML model. Then pip install transformers, the Hugging Face library for models like BERT and GPT. Another very important one is faiss-cpu; this library is used for efficient similarity search and clustering of dense vectors. After this comes pip install sentence-transformers, the library for sentence embeddings. Finally, pip install langchain-huggingface, which is used to build LangChain language-model applications with Hugging Face models. These are all the important libraries that must be installed before we apply RAG to an LLM in the hands-on part. As you can see on the screen, when you run the install commands a few things download, and packages that are already present show the message "Requirement already satisfied"; once installation finishes, we'll move to the next code. Now all the required libraries that need to be installed before we start coding are in place,
as you can see: it's already done over here, and where a package was already installed you see the "Requirement already satisfied" message. Now we import the libraries and load the model. As you can see in the code, from langchain_community we import CTransformers, and from transformers we import AutoTokenizer and AutoModel; why we use each of these we'll understand step by step during the coding. Here we have given the imports and all the required models we'll use for further modeling; when I run this code it takes a little time, because it imports all the required libraries step by step. The next step is to load the LLM using CTransformers; for that we have the following code. We create a variable llm in which we use CTransformers with the model "TheBloke/Llama-2-7B-GGML" and model_type "llama". The meaning of this code is simply that we are loading the model via CTransformers. Let's run this code; you can see it fetching the details and the model we specified. Once it's done we'll move to the next code, and you can see that we
have now loaded the model using the CTransformers library. The next task is to load the tokenizer and model for embeddings. First we create a variable tokenizer and use AutoTokenizer.from_pretrained with the name "sentence-transformers/all-MiniLM-L6-v2" (note the hyphens); this loads a pre-trained tokenizer from the all-MiniLM-L6-v2 model. So here we are loading the tokenizer for the embedding part; let's run this code. Next, after loading the tokenizer, we load the MiniLM model itself, for generating the embeddings: the next code on the screen loads the model for embedding, so at this point we have loaded both the tokenizer and the model for the embedding part. The next step is to define the embedding function. We define embed_text, and inside that function we tokenize the input: inputs = tokenizer(text, return_tensors=...). Then, inside a torch.no_grad() block, we run outputs = model(**inputs) and compute embeddings = outputs.last_hidden_state.mean(dim=1).cpu().numpy(), and return that. So we are creating a user-defined function that converts text into embeddings using the MiniLM model by mean-pooling the last hidden state. Once we run this code, we can use this function in our further process.
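The real embed_text relies on torch and all-MiniLM-L6-v2; here is a dependency-free analog of the same mean-pooling-then-nearest-neighbour idea, where a toy deterministic word vector stands in for the transformer's hidden states (everything in it is an illustrative assumption):

```python
# Toy analog of embed_text plus a FAISS-style flat index: each word gets a
# small deterministic vector (standing in for transformer hidden states),
# a text embedding is the mean of its word vectors (mean pooling), and
# search returns the stored text with the smallest L2 distance.
def word_vec(word, dim=8):
    seed = sum(ord(c) for c in word)
    return [((seed * (i + 3)) % 11) / 11.0 for i in range(dim)]

def embed_text(text, dim=8):
    """Mean pooling over word vectors, as the real embed_text averages
    last_hidden_state over the sequence dimension."""
    vecs = [word_vec(w, dim) for w in text.lower().split()]
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

index, store = [], []  # parallel lists playing the role of index + docstore

def add(text):
    index.append(embed_text(text))
    store.append(text)

def search(query):
    """Return the stored text closest to the query in squared L2 distance."""
    q = embed_text(query)
    dists = [sum((a - b) ** 2 for a, b in zip(q, v)) for v in index]
    return store[min(range(len(dists)), key=lambda i: dists[i])]

add("cats purr")
add("dogs bark")
print(search("cats purr loudly"))
```

The flow mirrors what we build next with FAISS: embed the documents, add them to an index, embed the query, and return the nearest stored text.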
Next we're going to get the embedding dimension, and on screen you can see that we have given all the code step by step. Once we have the embedding dimension, we initialize the index for the embeddings: we create a variable using the FAISS library, passing the embedding dimension into the index. After that we store the documents in memory, in a docstore, using the function shown. Next we create the index-to-docstore-ID mapping: index_to_docstore_id is a dictionary in which each index entry will get stored. Then we create the vector store, calling the FAISS class and passing in all the variables we have created so far. What this last code is basically doing is creating a FAISS vector store that manages and searches the document embeddings. The next step is to prepare your documents. As you can see, I have given a few documents containing a few statements, so when I ask a query at the end based on these documents, I'll get the answer automatically.
Let's run this first, and then we'll move to step four. You can see on the screen that I have given the code texts = [doc.page_content for doc in documents]. To give you a brief idea, in this step we are embedding the documents and adding them to the vector store. That first line extracts the text content from each document in the documents list. Next I have given a loop, for i, embedding in enumerate(embeddings), which adds each embedding to the FAISS index, and then index_to_docstore_id[i] = doc.page_content, which maps each index entry to the corresponding document content. Now that we understand what this means, let's run the code. The next part is defining a simple retriever. As you can see, I have given a user-defined function, simple_retriever, that takes a query as input; this function retrieves the documents most relevant to the query. Inside it, query_embedding = embed_text(query) embeds the query, and then D, I = index.search(...) compares that embedding to those in the FAISS index; by comparing against the FAISS index we get back the most relevant documents. Let's run this code and go forward.
The next step is creating the RAG chain. For that we define a class named SimpleRetrievalQA. The meaning of this class is that it initializes a QA system; as you can see on the screen, it initializes the QA system with a language model and a retriever. So here we pass in the language model, we give the code for the retriever, and using those two the class sets up the QA system. Once we run this code, let me explain the second part as well: we define a method, run(self, query), in which context = self.retriever(query). What this method does is retrieve the context for whatever query we give, and then generate an answer. So at the end, when you give a query, it will generate the answer based on the question you ask.
Now the next step is to ask a question using the RAG model. My query here is "What is LangChain?". Based on the documents I shared, the function I created, and the class I called to create the RAG chain, I'm going to give this query, and the answer is fetched by qa_chain.run(question), where question is what we have asked. The qa_chain is the RAG chain we created; it is used to retrieve the context for the query. Once we run this code, you will get the answer to "What is LangChain?". As you can see, after asking the question through the RAG model we get the answer back: it explains the purpose of LangChain and gives a complete output based on the query we asked.
So that's how we can use RAG with LLMs, and I want you now to go through all of this code, apply it yourself, and understand how it works. We have explored the world of generative AI: what it is, its benefits, and its potential to change the future. You have learned about generative models and the ethical issues they raise, and explored popular large language models like GPT, Claude 3.5 Sonnet, and Gemini. We also showed you how to create an LLM app for Android and introduced some top generative AI tools. From crafting effective prompts to working on projects with ChatGPT, Python, and GitHub Copilot, you are now prepared to innovate and build. We finished with the advanced topics, LangChain and RAG, enabling you to develop cutting-edge AI applications. If you are eager to take your AI skills to the next level, subscribe to our channel for more videos and hit the bell icon to stay updated on new content. Keep building, stay curious, and let your creativity guide you. [Music]