Google Cloud Platform Tutorial | GCP Tutorial | Google Cloud Platform For Beginners | Simplilearn

Simplilearn
🔥 Google Cloud Associate Cloud Engineer Certification - https://www.simplilearn.com/google-cloud-as...
Video Transcript:
good morning and good evening everyone welcome to this tutorial about google cloud platform and here we will learn about cloud computing what is cloud computing what is gcp that is your google cloud platform what are the benefits of google cloud platform and what are its different services a little bit about google's infrastructure a comparison of different cloud providers such as google offering gcp that is google's cloud platform amazon offering amazon web services and microsoft which offers azure we will also learn about domino's pizza use case and then have a quick demo on using some of
the services on gcp before we begin let's understand why cloud computing and it would be always good to learn about cloud computing based on a use case there are varied use cases where organizations are adopting or moving their solutions or their infrastructure into cloud or i can simply say integrating with cloud now here is one use case nina started a company that relates to website development the challenges which nina was facing were low memory space whenever required for processing or for any other kind of application related work high traffic to website that crashed it and
also less number of servers now with these challenges she was then referred to concept of cloud computing and how that could benefit and help her in solving her issues most of her issues were solved when she started using cloud computing and with cloud computing she could increase her memory space as required that is on demand control the load to the website that is basically load balancing and handling more requests on the website or request per minute buying servers at a lower price that is scaling up or down based on the requirement when we talk about
cloud computing cloud computing is use of hardware and software components which a cloud provider offers as a service which can be accessed over network cloud computing is use of these resources which could be either dedicated resources or coming from a pool of resources which cloud provider offers to deliver a service to clients users can access these different services applications files from any device which can basically access internet cloud computing allows automatic software integration it allows backing up and restoring of data it basically offers unlimited storage memory or computation capacity it gives access to reliable sources
which usually the cloud provider themselves are using for their own use cases and it is a cost efficient model which helps organizations to quickly integrate or basically modernize their infrastructure cloud computing is usually used within the it space wherein there are five traits if there is a requirement of resources as the business dynamically changes or grows then cloud computing offers on-demand self-service so users could be using on-demand computing resources or memory resources storage network and so on provided by the cloud provider and also do self service all this is possible using a
simple interface and users can be using processing power storage network as they need and pay as they go so there is least or no human intervention required when it comes to projects which might need scalable network access cloud computing offers broad network access that is accessing resources over network across geographical regions or what we call as availability zones which might be multiple sites within a particular geographical region cloud providers also have what we call as a resource pooling so this basically provides a huge pool of resources which are shared and can be accessed by customers
at a lower cost now there might be customers which are not interested in sharing resources and would prefer dedicated resources and in this case cloud providers also have sole-tenant offerings which help such customers if an it business or any other business needs rapid elasticity then cloud providers also offer resources which are elastic you can get more resources rapidly as needed and thus you can scale up and down think of a gaming company which would be interested in launching a new game and they would have predicted a certain number
of users which would get on to the portal playing the game and what happens if the request per minute or if the number of users who are joining in might increase now in this case organization would want an underlying solution which handles this dynamism scales as needed on demand and once the demand is done scales down this is possible using a cloud computing solution cloud computing solutions also include measured services that is pay as you go model for the usage or for the reservations which a user or organization would have made for resources offered by
cloud computing so when we talk about cloud computing one of the questions which always arises is why is this model so compelling why is this so interesting for organizations or users who would want to use one or many services from cloud computing so the first wave of the trend which brought us toward cloud computing was what we call colo that is co-location it shops that have been using or managing huge amounts of data for decades basically wanted to build their infrastructures to handle their business needs now instead of building costly data centers they would rent space or
share facilities and this was being done by organizations even in past thus they would free up the capital for other use cases now this was more of user configured managed and maintained by them later organizations started thinking of virtualization so that was again user configured but provider managed and maintained so components of a virtualized data center matched that of a physical data center and organizations would have virtual devices separately managed from underlying devices then later came container based architectures or basically automated services so within google services are automatically provisioned and configured letting your infrastructure scale
on demand there are various reasons why an organization would think of integrating with cloud or benefiting by using cloud and thus instantaneously reaping the benefits of modernizing their infrastructure now few famous cloud providers are here so you have amazon which offers amazon web services and a huge list of services which come in with this you have microsoft's azure you have oracle's cloud you have saps cloud solutions gcp which is offered by google salesforce and so on there are many other small players which are also providing different services which are cloud-based or organizations which are partnering
with these main cloud providers thus offering cloud services to their customers when we talk about why google cloud platform there are various reasons why someone would choose google cloud platform gcp has better pricing compared to its competitors when it comes to speed and performance it is very fast and increases the performance of the project it offers live migration of apps and there is a huge number of solutions which i will show you in further screens which help an organization adopt a cloud platform integrate with a cloud platform or even completely migrate onto a cloud platform
none of google's competitors provide live migration of apps when we talk about big data ai and machine learning kinds of solutions gcp provides a lot of innovative solutions in comparison to other cloud providers such as aws azure and so on so what is google cloud platform it is a set of cloud computing services provided by google that runs on the same infrastructure that google uses for its end user products like youtube gmail and so on let's learn about the benefits of google cloud platform such as high productivity working from anywhere quick collaboration high security less data
stored on vulnerable devices reliable resources which can be used across the organization across geographical regions across countries very flexible which allows organizations to scale up and down as the demand rises or as the demand declines and cost effective solutions for various use cases these are some of the benefits and if we look into the different services which google cloud platform offers we could look into the detailed benefits which each service offers in a different use case which basically helps organizations working in different domains handling different kinds of small mid or larger businesses and with different business goals when
we talk about google cloud platform services here is a list of services or i could say high level domains or categories of services so you have compute related services you have storage and database you have networking big data developer tools identity and security management internet of things cloud ai management tools and also data transfer solutions when we talk about google's infrastructure google has one of the most powerful infrastructure in the world the infrastructure is available in two levels the physical and the abstract layers you have physical infrastructure and then you have the abstract infrastructure physical
infrastructure consists of data centers extensive development of high efficiency backend data centers you have a very strong backbone network which is used by google itself and also offered as services to customers via gcp platform services so you have a globally meshed redundant backbone network then points of presence when we talk about google it has 110 plus edge points of presence in more than 200 countries and when we talk about edge caching there is an edge caching platform at the periphery of their network so this is what defines the physical infrastructure of google now there is much more to it than
just these four points when we talk about the abstract infrastructure that is divided into global regions and zones when we talk about a zone a zone is roughly equivalent to a data center and is a single failure domain so you could have your compute engine within a zone or you could say compute engine is a zonal resource you have regions which are geographical areas that contain multiple zones so you could have a region for us central or europe central europe west and so on and within a region you would have one or multiple zones and
zones basically would allow high availability of resources so a cloud load balancer is an example of a regional resource then you have global resources and they are available and shared across the planet so you have various global resources such as the network which could even be your ip addresses and so on now let's do a quick comparison of aws versus azure versus gcp and let's look at what each cloud provider offers later we will also look into different services in detail when it comes to google cloud platform what each service does what you can benefit
from which service should you use in what case we will learn about those in later slides if i compare your different cloud providers when we talk about amazon and its cloud offerings that is amazon web services or aws as we will know it amazon web services has 69 availability zones within 22 geographical locations and soon it will have 12 more in future so this number keeps growing based on the spread of the services which a particular cloud provider offers here we are talking about availability zone specific information when we talk about microsoft's azure it has
54 regions worldwide and is available in 140 countries across the globe when we talk about google cloud google cloud platform is available in 200 plus countries across the globe when we talk about virtual servers amazon's ec2 that is elastic compute cloud is a web service which basically helps you resize your compute capacity where you can run your application programs on a virtual machine so using the ec2 service you could launch virtual instances that could have any distribution of linux or windows you could have different specifications when it comes to ram or cpu cores or disk you
could also decide on what kind of storage a particular instance should use whether the storage should be local to the instance or whether that should be an elastic file system or even object storage when it comes to azure microsoft's offering azure virtual machines that is infrastructure as a service gives a user the ability to deploy and manage a virtual environment inside a virtual network on the cloud and this virtual network on the cloud would be managed by the cloud provider google cloud's offering from the google cloud platform that is gcp vm instances
enables users to build deploy and manage virtual machines in order to run different kind of workloads on the cloud now when we talk about compute engine here it would be good to discuss a little bit more about compute engine and what are the different options which google cloud offers so when you talk about your compute engine you have scalable high performance virtual machines compute engine delivers configurable virtual machines which run in google's data center with access to high performance networking infrastructure and block storage and you could select vms for your needs that could be general
purpose or workload optimized and when we talk about workload optimized you have predefined machines or you have custom machine sizes you can integrate compute with other google cloud services such as ai or ml and go for your data analytics you have when we talk about your gcp vm instances just to expand on that you have general purpose instances which we call as n2 which provide a balance between price and performance and they are well suited for most workloads including line of business applications web servers and databases google cloud also offers compute optimized instances which we
call as c2 instances which offer consistent high-end virtual cpu performance and which are good for aaa gaming eda hpc and other applications now alongside compute optimized and general purpose machines you also have memory optimized instances those are the m2 machines which google offers so these offer the highest amount of memory and these vms are well suited for in-memory databases such as sap hana real-time analytics and in-memory caches so if i would summarize this when we talk about different instances aws also offers different kinds of instances which are memory or compute or disk optimized and general purpose machines as well and a quick command line sketch of the gcp machine families is shown below
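just as a rough sketch that is not part of the demo itself, you could explore these machine families from cloud shell; the zone and machine type names here are only examples:

```bash
# list the machine types available in a zone (zone name is an example)
gcloud compute machine-types list --zones=europe-west3-a --filter="name~^n2"

# the same families discussed above, expressed as machine-type flags you might pass when creating a vm
#   general purpose:   --machine-type=n2-standard-4
#   compute optimized: --machine-type=c2-standard-8
#   memory optimized:  --machine-type=m2-ultramem-208
```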
each category of machines has a different pricing model you can always go to the aws website look for the pricing models and that will give you an idea of on-demand instances dedicated instances reserved instances and so on similarly google cloud also has instances with various options which become the key features of why customers would choose google cloud platform such as live migration for vms so compute engine within gcp can live migrate between host systems and when i say live migrate it basically means without rebooting which
keeps your application running even when the underlying host systems require maintenance you also have preemptible virtual machines wherein you can run batch jobs and fault tolerant workloads on preemptible vms to reduce your virtual cpu and memory cost by up to 80 percent while getting the same performance so these are your preemptible virtual machines the only catch is that while they give you really cost efficient resource usage they can be taken off the shelf anytime and that's why we call them preemptible virtual machines you also have sole-tenant nodes which are physical compute engine
servers dedicated exclusively for a user's use case and when we talk about sole-tenant nodes these are usually good when you are working with applications which we call bring your own license applications so sole-tenant nodes give you access to the same machine types and virtual machine config options as regular compute instances so there are different options which google cloud offers when it comes to these instances which we are talking about and it takes care of different use cases in comparison to other cloud providers which are also
offering these services such as you can have predefined machine types you can have custom machine types preemptible vms as i said live migration of vms you can use persistent disks which give you durable high performance block storage you have local ssds you also have gpu accelerators which can be added to accelerate computationally intensive workloads such as machine learning simulation medical analysis and so on and you have features such as global load balancing which makes google cloud a unique choice a quick sketch of launching a preemptible vm is shown below
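as an illustrative sketch only, with a hypothetical instance name and an example zone, the --preemptible flag is all it takes to launch such a machine:

```bash
# run a fault-tolerant batch worker on a preemptible vm (name and zone are examples)
gcloud compute instances create batch-worker-1 \
    --zone=europe-west3-a \
    --machine-type=e2-medium \
    --preemptible
```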
when we talk about platform as a service amazon has a platform as a service offering which we call elastic beanstalk among one of its services it's an orchestration service for deploying applications and helping in maintaining these applications azure cloud service provides a platform to write the user application code without worrying about the hardware resources google app engine is a service used by developers for building and hosting applications on google's data centers when we talk about serverless computing amazon's aws lambda is a compute service it is used to execute backend code and scales automatically when required when we talk about azure you have something called functions which allow users
to build applications using serverless functions with a programming language of their choice when we talk about google cloud gcp has cloud functions which is the easiest way to run your code in the cloud and it is highly available and fault tolerant so these days when we are talking about microservices architecture which organizations are preferring and about organizations which can scale and dynamically change their underlying architecture organizations would be interested in serverless computing where they do not have to have an infrastructure setup planned in advance before going for their use
case and this is where monolithic applications are really not a preferred choice a lot of organizations are decomposing their applications into microservices based on business capability or decomposing based on sub domains we can learn about microservices architecture later but just to know that serverless computing which basically helps any organization if for example if you have a web application that receives non-linear traffic and you cannot keep an eye on your server always it would be good to have someone to auto scale your application serverless is basically a computing model where cloud service provider is responsible for
managing the piece of code without the developer having to bother about infrastructure setup management maintenance and so on now when we talk about applications being serverless or benefiting from serverless computing one of the key things would be zero administration so deploying an application without any provisioning and management auto scaling capability that is letting the service provider worry about scaling the application up and down you have a pay-per-use model which any customer would want to benefit from that is pay only for the resources that you have used or that you are continuing to use and it shortens the
time between idea implementation and deployment and this is something which any organization would want that is a faster time to market in comparison to getting entangled with deployment management and maintenance of your underlying infrastructure when your applications are facing high demand so when we talk about serverless it is a function as a service because each part of your application is divided into functions and can be hosted over multiple service providers you have serverless apps which are usually divided into separate units or functions based on functionalities or domains and so on as sketched below for cloud functions
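purely for illustration, and assuming a hypothetical function name, runtime, and region that are not from this tutorial, deploying a small http-triggered cloud function could look roughly like this:

```bash
# deploy an http-triggered function from the source in the current directory (names are examples)
gcloud functions deploy hello-http \
    --runtime=python39 \
    --trigger-http \
    --allow-unauthenticated \
    --region=europe-west3
```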
so serverless computing is gaining quite a lot of popularity these days in comparison to the traditional three-tier architecture where you had a presentation layer an application layer and a database layer now that kind of infrastructure is not really preferred in today's modern times when organizations are working on different kinds of newer applications when we talk about object storage amazon has simple storage service that is s3 it provides object storage which is built for storing and recovering information or data from anywhere over the internet azure comes up with blob storage that is binary large object
storage which offers a large amount of storage and scalability it stores objects in tiers depending on how often the data is being accessed now the same thing applies to s3 from amazon so s3 also has different kinds of storage classes which can be selected when a user or organization intends to use a storage service and when i speak about storage classes it basically means having frequent access storage infrequent access storage or what we call just an archival solution google cloud has cloud storage and it provides unified object storage
for live or archive data the service is used to store and access data on gcp infrastructure when it comes to advantages amazon web services has enterprise friendly services easy access to resources increase in speed and agility and that too on demand and takes care of your security and reliability of resources which are offered when it comes to azure it has better development operations strong security profile provides lot of cost-effective solutions and operation execution friendly when we talk about google cloud one of the key features here is better pricing than competitors live migration of virtual machines
which really interests a lot of organizations that want to modernize their infrastructure without any kind of disruption to their existing services improved performance redundant backups and so on when it comes to disadvantages in aws you have limitations when it comes to the ec2 service although there are different options in the kinds of machines which you can choose and work on you have a technical support fee which is incurred network connectivity issues and also downtime which might occur when you're migrating azure has a different code base for cloud and on-premises its platform as a service ecosystem is not really as efficient as its infrastructure as a service and management through the gui and tools and integrated backup are weak points when we talk about google cloud the support fee is quite hefty it depends on what kind of solution or support you or your organization would be interested in it has a complex pricing schema although it has different use cases from which any user or organization can benefit downloading data from google cloud services is an expensive option the storage itself might not be however downloading data out of google cloud would be an expensive option when we talk about the domino's
pizza use case now while we were discussing these features i would really want to spend more time discussing each of these features or services in detail such as compute engine storage bigtable dataproc and so on so there is a huge list of services now before we get into the domino's pizza use case let me show you this page on google cloud where you can look at different services so if you go to cloud.google.com and look at the get started option under docs so here
you can find build solutions look at different use cases learn from basics of what is your google cloud what are cloud basics then you can look at different cloud products so here you have products in different categories such as ai and machine learning api management compute containers data analytics databases developer tools and so on and you can always click on any one of these to look into different solutions which are being offered what are the different use cases what are the best practices when it comes to migrating vms to compute engine or operating containers or
building containers and so on you can always look at featured products and that gives you a quick snapshot of what are the different products such as you have compute engine you have cloud run bigquery which is a data warehouse you have cloud sql which is a managed mysql or postgres you have cloud storage to basically push in any kind of data there you have security key enforcement ai and machine learning and so on so you have different feature products now if you scroll down to the lower end of this page you can again look at
different solutions and that would be really interesting to read and learn from so you have infrastructure modernization you have application modernization you have data management and so on so if i look at infrastructure modernization you could basically look at the solutions which google cloud offers and what it does when it comes to having your infrastructure modernized or benefiting by integrating with cloud and having immediate benefits of infrastructure modernization you can look at different use cases what they are doing how google cloud really helps when it comes to migrating the workloads to a cloud and how
your different cloud solutions such as vm migration you have sap on google cloud vmware as a service and so on and you could learn from these different solutions which are offered you can also look at application modernization so not only infrastructure modernization but organizations would also be interested in re-looking at their applications re-looking at how these applications could be moved from monolithic to microservices architecture or how applications can benefit from modernization and cloud computing offerings so you have again different use cases here which talks about different ways in which google cloud can help how you
can modernize your applications how you can use different solutions which google cloud is offering now when you talk about cloud computing services you can always go to cloud.google.com if you have created a free account then you can just log in and every user by default gets 300 dollars of free credit with which they can try out different products and use different services so here if i click on console where i'm already logged in with my gmail account or my google cloud account wherein i have 300 dollars of free credit out of which some has been used you
have the google cloud console and here from the hamburger menu you can click and look at different services within different domains so you have the compute domain which has different services such as app engine you have compute engine and that basically allows you to use the vm instances which we were talking about in previous slides different instance groups create your templates use sole-tenant nodes create snapshots or backups of your data use different zones you can go for kubernetes engine which is a container-based engine you have cloud functions you have cloud run and then you have storage
related options such as bigtable datastore firestore filestore storage and so on and for each of these services you can read about them in google's documentation or anyway i will be explaining them later you can also look at networking related operations related and other tools which are offered by google cloud so this is a huge list of services which google cloud platform offers in different ways for different use cases now let's look at this domino's pizza use case and see what it helps us learn about so you can always access
this page by going to this link which talks about customers and which shows you different use cases so domino's increased monthly revenue by six percent with google analytics premium google tag manager and google bigquery so this is basically what happened when domino's started using gcp and what the result of that was now let's look further into this we all know that domino's is the most popular pizza delivery chain operating across the globe but how was that possible let's take a look so the challenges were they wanted to integrate marketing measurement across various devices connecting crm and
digital data to create a clear view of customer behavior and to make cross-channel marketing performance analysis easy and efficient now for these challenges which domino's was facing the solution was using google analytics premium google tag manager and bigquery which were used to integrate digital data sources and crm data reporting was made easier and more efficient by implementing google analytics premium because it gave the ability to access a single google analytics account to evaluate web and app performance by using the new google tag manager implementation domino's was able to act fast they were able to connect crm
data with digital analytics which basically provided domino's with greater visibility on customer behavior what was the result there was an immediate six percent increase in monthly revenue eighty percent of costs in ad serving and operations were saved agility increased with streamlined tag management and they obtained easy access to powerful reporting and customized dashboards now that was just one simple use case before we go on to hands-on we can also talk a little bit more about these services which google cloud offers as we discussed and some of these services which can really make you think why not
google cloud platform so when we talk about your different cloud platform services let's learn about some of these services in brief in what each service is what it does and how it can help us in handling our use cases or working with different products so let's learn briefly about different services which google cloud platform offers now one of the domains is compute and then let's look at the compute services which gcp offers now here i can log into the console and this hamburger menu on the top left corner i can click on this one and
go into the compute engine section so this is the compute domain which has app engine kubernetes engine cloud functions and cloud run so these are your different services which are offered within the compute domain and here we can get into compute engine by clicking on this one and then basically going to vm instances so before we see how we can use this compute engine let's understand about some of the features of compute engine which basically offers scalable high performance virtual machines which are configurable and which runs in google's data center with access to high performance
networking infrastructure and block storage so from here you can select vm for your needs for your general purpose or workload optimized predefined or custom machines now here you can integrate compute with other google cloud services such as ai ml and other data analytics services you have different machines which are offered here such as general purpose which provide a balance between price and performance which are well suited for most workloads including line of business applications web services and databases you also have compute optimized machines which offer consistent high end virtual cpu core performance and which are
mainly good for gaming eda high performance computing and other such applications apart from general purpose and compute optimized you also have memory optimized machines which offer the highest amount of memory and these vms are basically well suited for in-memory databases such as sap hana real-time analytics and in-memory caches now we can see these options here you can click on vm instances while you're logged in to your google cloud console here you can even create an instance template which can be used to spin up instances for example if i click on new vm instance from a template there
are some templates which i have already created for my usage now here i can basically use one of these templates or what i can do is i can go back i can go into instance templates and this basically allows me to create a template so you can click on create instance template which basically allows you to create templates which can be used to spin up different instances we can give a name to the template for example we can say template instance now here i can choose machine configurations and this is where you have different options
so you have general purpose as i mentioned which provide a balance between price and performance you have memory optimized which are large memory machine types for memory intensive workloads you also have compute optimized which basically give you high performance machine types for compute intensive workloads so you can choose any machine configuration which is available here based on your requirements now if you click on general purpose that has different options here so you look into the series where you have n1 series you have e2 which are cpu platform selection based on availability you have n2 and
n2d so let's just have n1 selected now you also have here which talks about machine types and here we can choose the configuration which we are interested in depending on the applications which will run within the machines we can choose a machine so by default it shows one virtual cpu core and 3.5 gigabyte of memory or ram you could choose a higher end machine so as of now i'll just say n1 standard now that basically allows me to choose these machines so there are different features which your compute offers such as you have live migration
for vms you have preemptible virtual machines you have sole-tenant nodes and all those options can be seen here now in this machine for my boot disk i can select a distribution which i would be interested in for example i could go for public images and choose for example ubuntu and then i can choose a version so it shows me 16.04 and you also have later versions such as ubuntu 20.04 you can choose one of these and here it asks you to choose the boot disk type so that could be a standard persistent
disk which is hdds which are low in performance and you can say low in cost in comparison to ssds so ssds give you better performance but then they are little expensive than using standard persistent disk we can choose this and we can give a disk size for example 20 gigabyte and i can click on select before clicking on select i can click on custom images and that shows me if you have any other images created in your project you could use those you could also learn about images by clicking on this link so we'll click
on public images we have chosen this distribution let's do a select and now here you have identity and api access management so let this be default you can say allow default access and what you need is depending on the services we can choose it allows http and https traffic now we would also need some way to connect to these machines so when you set up an instance by default you can do ssh into it using the google cloud console or from cloud shell you can also give a private and a public key so here you
have the option where you can give all these details so when it comes to management it asks whether you want to go for reservations and you can say automatically use created reservation or you could say no you do not want to use a reservation you can also set up or provide a startup script which you would want to run whenever your machines come up and here you have the option of preemptibility so compute offers preemptible virtual machines and that is mainly when you would want to run batch jobs and fault tolerant workloads on
these machines and you can benefit from a reduced cost for your virtual cpu and memory of up to 80 percent so these are virtual machines which last less than 24 hours now by default this is off and it purely depends on your workload what you would want to run on these instances i could just go ahead and turn preemptibility on and use this feature of compute you also have on host maintenance which talks about what would happen with this compute engine instance so when compute engine performs periodic infrastructure maintenance it can migrate your
vm instances to other hardware and this is one of the features which compute engine offers which we call as live migration for vms here your compute engine can live migrate between host systems that is the underlying systems on which these vm instances are based on without rebooting which will keep your application running even when host system require maintenance and here it says migrate vm instance recommended and let that be as it is you can also say that if there is a maintenance happening you can terminate the vm instance now it also talks about automatic restart
which is on which basically means compute engine can automatically restart vm instances if they are terminated for non-user-initiated reasons so these are all the settings which are available in management and it also tells us the different features which we have there is also a feature called sole tenancy which basically means sole-tenant nodes can be chosen so you can have physical compute engine servers dedicated exclusively for your use and this is usually good when you're talking about bring your own license applications so sole-tenant nodes give you access to the same machine
types and vm configuration options as regular compute instances however these might be a little expensive we can choose this one we can also look into networking which basically shows the default setup which goes for auto subnet you can also choose a particular ip if that is required but that would cost more you can click on disks which talk about what do you want to do with the boot disk when the instance is deleted what is the encryption mechanism you would want to use and here finally you have security so basically as i said you can
ssh into instance using the cloud console option and you can also provide a public ssh key so one way of doing that if you intend to use an external ssh client such as putty to connect you can create a key so for example i can go into puttygen and here i can say generate just move my cursor over here and that will create a key i can give a name to this one so i'll say sdu will be username i'll give a simple password which i will use to login to this machine and then i
can save this private key which will get saved so let's say hdu new key and this gets saved in a dot ppk file which is usually used when you use an external ssh client to connect so save this one and that saves a ppk file on your desktop what you can also do is you can copy this public key content from here and this is what we would want to give in our instance here so so that the public key gets stored on the instance and private key is what we will use to connect so
once i paste it here it resolves the name to htu and we have given the public key now in certain cases you may want to use a piece of software which uses ssh to connect to the machine and that software might not be putty so in that case you may want a pem file that is a private key which is saved as a dot pem file you can also do that by going to conversions and doing an export openssh key and then saving it so i'll say hdu new key but then this one
will be saved as a pem file on my machine so you have a ppk file which allows you to use putty to connect to the instance you have a pem file in case a piece of software needs to ssh directly to these machines and you also have the public key which we have already provided to the machine now once this is done i can close puttygen i can go back to this page where i'm creating an instance template and then i can just click on create so this has created an instance template which i can use to spin up any number of instances and the equivalent command line call is sketched below
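roughly the same template could also be created from cloud shell; this is only a sketch assuming the options chosen in the console, that is an ubuntu image, a 20 gb standard persistent disk, and http and https traffic allowed:

```bash
# create an instance template similar to the one built in the console (values are illustrative)
gcloud compute instance-templates create template-instance \
    --machine-type=n1-standard-1 \
    --image-family=ubuntu-2004-lts --image-project=ubuntu-os-cloud \
    --boot-disk-size=20GB --boot-disk-type=pd-standard \
    --tags=http-server,https-server
```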
the only thing which i would have to do is change the region where i would want the instance to run so now that we have created the instance template and that's this one here the third option i can go back to vm instances now i can click on create and here i can either create an instance from scratch by giving all the details again or i can just use my template so i can click on new vm instance from a template choose my template click on continue and once this is
done i can give my instance name so let's say c1 i can choose the region so i will choose frankfurt and then the rest is auto populated based on the template you have given and you click on create now this basically allows you to spin up an instance and you can create any number of instances using your template or you could have created a new instance right from scratch a command line sketch of the same step is shown below
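again only as a sketch, assuming the template above and an example zone, the same instance could be created and its ip addresses looked up from cloud shell:

```bash
# spin up an instance from the template (zone is an example) and look up its internal and external ips
gcloud compute instances create c1 \
    --zone=europe-west3-a \
    --source-instance-template=template-instance
gcloud compute instances describe c1 --zone=europe-west3-a \
    --format="value(networkInterfaces[0].networkIP, networkInterfaces[0].accessConfigs[0].natIP)"
```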
now once the instance is created it has a public ip and a private ip the private ip will not change unless you set up a new machine but the public ip will change every time you stop and start the machine now this public ip is what we need to connect to this machine i can also do an ssh from here by choosing open in a browser window let's click on this so this is an internal way of connecting to your instance using ssh let's wait because it will be transferring the ssh keys to the vm it establishes a connection and i'm connected to my machine what we can easily do is to confirm if we are in the right machine we
can just do a ls to look at the file system what we can also do is we can basically login as root by doing a sudo su and that allows you to get into the machine as root and from here i can switch to hdu user which will have a dot ssh directory in home and that basically has authorized keys and if we would want to see if this one contains my public key i can just do a cat dot ssh and then look into authorized keys and this shows me the public key which we
had initially added to our instance so this confirms that we are logged into the machine which we created now i can close this and what i can also do is copy this public ip so let's do a copy to clipboard now go to putty and here in hostname i'll give hdu at the public ip then i'll click on ssh and under ssh i'll go to auth now here we need to give our ppk file so the ppk file was sdu new key select this one come back here come to the session
give it a name for example c1 save it and then you can just say open and say yes and you are logged in to your machine now once you're logged in you can always do an ls -a and that shows you the files so this is how you have just used compute engine to spin up an instance from a template which basically allowed me to create this instance and then connect to it and start working on it an openssh alternative to the putty flow is sketched below
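if you are on linux or macos rather than windows, a rough equivalent of the puttygen and putty steps would be the following, where the key path and username are just examples:

```bash
# generate a key pair locally instead of using puttygen
ssh-keygen -t rsa -f ~/.ssh/gcp-demo-key -C hdu
# paste the contents of ~/.ssh/gcp-demo-key.pub into the instance's ssh keys field, then connect
ssh -i ~/.ssh/gcp-demo-key hdu@EXTERNAL_IP
```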
when we talk about features of compute engine it has predefined machine types as we saw so compute engine offers different predefined virtual machines and they have configurations for every need from small general purpose instances to large memory optimized instances with up to 11.5 terabytes of ram you can have fast compute instances optimized with up to 60 virtual cpu cores you also have custom machine types so you can create a vm that best fits your workload and by tailoring a custom machine type to your specific needs you can realize significant savings there are preemptible vms which we saw and there is also a facility which allows
you to take the benefit of live migration for vms you have durable high performance block storage for virtual machine instances in the form of persistent disks where data is stored redundantly for integrity with the flexibility to resize storage without interruption and you can choose hdds or ssds for your instances now you also have options such as gpu accelerators so for example if i just click on create instance i can look into that so here let the instance name be instance one and what i would be interested in looking at is this one which says cpu
platform and gpu so the cpu platform configuration is permanent and you can also do add gpu so gpus can be added to accelerate computationally intensive workloads like machine learning simulation medical analysis and virtual workstation applications so you can add and remove gpus on a vm when your workload changes and pay for the gpu only while using it a command line sketch of attaching a gpu is shown below so these are some of the features which compute engine offers and we already know that google bills in one-second increments so we only pay for the compute time
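as a hedged sketch with illustrative names and zone, attaching a gpu from the command line might look like this; gpu instances have to use the terminate-on-maintenance policy because they cannot be live migrated:

```bash
# create a vm with one gpu attached (accelerator type, machine type and zone are examples)
gcloud compute instances create gpu-demo \
    --zone=europe-west1-b \
    --machine-type=n1-standard-4 \
    --accelerator=type=nvidia-tesla-t4,count=1 \
    --maintenance-policy=TERMINATE \
    --image-family=ubuntu-2004-lts --image-project=ubuntu-os-cloud
```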
now there are different savings which are possible so you have committed use savings which basically mean you can save up to 57 percent with no upfront cost and no instance-type lock-in you have container support so you can basically run manage and orchestrate docker containers on compute engine instances so here when we are setting up our instances there is an option which basically allows you to deploy docker images you can also benefit from sustained use savings that is sustained use discounts which are automatic discounts for running compute engine resources for a significant portion of the billing month you can create a reservation for a vm instance
in a specific zone which is basically seen under your management section here and you can ensure your project has resources for future increases in demand and if no longer needed delete the reservation so these are some of the features of compute engine what we have done is we have created compute engine using the console now you can go back you can also do that using cloud shell and you can click on this one which activates cloud shell you could also have the cloud sdk support on setup on your machine which can be used now i
can just open this in a new window and from here i can start giving commands if i am interested in setting up an instance from the command line so here you have different options now to begin with you can just type gcloud and hit enter and that will show you the different options which are available and can be used so you have gcloud compute here which shows an option to create and manipulate compute engine resources now i can just press q to quit i can do gcloud compute and that
basically will again show me the different options which are available if you are interested in setting up instances from the command line so here i can say gcloud compute and then go for instances and if you do not know the commands you can just hit enter and that will show you all the different options which we have so here we have different options such as list or create or start or update for example i can just do a list here to see what instances i have and the instance which we just created shows up here as sketched below
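the listing step, and the stop and delete commands mentioned next, look roughly like this with example names:

```bash
# list the instances in the current project
gcloud compute instances list
# stop or delete an instance when you are done with it (name and zone are examples)
gcloud compute instances stop c1 --zone=europe-west3-a
gcloud compute instances delete c1 --zone=europe-west3-a
```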
it says that the status is running i can just stop this instance i can delete this instance i can even create an instance by using the create command here and you can just do a create --help which will show you what the different options are that you can give so it says the instance name is what you need you can choose an accelerator you can choose the boot disk and various other options so i can just say create and then give a name for example c2 and once i run this one it says did you
mean europe-west4 zone so it is asking me for the region and the zone and i can say yes and these settings are coming from my default profile i can always change those by changing the metadata so now we have created an instance and it says running if we do a list again we see two instances created one in europe-west3 and one in europe-west4 and both of them have internal and external ip addresses now you can do a describe to look at the different options here as sketched below
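as a sketch of those last two steps, with example names and zones:

```bash
# create a second instance, letting gcloud prompt for or fall back to the default zone
gcloud compute instances create c2 --machine-type=n1-standard-1
# inspect all the details of an instance
gcloud compute instances describe c2 --zone=europe-west4-a
```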
so gcloud basically allows you different commands which you can use to work with your instances to create instances to change the metadata to change the region or to add a startup script all those options are possible from the command line which we can learn in detail in later sessions so this is your compute engine as a service now that we have learned about the compute domain and the compute services which are offered by gcp let's also learn about storage and databases that's again within the storage domain and the services which are offered
by google cloud platform now you can go back to the console and here you can click on this one and just scroll down to see what the different options in storage are so you have options such as bigtable datastore firestore filestore you have sql based services you have storage which is object storage and then you have other options which are available so google cloud platform offers different storage based services out of which storage that is your object storage is quite a popular one click on storage and that basically shows you
an option which talks about storage browser so this is your google cloud's object storage so when we talk about object storage it is basically a storage where you could store any kind of data and when we talk about object storage it is a bunch of bytes which we address wherein every object will have a unique key these unique keys are in the form of url which allows you to access the object so cloud storage is comprised of what we call as buckets which are used to store and hold your storage objects the storage objects are
immutable and every change creates a new version now you can also control access via iam that is identity and access management or via access control lists there is also an option called object versioning which basically says that if it is on every time you try to store the same object a new version of the object will be created otherwise the newer object will overwrite the old one and the old version cannot be archived so let's see how we work with this object storage so here you can click on create bucket now once you click on create bucket
it needs a name so let's say test bucket and here i can just give say number one so that says this is the name of my bucket now i can directly click on continue or it would be good to look at different options which are available here so when you click on choose where to store your data so it already gives me an option it says the bucket name is already taken so let me give it a unique name so let's say test buck and let's call it aua so that should be unique now here
it says choose where to store your data so this one gives you location type so you can have region specific buckets which give you lowest latency that is fastest response time within a single region however it does not make your storage highly available you can make a dual region which is basically allowing your bucket or storage to be accessible across regions you can also make it multi-region which is highest availability offered as of now we can choose region specific and now it asks you to choose a location now as always i will choose frankfurt now
i can click on continue and let all the rest of the storage options be default or you can click on a default storage class now that tells you that based on your storage class there are varying costs when it comes to storing retrieving or doing any kind of operations so you have a standard option which says best for short-term storage and frequently accessed data you can also go for colder storage such as nearline which is best for backups and data accessed less than once a month you can go for even colder storage such as coldline which is best for disaster recovery and data accessed less than once a quarter or you can go for archive where the data is accessed less than once a year let's go for standard as of now and now you can choose how to control access to objects so you have fine grained or uniform let it be fine grained wherein you can give additional permissions at the bucket level using iam or object level permissions using access control lists in advanced settings you can choose the encryption and you can also choose a retention policy a retention policy specifies
the minimum duration that this bucket's objects must be protected from deletion or modification after they are uploaded you can always learn about this more by clicking here now once i have chosen all the relevant options i can click on create and that basically will create a bucket by the name i have given i can click on overview to basically see brief details about my bucket such as region what is the default storage class and it also shows you the link url which can be used to access your bucket it also gives you the link for
gsutil now gsutil is a command which can be used in your cloud shell to basically work with your buckets you can click on permissions to basically see what kind of permissions are already in place and you can then make changes you can basically add members you can view by different roles so for example here by default it shows other services such as data proc or your bucket owner or bucket reader related permissions which have been already granted now once i have looked at my bucket i can start using it i can drag and drop and
push in files here so as of now there are no live objects in my object storage that is in my bucket what i can do is i can click on upload files and then i can choose a location from my machine for example i'll go into data sets and what i can do is i can choose some of the files here in any format let's choose csv or text and just to open so this one will basically upload my data sets here now once i've uploaded the files i can basically close this one i can
look into the options here which say edit permissions and edit metadata if you want to download it if you want to copy move or rename it if you want to export to a different service called cloud pub/sub which is a publish-subscribe messaging system or if you want to scan the data now you can click on a particular file and that basically shows you the url which allows you to access this file you can try copying this you can click on download and download this file you can even try accessing this publicly and that
basically shows you the content of this file based on the permissions so this is basically your object storage which is one of the services which is offered what you can also do is create folders and within folders you can then upload your data so this is your google cloud storage option for object storage that is you can add different items you can give different permissions and you can use this google cloud platform storage service offering the gsutil sketch below shows the same workflow from cloud shell
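a rough gsutil sketch of the same bucket workflow, assuming the bucket name used above and an example file name:

```bash
# create a regional bucket with the standard storage class
gsutil mb -l EUROPE-WEST3 -c STANDARD gs://test-buck-aua/
# turn on object versioning so old versions are archived instead of overwritten
gsutil versioning set on gs://test-buck-aua/
# upload, list, and download an object
gsutil cp mydata.csv gs://test-buck-aua/
gsutil ls gs://test-buck-aua/
gsutil cp gs://test-buck-aua/mydata.csv .
```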
now you also have other options such as bigtable and we can go into that by clicking here and clicking on bigtable so bigtable is one of the services which kickstarted nosql databases today in the market we see different nosql databases such as cassandra hbase mongodb couchdb neo4j and many others so you can basically use bigtable which was the pioneer when it comes to nosql or not-only-sql databases so the problem initially faced by google was that the web indexes behind the search engine were taking too long to build so the company wanted to build a database that would provide real-time access to petabytes of data and
that's where bigtable began so bigtable powers different other google services such as gmail google maps and other services and in 2015 it was launched as a service for customers so when it comes to scalability with the use of bigtable you can increase your machine count without any downtime and admin tasks like upgrades restarts and so on are basically taken care of by the cloud provider data present in cloud bigtable is encrypted and you can use iam roles to specify access so data is written to or read from bigtable through data service layers
such as managed virtual machines hbase rest servers java services the hbase client and so on here if i want to use bigtable i can click on create instance and that basically tells me a cloud bigtable instance is a container for your clusters now here you can give an instance name so for example i will say a u a and then say for example test and let's say big table so that will be the name of the instance and this will be permanent you can choose the storage type you can go for ssds which give lower latency and more rows read per second and are typically used for real-time serving use cases or you can go for hdds which have higher latency for random reads give good performance on scans and are typically used for batch analytics so let's go for ssds as of now here you have the cluster id which is auto populated you can choose a region so let's go for our favorite one here where i can say europe west 3 i can choose a zone here and i can then choose how many nodes i would want to use for my bigtable so when you talk about the bigtable service
it will have an underlying cluster which will have various nodes which control your data throughput storage and rows read per second so as of now let it be just one node and that's enough for our demo when we talk about performance it basically tells you based on the current node count and storage type how many reads can happen and at what latency so it says reads of 10 000 rows per second at 6 milliseconds you have writes which are 10 000 rows per second or you have scans which are 220 megabytes per second the storage which is
taken care here would be 2.5 terabytes and i can then basically click on create now there is also some option called replication guidance which basically says replication for cloud table big table copies your data across multiple regions enabling you to isolate workload and increase the availability and durability of your data depending on your use case you can have big table which can be used to have your data across regions now you can click on create with all your specifications chosen and that's going to set up a cluster or you can say a fully managed nosql
database which will give you low latency and replication for high availability now once we have a new instance you can connect to it with the cbt command line tool and for instructions you can click on learn more here you can just click on this instance id to see the details again if you would want to look into your bigtable setup so it tells me here that we have one instance what the cpu utilization is over time how many rows were read or written what the throughput is and this is auto populated based on your usage
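besides the cbt tool you can also read and write rows programmatically; here is a hedged sketch with the python client library google-cloud-bigtable, assuming the table and its column family already exist (for example created with cbt createtable and cbt createfamily), and using the instance from this demo plus a made-up table name:

```python
from google.cloud import bigtable

client = bigtable.Client(project="my-project", admin=True)   # example project id
instance = client.instance("aua-test-bigtable")              # instance from the demo (exact id assumed)
table = instance.table("sensor-data")                        # hypothetical table with column family "cf1"

# write a single row: a row key plus one cell in column family cf1
row = table.direct_row(b"device#001#2024-01-01")
row.set_cell("cf1", b"temperature", b"21.5")
row.commit()

# read the same row back and print the latest cell value
result = table.read_row(b"device#001#2024-01-01")
print(result.cells["cf1"][b"temperature"][0].value)
```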
you can click on monitoring and that basically will give you different widgets which will display information for your cpu utilization what your hottest node is depending on how many nodes you have system errors automatic failovers storage utilization and so on you can click on key visualizer which will allow you to look into your table if you have already created some data here and you can click on tables to see how many tables you have added here so that's in brief about bigtable one thing which we need to remember is that it's not good for all use cases it should be used for low latency access that is fast access and it is a good fit if your data is at least greater than one terabyte because for smaller amounts of data the overhead is too high bigtable's performance will also suffer if you store individual elements larger than 10 megabytes if you want to store bigger objects such as images or video files then go for object storage and that would be a better option so always remember bigtable is not a relational database it's a nosql database and when you talk about multi-row transactions or
online transaction processing bigtable is not the right choice it can however be used for a wide range of applications especially when you talk about olap that is online analytical processing it is designed to store key value pairs and there can be different use cases so for example if you are using something like cloud dataflow or cloud dataproc where you would want mapreduce kind of operations bigtable can act as good storage because it has very high throughput and scalability and the best thing is that it supports the hbase api thus
allowing easy integration with apache hadoop and spark clusters which you can bring up using one more service which google cloud platform offers called cloud dataproc so bigtable is good for real-time analytics it's commonly seen in financial services iot and other areas and if you are thinking of running interactive sql then bigtable would not be the right choice the better choice would be bigquery you should also remember that this has a cluster running and you would be charged if this cluster keeps running so you have to be very careful in your free account when you're using such services now we have clicked on this one so i can select it and basically i can look at the permissions i can look at the labels i can look at inherited permissions here i can also click on the instance which we created and either edit it or just do a delete so as of now we will just delete this which requires you to type in that name so we'll say aua test bt and then click on delete so whenever you are trying out different services the first approach
should be setting up these different services seeing how they work basically trying to connect to them and once you are satisfied with the initial test then you can plan your actions and come back and use the service for a longer duration now that's your bigtable which is one of the offerings we can go back into storage and here you have other options which are available so for example we were in bigtable you also have an option such as cloud datastore now that's one more service which is offered by google cloud platform when it comes to the storage domain so google added software on top of bigtable which supports more than simple key value pairs such as secondary indexes instead of just having one primary index acid properties for reliable transactions and a sql-like query language so these features were added on top of your bigtable which gave birth to a new service which was released as cloud datastore so this is where you have cloud datastore and you can select a cloud firestore mode so you can go for
a native mode which enables all cloud firestore features with offline support or you can go for datastore mode which runs the datastore system on top of cloud firestore so these are different options and here we look at the api scalability and engine support and how many writes each mode supports and so on you can choose one of these and then you can choose where to store your data so for example if i click on this one then it asks me to choose a location so it says the location of your database affects its cost availability and durability choose a regional location for lower write latency and lower cost or a multi-region location here i can basically choose for example europe and then i can go ahead and create a database so it says initializing cloud firestore in datastore mode in eur3 this usually takes a few minutes and you will be redirected to your database once it is ready so if we compare the pricing structure between cloud datastore and cloud bigtable always remember that with cloud datastore you pay for monthly storage which is also the case with bigtable however here you are also paying per operation for reads and writes but in the case of bigtable you are paying for the cluster while it is running so cloud datastore is a good option for small data and infrequent access and it works out cheaper in that case when you talk about large amounts of data or big data and frequent access then you are talking about cloud bigtable so bigtable is cheaper when you talk about larger amounts of data so here it says since your database is empty you can still switch to cloud firestore in native mode to get more features you could do that you could learn it you
could query by gql as of now we don't have any data here so let's look at the dashboard so this is your cloud datastore and it has many features which help you to work with your data however some important features of rdbms were still missing and that's where google created yet another bigtable based service called cloud spanner now we can continue working on cloud datastore which basically gives you one option
to work with your data you can create an entity here by clicking on create entity and that basically gives you options such as the default namespace you can give a kind you can give a numeric id and you can start adding properties but we will learn more about datastore in a further session so as of now i'm going to click on cancel i'm going to go back to my datastore option or i can go into admin here which basically says if you have entities you can import or export them
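if you prefer doing the same thing from code rather than the create entity form, here is a hedged sketch using the python client library google-cloud-datastore, with a made-up kind, key name and properties:

```python
from google.cloud import datastore

# assumes application default credentials; the project is read from the environment
client = datastore.Client()

# create and save an entity of kind "Task" (kind, key name and properties are illustrative)
key = client.key("Task", "sample-task")
entity = datastore.Entity(key=key)
entity.update({"description": "learn cloud datastore", "done": False})
client.put(entity)

# read the entity back by key, then run a simple query over the kind
print(client.get(key))
query = client.query(kind="Task")
query.add_filter("done", "=", False)
for task in query.fetch(limit=10):
    print(task["description"])
```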
so let's go back and again look into the storage options so as of now we were in datastore let's click on this one so as i mentioned your cloud datastore basically gives you some additional features on top of your bigtable but google also realized that there was a need for rdbms feature support that is there were still features missing now here we have created a datastore and it says your database is ready to go just add data you can create entities
start putting in data and then you can go ahead and query this so if you would want to learn more about your data store you can just click on this one and that takes you to the complete documentation of native mode and data store mode what is firestore in native mode what is in data store mode what are pricing and locations how you choose a database mode what are the feature comparisons what you can do what you are allowed to do here what programming languages can be used different regions pricing and so on as of
now we'll click on this one and let's look at one advanced service which google cloud came up with when it comes to additional rdbms features so google created yet another bigtable based service called cloud spanner now you might not see it here but if you scroll down you should be able to see cloud spanner in the options here or did we miss it at the top so let's look again so it is here and you can click on spanner so cloud spanner was released in 2017 it basically supports a relational schema so it offers strong consistency for all queries which can be sql based and you can have multi-region deployment now when it comes to massive scalability requirements and strong consistency cloud spanner is a good option so it says cloud spanner is a managed mission critical globally consistent and scalable relational database so if you would want to use this then you will have to enable this api which shows you an option here saying try this api it's a managed service and it is one of google's most expensive database services there is also one more
database service which is cloud sql which can be used so when we talk about cloud spanner it is a fully managed relational database service it is massively distributed you can have millions of machines across hundreds of data centers with support for automatic sharding and synchronous replication it gives you low latency and schema updates without downtime giving your data high availability and reliability so we'll learn about cloud spanner in more detail later in other sessions
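just to show the shape of the api, here is a hedged sketch of running a query against spanner with the python client library google-cloud-spanner; the instance and database names are made up and assumed to already exist:

```python
from google.cloud import spanner

client = spanner.Client()                      # project and credentials come from the environment
instance = client.instance("demo-instance")    # hypothetical spanner instance
database = instance.database("demo-db")        # hypothetical database inside that instance

# run a strongly consistent read-only sql query
with database.snapshot() as snapshot:
    rows = snapshot.execute_sql("SELECT 1 AS ok")
    for row in rows:
        print(row)
```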
now we can also go back to storage and look at the different options which we have here so you have object storage you have spanner and you also have your sql based service which is yet another managed service offered by google cloud and that's called cloud sql so this is basically a service which allows you to have fully managed relational mysql postgresql and sql server databases google handles replication patch management database management and other things which are related to this fully managed database service it can allow you to handle terabytes of storage capacity with 40 000 iops and a huge amount of ram per instance so you can click on create
instance here and then you can choose one of the databases which you would want to use so cloud sql is a managed service you can choose mysql postgresql or sql server say for example i choose mysql now that basically tells me what the instance id is it sets up a password you can always change the password you can change the region you can choose your database version and then you can also look at other configuration options which cover the machine type which will be used your backup and recovery maintenance and all that and if you click on create this will basically create a fully managed sql service which allows you to straight away start using mysql on cloud thus allowing you to store your relational data
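once the instance is up you connect to it like any other mysql server; here is a hedged sketch using the third-party pymysql driver, assuming the cloud sql auth proxy is running locally and forwarding the instance to 127.0.0.1:3306 (user, password and database names are placeholders):

```python
import pymysql  # third-party driver, pip install pymysql

# assumes the cloud sql auth proxy is listening on 127.0.0.1:3306 for your instance
connection = pymysql.connect(
    host="127.0.0.1",
    user="root",
    password="your-password",   # the password set when creating the instance
    database="demo_db",         # a database you have created on the instance
)
with connection.cursor() as cursor:
    cursor.execute("SELECT VERSION()")
    print(cursor.fetchone())
connection.close()
```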
when we talk about storage options how can we not talk about a data warehouse solution or basically an option which allows you to run your queries directly on your data so you can use a data warehouse service which google cloud platform offers and that's called bigquery so basically you can be looking into the big data section and here you have an option called bigquery so this basically brings ease of implementation and speed building your own data warehouse can be expensive time consuming and difficult to scale so with bigquery you just load data and pay only for what you use when it comes to features you have capabilities such as processing billions of rows in seconds and if you would want to do real time analysis of streaming data that is also possible here so here we have clicked on bigquery which basically shows you the option where you can
start typing your query and test your data access for example if i have uploaded some data it shows me there are some queries which are saved here now i can schedule a query i can basically choose the format of a particular query by clicking on more here i have an option which says add data so i can pin it to a particular project i can explore public data sets or i can create a connection so if i click on explore public data sets then it takes me to a page from where you can get
different kinds of data sets which are already available which you can bring into your bigquery and start querying by default it shows that it is aligned to my project and i don't need to worry about it i can look at saved queries if i have already saved a particular query i can look at job history i can look at transfers scheduled queries and reservations so bigquery initially had its own version of sql which was slightly different from standard sql but in 2016 bigquery 2 was released which supports the sql 2011 standard and you can always select standard sql
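here is a hedged sketch of running a standard sql query from code with the python client library google-cloud-bigquery, using one of the public datasets mentioned above (the usa names dataset) so you don't need to load anything first:

```python
from google.cloud import bigquery

client = bigquery.Client()   # billing project and credentials come from the environment

# standard sql against a public dataset; the first terabyte scanned per month is free
sql = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    WHERE state = 'TX'
    GROUP BY name
    ORDER BY total DESC
    LIMIT 5
"""
for row in client.query(sql).result():
    print(row.name, row.total)
```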
now when it comes to bigquery pricing we need to remember that the storage cost is very low here approximately $0.02 per gigabyte per month which is almost similar to nearline storage where you also have a low cost of about $0.01 per gigabyte per month there is no charge for reading data from storage the cost is incurred when it comes to querying so one terabyte of queries per month is free and after that it costs a few cents per gigabyte and for high volume customers there is also flat-rate pricing which can be used so when you talk about querying you can save your query results you can create a data set to store the results now results are put in a temp table in cache and after you're done with that you can delete the data set and delete all the data so when you talk about loading data into bigquery you can get the data from cloud storage google drive cloud datastore stackdriver and other options you have cloud bigtable or other web interfaces
so you can load files in formats such as csv json or avro you can create a data set create a table and create it from a source by doing a file upload files of 10 megabytes or less can be uploaded using the web interface as an option you can also use the command line and then start working with your bigquery here you can also work with streaming data by pushing streaming data into your bigquery which allows you to add one record at a time and you could use something like cloud dataflow which allows you to use a particular pipeline we will learn about cloud dataflow later
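as a hedged sketch of the load path, here is how you might load a csv object from cloud storage into a table with google-cloud-bigquery; the project, dataset, table and bucket names are placeholders and the dataset is assumed to exist:

```python
from google.cloud import bigquery

client = bigquery.Client()
table_id = "my-project.demo_dataset.demo_table"   # placeholder project.dataset.table

# describe the source format and let bigquery detect the schema
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,
)
load_job = client.load_table_from_uri(
    "gs://my-demo-bucket/folder/sample.csv",      # placeholder cloud storage object
    table_id,
    job_config=job_config,
)
load_job.result()   # wait for the load job to finish
print(client.get_table(table_id).num_rows)
```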
so you can benefit from different features of bigquery and thus use google's offering to work on your structured data or i would say data which fits well in data warehouses now these are some of the storage related services which google cloud offers we will learn more about using bigquery and uploading data by adding a data set creating a connection or using a public data set so as of now i have this you can also access the command line to work with this but we will learn in detail about bigquery in later sessions now here you can scroll down and you also have an option within the big data space and that is your dataproc so when you talk about dataproc this is again a managed service which allows you to run spark or hadoop jobs especially if you are interested in big data workloads so for big data processing or machine learning you can always use cloud dataproc this uses compute engine instances under the hood but takes care of management of these instances so it's a layer on
top to spin up clusters it's a managed service it's cheaper because you pay only when your jobs are running it's fast because it is integrated with other google cloud services you have open source components pre-installed and dataproc is integrated with yarn to make cluster management easier when you talk about dataproc you can click on create cluster and that basically allows you to set up your cluster by choosing a particular region which you would want to use for example i'll choose europe-west4 here it tells what kind of machines you would want to use and by default it has populated a machine here which has 4 cpus and 15 gigabytes of memory now since you might be using a free account let's not go for the high end machine let's go for n1-standard-2 and then you can scroll down it shows the primary disk and the disk type and this one was for your master machine that is the machine which will have the master processes running then you have your worker node configuration where again let's choose a lower end machine and we can choose how many worker nodes you would have so it says minimum 2 you can choose the ssds and their capacity it talks about the yarn cores and yarn memory which will be allocated and here we then have the option of clicking on create so once you click on create this will basically spin up a cluster wherein you can straight away start submitting jobs to it
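the same cluster creation can be scripted; here is a hedged sketch with the python client library google-cloud-dataproc, using the region and machine types from this walkthrough and a made-up cluster name:

```python
from google.cloud import dataproc_v1

project_id = "my-project"       # placeholder project id
region = "europe-west4"

# the cluster api is regional, so point the client at the regional endpoint
cluster_client = dataproc_v1.ClusterControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

cluster = {
    "project_id": project_id,
    "cluster_name": "demo-cluster",   # made-up name
    "config": {
        "master_config": {"num_instances": 1, "machine_type_uri": "n1-standard-2"},
        "worker_config": {"num_instances": 2, "machine_type_uri": "n1-standard-2"},
    },
}
operation = cluster_client.create_cluster(
    request={"project_id": project_id, "region": region, "cluster": cluster}
)
print(operation.result().cluster_name)   # blocks until the cluster is ready
```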
once your cluster is ready you can go into the cluster submit a job and choose a type such as spark or any other application which you would want to run and then basically use this fully managed service which allows you to run your big data clusters you can obviously control access via roles or access control lists and you can have access at the project level based on your dataproc cluster or even at your worker nodes so we'll learn about dataproc in later sessions so as of now you see here the cluster is getting created it says the cloud storage bucket which it is using is this one now i can click on this and open the link in a new tab it still takes me into the console but now it is showing me the bucket which is being used by your dataproc cluster this is the bucket which is being used and it holds the underlying metadata which gets stored here so you can look at the cluster related folders click into these folders and then see the script output what it is doing and so on now i can come back here to my buckets which will basically show me what kind of buckets have been created so you see the dataproc
service automatically created some buckets which will be holding some data you also have some other buckets which were created by other services which we used the access control is fine grained in all cases and it also shows our own bucket so underneath it is using compute instances let's go to compute engine and look into vm instances so dataproc which has spun up a cluster is using the vm instances which we see running here it is using the buckets and it has made a cluster
ready to use so you can click on this cluster and that shows me my cluster related details if there are any jobs running what are vm instances what kind of configurations it has used and you can look at different details here you can look at the logs you can click on jobs and that will show you if you have basically run a job on this ready to use cluster so it says there is this particular job which was run which was a spark job you can click on submit and this one tells you what is
the job id and the region you would choose for example we will again choose europe-west4 it tells you the job type so you can run all these types of jobs in this cluster such as hadoop spark pyspark hive pig or presto you can give your jar file so if you have packaged your application as a jar you can mention that here you can pass in some arguments you can also add some other jar files add some properties and then click on submit which will run your job on this ready to use cluster
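submitting a job can also be done from code; here is a hedged sketch with google-cloud-dataproc that submits the stock spark example jar shipped on dataproc images to the cluster created above (project id and cluster name are placeholders):

```python
from google.cloud import dataproc_v1

project_id = "my-project"       # placeholder project id
region = "europe-west4"
job_client = dataproc_v1.JobControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

# a spark job: main class, a jar that is preinstalled on dataproc nodes, and arguments
job = {
    "placement": {"cluster_name": "demo-cluster"},
    "spark_job": {
        "main_class": "org.apache.spark.examples.SparkPi",
        "jar_file_uris": ["file:///usr/lib/spark/examples/jars/spark-examples.jar"],
        "args": ["1000"],
    },
}
operation = job_client.submit_job_as_operation(
    request={"project_id": project_id, "region": region, "job": job}
)
print(operation.result().driver_output_resource_uri)   # waits for the job to finish
```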
now since we have tested it i can basically go ahead and do a delete i don't want to incur any cost on these managed services which are running so there is a lot more information about using these services which google cloud platform offers and we can continue learning about them as we explore the google cloud console or even the command line option now we can come out of this one by clicking on this menu and then we have other options such as kubernetes you have cloud functions you have your networking related services
monitoring related services different kinds of tools and other big data specific services which you can learn about and for each of these services google has very good documentation available for example when you talk about cloud pub sub the publish subscribe messaging system it is a real-time managed service which was a pioneer in this space and today you also have a famous service such as kafka which is used for similar publish subscribe or messaging system kind of requirements
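as a hedged sketch of how that looks in code, here is publishing a message with the python client library google-cloud-pubsub, assuming the topic has already been created (project and topic names are placeholders):

```python
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "demo-topic")   # placeholder project and topic

# publish a small message with an optional attribute; result() returns the message id
future = publisher.publish(topic_path, b"hello from gcp", origin="tutorial-demo")
print(future.result())
```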
so to conclude about google cloud platform services you can always go to cloud.google.com and look into the documentation section now here you have a list of different featured products you also have a list of the different domains and services which google cloud offers and you can learn about all of them here you can click on featured products and that basically shows you compute engine cloud run cloud storage you have cloud sql bigquery vision ai you can scroll down and look at your artificial intelligence and machine learning related services platforms and accelerators api management so google cloud platform offers different services mainly in compute storage and databases you have networking related services big data specifics developer tools cloud ai identity and security iot management tools api platform and so on so basically learn about the gcp services which google cloud platform offers in detail you can play around with the different services which are offered by creating a free account and as i demonstrated you can use any one of these services quick start them basically connect to them put in your data or use a managed service to manage your data and benefit from google cloud platform thus having modernized infrastructure for your different use
cases all the best happy learning take care hi there if you like this video subscribe to the simplilearn youtube channel and click here to watch similar videos turn it up and get certified click here