Welcome to Docker 101. If your goal is to ship software in the real world, one of the most powerful concepts to understand is containerization. When developing locally, it solves the age-old problem of "it works on my machine," and when deploying in the cloud, it solves the age-old problem of "this architecture doesn't scale." Over the next few minutes we'll unlock the power inside this container by learning 101 different concepts and terms related to computer science, the cloud, and of course Docker. I'm guessing you know what a computer is, right? It's a box with three important
components inside: a CPU for calculating things, random access memory for the applications you're using right now, and a disk to store things you might use later. This is bare-metal hardware, but in order to use it we need an operating system. Most importantly, the OS provides a kernel that sits on top of the bare metal, allowing software applications to run on it. In the olden days you would go to the store and buy software to physically install it on your machine, but nowadays most software is delivered via the internet through the magic of networking. When
you watch a YouTube video, your computer is called the client, but you and billions of other users are getting that data from remote computers called servers. When an app starts reaching millions of people, weird things begin to happen: the CPU becomes exhausted handling all the incoming requests, disk I/O slows down, network bandwidth gets maxed out, and the database becomes too large to query effectively. On top of that, you wrote some garbage code that's causing race conditions, memory leaks, and unhandled errors that will eventually grind your server to a halt. The big question is: how do
we scale our infrastructure? A server can scale in two ways: vertically or horizontally. To scale vertically, you take your one server and increase its RAM and CPU. This can take you pretty far, but eventually you hit a ceiling. The other option is to scale horizontally, where you distribute your code to multiple smaller servers, which are often broken down into microservices that can run and scale independently. But distributed systems like this aren't very practical on bare metal, because actual resource allocation varies. One way engineers address this is with virtual machines, using tools
like a hypervisor, which can isolate and run multiple operating systems on a single machine. That helps, but a VM's allocation of CPU and memory is still fixed, and that's where Docker comes in, the sponsor of today's video. Applications running on top of the Docker Engine all share the same host operating system kernel and use resources dynamically based on their needs. Under the hood, Docker runs a daemon, a persistent process that makes all this magic possible and gives us OS-level virtualization. What's awesome is that any developer can easily harness this power by simply installing Docker Desktop.
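If you want to see that client-and-daemon split for yourself, the CLI can show you both halves. Here's a minimal sketch, assuming Docker Desktop is installed and running:

    # the Docker CLI (client) talks to the Docker daemon (server) over a local socket
    docker version   # prints separate Client and Server sections
    docker info      # shows daemon-level details like the storage driver and running containers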
It allows you to develop software without having to make massive changes to your local system. Here's how Docker works, in three easy steps. First, you start with a Dockerfile. This is like a blueprint that tells Docker how to configure the environment that runs your application. The Dockerfile is then used to build an image, which contains an OS, your dependencies, and your code, like a template for running your application. And we can upload this image to the cloud, to places like Docker Hub, and share it with the world.
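To make those three steps concrete, here's a rough sketch from the terminal; the image and account names are placeholders, not anything specific from this walkthrough:

    # 1. write a Dockerfile (we'll build one in a moment), then build it into an image
    docker build -t yourname/demo-app .
    # 2. share the image on a registry like Docker Hub
    docker push yourname/demo-app
    # 3. run the image as a container anywhere Docker is installed
    docker run yourname/demo-app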
But an image by itself doesn't do anything. You need to run it as a container, which is an isolated package running your code that, in theory, could scale infinitely in the cloud. Containers are stateless, which means that when they shut down, all the data inside them is lost, but that makes them portable, and they can run on every major cloud platform without vendor lock-in. Pretty cool, but the best way to learn Docker is to actually run a container. Let's do that right now by creating a Dockerfile. A Dockerfile contains a collection of instructions, which by convention
are written in all caps. FROM is usually the first instruction you'll see, which points to a base image to get started. This will often be a Linux distro and may be followed by a colon and an optional image tag, which in this case specifies the version of the OS. Next we have the WORKDIR instruction, which creates a source directory and cd's into it; that's where we'll put our source code, and all commands from here on out will be executed from this working directory. Next we can use the RUN instruction to invoke a Linux package
manager to install our dependencies. RUN lets you run any command, just like you would from the command line. Currently we're running as the root user, but for better security we could also create a non-root user with the USER instruction. Now we can use COPY to copy the code on our local machine over to the image. You're halfway there; let's take a brief intermission. Now, to run this code we have an API key, which we can set as an environment variable with the ENV instruction. We're building a web server that people can connect
to, which requires a port for external traffic; use the EXPOSE instruction to make that port accessible. Finally, that brings us to the CMD instruction, which is the command you want to run when starting up a container; in this case it will start our web server. There can only be one CMD per container, although you might also add an ENTRYPOINT, allowing you to pass arguments to the command when you run it. That's everything we need for the Dockerfile, but as an added touch we could also use LABEL to add some extra metadata, or add a HEALTHCHECK to make sure the container is running properly, or, if the container needs to store data that will be used later or shared by multiple containers, we could mount a volume to it with a persistent disk.
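Putting all of those instructions together, a Dockerfile for a small Node.js web server might look something like this; the Node base image, app files, and port are assumptions made for the example, not the exact file from the video:

    # base image: a small Linux distro with Node preinstalled (the :20-alpine tag pins the version)
    FROM node:20-alpine

    # create /app and run every following instruction from there
    WORKDIR /app

    # use the distro's package manager to install system dependencies
    RUN apk add --no-cache curl

    # create and switch to a non-root user for better security
    RUN addgroup -S app && adduser -S app -G app
    USER app

    # copy our source code from the local machine into the image
    COPY --chown=app:app . .

    # configuration the app reads at runtime (placeholder value)
    ENV API_KEY=changeme

    # document the port the web server listens on
    EXPOSE 8080

    # optional extras: metadata and a liveness check
    LABEL maintainer="you@example.com"
    HEALTHCHECK CMD curl -f http://localhost:8080/health || exit 1

    # the command that runs when the container starts; ENTRYPOINT + CMD lets you override the arguments
    ENTRYPOINT ["node"]
    CMD ["server.js"]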
OK, we have a Dockerfile, so now what? When you installed Docker Desktop, that also installed the Docker CLI, which you can run from the terminal. Run docker help to see all the possible commands, but the one we need right now is docker build, which will turn this Dockerfile into an image. When you run
the command, it's a good idea to use the -t flag to tag the image with a recognizable name. Notice how it builds the image in layers. Every layer is identified by a SHA-256 hash, which means that if you modify your Dockerfile, the unchanged layers stay cached and Docker only has to rebuild what has actually changed, and that makes your workflow as a developer far more efficient. In addition, it's important to point out that sometimes you don't want certain files to end up in a Docker image, in which case you can add them to the .dockerignore file to exclude them from the files that actually get copied in.
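A rough sketch of both, with placeholder names:

    # build the image from the Dockerfile in the current directory and tag it
    docker build -t demo-app .

and a minimal .dockerignore kept next to the Dockerfile (these entries are just typical examples):

    # files that should never be copied into the image
    node_modules
    .git
    .env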
Now open Docker Desktop and view the image there. Not only does it give us a detailed breakdown, but thanks to Docker Scout we're able to proactively identify security vulnerabilities in each layer of the image. It works by extracting the software bill of materials (SBOM) from the image and comparing it to a bunch of security advisory databases; when there's a match, it's given a severity rating so you can prioritize your security efforts.
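If you'd rather stay in the terminal, recent Docker Desktop releases also bundle a Docker Scout CLI plugin that can run the same kind of scan; treat the exact commands below as an assumption about what your installed version supports:

    # quick vulnerability summary for the image (requires the scout plugin)
    docker scout quickview demo-app
    # detailed list of known CVEs in the image's packages
    docker scout cves demo-app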
But now the time has finally come to run a container. We can accomplish that by simply clicking the Run button; under the hood it executes the docker run command, and we can now access our server on localhost. In addition, we can see the running container here in Docker Desktop, which is the equivalent of the docker ps command, which you can run from the terminal to get a breakdown of all the running and stopped containers on your machine. If we click on it, we can inspect the logs from this container or view its file system, and we can even execute commands directly inside the running container.
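The same moves from the terminal, sketched with placeholder names and an assumed port:

    # run the image in the background, publishing container port 8080 on localhost:8080
    docker run -d --name demo -p 8080:8080 demo-app
    # list running containers (-a includes stopped ones too)
    docker ps -a
    # inspect the logs, or open a shell inside the running container
    docker logs demo
    docker exec -it demo sh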
Now, when it comes time to shut it down, we can use docker stop to stop it gracefully, or docker kill to stop it forcefully. You can still see the stopped container here in the UI, or use docker rm to get rid of it. But now you might want to run your container in the cloud. docker push will upload your image to a remote registry, where it can then run on a cloud like AWS with Elastic Container Service, or be launched on serverless platforms like Google Cloud Run.
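Roughly what that lifecycle looks like from the terminal; the registry account name is a placeholder:

    # stop gracefully (or: docker kill demo to stop it immediately)
    docker stop demo
    # remove the stopped container
    docker rm demo
    # tag the image for your registry account and upload it
    docker tag demo-app yourname/demo-app:1.0
    docker push yourname/demo-app:1.0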
Conversely, you may want to use someone else's Docker image, which can be downloaded from the cloud with docker pull, and now you can run any developer's code without having to make any changes to your local environment or machine. Congratulations, you're now a bona fide, certified Docker expert. I hereby grant you permission to print out this certificate and bring it to your next job interview. But Docker itself is only the beginning. There's a good chance your application has more than one service, in which case you'll want to know about Docker Compose, a tool for managing multi-container applications. It allows you to define multiple applications and their Docker images in a single YAML file, like a frontend, a backend, and a database. The docker compose up command will spin up all the containers simultaneously, while docker compose down will stop them.
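A minimal compose.yaml along those lines might look like this; the service names, images, ports, and environment variable are illustrative, not from the video:

    services:
      frontend:
        build: ./frontend        # built from its own Dockerfile
        ports:
          - "3000:3000"
      backend:
        build: ./backend
        environment:
          - DATABASE_URL=postgres://db:5432/app
        depends_on:
          - db
      db:
        image: postgres:16       # pulled from Docker Hub
        volumes:
          - db-data:/var/lib/postgresql/data   # persistent volume for the database
    volumes:
      db-data:

Then docker compose up -d starts the whole stack in the background, and docker compose down tears it down again.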
That works on an individual server, but once you reach massive scale, you'll likely need an orchestration tool like Kubernetes to run and manage containers all over the world. It works like this: you have a control plane that exposes an API that can manage the cluster. The cluster has multiple nodes, or machines, each one containing a kubelet and
multiple pods. A pod is the minimum deployable unit in Kubernetes, which itself has one or more containers inside it. What makes Kubernetes so effective is that you describe the desired state of the system, and it will automatically scale up or scale down, while also providing fault tolerance to automatically heal if one of your servers goes down. It gets pretty complicated, but the good news is that you probably don't need Kubernetes. It was developed at Google, based on its Borg system, and is really only necessary for highly complex, high-traffic systems.
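To give a flavor of what "describing the desired state" means, here's a minimal sketch of a Kubernetes Deployment; the names, image, and replica count are made up for illustration:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: demo-app
    spec:
      replicas: 3              # the desired state: keep three copies of this pod running
      selector:
        matchLabels:
          app: demo-app
      template:
        metadata:
          labels:
            app: demo-app
        spec:
          containers:
            - name: demo-app
              image: yourname/demo-app:1.0   # the same image we pushed earlier
              ports:
                - containerPort: 8080

If a node dies, the control plane notices that the actual state no longer matches the desired state and schedules replacement pods elsewhere.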
If that sounds like you, though, you can also use extensions in Docker Desktop to debug your pods. And with that, we've looked at 101 concepts related to containerization. Big shout-out to Docker for making this video possible. Thanks for watching, and I will see you in the next one.