docker stack is my new favorite way to deploy to a VPS

Dreams of Code
Video Transcript:
For the past few months now, I've been using a VPS to deploy my applications. Initially I achieved this with Docker Compose, defining my entire application stack inside a docker-compose.yaml file, similar to what I showed in my previous video on setting up a production-ready VPS. For the most part this has been working well; however, there have been a couple of things I found a little unintuitive.

One of them is redeploying my service stack, which I've been doing by manually SSHing into my VPS and running the docker compose up command. While I've managed to make this work, it's not exactly the best developer experience, especially compared to using other platforms such as Vercel, Netlify, or Railway. Not only is it a bad developer experience, but because of the way Docker Compose works, it also comes with some undesired side effects. The most major of these is that redeploying your application stack with docker compose up can cause downtime: when Docker Compose redeploys, it begins by shutting down your already-running services before attempting to deploy the upgraded ones, so if there's a problem with your new application code or configuration, those services won't be able to start back up, and you'll have an outage.

Additionally, needing to SSH in and copy over the compose.yaml in order to redeploy prevents me from shipping fast, because performing upgrades is a manual process. I'd much rather have a solution that lets me ship remotely, either from my local machine or via CI/CD, similar to what those other platforms I mentioned provide. However, rather than throwing in the towel and using one of those, or pivoting to an entirely different solution such as Coolify, I decided to do some research and look for other options, and I ended up finding one that not only solves my issues with Docker Compose but also lets me use the same compose file I already have set up. That solution is Docker stack, which has quickly become my favorite way to deploy to a VPS.

The way Docker stack works is that it lets you deploy your Docker Compose files on a node with Docker Swarm mode enabled, which is much better suited to production services than Docker Compose. It supports a number of features I think are important for running a production service, such as blue-green deployments, rolling releases, secure secrets, service rollbacks, and even clustering. Not only this, but when combined with Docker context, I'm able to remotely manage and deploy multiple VPS instances from my own workstation, all in a secure and fast way.

For example, let's say I want to make a change to my guestbook web application service stack, which is running on a VPS and deployed via Docker stack, by adding in a Valkey instance. All I need to do is open up my docker-compose.yaml and add a few lines defining the Valkey service. Then, to deploy it, I run the docker stack deploy command, passing in the compose file I want to use and the name of my stack, which in this case is called guestbook. Once it's finished, I can use docker stack services to check the running services of the stack.
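The Valkey addition amounts to a new service entry in the compose file. This is only a minimal sketch of what that could look like; the image tag is an assumption, since the video doesn't show the exact lines:

```yaml
services:
  # ...the existing web and db services stay as they are...

  # Hypothetical Valkey service entry (image tag assumed)
  valkey:
    image: valkey/valkey:8
```

With that saved, the deploy and status check are along the lines of `docker stack deploy -c docker-compose.yaml guestbook` followed by `docker stack services guestbook`.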
I can see the guestbook stack now has Valkey up and running, all of this deployed remotely on my VPS from my local machine. In addition, I'm also able to manage and monitor my application from my local machine, such as viewing the services' logs or adding secrets securely. I've also managed to set up Docker stack to work with my CI/CD pipeline using GitHub Actions, meaning whenever I push a code change to the main branch of my repo, it'll automatically deploy my entire stack. So yeah, it's a much bigger improvement compared to using Docker Compose when working on a VPS.

You may be wondering how difficult it is to get set up. Fortunately, it's actually pretty simple; in fact, I'm going to walk you through the steps of getting it set up on a VPS from scratch, before showing you some of the ways you can use it.

To go along with this video, I've created a simple web application using Go that we're going to deploy. This app is a simple visitor guestbook that tracks the number of visits to the web page and presents that information to the user, along with a motivational quote. The code for this app is available on GitHub, which you can easily pull down yourself; there's a link in the description down below.

If we open up this code, you can see there's already a Dockerfile inside, as well as a docker-compose.yaml. If we open up that file, we can see two distinct services defined inside: the web application and a Postgres database. In addition to having the Dockerfile and Docker Compose already defined, the application also has a GitHub Action set up. Currently this performs two different automations: the first runs the automated tests, and if they pass, it moves on to the second step, which builds and pushes a new Docker image of the application to the GitHub Container Registry. The interesting thing to note here is that the Docker image is tagged with both latest and the commit hash found at the repo at the time the image is built. This makes it incredibly easy to correlate the Docker image with the code it was built from, which is going to be important later on when it comes to automated deployments.

Now that we know what the application looks like, let's go about getting it deployed. For that, we're going to need a VPS instance, and fortunately that's where the sponsor of today's video comes in: Hostinger, who have not only provided me with a VPS instance to use throughout this video, but also have a Black Friday sale going on until the 15th of December, meaning you can pick up a long-term VPS for an incredibly low price, up to 67% off. In my case I have the KVM 2 instance, which not only boasts two vCPUs and a comfy 8 GB of RAM, but also includes 100 GB of SSD storage and a huge 8 TB a month of bandwidth, which would set you back over $1,000 if you were using something like Vercel. You can pick up a KVM 2 instance yourself for only $5.
99 a month when you purchase a 24-month term. Or, if you'd like to go a little larger, you can put my instance to shame and get yourself a big daddy KVM 8, which boasts a massive 8 vCPUs and 32 GB of RAM, all for just $19.99 a month on a 24-month term. Additionally, if you use my coupon code dreamsofcode, you'll receive an additional discount on any of these instances, which is incredibly good value. If that wasn't enough, Hostinger are also throwing in some premium features for Black Friday, including free real-time snapshots of your VPS and free automatic weekly backups, making it incredibly easy to recover your instance in case something goes wrong. So to get your own VPS instance, visit hostinger.com/dreamsofcode and use my coupon code dreamsofcode to get that additional discount. A big thank you to Hostinger for sponsoring this video.

With our VPS in hand, let's set it up so we can deploy our Docker stack remotely. To begin, you'll want to make sure you're using the same operating system that I am, Ubuntu 24.04. Then you'll want to go through the additional steps of securing your root user, by adding a strong password and setting up your SSH public key. Next, if you have a spare domain name lying around, you may want to add a DNS A record pointing to your VPS; if not, you can buy a pretty cheap one from Hostinger if you like. I actually bought the zenful.site domain to use for this video for only a single dollar. Either way, once your VPS is set up, with your optional A record pointing to it, go ahead and SSH in as your root user.

One thing to note: if you're going to use this VPS as a production machine, I'd recommend going through the same steps I mentioned in my previous video on setting up a production-ready VPS, such as adding a user account, hardening SSH, and enabling a firewall. For this video I'm going to skip all that, just so we can get into the good stuff; however, if you don't feel like watching another video in addition to this one, I've created a step-by-step guide on the steps I would normally take, which you can find linked in the description down below.

Now that we're logged in, the next thing we want to do is install the Docker engine. This is pretty easy to do: head on over to the Docker website and copy and paste the two install commands into the terminal. The first adds Docker to the APT sources, and the second installs the Docker engine itself. You'll notice that in the second command I'm omitting both the buildx and compose plugins; we're not going to need them, so I'm reducing the amount of bloat installed on my system. Once Docker is installed, we can check that it's working by running docker ps, which shows us that we're good to go.

With our VPS set up, let's exit out of SSH, as we're going to deploy our application remotely. To do so, we first need to change our Docker host to be that of the VPS, which we can do in a couple of different ways. The first and easiest is to set the DOCKER_HOST environment variable, pointing it to the endpoint of our VPS. While this approach works, I prefer to use the docker context command instead, which works in a similar way but allows you to store and manage multiple Docker hosts, making it easy to switch between them when you have multiple machines. To create a new Docker context, we use the docker context create command, passing in the name we want to give it, then define the Docker endpoint using the --docker flag. For the value, I set my host to an SSH endpoint: the ssh:// protocol, followed by the username of my user, which is root, and the hostname of my VPS.
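Put together, the context setup looks something like this; the context name is a stand-in, and the hostname is the one used later in the video:

```shell
# Create a context whose Docker endpoint is the VPS, reached over SSH
docker context create my-vps --docker "host=ssh://root@zenful.site"

# Switch to it: subsequent docker commands now run against the VPS
docker context use my-vps
```

From here, something as plain as `docker ps` executes on the remote engine, not your local one.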
If you don't have a domain name set up, you can just use the VPS's IP address here instead. Now if I execute this command, my Docker context is created. The last thing to do is make use of it, with the docker context use command, passing in the name of the context we just created. Now, whenever we perform a Docker command, instead of taking place on our local machine, it will take place on the Docker instance of our VPS, allowing us to configure it remotely.

With our context defined, we're now ready to set up our node to use Docker stack. To do so, we first need to enable Docker Swarm mode on our VPS, which we can do using the docker swarm init command. Upon running this command, you should receive a token that allows you to connect other VPS instances to this machine in order to form a Docker Swarm cluster. While this is really cool, and something we'll take a look at another time, we're not going to do that in this video, so you can safely ignore this token, or save it somewhere else if you want to. Don't worry too much about losing it, as you can easily obtain it again if you need to.

With Swarm mode enabled, we can now deploy our application using the docker stack deploy command, passing in the path to our docker-compose.yaml with the -c flag. The last argument of this command is the name of the stack, which in my case I'm going to call zenful-stats. Now when I execute this command, we should see some output letting us know that the stack is being deployed, and once it's completed, I can open up a browser window and head on over to my domain name.
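The sequence above can be sketched as follows; the stack name matches the one used in the video, while the compose file path is assumed:

```shell
# One-time: put this node into Swarm mode (prints a worker join token)
docker swarm init

# Deploy (or later, redeploy) the stack from the compose file
docker stack deploy -c docker-compose.yaml zenful-stats
```

Re-running the same deploy command is also how upgrades happen: there's no separate "update" verb.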
At zenful.site, I can see that my app is now deployed. Additionally, this remote deployment also works with private images, like the one I have here. If I change my compose.yaml to make use of this private image, followed by running the docker stack deploy command, we can see that it deploys successfully. There is a warning message we receive, which is only really an issue if you're running a Docker Swarm cluster, which in this case we're not; to resolve it, you just need to pass the --with-registry-auth flag to the docker stack deploy command. With that, the application is now up and running, and we didn't even need to copy anything over onto our VPS.

Now, to be fair, you can actually use both Docker context and DOCKER_HOST to deploy Docker Compose remotely as well. In fact, this is what I did initially, once I discovered it. However, when doing so, I kept running into an issue that would cause my deployments to fail whenever I ran docker compose up. This was because of how Docker Compose manages secrets, which is that they need to be available on the host system in a file. While this in itself wasn't too difficult to set up, the issue I had was related to defining the file inside the compose.yaml.
Initially I used a relative path, which caused problems when running the commands remotely, as it would resolve against my local machine's filesystem instead of the path on the remote. I therefore needed to use the absolute path to the secret file as it existed on the host, but that meant I couldn't use Docker Compose locally. Additionally, there was no easy way to manage the file on the machine without resorting to SSH, and having this secret stored in plain text on the machine felt bad from a security perspective, and not very production-ready.

All of these issues were actually the main reason I started looking into Docker stack and Docker Swarm, as they have a much better approach to managing secrets: the docker secret command. This command allows us to create the secret inside our Docker host in a way that's both encrypted at rest and encrypted in transit. To show this in action, let's quickly open up the docker-compose.yaml and scroll down to where our database is defined. Here you can see I've been kind of naughty, as I've set the database password in both the web application and my database service as an environment variable. Shame on me. Let's go ahead and change this to use a Docker secret instead.

To do so, let's first create a new secret using the docker secret create command. The first argument of this command is the name of the secret we want to create, which in my case is going to be db-password. Then we need to specify the secret's value itself. As I mentioned before, Docker secrets are very secure, so we can't just enter the value on the command line; instead we need to either load it in from a file, or through standard input by using just a dash. To add a secret via stdin, you can use something such as the printf command on macOS or Linux, piping it into the docker secret create command. When I go ahead and execute this command, it returns the ID of the created secret.
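That creation step looks something like this; the password value is obviously a placeholder:

```shell
# Pipe the value over stdin ("-") so it never lands in a file on disk
printf "my-database-password" | docker secret create db-password -
```

Using printf rather than echo here also avoids a trailing newline sneaking into the secret value.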
We can also view our secrets by running the docker secret ls command. One thing to note is that this secret is now, well, secret: there's no way for us to retrieve it back out of Docker. For example, if I run the docker secret inspect command, you can see it gives us a bunch of information about the secret, but not the actual secret value itself. This is a good thing for security, but you'll want to make sure you're securely keeping the secret somewhere else as well.

With our secret deployed, we can now use it in a similar way to how we would with Docker Compose; however, rather than setting the secret as a file, we define it as external. Then, using the secret is pretty much the same as with Docker Compose: add it to the relevant services that need access to it, and set it in the associated environment variables. For the database, this is the POSTGRES_PASSWORD_FILE environment variable, which needs to be set to /run/secrets/db-password. For my web application, I've created the same kind of environment variable, which will load the secret from this file.

Now when I go to run this, we can see that our database redeploys and our application is up and running. Sort of. Actually, it's not: the database itself is working fine, but if I run docker ps, you can see that the new version of the web application is failing. This is because I'm accidentally using an old image version that doesn't have the database password file environment variable set up, so it's unable to connect to the database and exits early. However, you'll notice that if I open up a web browser and head over to my application, it's still running. This is because Docker stack has support for rolling releases, which means it keeps running the old, working configuration of my application while it tries to spin up a new instance to switch the traffic over to. It basically acts as a very simple blue-green deployment, out of the box.
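In the compose file, the secret wiring looks roughly like this. The service names, image tag, and the web app's variable name are assumptions based on the description, not lines taken from the video:

```yaml
secrets:
  db-password:
    external: true   # created out-of-band with `docker secret create`

services:
  db:
    image: postgres:17   # image tag assumed
    secrets:
      - db-password
    environment:
      # Postgres reads the password from the mounted secret file
      POSTGRES_PASSWORD_FILE: /run/secrets/db-password

  web:
    secrets:
      - db-password
    environment:
      # Hypothetical equivalent variable for the Go app
      DB_PASSWORD_FILE: /run/secrets/db-password
```

At runtime, each secret a service lists is mounted as a read-only file under /run/secrets/ inside the container.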
This behavior is configured using the following three lines, setting the start-first value of the deployment's update order configuration. Personally, I think this is a great option to enable, as it gives you rolling releases when upgrading your production services, which is especially important if you have automated deployments. Let's go ahead and quickly fix this deployment by changing the image tag to one that supports the DB password file environment variable, then redeploy it using the docker stack deploy command. Now when I go ahead and check this service, we can see that it's running successfully.

One thing to note is that while this start-first configuration is available in the Docker Compose specification, it doesn't actually work when you use it with Docker Compose, or at least it didn't when I tried it. This is because the Docker Compose and Docker stack file specifications are shared, which means there are documented configuration options that one or the other doesn't support: for example, Docker stack with the build configuration, and Docker Compose with the start-first update ordering.

In fact, another feature that Docker Compose doesn't have support for is built-in load balancing, which both Docker stack and Docker Swarm do. To show this in action, let me first scale the web application up to three replicas using the docker service scale command. Next, if I go ahead and tail the logs using the docker service logs command with the -f flag, you can see that the built-in load balancer is distributing these requests across each replica in a round-robin way. While you are able to scale up replicas with Docker Compose, you're only able to bind a single instance on a given port, which means that to effectively use load balancing you need an internal proxy such as Traefik or nginx.
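The deployment settings described above sit under the service's deploy key in the stack file; the replica count here is just illustrative:

```yaml
services:
  web:
    deploy:
      replicas: 3
      update_config:
        order: start-first   # bring the new task up before stopping the old one
```

Scaling and log-tailing are then along the lines of `docker service scale zenful-stats_web=3` and `docker service logs -f zenful-stats_web`, where the `zenful-stats_web` service name assumes the stack name used earlier plus a service called web.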
Now, to be fair, when it comes to my own production services, I actually still make use of Traefik to perform load balancing, mainly because it provides HTTPS and does a better job of forwarding client IPs. This is typically what my Traefik setup looks like, which configures load balancing for my web service and automatically generates SSL certificates as well.

As I just mentioned, there is one issue that I found when using Docker Swarm: because of the way it handles load balancing, it prevents the original client IP from being forwarded to your services. For some situations this is pretty annoying, and judging by the length of this GitHub issue, a number of other people have also encountered it. There is, however, an unofficial solution called the Docker Ingress Routing Daemon, which is used in production by a few companies to solve this problem. I want to take a look at how well that solution works in another video, probably when I take a look at clustering with Docker Swarm. In any case, for my own personal needs, using a load balancer such as Traefik works pretty well.

As I mentioned at the start of this video, another production-ready feature that Swarm provides is the ability to roll back a service to a previous deployment. This is useful in the event that a bug is deployed but isn't severe enough to fail the health check. To show this in action, if I go ahead and change the image of this deployment to one called broken-quote, followed by deploying it, when I open up a web browser you can see that the quote feature of my web app is broken, as the name implies. Fortunately, I'm able to roll this back pretty easily using the docker service rollback command, passing in the name of the service that I want to roll back. Now if I open up my web application again, you can see that the quotes are fixed.

That covers the basic overview of how I use Docker stack with my applications on a VPS. However, there's one last thing that I think is worth showcasing, which is how I use it for automated deployments with GitHub Actions.
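The rollback itself is a single command; the service name again assumes the stack and service names from earlier:

```shell
# Revert the service to its previous task definition (image, env, config)
docker service rollback zenful-stats_web
```

Swarm keeps the previous service spec around precisely so this one-liner can restore it.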
To do so, let's take a look at the pipeline workflow file found inside my guestbook web app, where I have automated deployments set up. Here you can see I have the same two jobs we saw before, to test and to build and push a Docker image. In addition to these, I also have another job called deploy, which is where the actual Docker stack deployment takes place. Let's go ahead and add this exact same job into my zenful-stats project.

Here you'll notice that I'm listing the build-and-push job in the needs field, which means it's required to pass in order for this job to run. Then, for the actual steps inside this job, there are two. The first checks out the code at the current commit, which is pretty standard in GitHub Actions. The second, however, is a third-party action to deploy the Docker stack. You can find the documentation for this action on the GitHub Actions Marketplace, which provides a full list of the inputs you can set. Let's take a look at these values while I configure it for zenful.site. First of all, let's change the name of the stack from guestbook to zenful-stats. Next, you'll notice the file property is set to docker-stack.yaml.
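The deploy job described here can be sketched roughly as below. Note that the transcript never names the Marketplace action, so the `uses:` line and every input name are placeholders for whichever stack-deploy action you pick, not the video's actual configuration:

```yaml
deploy:
  needs: build-and-push        # only deploy if tests passed and the image was pushed
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    # Placeholder: substitute the stack-deploy action you actually use
    - uses: example/docker-stack-deploy-action@v1
      with:
        name: zenful-stats
        file: docker-stack.yaml
        host: zenful.site
        user: deploy
        ssh_key: ${{ secrets.DEPLOY_SSH_PRIVATE_KEY }}
```

The shape is the important part: checkout, then an action that SSHes to the host as a restricted user and runs the stack deploy there.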
This file name is commonly used to differentiate between a Docker Compose configuration and a Docker stack configuration. That seems like a pretty good idea to me, so I'm going to rename my compose file to docker-stack.yaml. Underneath the file value, we have the hostname for our Docker stack deploy, which I'm going to change to zenful.site. Underneath this, we have our remaining two values: the first is a user named deploy, and the second is an SSH private key, which is set from a GitHub secret. In order for this to work, we need to set both of these up on our VPS, so let's take a look at how we can do that securely.

First of all, we need to create a new user on our VPS. The reason I prefer to create a new user for deployments is so that I can easily limit the permissions this user has. This is a good security measure to limit the amount of damage if the SSH private key happens to be compromised. We can add a new user to this VPS using the adduser command, setting the user's name; in my case I like to use the name deploy for my deployment users. Then, with the user created, the next thing to do is add them to the docker group using the usermod command, which allows them to perform Docker actions without needing elevated privileges via sudo.

Next, we need to create an SSH key pair for this user, using the ssh-keygen command; you'll notice that I'm doing this on my local machine rather than on the VPS. Once the key pair has been created, it generates two files, one being our private key and the second our public key. Let's add this public key to our user's authorized keys. To do so, first change into the new user on the VPS using the su command, entering the user's password; afterwards, we can create the .ssh folder inside their home directory using the mkdir command.
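On the VPS, that user setup can be sketched as below; the user name matches the video, while the key file path is an assumption:

```shell
# Create the dedicated deployment user
adduser deploy

# Let it talk to the Docker daemon without sudo
usermod -aG docker deploy

# On your LOCAL machine: generate the key pair for CI to use
ssh-keygen -t ed25519 -f ~/.ssh/zenful_deploy
```

Keeping the key pair off the VPS entirely means only the public half ever lands on the server.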
To add the public key to their authorized keys, copy it to your clipboard, then run the following command to paste it into the file. With that, we should now be able to SSH into this machine as our deploy user, which we can test using the following command.

Next, we want to restrict what commands this user can actually perform via SSH. This is another good security measure, which again will reduce the amount of damage if the SSH key is accidentally compromised. To do so, open up the authorized_keys file we just created and add the following text before the actual key itself. This restricts the user to only being able to perform the docker stack deploy command when using SSH with this key. We can test that this is the case by attempting to SSH in as our deploy user, which should be rejected; however, when I go to run the docker stack deploy command, that one should work.

With that, we're now ready to add the private key to our GitHub repository. To do so, navigate over to the Actions secrets page found inside the repo's settings, then click the big green "New repository secret" button, which brings up a form to create a secret with. Here you'll want to set the name to the same value as defined in the GitHub Action, which in my case is the deploy SSH private key. Then, for the actual secret value itself, go ahead and paste in the contents of the private key before saving the secret. Now if I go ahead and commit my code, followed by pushing it up, when I navigate over to my GitHub repo we should see the deploy job start, and it completes successfully.

With that, our automated deployment is set up. However, there's one last thing I like to do, which is to specify which image to use when it comes to deployments. If you remember, each image that we build in this pipeline is tagged with both the latest tag and the git commit hash of the code that the image was built from.
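The restriction is a command= prefix in front of the key's entry in authorized_keys. A sketch of the shape it takes, where the exact forced-command string is an assumption (it must match whatever the deploy action actually runs remotely), and the key material is elided:

```
command="docker stack deploy -c docker-stack.yaml zenful-stats",no-port-forwarding,no-X11-forwarding ssh-ed25519 AAAA... deploy@ci
```

With a forced command in place, whatever command the client asks for is ignored and this one runs instead, which is why a plain interactive SSH attempt gets rejected while the deploy still works.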
So, to make our deployments more deterministic, we want to make sure we deploy that same image tag. To achieve this, we can use an environment variable, replacing the reference to our Docker image tag in the compose.yaml with a substitution syntax that causes it to be loaded from an environment variable named GIT_COMMIT_HASH. If I go ahead and set this environment variable to the last image's hash and run the docker stack deploy command, we should see our service deployed with the image hash we specified. In addition, we can also specify a default value for this environment variable for the case where it isn't set; this is done with the default-value syntax, which in this case will set the default to latest.

With that, the last thing to do is set this environment variable in our deployment pipeline. This is done by setting the env_file option on the stack deploy action, then creating that env file using a step before this one. Here we're creating a file with a GIT_COMMIT_HASH environment variable set to the current github.sha value. Now if I commit and push up this code, we should see the pipeline working without issue, and we can test that it's using the correct image by making a code change to the page title, committing it, and pushing it to the GitHub repo. Then, once the pipeline completes, if I check out my deployed web application, we can see it's now running the new version with the updated title.

With that, we've covered the basics of how I use Docker stack when it comes to deploying on a VPS, including how I have it set up for automated deployments. Personally, as somebody who has come from mostly working with Kubernetes, I've been impressed with how lightweight yet functional Docker stack has been. While it's certainly not perfect, and does come with its own caveats, so far I've found it to be a really good solution for running my own applications. One thing I'm still yet to try is setting this up to work as a cluster.
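The image-tag substitution uses shell-style defaults, which compose-file interpolation follows. A quick sketch in plain shell of how the two cases resolve; the variable name matches the video, while the image path is assumed:

```shell
# Unset: interpolation falls back to the default after ":-"
unset GIT_COMMIT_HASH
echo "image: ghcr.io/example/guestbook:${GIT_COMMIT_HASH:-latest}"

# Set: the commit hash wins over the default
GIT_COMMIT_HASH=4f9c2ab
echo "image: ghcr.io/example/guestbook:${GIT_COMMIT_HASH:-latest}"
```

In the stack file itself, the corresponding line would read something like `image: ghcr.io/example/guestbook:${GIT_COMMIT_HASH:-latest}`.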
Given how easy everything else has been, though, I don't expect it to be too difficult. However, that's going to be a video for another time, so if you're interested, please let me know in the comments down below. Otherwise, I want to give a big thank you to Hostinger for sponsoring this video. If you're interested in obtaining your own VPS instance for a super low price, then either visit the link in the description down below or head on over to hostinger.com/dreamsofcode.