Recently I've been working on a brand new micro SaaS, and I've been having a lot of fun doing so. One thing that I've really appreciated is how easy it is nowadays to deploy applications to the cloud, with a huge number of platforms as a service making it simple to do so. Whilst these platforms can be pretty great, they're not always perfect. Because of the underlying business model, they're not well suited for long-running tasks or for transferring a lot of data, and doing so can sometimes result in an unexpectedly high bill. This contrasts with using something such as a VPS, which stands for virtual private server. These often provide much more consistent billing, whilst also mitigating some of the caveats that come with using a serverless platform.

Despite these benefits, however, I've always been rather hesitant to use a raw VPS when it comes to deploying my production services. This hesitancy is due to the perceived difficulty of setting up a VPS to be production ready. But is that actually the case? In order to find out, I decided to give myself a challenge: to see how difficult it would be to set up a production-ready VPS from scratch. As it turns out, it's actually a lot easier than I thought.

To go along with this challenge, I built a simple guestbook web app, with the goal of having it deployed on a VPS. Before deploying it, however, I decided to write out a list of requirements in order to define what production ready meant. To kick off these requirements, I first wanted to make sure I had a DNS record pointing to the server, followed by making sure that the application was up and running. After that came adding good security practices to the box, such as making sure that all HTTP communication took place over TLS and that the box itself would automatically provision and renew its certificates. As well as TLS, there would also need to be additional security measures in place, such as hardening OpenSSH to prevent unauthorized access, whilst also setting up a firewall to block any unnecessary ports.

In addition to ensuring that the application was secure, I also wanted to make sure that the deployment had a good user experience. This meant setting up high availability, as much as you can on a single node. To do this, I would need a load balancer so that I could distribute traffic across multiple instances in case one went down. As well as a good user experience, I also wanted a good developer experience. This meant setting up automated deployments, so that whenever I pushed up a change it would automatically be deployed onto the VPS within a couple of minutes, all whilst keeping the service available. Lastly, I wanted to make sure that if the website did become unavailable at any point, I would be notified, which meant setting up some sort of website monitoring.

With my requirements defined, the next thing to do was to consider some of the technical approaches I could take. The first condition I had was that I wanted to achieve this using simple tooling that didn't require too much domain expertise. This ruled out using a lightweight Kubernetes distribution such as k3s or MicroK8s. Additionally, I didn't want to use a full-featured solution such as Coolify, instead focusing on setting this up without any additional layers of abstraction. This also meant I didn't want to use any infrastructure as code such as Terraform, Pulumi, or OpenTofu, although I may consider migrating to one of these in the future.
With my requirements and technical considerations defined, it was now time to start setting up my production environment, which meant I needed to obtain a VPS. Fortunately, that's where the sponsor of today's video comes in: Hostinger, who have kindly provided a VPS instance for me to use throughout this video. If you're considering getting a VPS for your own applications, then Hostinger has some amazingly affordable rates. In my case, I have an instance size of KVM 2, which has two vCPUs and a whopping 8 GB of memory, costing only $6.99 a month when you sign up for a 24-month contract. It also comes with up to 8 TB of bandwidth a month; if you tried to transfer that much data on Vercel, it would end up costing you over $1,000. Not only this, but a VPS instance of this size for $6.99 a month is incredibly affordable, especially when you compare it to other options. Additionally, when checking out, if you use my coupon code dreamsofcode, you'll receive a further discount on the already low price. So to get your own VPS instance today, make sure to check out the link in the description below and use my dreamsofcode coupon code.

After obtaining the VPS instance from Hostinger, I was prompted to set it up using the Hostinger UI. The first thing to do was to choose an operating system template for my server. Here I had a choice between plain operating systems, operating systems with panels, or applications. The applications tab provides a number of preconfigured environments with specific, commonly used applications already installed; these include Docker, Ollama, and even one for VS Code if that's more your vibe. Whilst this is pretty cool for getting started quickly, in my case I wanted to install everything I needed from scratch, so I ended up going with a base operating system installation. As for which operating system, as much as I'd have loved to use Arch, I instead decided on Ubuntu 24.04, which is an LTS (long-term support) release and perhaps one of the most common operating systems when it comes to setting up a VPS. On the next screen I made sure to disable the Monarx malware scanner, as I wanted my server to be as bare as possible. Then it was time to set a strong password for the root user before adding an SSH public key to allow me to log in over SSH. Hostinger does provide some instructions on how you can both generate and add a new SSH key; however, in my case I decided to use my existing one, as it's tied to my YubiKey. This was done by first copying the public key to my clipboard and then pasting it into the SSH key box. With everything set up, I went and deployed my new VPS instance, waited for it to start, and then tested the login over SSH, which was working correctly.

Whenever I log into a new VPS, the first thing I do is add a new user account, as working as the root user is generally not advised. Adding a new user account is done with the adduser command, passing in the user's name; in my case I set this to elliot. Upon doing so, it prompted me to set the user's password and to fill out any additional information before creating the user. With my new user account created, the next thing to do was to ensure that this user also had sudo permissions so that I could elevate its access whenever I needed to. This was achieved with the usermod command, adding the user to the sudo group. Afterwards, I could test that everything was working correctly by first switching to the new user using the su command, before running a test sudo command.
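As a rough sketch of those steps, here's what the commands might look like. The key filename, the clipboard tool (wl-copy), and the username elliot are assumptions based on my own setup, so adjust them for yours.

```bash
# On the local machine: copy the SSH public key to the clipboard so it can
# be pasted into Hostinger's SSH key box (key name and clipboard tool will
# vary; wl-copy is for Wayland, use xclip or pbcopy elsewhere)
cat ~/.ssh/id_ed25519.pub | wl-copy

# On the VPS, logged in as root: create a non-root user and give it sudo
adduser elliot
usermod -aG sudo elliot

# Switch to the new user and confirm that privilege elevation works
su - elliot
sudo whoami   # should print: root
```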
With my user account added to the VPS, it was now time to move on to my first requirement: pointing a DNS record at the server. Surprisingly enough, I didn't have any spare domain names lying around, so I needed to purchase a new one. Fortunately, you can buy a domain name from Hostinger at an incredibly low price, so I decided to purchase the zenful.cloud domain name for the low price of $1.99 for the first year.
Once the purchase went through, I went about adding a DNS record pointing to the VPS using the Hostinger UI. To do this, the first thing I did was clear both the existing A and CNAME records that were automatically generated, as they were no longer needed. Then I was able to add a new record for the root domain, pointing it to the IP address of my server, which you can pull out of the node itself by using the ip addr command. All that then remained was to save the record and wait for it to propagate across the global DNS network. This can take a few hours, so whilst I waited I went about making myself a warm beverage before deciding to add some more security to the VPS.

If you're following along at home, you may want to install tmux on your VPS and work inside of it. By doing so, if your SSH connection happens to drop, you're able to easily reattach to the tmux session when you SSH back in. In my case, I'm using tmux on my local machine with a pretty reliable internet connection; if SSH did drop it would be a little annoying, but it didn't do so during the filming of this video, which took place over a couple of days. If you want to learn more about tmux and how to use it, I actually have another video on it which you can check out after this one.

In order to harden the VPS, the first thing I wanted to do was to disable password authentication for SSH, which generally isn't a good idea to leave enabled, as there are many automated bots out there performing SSH brute forcing. Before I was able to do that, however, I first needed to make sure that my non-root user also had a copy of my SSH public key. The easiest way I find to do this is to use the ssh-copy-id command from my local machine. This command requires existing authentication in order to copy the SSH public key, so I had to run it before disabling password auth. With my SSH public key copied over, I tested that everything was working by SSHing in, which allowed me to log in using key-based auth.

Now I was ready to disable password authentication. To do so, I first opened up the sshd_config file using Vim, then changed the value of PasswordAuthentication from yes to no. Whilst I was in this file, I also disabled the ability to log in as the root user and disabled PAM authentication, both of which are recommended for hardening SSH. Additionally, on Hostinger there was another file, a 50-cloud-init drop-in, where password authentication was also enabled, so I needed to either modify this file as well or just remove it. To apply these changes, all I had to do was reload the SSH service. Then I tested that everything was working correctly by running a quick check to make sure I could no longer log in as the root user at all. With that, my VPS was now hardened against SSH attacks, especially password brute forcing.

One additional approach that you can take is to change the port that SSH listens on from port 22 to something else. This can help to reduce the attack surface from automated scanners; however, personally I feel like this is more security by obscurity, so in my case I decided to leave SSH on the default port of 22. If you do decide to change it, make sure to take note of it, as you'll need to make some other changes later on.
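Here's a sketch of those hardening steps. The username, the server IP placeholder, and the exact name of the cloud-init drop-in file are assumptions based on my setup, so check what's actually present in /etc/ssh/sshd_config.d/ before deleting anything.

```bash
# From the local machine: give the non-root user a copy of the public key
# (run this while password auth is still enabled)
ssh-copy-id elliot@<server-ip>

# On the VPS: edit the SSH daemon config and set the following values
#   PasswordAuthentication no
#   PermitRootLogin no
#   UsePAM no
sudo vim /etc/ssh/sshd_config

# Hostinger's cloud-init drop-in re-enables password auth, so remove it
# (or edit it to match the settings above)
sudo rm /etc/ssh/sshd_config.d/50-cloud-init.conf

# Apply the changes, then confirm that root logins are now rejected
sudo systemctl reload ssh
ssh root@<server-ip>   # should fail with "Permission denied (publickey)"
```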
Once SSH had been hardened, I used nslookup to check whether the DNS record had propagated, which showed me that my zenful.cloud domain name was now resolving to the VPS's IP. I then confirmed this by using the domain name to SSH in.

With my DNS record set up, the next requirement was to get the web application up and running. As I mentioned before, I had built a web app to go along with this project: a simple guestbook application written in Go, where visitors to the website can leave a (hopefully nice) message. If you want to deploy this project yourself, or are just interested in the code, you can find a link to the GitHub repo in the description down below.

To get it running on the VPS, I decided to first take a naive approach, which was to clone the project down onto my Hostinger instance and build it there. In order to do so, I first needed to install the Go compiler, which was done through snap. Once the Go toolchain was installed, I could build the project using the go build command, which did take a little bit of time but was fortunately much quicker than, say, if I'd been using Rust. After it was built, I had a binary called guestbook which I could run. However, this unfortunately failed, as the application needs a database URL environment variable in order to connect to a Postgres database to store its data. Later on, I actually set up an instance of Postgres on the VPS, which makes a lot of sense given how much memory it has, coupled with the fact that Hostinger also provides 100 GB of SSD storage. In a true production environment, however, there's a lot more you need to do when it comes to setting up a database, so just keep that in mind. Before installing Postgres on the VPS, I first decided to test that the app was working by setting the database URL environment variable to a spare Postgres instance I had lying around. Then I ran the app again, which this time showed that the server was listening on port 8080.
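A rough sketch of that naive approach is below. The repo URL is left as a placeholder (it's linked in the description), and the exact environment variable value is illustrative; the app just expects a Postgres connection string.

```bash
# Install the Go toolchain via snap, then build the project on the box
sudo snap install go --classic
git clone <guestbook repo URL>   # link in the video description
cd guestbook
go build

# The app needs a Postgres connection string to start; this value is just
# an example pointing at a spare instance
DATABASE_URL="postgres://user:password@some-host:5432/guestbook" ./guestbook
```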
To test that everything was working, I opened up a web browser to the zenful.cloud domain at port 8080, which brought me to my application's homepage. So far, so good.

Whilst this approach of cloning down the repo and building it on the box does work, personally I'm not a big fan of compiling applications on my production server. Instead, I prefer to use containerization, which allows me to build an immutable image of my application for distribution, one that also happens to be versioned and configurable. This meant that rather than running the application binary directly, I decided to run it with Docker, through a Docker image. The project already comes with a Dockerfile, which you can view on GitHub, and I had also built and pushed a Docker image to the GitHub Container Registry, found in the project's packages. In addition to this, the project contains a compose.yaml, which allows me to easily use Docker Compose to deploy the project along with any other dependencies, such as Postgres, which also has a service reference inside the compose file. This meant I could use Docker Compose to deploy the entire production-ready application stack.

In order to do so, however, I first needed to install Docker and Docker Compose onto the VPS. The best way to do this is to follow the instructions found on the Docker website, which give a list of commands to enter. These commands add the Docker repository to apt, which allows you to install both Docker and the Docker Compose plugin via the apt install command. Once these had been set up, the Docker service was up and running; if it's not in your own case, you can run the relevant systemctl commands to enable it. The next thing to do was to add the user to the docker group using the usermod command, which prevents the user from needing sudo in order to interface with Docker.

Next, it was time to get the application stack deployed. To do so, I first needed to set up a password for my Postgres instance. Rather than hardcoding this into the compose.yaml, I decided to set it using a Docker Compose secret, which is defined in the compose file. This meant writing my password out into the specified file, which was done by first creating the db directory using mkdir, then echoing the password into the file. Then I could deploy my stack using the docker compose up command, which began by creating an instance of Postgres followed by an instance of the guestbook service.
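Here's a hedged sketch of those steps. The full Docker install commands live on the Docker docs page mentioned above, and the db/password.txt path is an assumption; check the secrets definition in the project's compose.yaml for the real file path.

```bash
# Install Docker Engine and the Compose plugin by following
# https://docs.docker.com/engine/install/ubuntu/ then, if the service
# isn't already running, enable and start it:
sudo systemctl enable --now docker

# Allow the non-root user to talk to the Docker daemon without sudo
# (log out and back in for the group change to take effect)
sudo usermod -aG docker elliot

# Write the Postgres password into the file referenced by the compose secret
mkdir -p db
echo "a-strong-password" > db/password.txt

# Deploy the whole stack in the background
docker compose up -d
```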
Now when I headed over to my web browser and typed in the zenful.cloud domain name followed by port 8080, I was greeted by a running instance of the guestbook web app, along with a corresponding line in the container logs. With that, I had my containerized service running with a domain name pointing to it, which meant I had completed three of my eight requirements.

The next production-ready step on my list was to enable a firewall. Whilst this is generally a good idea to improve your VPS security, I did run into a slight caveat when doing so, which I'll talk about in a bit. For this server, the only ports that I wanted open were the OpenSSH port, which was port 22 for me, the port for HTTP, which is port 80, and port 443, which is used for HTTPS. To achieve this, I decided to use the simple yet powerful ufw, which stands for Uncomplicated Firewall. Not only is ufw really simple to use, but it also comes installed by default on Ubuntu distributions, making it a great choice for this VPS.

To use it, I began by defining the firewall rules, the first of which was to deny all inbound network requests by default. Next, I allowed all outgoing requests by default. Lastly, I needed to allow inbound requests to the OpenSSH server so that I was still able to SSH into the VPS. Just be aware, this is a really important step: if I had forgotten to do this, I'd no longer be able to access the VPS, so it's vital to get this right before enabling the firewall. To allow connections to the OpenSSH server, I added a rule allowing any inbound traffic to port 22; if I had changed the SSH port to something else, such as 800, then I would have needed to allow that port instead. With that, I did one last check to make sure that I had configured the firewall correctly, and then enabled it by running the ufw enable command.
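A sketch of those ufw rules is below; the commented-out line mirrors the earlier aside about moving SSH off port 22.

```bash
# Deny all inbound traffic by default, allow all outbound traffic
sudo ufw default deny incoming
sudo ufw default allow outgoing

# Critical: keep SSH reachable before turning the firewall on
sudo ufw allow 22/tcp          # or: sudo ufw allow OpenSSH
# If SSH had been moved to another port, e.g. 800, it would instead be:
# sudo ufw allow 800/tcp

# Review the rules, then enable the firewall and check its status
sudo ufw show added
sudo ufw enable
sudo ufw status verbose
```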
With that, the firewall was now up and running. However, when I went and checked whether the website was still accessible, it actually was, despite being on a port I hadn't explicitly allowed. This was the caveat that I mentioned earlier, and it's caused by the fact that exposing a port with Docker bypasses the iptables rules defined by ufw. This is unfortunately a well-known issue, and whilst there are some workarounds, there's not really one great solution. One such workaround is to only expose the port on the loopback IP address of 127.0.0.1, using syntax like the following.
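Here's a minimal sketch of that loopback-only binding, assuming the guestbook service listens on port 8080 inside the container:

```yaml
services:
  guestbook:
    ports:
      # Binding to 127.0.0.1 keeps Docker from publishing the port on the
      # public interface, so it can't slip past the ufw rules
      - "127.0.0.1:8080:8080"
```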
Whilst this does work, it's not exactly ideal. Instead, the best way I found to make sure these ports aren't exposed is to just not define them in the Docker Compose file at all. But then how can anyone access the web application on port 8080? Well, the best way to do that is to set up a reverse proxy, which is what gets exposed to the internet instead. In the past I've typically used nginx as my go-to reverse proxy; however, this time, after doing some research, I ended up going with another option: Traefik, which is confusingly spelled T-r-a-e-f-i-k. Spelling issues aside, Traefik is pretty awesome and was probably one of the two biggest reasons why setting up this production-ready VPS was much easier than I thought it would be.

To begin setting up the reverse proxy, I wanted Traefik listening on port 80, forwarding any HTTP requests containing the zenful.cloud host header to my guestbook service. To add Traefik to the stack, I added a new service into my compose.yaml called reverse-proxy, pointing it at the traefik image with tag v3.1, followed by passing two arguments to the command field. Then I specified the ports: 80 for HTTP and 8080 for the web UI, so that I could access the Traefik dashboard for debugging. The last thing to add was the Docker socket as a volume for the container, so that Traefik can listen for Docker events such as new services being spun up, which is why Traefik works so well with Docker Compose.

With the new service added, I restarted Docker Compose and added both port 80 and port 8080 to the ufw allow list. Whilst this step may not be required, given that Docker was bypassing the iptables rules anyway, it was worth playing it safe in my opinion, just in case there were any race conditions between the two services writing to iptables. To test that everything was working, I opened up a new browser window and headed over to zenful.cloud, where I received a 404 Not Found. Whilst this is an HTTP error code, it showed me that Traefik was set up correctly, which I could then confirm by heading over to port 8080, which brought me to the Traefik web UI dashboard.

Upon confirming that Traefik was working, the next step was to set it up so that HTTP requests would resolve to the guestbook service. I achieved this by adding a single line to the guestbook service inside the Docker Compose file. This line adds a label to all of the guestbook Docker containers, telling Traefik to route any HTTP requests that contain the zenful.cloud host to this service.
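Put together, the relevant parts of the compose.yaml would look roughly like this. The service names and the two command arguments are my best reconstruction of a standard Traefik-with-Docker setup, so check the repo's compose file for the exact values:

```yaml
services:
  reverse-proxy:
    image: traefik:v3.1
    command:
      # Discover services via the Docker provider and enable the
      # (insecure, debug-only) dashboard on port 8080
      - "--providers.docker"
      - "--api.insecure=true"
    ports:
      - "80:80"       # HTTP entry point
      - "8080:8080"   # Traefik dashboard
    volumes:
      # Let Traefik watch Docker events for new or changed services
      - /var/run/docker.sock:/var/run/docker.sock

  guestbook:
    labels:
      # Route requests with the zenful.cloud host header to this service
      - "traefik.http.routers.guestbook.rule=Host(`zenful.cloud`)"
```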
Because Traefik listens to events on the Docker socket, this was all I needed to do in order to set up reverse proxying. Pretty simple. After redeploying with Docker Compose, I was able to visit the zenful.cloud domain in my browser, and it now resolved to my guestbook web app. With that, Traefik was working as my reverse proxy and my DNS record was resolving to the guestbook web app.

But Traefik's abilities don't stop there, and I was able to use it to mark off another requirement on my checklist: setting up a load balancer. To show this in action, I first scaled the number of guestbook instances up to three replicas using a Docker Compose command. Then, when I refreshed the page a number of times, I could see in the logs that the requests were being balanced across the three instances. All of this was handled under the hood by Traefik, without needing any additional configuration, which is really nice. Now, you may be thinking: what's the point of a load balancer on a single node, as it doesn't exactly improve performance? Whilst that's true, it does do something else: it improves the reliability of the service through increased availability. For example, if one of the guestbook instances goes down due to, say, a panic, Traefik can route requests to the other two instances whilst the failing one comes back online. This helps to improve the overall user experience by keeping availability high, so having three instances running is usually a good idea, especially as it doesn't cost anything extra given how much memory we have on this VPS. To make this change persistent, I added a replicas block to the guestbook service in the Docker Compose file, which also meant I could now check off the load balancer from my production-ready requirements.

As well as load balancing, Traefik also provides another feature on our list: TLS certificates. In the past I've typically achieved this for nginx using Let's Encrypt with Certbot, which is a tool that both obtains certificates and automatically renews them for you. For this VPS, however, I didn't need it, as Traefik already has this functionality built in. As I mentioned before, Traefik is really doing a lot of the heavy lifting when it comes to this production-ready VPS. In order to enable automatic generation of TLS certificates, all I needed to do was add a few lines to the reverse-proxy service's command list. The first prevents any Docker containers from being exposed by default; this means that in order for Traefik to expose a service through its proxy, the service needs an explicit label added to it. Next, I defined an entry point called websecure, pointing it to port 443, which is the port used for HTTPS. Underneath this, I added the arguments needed to define a certificate resolver called myresolver in order to generate the TLS certificates. The first of these arguments defines the challenge type, which in my case was the TLS challenge; next I defined the email address to be associated with the certificate request, followed by the location where the certificate data should be stored. Afterwards, I replaced the reference to port 80 with port 443, meaning that the load balancer would only be accessible over HTTPS. Lastly, I defined a volume mapping of letsencrypt to the /letsencrypt directory of the container.
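A sketch of what those changes might look like in the compose.yaml is below. The email address is a placeholder, and the exact argument names are my reconstruction of a fairly standard Traefik v3 ACME setup rather than a copy of the project's file:

```yaml
services:
  guestbook:
    deploy:
      replicas: 3   # keep three instances running for availability

  reverse-proxy:
    image: traefik:v3.1
    command:
      - "--providers.docker"
      # Only proxy containers that explicitly opt in via a label
      - "--providers.docker.exposedbydefault=false"
      - "--api.insecure=true"
      # HTTPS entry point
      - "--entrypoints.websecure.address=:443"
      # Certificate resolver using the Let's Encrypt TLS challenge
      - "--certificatesresolvers.myresolver.acme.tlschallenge=true"
      - "--certificatesresolvers.myresolver.acme.email=me@example.com"
      - "--certificatesresolvers.myresolver.acme.storage=/letsencrypt/acme.json"
    ports:
      - "443:443"     # replaces the previous port 80 mapping
      - "8080:8080"
    volumes:
      - letsencrypt:/letsencrypt
      - /var/run/docker.sock:/var/run/docker.sock
```

For the one-off scaling test mentioned above, something like `docker compose up -d --scale guestbook=3` does the trick before the replicas block makes it permanent.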
I then added three more labels to the guestbook service. The first of these is the explicit label allowing Traefik to proxy to this service, which was now needed given that I had just disabled automatic exposure. The next label told Traefik that, as well as matching the zenful.cloud host, requests would also need to come in on the websecure entry point (i.e. port 443) in order to be routed to this service. And the third and final label told the router to use the certificate resolver that I had just defined. The last thing I needed to add to the file was the letsencrypt Docker volume that I was using to store the certificate data.

With everything added in order to configure TLS, I restarted Docker Compose and waited about five minutes for the certificate request to take place. Once it was completed, I was able to access my guestbook over HTTPS. Unfortunately, however, this setup had another issue: because I had disabled the HTTP port, I could no longer access the website by just typing its domain name into the browser, which by default attempts to resolve the HTTP protocol on port 80. Because of this, any such attempt would eventually result in a timeout. To fix this, I needed to set up an HTTP redirect, which would redirect any HTTP requests on port 80 to HTTPS.
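The guestbook labels, the volume definition, and the HTTP-to-HTTPS redirect might look roughly like this. Again a sketch: the redirect arguments follow Traefik's standard entry-point redirection options rather than being copied from the project:

```yaml
services:
  reverse-proxy:
    command:
      # Re-open port 80 purely so it can redirect everything to HTTPS
      - "--entrypoints.web.address=:80"
      - "--entrypoints.web.http.redirections.entrypoint.to=websecure"
      - "--entrypoints.web.http.redirections.entrypoint.scheme=https"
    ports:
      - "80:80"

  guestbook:
    labels:
      - "traefik.enable=true"   # opt in, since exposedbydefault is off
      - "traefik.http.routers.guestbook.rule=Host(`zenful.cloud`)"
      - "traefik.http.routers.guestbook.entrypoints=websecure"
      - "traefik.http.routers.guestbook.tls.certresolver=myresolver"

volumes:
  letsencrypt:   # stores the Let's Encrypt certificate data
```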