In this video we look at the most asked Terraform interview questions along with answers, and also learning resources to help you out. All right, so I'm here on my computer screen and I have all these different Terraform interview questions along with answers and learning materials to help you understand each topic properly. In this document I have mostly real-time, scenario-based Terraform interview questions, but I've also included some advanced Terraform interview questions like CI/CD with Terraform, automated testing in Terraform, upgrading the Terraform version, and a lot more, so make sure you watch this video till the end.
And if you want me to share this document with you, comment down "Terraform interview questions and answers" and I will share this document as a PDF on LinkedIn, and also put the link to this document in the video description. Now, with that being said, let's get started with this Terraform interview questions and answers video. Let's go.

So whenever you sit for a Terraform interview, the first typical question you will get is: what is Terraform and how does it work? We all know Terraform is an infrastructure-as-code tool that lets you write code for the infrastructure you want to create in the cloud. Once you write the code, you run the Terraform commands and Terraform creates that infrastructure in the cloud. Terraform does this using the state file. A state file in Terraform is a file that stores data about all the infrastructure managed through Terraform, and Terraform compares the state file with the actual infrastructure in the cloud to tell us what is going to be created or deleted according to the configuration you have in these files. So this is how Terraform works, and if you look at the answer here: Terraform is an IaC tool that lets you write code to define and manage your infrastructure. You describe your desired infrastructure in configuration files, which are these .tf files here, and then Terraform figures out what needs to be done to reach that state and makes it happen by interacting with cloud providers or other infrastructure platforms. Here is the workflow: you first write the code, you then plan to check what is going to be created or deleted, and then you apply to have that infrastructure in the cloud. So this is how Terraform works.
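To make this concrete, here is a minimal sketch of what such a configuration and workflow can look like; the resource, AMI ID, and instance type are placeholders, not the exact files shown in the video:

```hcl
# A minimal example resource; the AMI ID and instance type are placeholders.
resource "aws_instance" "web" {
  ami           = "ami-0abcdef1234567890"
  instance_type = "t3.micro"
}

# Typical workflow against this configuration:
#   terraform init    # download providers and set up the backend
#   terraform plan    # preview what will be created, changed, or destroyed
#   terraform apply   # make the real infrastructure match the configuration
```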
Moving on, the next question is a scenario-based question: a DevOps engineer manually created infrastructure on AWS and now there is a requirement to use Terraform to manage it; how would you import these resources into the Terraform code? So according to this question, an engineer has created some infrastructure on AWS manually and now they want that infrastructure to be managed through Terraform. How can you do that? Let's say it is an EC2 instance created manually through the AWS console and now you want it to be managed through Terraform. For infrastructure to be managed through Terraform, it has to be present inside the state file, which is this state file here. So we start by creating the configuration first: we write the code, which is going to be a resource block for this particular instance, specifying the instance type, the AMI used, and all the other settings. Once you write the code, to get the resource into the state file we run the terraform import command. If you look at the answer here, we first write the Terraform configuration for the resources that we want managed through Terraform; once we write the code, we run the terraform import command, and here is the command: terraform import with the resource type and the unique ID. So we run the terraform import command for each resource, specifying the resource type and its unique identifier. Once you follow these two steps, the infrastructure will be managed through Terraform and not manually, and we have to repeat this process for each resource that we want to manage through Terraform.
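As a rough sketch of those two steps, assuming the manually created resource is an EC2 instance (the AMI, instance type, and instance ID below are placeholders):

```hcl
# Step 1: write a resource block that matches the manually created instance.
resource "aws_instance" "legacy_web" {
  ami           = "ami-0abcdef1234567890" # placeholder, match the real AMI
  instance_type = "t3.micro"              # placeholder, match the real type
}

# Step 2: import the existing instance into the state, using its unique ID:
#   terraform import aws_instance.legacy_web i-0123456789abcdef0
```

After the import, a terraform plan helps confirm the configuration matches the real instance and there is no unexpected diff.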
Now, if you want to learn more about how terraform import works, here is a great blog by HashiCorp itself where they explain how you can use the terraform import command to bring infrastructure under Terraform management rather than managing it manually.

Moving on, the next scenario-based question is: you have multiple environments (dev, stage, prod) for your application and you want to use the same code for all these environments; how can you do that? So according to the question, there are different environments for an application and you want to use the same Terraform code for all of them. To achieve this we actually have two different approaches: the first is using Terraform modules and the second is using Terraform workspaces. So what is a Terraform module? A Terraform module is a block of code, or a code template, for infrastructure components; you define it once and then reuse it with different configurations for various environments by passing in different parameters or variables. That is the first approach where you can use the same code for different environments. The second one is using Terraform workspaces: using workspaces you will have different state files for different environments while using the same code. Terraform workspaces provide a way to manage separate states for the same set of configuration files; each workspace maintains its own state file, allowing you to work on different environments concurrently without them interfering with each other. Here is a great blog that will help you understand how you can use the same code with multiple environments, using both the Terraform module approach and the Terraform workspace approach; I recommend checking it out if you want to learn more about using the same code with different environments.
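Here is a minimal sketch of both approaches; the module path, its variables, and the tfvars file names are hypothetical, not taken from the blog mentioned above:

```hcl
# Module approach: the same module reused with different inputs per environment.
module "network_dev" {
  source      = "./modules/network" # hypothetical local module
  environment = "dev"
  cidr_block  = "10.0.0.0/16"
}

module "network_prod" {
  source      = "./modules/network"
  environment = "prod"
  cidr_block  = "10.1.0.0/16"
}

# Workspace approach: one configuration, one state file per environment:
#   terraform workspace new dev
#   terraform workspace select dev
#   terraform apply -var-file=dev.tfvars
```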
Moving on, the next question is: what is the Terraform state file and why is it important? We already know the state file is something that holds information about the resources managed through Terraform, and it is usually a JSON file. So if you look here: the Terraform state file is a JSON or binary file that stores the current state of the managed infrastructure. The state file is like a blueprint that stores information about the infrastructure you manage, and it is crucial because it helps Terraform understand what is already set up and what changes need to be made; by comparing the desired state with the current one in the state file, Terraform can make accurate updates to your infrastructure. So the state file is very important for Terraform to work, because using the state file Terraform compares the configuration with what you have in the cloud and then creates or deletes things accordingly.
The next question is a scenario-based one: a junior DevOps engineer accidentally deleted the state file; what steps should we take to resolve this? We already know the state file is very important, and in this question a junior DevOps engineer has accidentally deleted it, so what steps should we take to recreate it or otherwise resolve this? Since the state file is so important, it is always recommended to take backups, so the first step is to recover from a backup: if a backup is available, try to restore the state file from it. If there is no backup, then you need to manually recreate the state file, which is very time-consuming, and you can do this using the terraform import command: you check what resources exist in the cloud and, using terraform import, you import every single resource back into the state file to recreate it. Here is another great blog where the author accidentally deleted the state file and then recreated it using the terraform import command, and here is the command showing how they used terraform import to rebuild the state. So make sure you always take backups; if there are no backups, you will have to recreate the state manually with terraform import, checking each resource in the cloud. Once you do that, make sure to review and monitor that everything is working fine.

If you get this question in an interview, the follow-up question would be: what are some best practices for managing the Terraform state file? As we know, the state file is important (I've been continuously saying this), so here are some best practices to make sure you manage your state file properly.
The first best practice is to use remote storage: rather than storing your state file locally on your machine like this, you can store it in remote backends like S3, Azure Blob Storage, Consul, and more. You should always store the state file remotely for safety, collaboration, and version control. Second is state locking: when you store your state files in remote backends you will obviously have collaboration, which means multiple people will be updating the state file by running different Terraform commands, so you need to enable state locking to make sure there are no conflicts when multiple people change the state file at the same time; state locking prevents conflicts in concurrent operations. Next is access control: make sure only authorized people or services have access to the state file, to avoid deletion or corruption. Next is automated backups: set up automated backups to prevent data loss for the state file. And lastly, environment separation: maintain separate state files for each environment, or use Terraform workspaces to manage multiple state files. So those are some best practices for managing the Terraform state file.
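As a minimal sketch of the first two practices (remote storage plus locking), assuming an S3 bucket and DynamoDB table already exist; the bucket, key, and table names are placeholders:

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket" # placeholder bucket name
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-lock"      # placeholder table, enables state locking
    encrypt        = true
  }
}
```

Enabling versioning on the bucket also gives you a simple form of automated backup for the state.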
Let's move on; the next question is: your team is adopting a multi-cloud strategy and you need to manage resources on both AWS and Azure using Terraform, so how do you structure your Terraform code to handle this? According to the question, your team is adopting a multi-cloud strategy where you want to create resources on AWS as well as on Azure, so how can you do that? We already know Terraform is cloud agnostic, which means it can work with different clouds at once, so you can create resources on AWS, on Azure, and also on GCP. You do this by first defining the providers: if you want to create resources in AWS you have to define AWS as a provider, and in this case we are creating resources in AWS and Azure, so we define providers for both AWS and Azure. Once you define the providers, you then start writing the code for the resources you want to create in AWS and Azure. So there are two steps: first you define the providers, then you write the code for the different resources you want to create.
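Here is a minimal sketch of those two steps; the provider versions, region, location, and resource names are placeholders, not values from the document:

```hcl
terraform {
  required_providers {
    aws     = { source = "hashicorp/aws", version = "~> 5.0" }
    azurerm = { source = "hashicorp/azurerm", version = "~> 3.0" }
  }
}

# Step 1: define one provider per cloud.
provider "aws" {
  region = "us-east-1"
}

provider "azurerm" {
  features {}
}

# Step 2: write resources against each provider.
resource "aws_s3_bucket" "logs" {
  bucket = "example-logs-bucket" # placeholder bucket name
}

resource "azurerm_resource_group" "app" {
  name     = "example-rg"
  location = "East US"
}
```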
Moving on, the next scenario-based question is: there are some bash scripts that you want to run after creating your resources with Terraform; how would you achieve this? So you have some bash scripts that you want to run after the resources are created; you can do this using provisioners in Terraform. There are three different types of provisioners: the file provisioner, the local-exec provisioner, and the remote-exec provisioner. For running bash scripts we can use the local-exec and remote-exec provisioners. The difference between the two is that local-exec provisioners run commands on your local machine, while remote-exec provisioners run commands or scripts inside your remote machines. Here is a simple example: we are using a remote-exec provisioner and running some commands; the first command gives execute permission to the script you want to run, and the second command actually runs the script. When you use the remote-exec or file provisioner, you also need to define a connection block, which describes how Terraform will connect to the remote machine where you want to run your scripts. You can see the explanation here: in this configuration we use a remote-exec provisioner which executes a bash command on a remote machine via SSH, and you need to provide the necessary connection details such as the SSH user, private key, and host. So this is how you can run bash scripts using the remote-exec provisioner on a remote machine; if you want to run them locally, you can use the local-exec provisioner as well.
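As a rough sketch of that pattern (the AMI, SSH user, key path, and script name are placeholders; the example shown in the video may differ in its details):

```hcl
resource "aws_instance" "app" {
  ami           = "ami-0abcdef1234567890" # placeholder
  instance_type = "t3.micro"

  # How Terraform connects to the remote machine for file/remote-exec provisioners.
  connection {
    type        = "ssh"
    user        = "ubuntu"
    private_key = file("~/.ssh/id_rsa")
    host        = self.public_ip
  }

  # Copy the script onto the instance, then make it executable and run it.
  provisioner "file" {
    source      = "scripts/setup.sh"
    destination = "/tmp/setup.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/setup.sh",
      "/tmp/setup.sh",
    ]
  }
}
```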
Moving on, the next question is: your company is looking for ways to enable high availability; how can you perform blue-green deployments using Terraform? This is a very nice question and I have gotten it a lot of times in interviews. So what is a blue-green deployment? A blue-green deployment is a strategy where you have two identical environments, a blue one and a green one, like this. The question is asking how you can use Terraform to set up a blue-green environment, and you can obviously use Terraform to do this: Terraform facilitates it by defining two sets of infrastructure resources with slight variations, so maybe you have two different Auto Scaling groups or two different Azure virtual machine scale sets. What you do is create the new environment (the green one) alongside the existing one and test whether everything is working in the new environment; if everything works properly, you then switch the traffic over, either using load balancers or DNS records. So you first create the new environment alongside the existing one, you validate that everything is working fine in the green environment, and if it is, you use an application load balancer or DNS records to switch traffic between the two. To understand this more clearly, I also have another blog/documentation by HashiCorp itself where they describe how you can use Terraform for blue-green deployments, as well as for canary and rolling deployments, so go through it to understand more about how you can set this up with Terraform.
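One hedged sketch of the traffic-switch step, assuming an ALB (aws_lb.app) and two target groups already defined elsewhere as aws_lb_target_group.blue and aws_lb_target_group.green; these names, the listener port, and the weight variable are illustrative, not taken from the HashiCorp guide:

```hcl
variable "green_weight" {
  description = "Share of traffic (0-100) routed to the green environment"
  type        = number
  default     = 0
}

resource "aws_lb_listener" "app" {
  load_balancer_arn = aws_lb.app.arn # assumes an aws_lb.app resource exists
  port              = 80
  protocol          = "HTTP"

  default_action {
    type = "forward"

    forward {
      target_group {
        arn    = aws_lb_target_group.blue.arn
        weight = 100 - var.green_weight
      }
      target_group {
        arn    = aws_lb_target_group.green.arn
        weight = var.green_weight
      }
    }
  }
}
```

Raising green_weight from 0 to 100 (after validating the green environment) shifts traffic over, and lowering it again is the rollback.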
Moving on, the next question is: your company wants to automate Terraform through CI/CD pipelines; how can you integrate Terraform with CI/CD pipelines? This is again very important because every company is using CI/CD pipelines to automate Terraform infrastructure provisioning, so make sure you go through this answer properly. I have also explained this very thoroughly in this particular video, where we used GitLab CI/CD to automate Terraform resource creation and deletion: inside that video we created a CI/CD pipeline in GitLab with different stages for init, plan, apply, and destroy. Similarly, for you to automate Terraform through a CI/CD pipeline, you first need to push the Terraform code to a remote repository like GitHub or GitLab, wherever you want to set up your pipeline. Once your code is there, you can start writing the CI/CD script, which will have different stages for init, validate, plan, and apply. For apply you can make it manual: you create a merge request every time there is a change in the Terraform code, and only once it is approved should the resources be created; you can set up a similar approval for deletion as well. So here are the different steps: first, commit the code to a version control system; set up a CI/CD pipeline that runs every time you make a change to your Terraform code and push it to the repository; in the pipeline, execute Terraform commands such as init, validate, and plan to ensure the configurations are valid and to generate an execution plan, which you do by defining different stages in the pipeline; then use the terraform apply command to create or modify infrastructure based on approved changes. So whenever you make a change, you create a merge request, and only when that merge request is accepted should the apply stage run. Optionally, you can also use testing and verification tools to validate the infrastructure, so you can include different frameworks or tools for testing or to make sure your Terraform syntax is correct. Finally, you can trigger additional pipeline stages for application deployment, testing, and release, or add separate CI/CD pipelines for those.
So this is how you can integrate CI/CD pipelines with Terraform; I highly recommend checking out that video where I've explained step by step how you can use CI/CD with Terraform to automate deployments to the cloud.

Moving on, the next question is: describe how you can use Terraform with infrastructure deployment tools like Ansible or Chef. This question is asking how you can use Terraform alongside configuration management tools like Ansible or Chef. We all know Terraform is an infrastructure provisioning tool, used to create infrastructure, whereas Ansible and Chef are configuration management tools, used to configure things: with Terraform you can create EC2 instances, and with Ansible you can install something on them. So Terraform can be used in conjunction with tools like Ansible or Chef to manage both infrastructure provisioning and configuration; Ansible and Chef handle tasks such as installing software, configuring servers, and managing services, while Terraform focuses on infrastructure provisioning and orchestration. It is a very good practice to use these tools together to achieve a comprehensive infrastructure automation solution. Now, if you are confused about what Terraform is versus what Ansible is and how they differ, you can check out this video where I've explained how Terraform and Ansible are different from each other and what they actually do. So yes, you can use Terraform with Ansible, and we have answered that here.

Moving on, the next question is: your infrastructure contains database passwords and other sensitive information; how can you manage secrets and sensitive data in Terraform? This is a very important question; companies are very concerned about how you manage secrets when working with Terraform.
So you need to know how to manage secrets like database passwords and credentials when working with Terraform, and here are some best practices you can follow. The first best practice is to never hard-code secrets in your Terraform code: never put database passwords or credentials inside the code, because anyone will be able to see them, and if you push the code to a repository everyone will be able to see them there too. Store secrets outside of version-controlled files and never push them to GitHub either. Rather than storing them in the code, you can use tools like HashiCorp Vault or cloud-specific secret management services like Secrets Manager in AWS. Optionally, you can also use Terraform input variables or environment variables, where you pass this information in while running the commands rather than storing it in the code. So those are a few best practices you can use to protect sensitive information and minimize the risk of exposing secrets, intentionally or unintentionally.
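As a minimal sketch of the "keep secrets out of the code" idea, assuming a secret named "prod/db/password" already exists in AWS Secrets Manager (the secret name and database settings are placeholders):

```hcl
# Read the secret at plan/apply time instead of hard-coding it.
data "aws_secretsmanager_secret_version" "db_password" {
  secret_id = "prod/db/password" # placeholder secret name
}

resource "aws_db_instance" "app" {
  identifier        = "app-db"
  engine            = "postgres"
  instance_class    = "db.t3.micro"
  allocated_storage = 20
  username          = "appuser"
  password          = data.aws_secretsmanager_secret_version.db_password.secret_string
}
```

The input-variable alternative mentioned above would be a variable block marked sensitive = true whose value you pass via TF_VAR_db_password at apply time. Either way, note that the value still ends up in the state file, which is one more reason to protect access to the state as discussed earlier.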
Moving on, the next question is: you have an RDS database and an EC2 instance created using Terraform, and the EC2 instance should be created before the RDS instance; how can you specify dependencies between two resources? So you have an RDS database and an EC2 instance, and you want the EC2 instance to be created before RDS. To do that you can use a meta-argument called depends_on: using the depends_on meta-argument you can define that the EC2 instance should be created first and only then the RDS instance. So in Terraform you can specify dependencies between resources using the depends_on attribute within the resource block. I've explained this again in this video, where I've shown how you can use the depends_on argument to control creation order, so make sure to watch it if you want to learn how to use the depends_on meta-argument to create one resource before another.
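A minimal sketch of that scenario; the AMI, database settings, and the sensitive password variable are placeholders:

```hcl
variable "db_password" {
  type      = string
  sensitive = true
}

resource "aws_instance" "app" {
  ami           = "ami-0abcdef1234567890" # placeholder
  instance_type = "t3.micro"
}

resource "aws_db_instance" "app_db" {
  identifier        = "app-db"
  engine            = "mysql"
  instance_class    = "db.t3.micro"
  allocated_storage = 20
  username          = "admin"
  password          = var.db_password

  # Explicit dependency: create the EC2 instance before this RDS instance.
  depends_on = [aws_instance.app]
}
```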
Moving on, the next question is: you have 20 servers created through Terraform but you want to delete one of them; is it possible to destroy a single resource out of multiple resources using Terraform? This is again a tricky question. Whenever you want to delete things through Terraform you run the terraform destroy command, but when you run terraform destroy, everything defined in your configuration gets deleted. So let's say you have 20 servers; how can you delete just one of them? If you want to delete only one of the resources defined among the 20, you can use the terraform destroy command with the -target option. So yes, it is possible: we can run terraform destroy -target followed by the resource type and name to destroy a specific resource. You use -target with the instance you want to delete, and it will delete only that particular instance rather than all 20 servers. Again, there is documentation by HashiCorp that explains how to use terraform destroy with the -target option, so make sure you go through it if you want to understand more about destroying a particular resource rather than destroying everything.
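A tiny sketch of that, with a hypothetical resource name and placeholder values:

```hcl
# Suppose one of the 20 servers is defined like this:
resource "aws_instance" "app_server_7" {
  ami           = "ami-0abcdef1234567890" # placeholder
  instance_type = "t3.micro"
}

# To remove only this server and leave the other 19 untouched, run:
#   terraform destroy -target=aws_instance.app_server_7
```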
Moving on, the next question is: what are the advantages of using Terraform's count feature over resource duplication? In this question they are asking what the advantage of using count is instead of duplicating resources. The count feature, or count meta-argument, is used to define how many instances of a resource you want to create. For example, you can check here: this is the code to create an AWS instance, and in this block we set count to four, which means it will create four instances of this configuration. The other way to do this would be to rewrite the code four times; with count I'm not writing the code four times, and that's the benefit: using count you can create as many resources as you want without duplicating the code. So Terraform's count feature provides an advantage over resource duplication by allowing you to dynamically create multiple instances of a resource based on a given condition or variable; with count you define a resource block whose count value evaluates an expression, such as a variable or a conditional statement. This reduces code duplication and enables more efficient resource management and scalability, so it is always advised to use count rather than defining your code blocks multiple times, and you can also add some advanced expressions to enhance your Terraform code. Here is the official documentation by Terraform on the count meta-argument, and I also have a short video on how to use count and count.index, so check it out if you want.
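A minimal sketch of what such a count block can look like (the AMI, instance type, and tag naming are placeholders, not the exact code shown in the video):

```hcl
resource "aws_instance" "web" {
  count         = 4                       # four copies of this one block
  ami           = "ami-0abcdef1234567890" # placeholder
  instance_type = "t3.micro"

  tags = {
    Name = "web-${count.index}" # web-0, web-1, web-2, web-3
  }
}
```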
Moving on, the next question is: what is Terraform's module registry and how can you leverage it? This is again something you need to know if you are working with Terraform. The module registry is a place where you can find different modules that you can use rather than writing the code yourself. So here is the module registry, where you have different modules: there is a module for IAM, a module for VPC, for S3 buckets, for EKS, everything. Let's say you want to create an EKS cluster; rather than writing the code yourself you can just go ahead and use this module, and if you want to check the code for the module you can look at it here; this module is created by Anton, and the code is present on GitHub. So the module registry is a place where you can find all the different modules you can reuse rather than writing the code yourself. Terraform's module registry is a central repository for sharing and discovering Terraform modules; it allows users to publish modules, which are reusable and shareable components of Terraform configuration. By leveraging the registry you can easily discover existing modules that address your infrastructure needs, reducing duplication of effort, and you can reference modules in your Terraform code using their registry address and version. To use a particular module you can see a block here: inside the module block you define a source pointing to the module you want to use.
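For example, a hedged sketch using the community VPC module from the public registry (the version constraint and module inputs shown here are illustrative and vary by module version):

```hcl
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws" # registry address of the module
  version = "~> 5.0"                        # pin a version range

  name = "example-vpc"
  cidr = "10.0.0.0/16"
}
```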
Now moving on, the next question is: how can you implement automated testing for Terraform code? This is again very important; just like security, testing is something you need to build in everywhere, even in your Terraform code. So how can you implement automated testing? By default Terraform provides you with the terraform validate and terraform fmt commands, which check the syntax and format the code according to Terraform best practices, but along with these you can also use other tools like Terratest, TFLint, Kitchen-Terraform, and a lot more; I have listed all the tools here, and you can use these different tools to enable or implement automated testing. Why do we use automated testing? We use it to validate that the syntax is correct, to catch infrastructure changes and detect issues earlier, and to ensure the desired state matches the actual state. This involves creating test fixtures, defining test scenarios, executing Terraform operations, and so on. As for the different tools: for unit testing you can use terraform-compliance or Terratest; for integration testing you can use Terratest or Kitchen-Terraform; for linting you can use TFLint or Checkov; and there are other tools as well for static analysis, mocking, and everything else. Here is another great blog on Medium about testing Terraform, so go and check it out if you want to know how to enable automated testing.
Moving on, the last question we have is: you are tasked with migrating your existing infrastructure from Terraform version 1.7 to version 1.8, which is the latest version as of now; what kind of considerations would you take into account? So whenever you are tasked with a migration from one version to another, the first step is always to review the upgrade guide: whenever Terraform releases a new version there is always documentation similar to this, where you can see how to upgrade Terraform to a particular version, in this case version 1.8, the latest as of now. So the first step is to review the upgrade guide to understand what has changed, what has been deprecated, and what the new features are. Once you do that, you update your configuration files according to the new syntax and to handle deprecated features as well. Ensure thorough testing, and after you make the change, monitor that everything is working fine; always check in the non-production environments before you move to the production environment, obviously, and then document any changes and provide training to your team members. This is how you should upgrade Terraform from one version to another.
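One small, hedged sketch of a related consideration: pinning the Terraform and provider versions in your configuration so the upgrade happens deliberately and is easy to review (the version constraints below are illustrative):

```hcl
terraform {
  required_version = "~> 1.8.0" # raise this constraint as part of the migration

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0" # placeholder provider constraint
    }
  }
}
```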
So in this video we have covered very important Terraform interview questions. I hope this video was informative. If you have any questions or doubts, do let me know in the comment section; also comment down if you want me to share this document with you, and comment down what we should cover next: should we cover Docker interview questions or Kubernetes interview questions? Let me know in the comment section. Thank you, and have a good day.