Kubernetes Interview Questions | Scenario Based K8s Interview Questions and Answers for Devops

Cloud Champ
Kubernetes Interview questions and Answers for freshers and experienced devops engineers | Kubernete...
Video Transcript:
Kubernetes is a very popular container orchestration tool, and there are plenty of jobs for Kubernetes admins and Kubernetes developers, so in this video we look at the most asked Kubernetes interview questions along with answers, providing you with real-world examples and practical solutions. Make sure you watch this video till the end. Let's start.

Okay, so we are here on my computer screen and I have this document, which includes the most asked Kubernetes interview questions along with answers. This document is mostly scenario-based questions, but there are also questions on troubleshooting Kubernetes clusters, how to scale, monitor, or secure your Kubernetes cluster, and also GitOps, CI/CD, and many more advanced topics, so make sure you watch this video till the end. Along with this, the document also includes learning resources and projects to help you understand the topics more deeply. If you want me to share this document with you, comment "Kubernetes interview questions and answers" and I will share it as a PDF on LinkedIn or put the Notion link in the video description. With that being said, let's start with the first Kubernetes interview question.

The first few questions in your Kubernetes interview will usually test your Kubernetes knowledge or check your past experience working with Kubernetes, so you will get questions like how does Kubernetes work, what is Kubernetes, what does it do, and similar questions. To help you answer them all, I have a video which explains what Kubernetes is and how it works; it is around 28 minutes, so it will help you clear up all your Kubernetes concepts and answer these kinds of questions, and I recommend checking it out. But to answer this particular question, what is Kubernetes and what does it do: Kubernetes, as we all know, is an open-source container orchestration or container management tool that automates the deployment, scaling, and management of containers.

Along with this, you will also get a question on the Kubernetes architecture, so make sure you go through the architecture properly and understand what is inside the master node, what is inside the worker node, and how they work together. I also have a video which explains the Kubernetes architecture in 7 minutes, with the architecture drawn out, so it is very easy to understand. To answer this question: the main components of the Kubernetes architecture are the master node and the worker nodes. The master node handles everything inside the cluster and consists of the kube-apiserver, etcd, the scheduler, and the controller manager, whereas the worker node is what runs your application in the cluster, so it consists of the kubelet, kube-proxy, the container runtime, and the pods on which your containers run. These are two separate questions that you will almost certainly get. Now that we have covered this, let's move on to the scenario-based questions.
The first scenario-based question says: you have an application deployed on Kubernetes that is experiencing increased traffic; how would you scale the application to handle the increased load? So you have an application running on Kubernetes which is getting a lot of traffic and you want to scale it up; you can scale the pods either horizontally or vertically. To scale the application I would follow these steps. First, I identify the bottleneck: I check the cluster to analyze resource utilization, including CPU, memory, and network, to determine the limiting factor. Once I know this, if the bottleneck is overall CPU or memory load that more replicas can absorb, I would scale the application horizontally by increasing the number of replicas using a Horizontal Pod Autoscaler; but if individual pods are hitting their resource limits, I would vertically scale the application by increasing the resources allocated to each pod. So this is how you can scale your application either horizontally or vertically. Once you scale it, you can then monitor and validate the application's performance to confirm everything is working fine after scaling. Apart from this, in Kubernetes you can also scale your nodes, and that can be done using the Cluster Autoscaler or tools like Karpenter.
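As a minimal sketch of the horizontal case, assuming a Deployment named web-app already exists (the name and thresholds below are only illustrative), a Horizontal Pod Autoscaler could look like this:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app              # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas when average CPU rises above 70%
```

Once applied, Kubernetes adjusts the replica count between 2 and 10 based on the observed CPU utilization.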
Now that you have understood this, let's move on to the next question, which is a troubleshooting question: while troubleshooting a network issue in the cluster, you noticed kube-proxy in the logs; what is the role of kube-proxy? This is, as I said, a Kubernetes-architecture kind of question asking what kube-proxy is. kube-proxy is a component that runs on every node; it is a network component that handles TCP and UDP packet forwarding between backend network services. It is very important for reliable communication between the pods and the services within the cluster, and it does that by routing traffic to the right destination. If you want to go deeper and understand how kube-proxy handles TCP and UDP traffic using iptables, you can check out the official documentation, which explains what kube-proxy is and how it actually works.
Moving on, the next scenario-based question is: your team is planning a high-availability Kubernetes cluster; describe the process and considerations for designing a highly available Kubernetes cluster. This is a scenario-based question on creating a high-availability cluster, and we do this in production for our critical applications. Whenever we create a highly available cluster, we need to deploy the nodes and the pods across three or more different availability zones. To create a highly available cluster, we first do a multi-master setup, where we deploy multiple master nodes across three different availability zones to ensure redundancy and fault tolerance; that is what this diagram shows, with three different master nodes, which is known as a multi-master setup. If we have three different master nodes, we also need to distribute etcd, which is the datastore that holds all the information about the cluster, so we distribute the etcd members across the availability zones in a similar fashion to the master nodes; this ensures data redundancy and resilience against zone failures. Now, if you have three different master nodes, you also need a load balancer to distribute the traffic, so we configure a TCP load balancer, such as an AWS Network Load Balancer, to evenly distribute API requests among the API servers; this setup eliminates the risk of a single API server becoming a bottleneck or a single point of failure. Along with this, we also enable node auto-repair, a feature of managed Kubernetes services such as Google Kubernetes Engine, to make sure unhealthy nodes are automatically detected and replaced. So this is how you can set up a highly available Kubernetes cluster, something you should definitely know, and you can also check out this blog which explains how to create a highly available Kubernetes cluster.
Now moving on, the next question is again scenario-based: in your Kubernetes environment, a master or a worker node suddenly fails; what happens when the master or worker node fails? This is a question I got recently in my Kubernetes interviews, so make sure you understand properly what happens if one of the nodes fails, both for the master node and for a worker node. We know the master node is what manages everything inside the cluster, and if the master node fails, the existing workloads continue to run, but cluster management is lost: there is no management for the pods and no new pods are going to be scheduled. The worker node, on the other hand, is what runs your application, so if a worker node fails, the applications running on it obviously fail as well, and you may see errors such as DNS failures for the workloads on that node, while the master node keeps running properly. When Kubernetes detects that the worker node is failing, it marks the node as NotReady so that new pods are not scheduled on it, and the pods that were on the failed worker node are evicted and rescheduled on the other nodes. So this is what happens when a worker or master node fails.
Now that we know this, let's move on to the next question: how does Ingress help in Kubernetes? Ingress is a Kubernetes object that is used to expose services to the external world; it is a resource that defines rules to expose services to external traffic, and it provides all of these features: it exposes services to the internet using a single IP address or domain name, it routes traffic to your services based on host name or path, it provides features like load balancing, SSL termination, and name-based virtual hosting, and it simplifies the management and configuration of your services. So this is what Ingress does.
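As a minimal sketch, assuming an nginx-class Ingress controller is installed and a Service named web-service exists on port 80 (both assumptions are only for illustration), an Ingress with host- and path-based routing could look like this:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  ingressClassName: nginx          # assumes an nginx Ingress controller is running in the cluster
  rules:
    - host: app.example.com        # route requests for this host name
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-service  # hypothetical backend Service
                port:
                  number: 80
```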
Moving on, the next question is: you are selecting a service to expose your application hosted on Kubernetes; list the different types of services in Kubernetes. A Service is a Kubernetes object that is used to expose your application running on the nodes, and there are different types of services: ClusterIP, NodePort, LoadBalancer, and ExternalName. ClusterIP is the default Kubernetes service type, which exposes the application on the cluster's internal IP address, meaning the application is only accessible within the cluster. Next is NodePort: with this service type you expose your application on the node's IP address plus a particular port (worker node IP address, colon, some port); it exposes the service on a static port on each node in the cluster, making it accessible from outside the cluster. We also have projects which use NodePort; the microservices Python application that converts video to audio uses NodePort as its service type. The next service type is LoadBalancer, where you provision an external load balancer in the cloud that directs traffic to the Kubernetes service, so when you use LoadBalancer as the service type, a load balancer is created in the cloud and used to expose your application. Lastly, we have ExternalName, where you map the service to an external DNS name, enabling you to reference external services by name from within the cluster. So these are the different service types in Kubernetes.
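As a minimal sketch, assuming a set of pods labeled app: web listening on container port 8080 (an assumption for illustration), a NodePort Service could look like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app: web            # hypothetical pod label to match
  ports:
    - port: 80          # port exposed inside the cluster
      targetPort: 8080  # container port the traffic is forwarded to
      nodePort: 30080   # static port opened on every node (30000-32767 by default)
```

The application is then reachable at any node's IP on port 30080 from outside the cluster.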
Once you understand this, or once you have answered it, the follow-up question could be: what do you know about headless services? This is another type of service, where the type is ClusterIP but there is no cluster IP attached to the service. A headless service in Kubernetes is a service that doesn't allocate a cluster IP: spec.clusterIP is set to None while the type is still ClusterIP. To understand this more concretely, I can show you the manifest for a headless service: it looks like a normal Service, but clusterIP is set to None, which means the service is not going to have its own IP, and if you want to access anything you resolve and reach the pods behind the service directly. We have also used this type of service in this particular project, so if you want to see how it works, you can go through and do that project as well. This type of service is typically used for databases, as we have used it in our project for the MongoDB database.
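As a minimal sketch, assuming the database pods carry the label app: mongo and listen on port 27017 (assumptions for illustration), a headless service could look like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mongo-headless
spec:
  clusterIP: None       # "headless": no virtual IP is allocated for the service
  selector:
    app: mongo          # hypothetical pod label to match
  ports:
    - port: 27017       # the database port
```

Because no virtual IP exists, a DNS lookup of the service returns the individual pod IPs, which is why headless services are commonly paired with StatefulSets for databases.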
And the next question we have is: your manager has instructed you to run several scripts before starting the main application in your Kubernetes pod and suggested using init containers; what are init containers? Init containers in Kubernetes are containers that run before your main application container runs. So you have two kinds of containers inside a pod, the init container and the application container, and the init container runs to completion before the main application container starts. We use this for different use cases, like running a script, setting up network configuration, or downloading configuration files, or anything that needs to be done before the application starts. So an init container is a type of container in Kubernetes that runs before the main application container in the pod, and the purpose of init containers is to perform initialization tasks or setup procedures that are not present inside the application container image. This is how init containers are used. Apart from this, there are also other container patterns, like ambassador or sidecar containers; I've explained sidecar containers in one of my recent Shorts, so you can check that out as well.
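As a minimal sketch of the "download a configuration file first" use case, the busybox image, the config URL, and the application image below are all illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-init
spec:
  volumes:
    - name: config
      emptyDir: {}                      # scratch volume shared between the containers
  initContainers:
    - name: fetch-config
      image: busybox:1.36
      command: ["sh", "-c", "wget -O /config/app.conf http://config-server/app.conf"]  # hypothetical config URL
      volumeMounts:
        - name: config
          mountPath: /config
  containers:
    - name: app
      image: nginx:1.25                 # stand-in for the real application image
      volumeMounts:
        - name: config
          mountPath: /etc/app           # starts only after the init container has succeeded
```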
Moving on, the next question is: a critical application running on one of the nodes is not working properly; how do you monitor applications in Kubernetes? This question is about how you monitor applications. For monitoring applications you can use tools like Prometheus, Grafana, and Splunk, and there are other monitoring tools for Kubernetes as well, but here I've mentioned a few points that you can explain in your interviews. To collect container metrics you can use tools like cAdvisor or the container runtime's own metrics, and you can also use commands like kubectl top. Next, for cluster-level metrics, you can use Kubernetes monitoring components like kube-state-metrics and metrics-server. To get logs you can use the EFK stack, Loki, or other tools. You can also use readiness and liveness probes for health checks. To monitor the application you can also set up alerting, to get alerts on Slack, Teams, email, and so on, using Prometheus Alertmanager, and optionally you can use third-party tools from the many different monitoring options available. So this is how you can monitor your application running on Kubernetes.
Next we have another scenario-based question, on GitOps: your manager read an article on GitOps and wants you to do a PoC on it; what is GitOps and how do you implement it? We all know GitOps is something that is used to automate your deployments on Kubernetes. How it works is that you push your manifest files to a repository like GitHub, and every time you make a change, the GitOps tool detects it and applies the update to your Kubernetes cluster. As mentioned here, for Kubernetes, GitOps is a practice where all the configuration, including manifest files and Helm charts, is stored in a Git repository; changes to this repository trigger a reconciliation through deployment pipelines, ensuring that the cluster's desired state matches the code in the repository. This approach streamlines management, promotes collaboration, and provides a clear audit trail. You can do this using tools like Argo CD and Flux CD. I have a video which explains GitOps properly, with a project to deploy a Tetris game on Kubernetes using Argo CD, so I recommend checking it out if you have no clue about GitOps.
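As a minimal sketch of the pattern (not the exact manifest from the video's project), an Argo CD Application that keeps a cluster in sync with a Git repository could look like this; the repository URL, path, and namespaces are placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/k8s-manifests.git  # hypothetical Git repo holding the manifests
    targetRevision: main
    path: apps/my-app                                      # hypothetical path inside the repo
  destination:
    server: https://kubernetes.default.svc                 # deploy into the cluster Argo CD runs in
    namespace: my-app
  syncPolicy:
    automated:
      prune: true      # delete resources that were removed from Git
      selfHeal: true   # revert manual cluster changes back to what Git declares
```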
Next: the company is very concerned about securing the cluster; list some security measures that you can take while using Kubernetes. This question is on securing a Kubernetes cluster, a very important thing that a DevOps engineer should know. How can you secure your Kubernetes cluster? You can secure it in all of these different ways. First is RBAC, where you restrict users' access based on their roles. Then you can use network policies to control the traffic between pods and to external endpoints as well. For container security, ensure secure runtimes and image scanning. For secrets management, always secure your secrets, either using Kubernetes Secrets or using other tools like HashiCorp Vault or AWS Secrets Manager. Enable audit logging to track activities for security monitoring. For updates and patching, make sure your clusters are up to date and keep the components and nodes on current versions as well. You can also use third-party security tools, of which there are a lot in the CNCF landscape right now, so I won't name all of them, but this is how you can secure your Kubernetes cluster.
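As a minimal sketch of the network-policy point, a default-deny ingress policy for a namespace (only enforced if the cluster's CNI plugin supports network policies; the namespace name is illustrative) could look like this:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: prod            # hypothetical namespace to lock down
spec:
  podSelector: {}            # empty selector = applies to every pod in the namespace
  policyTypes:
    - Ingress                # with no ingress rules listed, all incoming traffic is denied
```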
Now the follow-up question could be: explain what Kubernetes RBAC is. RBAC stands for role-based access control, and it is the Kubernetes way to restrict access to different resources like deployments, pods, and other objects. With RBAC you define what roles are attached to a particular user; so let's say this user can only create deployments but cannot access nodes or pods. That is what you can do using RBAC: you define roles with specific permissions, like create and delete, and bind them to users or groups using role bindings. RBAC ensures users and applications have access only to the necessary resources, following the principle of least privilege and enhancing security. So this is what RBAC is, and I've also shown the commands to create roles and role bindings in the CKA exam questions video, so if you want to know how to create roles and attach them, you can check that out.
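As a minimal sketch, a namespaced Role that only allows managing Deployments, bound to a hypothetical user jane in a hypothetical dev namespace, could look like this:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-manager
  namespace: dev                   # hypothetical namespace
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "create", "update", "delete"]   # no access to pods or nodes
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployment-manager-binding
  namespace: dev
subjects:
  - kind: User
    name: jane                     # hypothetical user from the cluster's authentication provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: deployment-manager
  apiGroup: rbac.authorization.k8s.io
```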
Moving on, the next question is: how do you perform maintenance on a Kubernetes node? To perform the maintenance, you first need to make sure the applications running on that node are moved off it. You first make the node unschedulable, which means no new pods will be scheduled on it, and you can do that using the kubectl cordon command. Once you do this, the next step is to evict all the existing pods using the kubectl drain command, then perform your maintenance and optionally reboot the node. Once everything is done properly, you can then uncordon it with kubectl uncordon so that pods can be scheduled on the node again, and you can verify the status using kubectl get nodes or kubectl describe node.
Okay, moving on, the next question is: explain DaemonSets. DaemonSets in Kubernetes are used to deploy pods on every node in the cluster. A DaemonSet ensures that a specific pod runs on every node, and they are used for node-level tasks like logging or monitoring agents that need to run on each node. So when you create a DaemonSet manifest like this one and apply it, this pod, which uses the fluentd image, will run on every node; that is what a DaemonSet does: it makes sure the pod runs on every node, and each matching node has exactly one instance of the pod created by the DaemonSet running on it. You can learn more about DaemonSets from the official documentation, which states that a DaemonSet ensures that all nodes run a copy of the pod defined inside the DaemonSet.
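The manifest shown on screen isn't in the transcript, so here is a minimal sketch of what a fluentd DaemonSet of that shape could look like (names, namespace, and image tag are illustrative):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
        - name: fluentd
          image: fluentd:v1.16           # log-collection agent that will run on every node
          volumeMounts:
            - name: varlog
              mountPath: /var/log        # read the node's log files
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
```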
Moving on, the next question is: a junior engineer working with a database on Kubernetes is confused and asks you to differentiate between ConfigMaps and Secrets. Again, a very important question, because people are often confused about ConfigMaps and Secrets: what does a ConfigMap hold and what does a Secret hold? ConfigMaps are generally used to store application configuration, whereas Secrets are used to store credentials like database passwords, usernames, API keys, and so on. So a ConfigMap typically stores the application configuration in plain text, whereas a Secret stores sensitive data like passwords in a base64-encoded form (with optional encryption at rest), and both ConfigMaps and Secrets can be mounted as volumes inside a pod through the pod definition file, which is true. Here's an example: we create a ConfigMap that holds the value environment=dev, and here we have a Secret whose value is first base64-encoded and then stored inside the Secret. This is how you can create them, and you can then use both of them inside the pod manifest as volumes or environment variables.
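As a minimal sketch matching that description (the key names and values are placeholders), the two objects could look like this:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  environment: dev                  # plain-text configuration value
---
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  password: c3VwZXJzZWNyZXQ=        # base64 of "supersecret"; base64 is encoding, not encryption
```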
Moving on, the next question is: what is the purpose of operators? Again, a very important topic that a DevOps engineer should know; you will get this question in advanced Kubernetes interviews. We use operators to package complex applications that need to run on a Kubernetes cluster. A Kubernetes operator is a method of packaging, deploying, and managing a Kubernetes application; operators use the Kubernetes API to automate tasks such as deployment and scaling, implement custom controllers, enable self-healing systems, and support declarative management practices. In short, if you want to deploy something complex on Kubernetes, operators can help you deploy and manage it. You can find the different operators on OperatorHub; this is operatorhub.io, where you can browse operators to deploy on your clusters. If you want to deploy Argo CD, you can do it directly from here, and if you want to monitor your applications, you can use the Prometheus operator, which is provided by Red Hat here. For example, the Prometheus operator automates the deployment and management of Prometheus monitoring instances on Kubernetes, handling tasks such as configuration, scaling, and self-healing. You can customize them as well: you can customize the custom resource definitions and then deploy them on your Kubernetes clusters. If you want to learn more about this, you can go through the CNCF blog which explains what operators are and how to use them, along with some examples.
Moving on, the next question is: how can you run a pod on a specific node? So you have a pod that you want to run on a specific node; how can you do that? You can do it using node affinity or node selectors, and you can also do it using nodeName. How this works is that you first label the node; so I label node01 with the label location=Germany. Once you do that, you can set the affinity rules saying that the pod should run on a node which has the Germany label. Alternatively, you can use nodeName, where you specify nodeName: node01, and the pod will run on that particular node. Here is also an example: the node affinity is set to match the label app=worker-node, and this is the node which has the label app=worker-node while the other has app=web-node, so the pod will be scheduled on this particular node; this is using affinity rules.
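As a minimal sketch using the Germany example above (node name, label, and image are illustrative), after labeling the node with kubectl label node node01 location=Germany, the pod spec could use either approach:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pinned-pod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: location          # label applied to node01 beforehand
                operator: In
                values: ["Germany"]
  # Alternatively, skip affinity entirely and pin the pod directly:
  # nodeName: node01
  containers:
    - name: app
      image: nginx:1.25               # stand-in application image
```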
Moving on, the next question is: suppose a pod exceeds its memory limit; what signal will be sent to the process? If you work on Kubernetes you will at some point see an error like OOMKilled (out-of-memory killed), and when you get this, the application has been stopped. When a pod exceeds its memory limit, the kernel's out-of-memory killer sends the SIGKILL signal to the process and terminates it immediately. So in Kubernetes, when a container in a pod is terminated due to memory limits, the container runtime, whether Docker or containerd, ends up delivering SIGKILL to the main process in the container, leading to immediate termination. This is what you get whenever memory usage goes beyond the limit defined inside the pod manifest.
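As a minimal sketch, the limit below is the value the OOM kill is measured against (image and numbers are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-limited-pod
spec:
  containers:
    - name: app
      image: nginx:1.25          # stand-in application image
      resources:
        requests:
          memory: "128Mi"        # what the scheduler reserves for the pod
        limits:
          memory: "256Mi"        # exceeding this gets the process SIGKILLed (status: OOMKilled)
```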
The next question is: you need to ensure that a specific pod remains operational at all times; how do you make sure that the pod is always running? To do this you can use probes. This is one of the most frequently asked Kubernetes interview questions, and the answer is that a liveness probe within the pod is ideal in this scenario. A liveness probe regularly checks whether the application in the pod is still healthy; if the check fails, the container gets restarted. This is ideal in scenarios where the container is running but the application inside it has crashed or hung. So you can make sure the application is running all the time using liveness probes.
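The example referenced on screen isn't in the transcript, so here is a minimal sketch of a liveness probe (the path, port, and timings are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probed-pod
spec:
  containers:
    - name: app
      image: nginx:1.25            # stand-in application image
      livenessProbe:
        httpGet:
          path: /healthz           # hypothetical health endpoint exposed by the app
          port: 80
        initialDelaySeconds: 10    # give the app time to start before the first check
        periodSeconds: 5           # check every 5 seconds
        failureThreshold: 3        # restart the container after 3 consecutive failures
```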
Now let's move on to the next question: what will you do to upgrade a Kubernetes cluster? This is a very important question, and you should know how to upgrade a Kubernetes cluster, because Kubernetes releases a new version roughly every four months and this is something a DevOps engineer does as part of their day-to-day activities. You can upgrade a Kubernetes cluster with the help of the official documentation: if you search for "upgrade kubernetes cluster" you will find the official documentation with the steps for upgrading. It will show you that you need to start by upgrading the master node and then upgrade the worker nodes, and that is what is mentioned in this answer here as well. To update the Kubernetes cluster, one needs to update the components of the cluster, and the specific instructions to upgrade depend on the setup and configuration used, which you can check here; the documentation has different pages for upgrading from one version to another. The steps mentioned here are: first, find the desired target version and plan the upgrade; save all the data and always take a backup; then upgrade the control plane, which means the master node components; then upgrade the worker node components one by one through a rolling upgrade process, which is the default; and finally, when the upgrade is done, verify whether the cluster and its components are working properly. This is the step-by-step approach to upgrading a Kubernetes cluster; again, if you want to see in practice how to upgrade a cluster, I have shown that in the CKA questions video, so you can check how to upgrade a Kubernetes cluster there.
Moving on, the next question is: why should we use custom namespaces? By default, whenever you create a Kubernetes cluster you get some namespaces, and you can launch your applications in those, so why should we use our own custom namespaces? We should use them to properly organize our applications according to their actual use cases. By creating custom namespaces, you can logically group your resources based on your needs, such as separating production and development environments or separating applications by team and department. This makes it easier to manage and maintain your resources within the cluster and also provides better security and resource isolation. In this diagram you can see we have some resources deployed in the default namespace, some in the dev namespace, and some in the QA namespace; from this we understand that these belong to the dev environment and those belong to the QA environment. So using custom namespaces you can logically group resources and also keep them separated.
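As a minimal sketch of the isolation point, a custom namespace with a resource quota attached (names and numbers are illustrative) could look like this:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dev                 # hypothetical namespace for the development environment
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: dev
spec:
  hard:
    requests.cpu: "4"       # total CPU that can be requested in this namespace
    requests.memory: 8Gi
    pods: "20"              # cap on the number of pods in this namespace
```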
Moving on, the next question is: can you schedule a pod to a node if the node is tainted? So what do you mean by a tainted node, and how can you schedule a pod on it? If you look at the diagram here, this node is tainted, which means no ordinary pod is going to be scheduled on this particular node, but the question is asking how you can schedule a pod onto the node even though it is tainted. You can do that by using tolerations: if you want a pod to be deployed here, it should have a toleration for that particular taint. If a node is tainted, pods will not be scheduled on it by default, but you can use tolerations in the pod spec to allow specific pods to be scheduled on that tainted node; tolerations are used to specify that the pod can tolerate (ignore) a certain taint, allowing it to be scheduled there. How it works: you taint a node using the kubectl taint command with the taint key, value, and effect; once you add a taint, if you want to run a pod on that node, you add a matching toleration to the pod, and if the pod has that toleration it can be scheduled on the tainted node. This can be useful in scenarios where you want to reserve certain nodes for specific types of workloads or to mark nodes as unsuitable for certain workloads. So this is how you can deploy a pod on a tainted node.
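As a minimal sketch, assuming the node was tainted with something like kubectl taint nodes node01 dedicated=gpu:NoSchedule (the key, value, and effect are illustrative), the matching toleration in the pod spec could look like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tolerant-pod
spec:
  tolerations:
    - key: "dedicated"          # must match the taint key on the node
      operator: "Equal"
      value: "gpu"              # must match the taint value
      effect: "NoSchedule"      # must match the taint effect
  containers:
    - name: app
      image: nginx:1.25         # stand-in application image
```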
So yeah, that is all we have. I hope all the questions we have discussed in this video are helpful and will help you in your Kubernetes interviews. If you have any questions or doubts, do let me know, and if you want me to share this document with you, let me know in the comment section. I hope this video was informative; thank you and have a good day, bye.