Senior DevOps Engineer
100+ Senior DevOps Engineer Interview Questions and Answers
Q1. What are Terraform lifecycles, and how do we use them?
Terraform lifecycles are meta-arguments that control how resources are created, replaced, and destroyed.
The lifecycle block is written inside a resource block in the Terraform configuration.
Its arguments include create_before_destroy, prevent_destroy, ignore_changes, and replace_triggered_by.
create_before_destroy provisions the replacement resource before the old one is destroyed, avoiding downtime.
prevent_destroy blocks plans that would delete critical resources such as databases.
ignore_changes tells Terraform to ignore drift on attributes managed outside Terraform, and replace_triggered_by forces replacement when a referenced resource changes.
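A minimal sketch of a lifecycle block, assuming a hypothetical S3 bucket resource (the bucket name is illustrative):

resource "aws_s3_bucket" "logs" {
  bucket = "example-team-logs"   # hypothetical bucket name

  lifecycle {
    prevent_destroy = true       # fail any plan that would destroy or replace this bucket
    ignore_changes  = [tags]     # ignore tag changes made outside Terraform
  }
}

With prevent_destroy set, terraform destroy (or any plan that would replace the bucket) fails instead of deleting the data.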
Q2. How do you trigger a pipeline from a specific version of application code?
A pipeline can be triggered from a specific version of the application code by pointing it at a branch, tag, or commit.
Use version control system to track code changes
Configure pipeline to trigger on specific branch or tag
Pass version number as parameter to pipeline
Use scripting to automate version selection
Integrate with CI/CD tools for seamless deployment
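As an illustration, a hedged Azure Pipelines sketch that runs only for version tags and exposes the tag as a variable (the tag pattern and deployment step are illustrative):

# azure-pipelines.yml (illustrative)
trigger:
  tags:
    include:
      - 'v*'            # run only when a version tag such as v1.2.3 is pushed

variables:
  appVersion: $[ replace(variables['Build.SourceBranch'], 'refs/tags/', '') ]

steps:
  - script: echo "Deploying application version $(appVersion)"
    displayName: Deploy tagged version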
Q3. What is ingress in Kubernetes, and how does it help us when deploying an application in AKS?
Ingress is a Kubernetes resource that manages external access to services in a cluster.
Ingress acts as a reverse proxy and routes traffic to the appropriate service based on the URL path or host.
It allows for multiple services to share a single IP address and port.
In AKS, we can use Ingress to expose our application to the internet or to other services within the cluster.
We can configure Ingress rules to specify which services should handle which requests.
Ingress controllers, such as the NGINX Ingress Controller or the Azure Application Gateway Ingress Controller, implement these rules and perform the actual routing.
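A minimal Ingress sketch, assuming an NGINX ingress controller is installed in the AKS cluster and a Service named my-app-svc already exists (hostname and names are hypothetical):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
spec:
  ingressClassName: nginx          # assumes the NGINX ingress controller class
  rules:
    - host: app.example.com        # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app-svc   # hypothetical Service in the same namespace
                port:
                  number: 80

The ingress controller watches resources like this one and routes matching requests to the backing Service.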
Q4. What are the stages involved in a release pipeline? Explain with code.
Release pipeline involves stages for deploying code changes to production.
Stages include build, test, deploy, and release.
Code is built and tested in a development environment before being deployed to staging.
Once tested in staging, code is released to production.
Continuous integration and delivery tools automate the pipeline.
Examples include Jenkins, GitLab CI/CD, and AWS CodePipeline.
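A hedged multi-stage Azure Pipelines sketch showing build, test, and deploy stages (the commands and environment name are illustrative):

# Illustrative multi-stage pipeline
pool:
  vmImage: ubuntu-latest

stages:
  - stage: Build
    jobs:
      - job: Build
        steps:
          - script: make build          # hypothetical build command
  - stage: Test
    dependsOn: Build
    jobs:
      - job: Test
        steps:
          - script: make test           # hypothetical test command
  - stage: Deploy
    dependsOn: Test
    jobs:
      - deployment: DeployToProd
        environment: production         # assumes an environment named "production" exists
        strategy:
          runOnce:
            deploy:
              steps:
                - script: ./deploy.sh   # hypothetical deployment script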
Q5. What are TF provisioners? Describe their use cases.
TF provisioners are used to execute scripts or commands on a resource after it is created.
Provisioners are used to configure resources after they are created
They can be used to install software, run scripts, or execute commands
Provisioners can be local or remote, depending on where the script or command is executed
Examples include installing packages on a newly created EC2 instance or running a script to configure a database
Provisioners should be used sparingly and only when the same result cannot be achieved with native provider features; HashiCorp's documentation recommends them as a last resort.
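A hedged sketch of a remote-exec provisioner on an EC2 instance (the AMI ID, key pair, and SSH details are hypothetical):

resource "aws_instance" "app" {
  ami           = "ami-0abc1234def567890"   # hypothetical AMI ID
  instance_type = "t3.micro"
  key_name      = "deploy-key"              # hypothetical key pair

  # Runs once, after the instance has been created
  provisioner "remote-exec" {
    inline = [
      "sudo apt-get update -y",
      "sudo apt-get install -y nginx",
    ]

    connection {
      type        = "ssh"
      user        = "ubuntu"
      private_key = file("~/.ssh/deploy-key.pem")   # hypothetical key path
      host        = self.public_ip
    }
  }
}

A local-exec provisioner looks similar but runs the command on the machine running Terraform instead of on the new resource.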
Q6. How do we ensure high availability of VM and AKS worker nodes?
Ensure high availability of VM and AKS worker nodes
Use availability sets for VMs to distribute them across fault domains and update domains
Use node pools in AKS to distribute worker nodes across multiple availability zones
Implement auto-scaling to add or remove nodes based on demand
Monitor node health and set up alerts for failures
Regularly update and patch nodes to ensure security and stability
Q7. What are the default inbound/outbound NSG rules when deploying a VM with NSG? Explain them.
Default inbound/outbound NSG rules when deploying a VM with an NSG
By default, inbound traffic from the internet is denied; the default rules allow traffic from the virtual network (AllowVnetInBound) and from the Azure load balancer (AllowAzureLoadBalancerInBound), then deny everything else (DenyAllInBound)
By default, outbound traffic to the virtual network and to the internet is allowed (AllowVnetOutBound, AllowInternetOutBound), followed by a final DenyAllOutBound rule
Rules are evaluated in priority order (lowest number first), and processing stops at the first match
Default rules cannot be edited or deleted, but they can be overridden by custom rules with a higher priority (lower number)
Q8. How do we test connectivity to our app in AKS from Azure Front Door?
Test connectivity to AKS app from Azure Front Door
Create a test endpoint in AKS app
Add the endpoint to Front Door backend pool
Use Front Door probe feature to test endpoint connectivity
Check Front Door health probes for successful connectivity
Q9. Which deployment strategy have you used?
I have used blue-green deployment strategy in previous projects.
Blue-green deployment involves running two identical production environments, with one active and one inactive.
Switching between the two environments allows for zero downtime deployments and easy rollback in case of issues.
I have implemented blue-green deployment using tools like Kubernetes and Jenkins in past projects.
Q10. If storage is full, what steps do you take on on-premises servers?
When storage is full on on-premises servers, consider deleting unnecessary files, archiving old data, expanding storage capacity, or optimizing storage usage.
Identify and delete unnecessary files or logs to free up space
Archive old data that is not frequently accessed
Expand storage capacity by adding more disks or upgrading existing ones
Optimize storage usage by compressing files or moving them to a different location
Q11. What are all the DevOps tools you have used in your application deployment?
I have experience with a variety of DevOps tools including Jenkins, Docker, Kubernetes, Ansible, and Terraform.
Jenkins
Docker
Kubernetes
Ansible
Terraform
Q12. What are node affinity and pod affinity in Kubernetes (K8s)?
Node affinity and pod affinity are Kubernetes features that allow you to control the scheduling of pods on nodes.
Node affinity is used to schedule pods on specific nodes based on labels or other node attributes.
Pod affinity is used to schedule pods on nodes that already have pods with specific labels or attributes.
Both features can be used to improve performance, reduce network latency, or ensure high availability.
Examples include scheduling pods on nodes with specific hardware (such as GPUs or SSDs) using node affinity, or co-locating tightly coupled pods on the same node using pod affinity.
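A brief node-affinity sketch, assuming worker nodes carry a hypothetical label disktype=ssd:

apiVersion: v1
kind: Pod
metadata:
  name: affinity-demo
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: disktype          # hypothetical node label
                operator: In
                values:
                  - ssd
  containers:
    - name: app
      image: nginx:1.25               # illustrative image

Pod affinity and anti-affinity are declared the same way under spec.affinity.podAffinity / podAntiAffinity, matching labels on other pods instead of nodes.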
Q13. What will be the tenancy of an EC2 instance if the launch configuration specifies dedicated tenancy and the VPC specifies default tenancy?
The EC2 instance will have dedicated tenancy regardless of the VPC setting.
When the launch configuration specifies dedicated tenancy, the instance runs as a Dedicated Instance even though the VPC tenancy is default
Dedicated tenancy means the instance runs on single-tenant hardware reserved for one AWS account
The VPC's default tenancy does not override the stricter tenancy requested at launch
Q14. Which Azure cloud services have you worked on? Discuss their use cases in detail at your workplace.
I have worked on Azure App Service, Azure Functions, and Azure DevOps.
Azure App Service was used for hosting web applications and APIs.
Azure Functions were used for serverless computing and event-driven scenarios.
Azure DevOps was used for continuous integration and deployment.
We used Azure DevOps to automate the deployment of our applications to Azure App Service and Azure Functions.
We also used Azure DevOps for source control, work item tracking, and build pipelines.
Q15. Monitoring tool experience? explain the kind of monitors you might have set for monitoring infra?
I have experience with various monitoring tools and can set up monitors for infrastructure health, performance, and security.
I have experience with tools like Nagios, Zabbix, and Prometheus.
For infrastructure health, I set up monitors for CPU usage, memory usage, disk space, and network connectivity.
For performance, I set up monitors for response time, throughput, and error rates.
For security, I set up monitors for unauthorized access attempts, failed logins, and suspicious network activity or configuration changes.
Q16. Write a shell script to check if a file exists. If it does not exist, the script should create it.
Shell script to check for a file and create it if it does not exist
Use the 'test' command to check if the file exists
If the file does not exist, use 'touch' command to create it
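A minimal sketch, assuming the file path is passed as the first argument to the script:

#!/usr/bin/env bash
# Check whether a file exists; create it if it does not.
# Usage: ./check_file.sh /path/to/file   (the path is an illustrative argument)

FILE="${1:?Usage: $0 <file-path>}"

if [ -f "$FILE" ]; then
    echo "File '$FILE' already exists."
else
    touch "$FILE"
    echo "File '$FILE' did not exist and has been created."
fi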
Q17. What is the difference between pipeline variables and variable groups in Azure DevOps?
Pipeline variables are scoped to a single pipeline, while variable groups can be shared across multiple pipelines.
Pipeline variables are defined within a pipeline and can be used in tasks within that pipeline
Variable groups are defined at the project level and can be used across multiple pipelines
Variable groups can be linked to Azure Key Vault for secure storage of sensitive information
Pipeline variables can be overridden at runtime using runtime parameters
Variable groups can also store secret variables and can be restricted so that only authorized pipelines may use them
Q18. Write an Ansible playbook to install and start Datadog.
Ansible playbook to install and start Datadog
Use Ansible's package module to install Datadog agent package
Use Ansible's service module to start the Datadog service
Ensure proper configuration settings are applied in the playbook
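A hedged sketch using the apt and service modules, assuming Debian/Ubuntu hosts on which the Datadog APT repository is already configured (in practice the official datadog.datadog Ansible role also handles the repository and API key):

---
- name: Install and start the Datadog agent
  hosts: all
  become: true
  tasks:
    - name: Install the Datadog agent package
      ansible.builtin.apt:
        name: datadog-agent
        state: present
        update_cache: true

    - name: Ensure the Datadog agent service is running and enabled
      ansible.builtin.service:
        name: datadog-agent
        state: started
        enabled: true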
Q19. How do you check installed software on an Ubuntu machine?
To check installed software on an Ubuntu machine, you can use the dpkg command.
Use dpkg -l to list all installed packages
Use dpkg -l | grep <package-name> to search for a specific package
Use dpkg -l | less to view the list page by page
Q20. How do you configure a static IP address for an on-premise server?
To keep a static IP address for an on-premises server, configure the network settings on the server or create a reservation on the DHCP server.
Assign a static IP address to the server within the network range
Configure the DHCP server to reserve the static IP address for the server's MAC address
Ensure that the server's network settings are set to use the static IP address
Update DNS records if necessary to reflect the new static IP address
Q21. How do you partition a CentOS Linux machine?
To partition a CentOS Linux machine, you can use tools like fdisk or parted to create, delete, and manage partitions on the disk.
Use fdisk command to create, delete, and manage partitions on the disk
Use parted command for more advanced partitioning options
Make sure to backup important data before partitioning
Q22. How would you protect your web application from public traffic?
Protecting web application from public traffic involves implementing security measures such as firewalls, access controls, and encryption.
Implementing a Web Application Firewall (WAF) to filter and monitor HTTP traffic
Using access control lists (ACLs) to restrict access to certain IP addresses or ranges
Enforcing HTTPS encryption to secure data in transit
Regularly updating and patching software to address vulnerabilities
Implementing rate limiting to prevent DDoS attacks
Q23. In Docker, how do containers communicate?
Containers in Docker can communicate through networking using bridge networks, overlay networks, or user-defined networks.
Containers can communicate with each other using IP addresses and port numbers.
Docker provides default bridge networks for communication between containers on the same host.
Overlay networks allow communication between containers across multiple hosts.
User-defined networks can be created for custom communication requirements.
Containers can also communicate by container name when attached to the same user-defined network, using Docker's embedded DNS.
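A short sketch with illustrative container and network names, showing name-based communication over a user-defined bridge network:

# Create a user-defined bridge network (name is illustrative)
docker network create app-net

# Start two containers attached to that network
docker run -d --name api --network app-net nginx:1.25
docker run -d --name worker --network app-net alpine:3.19 sleep 3600

# From "worker", the "api" container is reachable by name via Docker's embedded DNS
docker exec worker ping -c 1 api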
Q24. How would you manage drift in Terraform if services are added manually?
To manage drift in Terraform due to manually added services, use Terraform import, state management, and version control.
Use Terraform import to bring manually added services under Terraform management.
Regularly update Terraform state file to reflect the current state of infrastructure.
Utilize version control to track changes made outside of Terraform.
Implement automated checks to detect and reconcile drift in infrastructure.
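For example, a hedged sketch of importing a manually created resource and then detecting drift, assuming a matching azurerm_resource_group "manual" block already exists in the configuration (the Azure IDs are hypothetical):

# Bring a manually created Azure resource group under Terraform management
terraform import azurerm_resource_group.manual \
  /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/manually-created-rg

# Detect drift: with -detailed-exitcode, exit code 2 means changes are pending
terraform plan -detailed-exitcode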
Q25. What is the process for writing Terraform code to create an Azure Kubernetes Service (AKS) cluster, including the use of state files and methods to lock the state file?
The process for writing Terraform code to create an Azure Kubernetes Service (AKS) cluster
Define the Azure provider in the Terraform configuration file
Specify the AKS cluster resource with necessary configurations such as node count, VM size, etc.
Use Terraform state files to store the current state of the infrastructure
Implement state locking to prevent concurrent modifications by using a remote backend such as Azure Blob Storage (the azurerm backend), which locks the state automatically with blob leases
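A hedged sketch of the backend and the AKS resource (resource group, storage account, and cluster names are hypothetical):

terraform {
  backend "azurerm" {
    resource_group_name  = "tfstate-rg"        # hypothetical resource group
    storage_account_name = "tfstatestore123"   # hypothetical storage account
    container_name       = "tfstate"
    key                  = "aks.terraform.tfstate"
    # Locking is automatic: the azurerm backend takes a lease on the state blob.
  }
}

provider "azurerm" {
  features {}
}

resource "azurerm_kubernetes_cluster" "aks" {
  name                = "demo-aks"             # hypothetical cluster name
  location            = "eastus"
  resource_group_name = "aks-rg"               # hypothetical resource group
  dns_prefix          = "demoaks"

  default_node_pool {
    name       = "default"
    node_count = 2
    vm_size    = "Standard_DS2_v2"
  }

  identity {
    type = "SystemAssigned"
  }
}

Running terraform init configures the backend, and terraform plan/apply then read and lock the shared state before making changes.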
Q26. Discuss the architecture of Kubernetes in detail.
K8s is a container orchestration platform that automates deployment, scaling, and management of containerized applications.
K8s architecture consists of a master node and worker nodes.
Master node manages the cluster state and schedules workloads on worker nodes.
Worker nodes run the containers and communicate with the master node.
K8s uses etcd for storing cluster state and API server for communication.
K8s also has various components like kubelet, kube-proxy, and controller managers that continuously reconcile the actual state with the desired state.
Q27. What do you know about Auto Scaling and Load Balancing in AWS?
Auto scaling and load balancing are AWS services that help in managing traffic and scaling resources automatically.
Auto Scaling helps in automatically adjusting the number of EC2 instances based on traffic demand.
Load Balancing helps in distributing traffic across multiple EC2 instances.
Auto Scaling and Load Balancing work together to ensure that the application is highly available and can handle sudden spikes in traffic.
Auto Scaling can be configured to use different scaling policies, such as target tracking, step scaling, and scheduled scaling.
Q28. What happens to the deployed workload within a Kubernetes cluster if the master node goes down?
In case the master goes down in a Kubernetes cluster, the deployed workload continues to run as the worker nodes are still operational.
The worker nodes in the Kubernetes cluster continue to operate and manage the deployed workload even if the master node goes down.
The worker nodes are responsible for running the containers and maintaining the desired state of the cluster.
The master node being down may affect the ability to make changes or updates to the cluster, but the existing workloads keep running until the control plane is restored.
Q29. Explain the pipeline process in Jenkins.
Pipeline process in Jenkins automates the software delivery process.
Pipeline is defined as code in a Jenkinsfile
It consists of stages, steps, and post actions
Each stage can have multiple steps like build, test, deploy
Pipeline can be triggered manually or automatically based on events
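A minimal declarative Jenkinsfile sketch (the stage commands are illustrative):

// Jenkinsfile (declarative pipeline, illustrative)
pipeline {
    agent any

    stages {
        stage('Build') {
            steps {
                sh 'make build'      // hypothetical build command
            }
        }
        stage('Test') {
            steps {
                sh 'make test'       // hypothetical test command
            }
        }
        stage('Deploy') {
            steps {
                sh './deploy.sh'     // hypothetical deploy script
            }
        }
    }

    post {
        success { echo 'Pipeline completed successfully.' }
        failure { echo 'Pipeline failed.' }
    }
}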
Q30. Can you explain the core components of Kubernetes and their roles?
Kubernetes core components include Pods, Nodes, Services, Deployments, and ConfigMaps.
Pods: Smallest deployable units in Kubernetes, can contain one or more containers.
Nodes: Individual machines in a Kubernetes cluster where Pods are deployed.
Services: Abstraction that defines a logical set of Pods and a policy by which to access them.
Deployments: Manages the deployment and scaling of a set of Pods.
ConfigMaps: Decouples configuration artifacts from image content to keep containerized applications portable.
Q31. What are the different functionalities of individual components of a Kubernetes cluster?
Individual components of a Kubernetes cluster have different functionalities such as scheduling, networking, storage, and monitoring.
Kubelet: Responsible for communication between the master node and worker nodes, managing containers on the node.
Kube-proxy: Manages network routing for services within the cluster.
Kube-controller-manager: Ensures that the desired state of the cluster matches the actual state.
Etcd: Key-value store for storing cluster data.
Kube-scheduler: Assigns newly created pods to suitable nodes based on resource requirements and scheduling constraints.
Q32. What is the difference between HPA and VPA? Explain their use case.
HPA is Horizontal Pod Autoscaler for scaling pods based on CPU utilization, while VPA is Vertical Pod Autoscaler for adjusting resource requests based on resource usage.
HPA scales the number of pods in a deployment based on CPU utilization, ensuring optimal performance and resource utilization.
VPA adjusts the resource requests of pods based on resource usage, allowing for efficient resource allocation within a cluster.
HPA is suitable for applications with varying traffic patterns, while VPA suits workloads whose resource needs change over time but do not scale well horizontally.
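A brief HPA sketch that targets a hypothetical Deployment named my-app and scales on CPU utilization:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app                     # hypothetical Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70     # scale out when average CPU exceeds 70%

VPA is configured through a separate VerticalPodAutoscaler object provided by the VPA add-on, which recommends or applies new CPU/memory requests.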
Q33. What is the approach for migrating from on-premises systems to AWS?
The approach for migrating from on-premises systems to AWS involves planning, assessment, migration, and optimization.
Conduct a thorough assessment of current on-premises systems and workloads to determine what can be migrated to AWS.
Create a detailed migration plan outlining the steps, timeline, resources, and potential challenges.
Utilize AWS Migration Hub to track the progress of the migration and ensure a smooth transition.
Implement best practices for security, compliance, and cost optimization throughout the migration.
Q34. How do you declare Jenkins CI/CD pipelines, and which plugins do you integrate with Jenkins? Name the plugins.
Jenkins CI-CD pipelines are declared using Jenkinsfile and can be integrated with various plugins for additional functionality.
Declare Jenkins CI-CD pipelines using Jenkinsfile in the root directory of the project.
Integrate plugins like Git, Docker, Slack, SonarQube, etc., for specific functionalities.
Use declarative syntax or scripted syntax in Jenkinsfile based on requirements.
Configure stages, steps, post actions, and notifications in the Jenkinsfile.
Leverage the Jenkins Pipeline plugin so the pipeline definition lives in version control alongside the application code.
Q35. Write Terraform code to create a resource.
Terraform code for creating an AWS EC2 instance
Define provider and resource block in main.tf file
Specify the AMI, instance type, key pair, and security group in the resource block
Run 'terraform init', 'terraform plan', and 'terraform apply' commands to create the EC2 instance
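A hedged sketch of the resource block (the AMI ID, key pair, and security group ID are hypothetical):

provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "web" {
  ami                    = "ami-0abc1234def567890"   # hypothetical AMI ID
  instance_type          = "t3.micro"
  key_name               = "deploy-key"              # hypothetical key pair
  vpc_security_group_ids = ["sg-0123456789abcdef0"]  # hypothetical security group

  tags = {
    Name = "terraform-demo"
  }
}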
Q36. Explain the migration process from GitHub to Azure Repos.
The migration process from GitHub to Azure Repos involves exporting repositories from GitHub and importing them into Azure Repos.
Export repositories from GitHub using Git commands or the GitHub API
Prepare repositories for migration by cleaning up and resolving any dependencies
Import repositories into Azure Repos using tools like Azure DevOps Services or Git commands
Update any references or configurations to point to the new Azure Repos location
Test the migrated repositories to ensure history, branches, and tags were preserved
Q37. What is your experience with core services of cloud computing platforms such as Azure and AWS, including how they operate and their similarities?
Experienced in AWS and Azure core services, focusing on their operations, similarities, and practical applications.
Both AWS and Azure offer Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) solutions.
AWS EC2 and Azure Virtual Machines provide scalable compute resources.
AWS S3 and Azure Blob Storage are used for object storage with similar APIs.
Both platforms support container orchestration with AWS ECS/EKS and Azure AKS.
Identity management is handled by AWS IAM and by Azure Active Directory (Microsoft Entra ID), which play similar roles.
Q38. Why is Global Load Balancing used?
Global load balancing is used to distribute incoming network traffic across multiple servers in different geographic locations to ensure high availability and optimal performance.
Ensures high availability by distributing traffic across multiple servers
Improves performance by directing users to the closest server
Helps in disaster recovery by rerouting traffic to healthy servers
Allows for scalability by adding or removing servers easily
Examples: Google Cloud Load Balancing, AWS Global Accelerator and Route 53, Azure Traffic Manager and Front Door
Q39. What is Azure DevOps? Which projects have you worked on involving CI/CD pipelines, and what branching strategies do you follow in your projects?
Azure DevOps is a cloud-based platform for managing the entire DevOps lifecycle.
Azure DevOps provides tools for project management, version control, continuous integration and delivery, testing, and deployment.
I have worked on projects that involved setting up CI/CD pipelines using Azure DevOps, managing releases, and automating testing.
For branching strategies, I have used GitFlow and Trunk-based development depending on the project requirements.
Q40. What is a persistent volume, and can the same volume be attached to different pods?
Persistent volume is storage that exists beyond the lifecycle of a pod and can be attached to different pods.
Persistent volume is a storage resource in Kubernetes that exists beyond the lifecycle of a pod.
It allows data to persist even after the pod is deleted or restarted.
Persistent volumes can be dynamically provisioned or statically defined.
The same persistent volume can be shared by multiple pods only if its access mode allows it: ReadWriteMany or ReadOnlyMany volumes can be mounted by pods on many nodes, while ReadWriteOnce limits read-write mounts to a single node. A PV binds to one PVC, and pods typically share the volume through that claim.
Q41. What are the key considerations for security in architecture?
Key considerations for security in architecture include network security, data encryption, access control, and monitoring.
Implement network security measures such as firewalls, VPNs, and intrusion detection systems to protect against external threats.
Utilize data encryption techniques like SSL/TLS to secure data in transit and at rest.
Implement access control mechanisms to ensure that only authorized users have access to sensitive resources.
Set up monitoring and logging tools to detect and respond to security incidents and anomalous activity.
Q42. What is the difference between scan VIP and node VIP?
Scan VIP is used for load balancing traffic to multiple nodes, while Node VIP is assigned to a specific node for direct access.
Scan VIP is a virtual IP address used for load balancing traffic across multiple nodes in a cluster.
Node VIP is a virtual IP address assigned to a specific node in the cluster for direct access.
Scan VIP is typically used for services that need to be highly available and distributed across multiple nodes.
Node VIP is used when direct access to a specific node or its services is required.
Q43. How can one troubleshoot the crashloop error in Kubernetes (k8s)?
Troubleshooting crashloop error in Kubernetes involves checking pod logs, examining resource limits, and verifying configuration files.
Check pod logs to identify the cause of the crashloop.
Examine resource limits to ensure the pod has enough resources to run.
Verify configuration files for any errors or misconfigurations.
Use kubectl commands like describe, logs, and exec to troubleshoot further.
Consider checking for issues with persistent volumes or network connectivity.
Q44. What is your strategy to migrate from on-prem deployment to the Cloud?
A strategic approach to migrate on-prem deployments to the cloud involves assessment, planning, execution, and optimization.
Assess current infrastructure and applications for cloud compatibility.
Choose the right cloud provider (e.g., AWS, Azure, GCP) based on needs.
Develop a migration plan that includes timelines and resource allocation.
Consider a phased migration approach, starting with less critical applications.
Implement automation tools (e.g., Terraform, Ansible) for deployment and configuration management.
Q45. Do you have any experience with PCI/DSS compliance?
Yes, I have exposure to PCI/DSS compliance.
I have experience implementing security controls to meet PCI/DSS requirements.
I have worked with teams to ensure compliance during audits.
I am familiar with the 12 requirements of PCI/DSS and how to implement them.
I have experience with tools such as vulnerability scanners and log management systems to ensure compliance.
I have worked with payment gateways and understand the importance of secure payment processing.
Q46. What are the various services offered by Amazon Web Services (AWS)?
AWS offers a wide range of cloud services including computing, storage, databases, machine learning, and more.
Compute Services: EC2 (Elastic Compute Cloud) for scalable virtual servers.
Storage Services: S3 (Simple Storage Service) for object storage and EBS (Elastic Block Store) for block storage.
Database Services: RDS (Relational Database Service) for managed relational databases like MySQL and PostgreSQL, and DynamoDB for NoSQL.
Networking: VPC (Virtual Private Cloud) for isolated networks, Route 53 for DNS, and CloudFront for content delivery.
Q47. How do you scale an application in Kubernetes?
Scaling applications in Kubernetes involves horizontal scaling, using tools like HPA and cluster autoscaler.
Use Horizontal Pod Autoscaler (HPA) to automatically adjust the number of pods based on CPU or memory usage.
Implement Cluster Autoscaler to dynamically adjust the size of the Kubernetes cluster based on resource demands.
Utilize Kubernetes StatefulSets for stateful applications that require scaling with stable network identifiers.
Consider using Kubernetes Jobs for batch workloads that scale by running many pods in parallel.
Q48. What is the difference between readiness and liveness probes?
Readiness probe checks if a container is ready to serve traffic, while liveness probe checks if a container is alive and healthy.
Readiness probe is used to determine when a container is ready to start accepting traffic.
Liveness probe is used to determine if a container is still running and healthy.
Readiness probe is often used to delay traffic until the container is fully ready.
Liveness probe is used to restart containers that are not functioning properly.
Examples: a readiness probe might poll an HTTP /ready endpoint before traffic is sent, while a liveness probe polls /healthz and triggers a container restart when it fails.
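A short sketch of both probes on one container, assuming hypothetical /ready and /healthz endpoints served on port 8080:

apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
    - name: app
      image: my-app:1.0              # hypothetical image
      ports:
        - containerPort: 8080
      readinessProbe:
        httpGet:
          path: /ready               # hypothetical readiness endpoint
          port: 8080
        initialDelaySeconds: 5
        periodSeconds: 10
      livenessProbe:
        httpGet:
          path: /healthz             # hypothetical health endpoint
          port: 8080
        initialDelaySeconds: 15
        periodSeconds: 20

While the readiness probe fails, the pod is simply removed from Service endpoints; when the liveness probe fails, the kubelet restarts the container.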
Q49. How would you pitch the implementation of a new queueing system to a client?
Pitching a new queueing system to a client involves highlighting benefits, addressing pain points, showcasing success stories, and offering a demo.
Highlight the benefits of the new queueing system such as improved efficiency, scalability, and reliability.
Address pain points of the current system like bottlenecks, delays, and resource wastage.
Showcase success stories of other clients who have implemented the new queueing system and seen positive results.
Offer a demo or proof of concept of the new queueing system so the client can evaluate it against their own workload.
Q50. How would you resolve the given technical situation?
I would analyze the technical situation, identify the root cause, and come up with a plan to resolve it.
Analyze the technical situation thoroughly
Identify the root cause of the issue
Develop a plan to resolve the issue
Implement the plan and test the solution
Document the solution for future reference