Kubernetes Interview Questions and Answers
Basic Questions
1. What is Kubernetes, and why is it important?
Kubernetes (K8s) is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. It ensures that applications run consistently across different environments, making it essential for modern cloud-native development.
2. How are Kubernetes and Docker related?
Docker is a containerization platform that packages applications into lightweight, portable containers. Kubernetes is a container orchestrator that manages, scales, and distributes these containers efficiently. Kubernetes originally supported Docker directly via dockershim (removed in v1.24); today it works with CRI-compliant runtimes such as containerd and CRI-O.
3. What is container orchestration?
Container orchestration is the process of automating the management of containerized applications across multiple hosts. It handles deployment, scaling, networking, and availability of containers. Kubernetes, Docker Swarm, and Apache Mesos are some popular container orchestration tools.
4. Why is container orchestration necessary?
Managing containers manually becomes complex when applications scale. Container orchestration provides:
- Automated deployment and scaling
- Self-healing (restarts failed containers)
- Load balancing and traffic distribution
- Resource allocation and monitoring
5. What are the key features of Kubernetes?
- Automated Scheduling — Places pods on suitable nodes based on resource requirements and constraints.
- Self-healing — Detects and restarts failed containers.
- Auto-scaling — Adjusts resources based on demand.
- Load balancing — Distributes traffic across pods.
- Service discovery — Simplifies inter-container communication.
- Security & Compliance — Implements access controls and policies.
Kubernetes Architecture and Components
6. What is a Kubernetes cluster?
A Kubernetes cluster is a group of interconnected nodes that run containerized applications in a fault-tolerant and scalable manner. It consists of:
- Master Node (Control Plane) — Manages scheduling, cluster state, and configurations.
- Worker Nodes — Execute containerized workloads.
7. What are nodes in Kubernetes?
A node is a physical or virtual machine in a Kubernetes cluster where containers run. Nodes contain:
- Kubelet — Manages pod lifecycle.
- Container runtime — Runs containers (Docker, containerd, CRI-O).
- Kube-proxy — Handles networking and service discovery.
8. What is the role of the Kubernetes Master Node?
The Master Node manages cluster-wide activities, including:
- API Server — The primary communication gateway.
- Scheduler — Assigns pods to worker nodes.
- Controller Manager — Manages controllers like Replication and Node Controllers.
- Etcd — A distributed key-value store for cluster state.
9. What is etcd in Kubernetes?
Etcd is a highly available distributed key-value store that stores cluster data such as configurations, secrets, and metadata. It is crucial for maintaining the desired state of Kubernetes clusters.
10. What is kube-proxy?
Kube-proxy is a networking component that runs on every node and routes traffic from Services to their backing pods, typically via iptables or IPVS rules. It ensures reliable communication inside the cluster.
Deployment and Management
11. How does Kubernetes handle containerized deployment?
Kubernetes automates deployment by:
- Scaling up/down pods based on traffic.
- Rolling out updates with rollback support.
- Self-healing failed containers.
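As a minimal sketch (the name, image, and replica count are placeholders), a Deployment like the one below describes the desired state declaratively, and Kubernetes handles scaling, rolling updates, and replacement of failed pods:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app              # hypothetical name
spec:
  replicas: 3                # desired number of pods
  selector:
    matchLabels:
      app: web-app
  strategy:
    type: RollingUpdate      # update pods incrementally, with rollback support
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web
          image: nginx:1.25  # placeholder image
          ports:
            - containerPort: 80
```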
12. What is a Pod in Kubernetes?
A Pod is the smallest deployable unit in Kubernetes. It encapsulates:
- One or more containers sharing networking and storage.
- Configuration files, environment variables, and secrets.
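A minimal multi-container Pod manifest for illustration (names, images, and paths are placeholders); both containers share the same network namespace and an emptyDir volume:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod             # hypothetical name
spec:
  volumes:
    - name: shared-data
      emptyDir: {}           # scratch volume shared by both containers
  containers:
    - name: app
      image: nginx:1.25      # placeholder image
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
    - name: sidecar
      image: busybox:1.36    # placeholder image
      command: ["sh", "-c", "while true; do date > /data/index.html; sleep 5; done"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
```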
13. What are ReplicaSets in Kubernetes?
A ReplicaSet ensures that a specified number of identical pods are running at any given time, automatically replacing failed pods.
14. What is a DaemonSet?
A DaemonSet ensures that a specific pod runs on all or selected nodes. It is used for:
- Monitoring and logging agents (e.g., Fluentd, Prometheus Node Exporter).
- Networking components (e.g., Calico, Cilium).
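For illustration, a hedged sketch of a DaemonSet that runs a log-collection agent on every node (the image, tag, and paths are placeholders):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent                    # hypothetical name
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
        - name: agent
          image: fluent/fluentd:v1.16   # placeholder image/tag
          volumeMounts:
            - name: varlog
              mountPath: /var/log
              readOnly: true
      volumes:
        - name: varlog
          hostPath:
            path: /var/log           # read node-level logs from the host
```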
15. What is a StatefulSet?
StatefulSets manage stateful applications, ensuring:
- Stable network identities.
- Persistent storage across restarts.
- Ordered scaling and deployment (e.g., for databases).
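A sketch of a StatefulSet for a database (names, image, and sizes are placeholders); volumeClaimTemplates gives each replica its own persistent volume, and the headless Service referenced by serviceName provides stable DNS identities such as db-0 and db-1:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db                     # hypothetical name
spec:
  serviceName: db-headless     # headless Service providing stable network identities
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: postgres:16   # placeholder image
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:        # one PVC per replica, retained across restarts
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```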
Networking and Load Balancing
16. What is Ingress in Kubernetes?
Ingress is an API object that manages external access to services, providing:
- Traffic routing rules
- SSL/TLS termination
- Load balancing
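A minimal Ingress sketch (host, Secret, and Service names are placeholders) showing path routing and TLS termination:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress              # hypothetical name
spec:
  tls:
    - hosts: ["example.com"]     # placeholder host
      secretName: web-tls        # TLS certificate stored as a Secret
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-app    # Service that receives the traffic
                port:
                  number: 80
```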
17. What are the different types of services in Kubernetes?
- ClusterIP (default) — Internal service within the cluster.
- NodePort — Exposes the service via a static port on each node.
- LoadBalancer — Creates an external load balancer.
- ExternalName — Maps services to an external DNS.
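For example, a NodePort Service might look like the sketch below (names and ports are placeholders); changing type switches between the variants:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-app            # hypothetical name
spec:
  type: NodePort           # swap for ClusterIP or LoadBalancer as needed
  selector:
    app: web-app           # matches pod labels
  ports:
    - port: 80             # service port inside the cluster
      targetPort: 80       # container port
      nodePort: 30080      # static port exposed on every node (30000-32767)
```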
18. What is a Kubernetes Load Balancer?
A Load Balancer distributes incoming network traffic across multiple pods to:
- Optimize resource utilization.
- Enhance fault tolerance.
- Improve application availability.
19. How do pods communicate within a cluster?
Pods communicate via:
- Localhost (within the same pod).
- Cluster DNS (Service names resolved by CoreDNS/kube-dns).
- Environment Variables (auto-generated by Kubernetes).
Security and Access Control
20. What is Role-Based Access Control (RBAC) in Kubernetes?
RBAC restricts user access by defining:
- Roles (permissions for resources).
- RoleBindings (assign roles to users/groups).
- ClusterRoles and ClusterRoleBindings (cluster-wide permissions).
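A hedged example of a namespaced Role and its RoleBinding (names and the user are placeholders):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader            # hypothetical name
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
  - kind: User
    name: jane                # placeholder user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```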
21. How can you secure a Kubernetes cluster?
- Limit access to etcd (as it contains sensitive data).
- Use Network Policies to isolate workloads.
- Enable role-based access control (RBAC).
- Monitor and audit activity (e.g., Prometheus for metrics, Kubernetes audit logs).
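As an illustration of workload isolation, a NetworkPolicy sketch that only allows frontend pods to reach backend pods on port 8080 (labels and port are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only   # hypothetical name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend            # policy applies to backend pods
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend   # only frontend pods may connect
      ports:
        - protocol: TCP
          port: 8080
```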
Monitoring, Scaling, and Maintenance
22. What are some Kubernetes monitoring tools?
- Prometheus — Metrics collection and alerting.
- Grafana — Interactive visualization dashboard.
- cAdvisor — Real-time container monitoring.
- Fluentd — Log processing and forwarding.
23. How do you ensure high availability in Kubernetes?
- Use multiple master nodes (HA setup).
- Enable Pod Disruption Budgets (PDB) to minimize downtime.
- Implement rolling updates instead of recreating pods.
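A minimal PodDisruptionBudget sketch (name, selector, and threshold are placeholders):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb               # hypothetical name
spec:
  minAvailable: 2             # keep at least 2 pods running during voluntary disruptions
  selector:
    matchLabels:
      app: web-app
```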
24. How can you assign a Pod to a specific node?
By using node affinity, a nodeSelector, or taints & tolerations, as shown in the sketch below.
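A minimal sketch combining required node affinity with a toleration (the label key, taint, and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pinned-pod                   # hypothetical name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: disktype        # placeholder node label
                operator: In
                values: ["ssd"]
  tolerations:                       # allow scheduling onto tainted nodes
    - key: "dedicated"               # placeholder taint key
      operator: "Equal"
      value: "high-memory"
      effect: "NoSchedule"
  containers:
    - name: app
      image: nginx:1.25              # placeholder image
```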
25. What happens when a worker node fails?
- Kubernetes detects failure and marks the node as NotReady.
- After the eviction timeout, its pods are recreated on healthy nodes by their controllers (e.g., ReplicaSets).
- If running on the cloud, auto-scaling may provision a new node.
26. How do you perform maintenance on a Kubernetes node?
Use the cordon/drain/uncordon workflow shown below.
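A typical sequence, assuming a node named worker-node-1 (a placeholder):

```bash
# Mark the node unschedulable and evict its pods (drain also cordons the node)
kubectl drain worker-node-1 --ignore-daemonsets --delete-emptydir-data

# ...perform maintenance (OS patching, reboot, etc.)...

# Make the node schedulable again
kubectl uncordon worker-node-1
```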
Kubernetes Advanced Interview Questions
Kubernetes Deployment and Management
1. What are the two types of Kubernetes pods?
- Single-container pods: Contain only one container (most common).
- Multi-container pods: Contain multiple containers that share storage and networking.
2. What is a Job in Kubernetes?
A Job creates one or more pods and ensures they run to completion, retrying failed pods up to a configurable backoff limit. It is used for batch processing tasks.
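A minimal Job sketch (name, image, and command are placeholders):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: data-migration          # hypothetical name
spec:
  completions: 1                # run the task to completion once
  backoffLimit: 3               # retry failed pods up to 3 times
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: busybox:1.36   # placeholder image
          command: ["sh", "-c", "echo running batch task && sleep 10"]
```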
3. What is a Persistent Volume (PV) in Kubernetes?
A Persistent Volume (PV) is a cluster-wide storage resource, separate from pods, that retains data even if a pod is deleted.
4. What is a Persistent Volume Claim (PVC)?
A Persistent Volume Claim (PVC) is a user's request for storage; it binds to a matching Persistent Volume or triggers dynamic provisioning through a StorageClass.
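A minimal PVC sketch (name, size, and StorageClass are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim              # hypothetical name
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: standard    # placeholder StorageClass; enables dynamic provisioning
  resources:
    requests:
      storage: 5Gi
```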
Networking in Kubernetes
5. How do you expose a Kubernetes service externally?
Use NodePort, LoadBalancer, or Ingress to expose services.
6. How does a Kubernetes Headless Service work?
A Headless Service does not assign a ClusterIP and provides direct DNS-based discovery to backend pods.
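A headless Service is declared by setting clusterIP: None; a minimal sketch (names and port are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: db-headless             # hypothetical name
spec:
  clusterIP: None               # makes the Service headless
  selector:
    app: db
  ports:
    - port: 5432
```

DNS lookups for this Service return the IPs of the individual backing pods rather than a single virtual IP, which is what StatefulSets rely on for stable identities.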
7. What is the difference between ClusterIP, NodePort, and LoadBalancer?
- ClusterIP: Accessible only within the cluster.
- NodePort: Opens a static port on all nodes for external access.
- LoadBalancer: Uses a cloud provider’s external load balancer.
Scaling and Performance Optimization
8. How does Kubernetes handle auto-scaling?
- Horizontal Pod Autoscaler (HPA): Scales pods based on CPU/memory usage.
- Vertical Pod Autoscaler (VPA): Adjusts resource requests and limits for existing pods.
- Cluster Autoscaler: Adds/removes worker nodes.
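For example, an autoscaling/v2 HorizontalPodAutoscaler targeting a Deployment (names and thresholds are placeholders):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa                    # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app                  # Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU exceeds 70%
```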
9. How can you optimize workload distribution in Kubernetes?
- Use Affinity and Anti-affinity rules for node placement.
- Implement Horizontal Pod Autoscaler (HPA).
- Use Resource Requests & Limits to optimize CPU/memory usage.
10. What happens when a Kubernetes pod exceeds its memory limit?
- Kubernetes kills the offending container, and the pod reports an OOMKilled status.
- The container receives a SIGKILL from the kernel's OOM killer and is restarted according to the pod's restartPolicy.
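Requests and limits are set per container; a minimal sketch with illustrative values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: limited-pod             # hypothetical name
spec:
  containers:
    - name: app
      image: nginx:1.25         # placeholder image
      resources:
        requests:
          memory: "128Mi"       # the scheduler reserves this much
          cpu: "250m"
        limits:
          memory: "256Mi"       # exceeding this triggers an OOM kill
          cpu: "500m"
```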
11. How do you achieve zero-downtime deployments in Kubernetes?
- Use Rolling Updates to update pods incrementally.
- Deploy Canary releases to test updates on a small subset of users.
- Implement Readiness Probes to ensure traffic is only sent to healthy pods.
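A hedged Deployment sketch combining a conservative rolling-update strategy with a readiness probe (names, image, and the health endpoint are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app                 # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0         # never take a pod down before its replacement is ready
      maxSurge: 1
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web
          image: nginx:1.25     # placeholder image
          readinessProbe:       # traffic is routed only after this probe succeeds
            httpGet:
              path: /healthz    # placeholder health endpoint
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 10
```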
12. What security best practices should be followed in Kubernetes?
- Enable Role-Based Access Control (RBAC).
- Use Network Policies to isolate workloads.
- Apply Pod Security Standards via Pod Security Admission (the older PodSecurityPolicy API was removed in v1.25).
- Restrict access to etcd.
- Scan container images for vulnerabilities.
13. What are some challenges of running Kubernetes in production?
- Security risks (misconfigured RBAC, exposed APIs).
- Complex networking (Ingress, Load Balancing, Service Mesh).
- Resource optimization (CPU/memory utilization).
- Monitoring and logging at scale.
14. What happens when the Master Node fails?
- If HA (High Availability) is not configured, the control plane becomes unavailable: existing pods keep running, but scheduling, scaling, and self-healing stop.
- In an HA setup, another master node takes over.
15. How do you upgrade a Kubernetes cluster?
- Backup cluster data.
- Upgrade Control Plane (Master Node) first.
- Upgrade Worker Nodes using a rolling update.
- Verify all components after the upgrade.
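Assuming a kubeadm-managed cluster, the upgrade typically follows this shape (versions and node names are placeholders; managed services such as EKS/GKE/AKS use their own upgrade workflow):

```bash
# On the first control-plane node
kubeadm upgrade plan                  # check available versions and prerequisites
kubeadm upgrade apply v1.30.1         # placeholder target version

# On each worker node, one at a time
kubectl drain <node-name> --ignore-daemonsets
kubeadm upgrade node
# upgrade the kubelet and kubectl packages via the OS package manager, then:
systemctl restart kubelet
kubectl uncordon <node-name>
```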