Deploy the smallest possible, co-located workload unit.
→Define a `Pod` resource. Pods can contain one or more containers that share network and storage.
Why: The Pod is the atomic unit of scheduling in Kubernetes. Containers are always deployed within a Pod.
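A minimal Pod manifest illustrating the idea (the name and image are illustrative, not from a specific project):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-api          # illustrative name
  labels:
    app: my-api
spec:
  containers:
    - name: web
      image: nginx:1.25 # illustrative image tag
      ports:
        - containerPort: 80
```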
Ensure the cluster state continuously matches a desired state.
→Rely on the `kube-controller-manager`. It runs control loops that watch resources (e.g., ReplicaSets, Deployments) and reconcile differences.
Why: This is the core declarative, self-healing mechanism. If a Pod managed by a ReplicaSet dies, the controller automatically replaces it.
Automatically assign newly created Pods to the most suitable worker node.
→Rely on the `kube-scheduler`. It filters nodes based on Pod requirements (e.g., resource requests) and scores them to pick the best fit.
Why: The scheduler makes placement decisions based on policy, affinity, and availability, abstracting node selection from the user.
Ensure containers specified in Pods are running and healthy on a given worker node.
→The `kubelet` agent runs on every node, communicates with the API server, and manages the container lifecycle (start, stop, health checks) via a container runtime.
Why: Kubelet is the link between the control plane and the worker node; it executes the Pod specifications.
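The health checks the kubelet performs are declared in the Pod spec. A sketch of a liveness probe (path, port, and timings are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probed-app       # illustrative name
spec:
  containers:
    - name: app
      image: nginx:1.25  # illustrative image
      livenessProbe:     # kubelet restarts the container if this check fails
        httpGet:
          path: /        # assumed health endpoint
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
```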
Persist the entire state and configuration of the Kubernetes cluster reliably.
→Use `etcd`, a consistent and highly-available key-value store. It serves as the single source of truth for the cluster.
Why: All cluster objects (Pods, Services, etc.) are stored in etcd. Only the API server communicates directly with it.
Implement network rules on each node to enable communication via Kubernetes Services.
→The `kube-proxy` component on each node maintains network rules (e.g., iptables, IPVS) that forward traffic from a Service IP to the correct backend Pods.
Why: Kube-proxy is the implementation detail behind the Service abstraction, handling load balancing and routing.
Logically partition a single Kubernetes cluster for multiple teams, projects, or environments.
→Create `Namespace` resources. Namespaces provide a scope for names and a way to attach authorization and policies (e.g., ResourceQuotas).
Why: Namespaces enable multi-tenancy and resource organization without the overhead of multiple clusters.
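A Namespace is one of the simplest manifests (the name is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a   # illustrative team namespace
```

Resources created with `--namespace team-a` (or `metadata.namespace: team-a`) are then scoped to it.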
Provide a stable network endpoint (IP and DNS) for a set of ephemeral Pods.
→Define a `Service` resource that targets a set of Pods using a label selector.
Why: Pods are ephemeral and their IPs change. A Service provides a durable abstraction that load balances traffic to the correct Pods.
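A sketch of a Service selecting Pods by label (names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-api
spec:
  selector:
    app: my-api        # targets Pods labeled app=my-api
  ports:
    - port: 80         # stable Service port
      targetPort: 8080 # container port (illustrative)
```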
Expose an application running in Pods to different network scopes.
→Choose a Service `type`: `ClusterIP` (internal only, default), `NodePort` (exposes on each node IP:port), or `LoadBalancer` (provisions a cloud load balancer).
Why: The Service type determines the accessibility of the application, from purely internal to fully external.
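The same Service made externally reachable by changing only its `type` (a sketch; names are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-api-public
spec:
  type: LoadBalancer   # ClusterIP (default) | NodePort | LoadBalancer
  selector:
    app: my-api
  ports:
    - port: 80
      targetPort: 8080 # illustrative container port
```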
Enable direct network discovery of individual Pods, bypassing the Service proxy.
→Create a `Service` with `clusterIP: None`. This creates DNS A records for each Pod, allowing clients to connect to Pods directly.
Why: Essential for stateful applications like databases (often with StatefulSets) where peer-to-peer communication or stable Pod identity is required.
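A headless Service sketch for a database (name, label, and port are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: db-peers       # illustrative name
spec:
  clusterIP: None      # headless: per-Pod DNS A records, no proxying
  selector:
    app: db
  ports:
    - port: 5432       # illustrative database port
```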
Organize and select a subset of Kubernetes objects.
→Attach key-value `labels` to objects (e.g., `app: my-api`). Use `label selectors` in other objects (e.g., Services, Deployments) to target them.
Why: Labels are the core grouping mechanism in Kubernetes, enabling loose coupling between resources.
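Labels and selectors in action in a Deployment (a sketch; names are illustrative). The `selector` must match the Pod template's labels:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-api          # selects the Pods created below
  template:
    metadata:
      labels:
        app: my-api        # labels applied to each Pod
    spec:
      containers:
        - name: web
          image: nginx:1.25 # illustrative image
```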
Decouple application configuration from the container image.
→Store non-sensitive configuration data in a `ConfigMap`. Mount it as a volume or inject keys as environment variables into Pods.
Why: This allows configuration to be managed independently of the application code, following 12-Factor App principles.
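A sketch of a ConfigMap and a Pod consuming one of its keys as an environment variable (names and values are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: info        # illustrative setting
---
apiVersion: v1
kind: Pod
metadata:
  name: configured-app   # illustrative name
spec:
  containers:
    - name: app
      image: nginx:1.25  # illustrative image
      env:
        - name: LOG_LEVEL
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: LOG_LEVEL
```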
Store sensitive data like passwords, tokens, or API keys for application use.
→Use a `Secret` object. Mount as a volume or inject as an environment variable.
Why: Secrets are intended for sensitive data and receive extra handling compared to ConfigMaps (values hidden in `kubectl describe` by default, optional encryption at rest). Note that by default Secret values are only base64-encoded, not encrypted.
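A Secret sketch (name and value are illustrative placeholders; `stringData` avoids manual base64 encoding):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials   # illustrative name
type: Opaque
stringData:
  DB_PASSWORD: s3cr3t    # illustrative placeholder, never commit real secrets
```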
Provide stateful applications with storage that survives Pod restarts.
→Define a `PersistentVolumeClaim` (PVC) to request storage and reference it from the Pod. An administrator, or a StorageClass via dynamic provisioning, provides a `PersistentVolume` (PV) that fulfills the claim.
Why: This decouples storage consumption (PVC) from storage provisioning (PV), allowing for portable workload definitions.
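A sketch of a PVC and a Pod mounting it (names, size, and image are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim       # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi      # illustrative size
---
apiVersion: v1
kind: Pod
metadata:
  name: db
spec:
  containers:
    - name: db
      image: postgres:16 # illustrative image
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-claim
```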
Manage CPU and memory allocation for containers.
→Set `resources.requests` for guaranteed resources (used for scheduling) and `resources.limits` for the maximum allowed usage (enforced at runtime).
Why: Requests ensure Pods have enough resources to run; Limits prevent Pods from consuming too many resources and impacting other workloads.
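A container-level fragment of a Pod spec showing both fields (values are illustrative):

```yaml
spec:
  containers:
    - name: app
      image: nginx:1.25    # illustrative image
      resources:
        requests:
          cpu: 250m        # reserved; used by the scheduler
          memory: 128Mi
        limits:
          cpu: 500m        # CPU is throttled above this
          memory: 256Mi    # container is OOM-killed above this
```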
Set aggregate resource constraints on a namespace.
→Create a `ResourceQuota` object to limit the total amount of CPU, memory, or number of objects (Pods, Services) that can be created in a namespace.
Why: ResourceQuotas are essential for multi-tenant environments to ensure fair resource sharing and prevent over-consumption.
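A ResourceQuota sketch for a namespace (names and limits are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota     # illustrative name
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"    # total CPU requests allowed in the namespace
    requests.memory: 8Gi
    pods: "20"           # cap on Pod count
```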
Manage Kubernetes resources using version-controlled configuration files.
→Use `kubectl apply -f <filename.yaml>`. This command creates or updates resources based on the file content.
Why: `apply` is declarative, making it ideal for GitOps and CI/CD. It tracks changes and performs a three-way merge, which is safer than the imperative `create` or `replace`.
Diagnose why a Pod is not running correctly (e.g., stuck in Pending, ContainerCreating, or CrashLoopBackOff).
→Use `kubectl describe pod <pod-name>`. Check the `Events` section at the bottom for detailed messages from the scheduler, kubelet, or controllers.
Why: `describe` provides a chronological event log that is the primary tool for debugging resource lifecycle issues.
Provide networking functionality for containers, enabling Pod-to-Pod communication across the cluster.
→Use a Container Network Interface (CNI) plugin (e.g., Calico, Flannel, Cilium). The kubelet on each node uses the CNI plugin to configure networking for each Pod.
Why: CNI provides a standard interface, allowing Kubernetes to be integrated with various networking solutions without modifying core components.
Control access to Kubernetes API resources for users and applications.
→Use Role-Based Access Control (RBAC). Define a `Role` (namespaced) or `ClusterRole` (cluster-wide) with permissions, and bind it to a subject (User, Group, ServiceAccount) using a `RoleBinding` or `ClusterRoleBinding`.
Why: RBAC is the standard for securing Kubernetes, enabling the principle of least privilege for all API interactions.
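A sketch of a namespaced Role granting read-only Pod access, bound to a ServiceAccount (all names are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader       # illustrative name
  namespace: team-a
rules:
  - apiGroups: [""]      # "" = core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: team-a
subjects:
  - kind: ServiceAccount
    name: ci-bot         # illustrative subject
    namespace: team-a
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```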