Kubernetes


What is Kubernetes and Why is it popular?

Kubernetes is an open-source platform for container orchestration, designed to automate the deployment, scaling, and management of containerized applications. It was originally developed by Google and is now maintained by the Cloud Native Computing Foundation (CNCF).

Kubernetes is popular for several reasons:

Scalability and Flexibility: Kubernetes is highly scalable and can handle large numbers of containers and services across multiple clusters. It is also highly flexible, allowing users to choose their preferred container runtime, networking solution, and storage solution.

Automation and Efficiency: Kubernetes automates many of the repetitive and time-consuming tasks involved in deploying and managing containerized applications, such as scaling, load balancing, and rolling updates. This can improve the efficiency of the development and operations teams and reduce the likelihood of human error.

Portability: Kubernetes is designed to be highly portable, meaning that applications and services can be easily moved between different cloud providers and on-premise data centers.

Ecosystem: Kubernetes has a large and growing ecosystem of tools and services, including monitoring, logging, and security solutions, which can be integrated with the platform to further enhance its capabilities.

Community: Kubernetes has a large and active community of developers and users who contribute to the project, provide support, and share best practices. This community helps to ensure that the platform remains up-to-date, reliable, and secure.

Overall, Kubernetes has become popular because it addresses many of the challenges and complexities of deploying and managing containerized applications at scale, providing a robust and flexible platform that can support modern application development and deployment practices.

Kubernetes Architecture:

Kubernetes is designed as a distributed system, consisting of a control plane and multiple worker nodes. The control plane manages the overall state of the cluster, while the worker nodes run the containerized applications.

Master Components: Master components are the control plane components of Kubernetes that manage the Kubernetes cluster. The master components include:

Kubernetes API Server:

The Kubernetes API Server exposes the Kubernetes API, which is used by clients to interact with the Kubernetes cluster. It is the primary management point for the Kubernetes cluster and is responsible for validating and processing API requests.
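Every client, including kubectl itself, ultimately talks to this API. As a quick sketch (assuming kubectl is already configured against a running cluster), you can reach the API directly through a local proxy:

# start a local proxy to the API server (listens on 127.0.0.1:8001 by default)
kubectl proxy &

# query the API directly, e.g. list the pods in the default namespace
curl http://127.0.0.1:8001/api/v1/namespaces/default/pods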

etcd:

etcd is a distributed key-value store that stores the configuration data of the Kubernetes cluster. It is used to store the state of the cluster, including the state of all objects (pods, services, deployments, etc.).
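As a rough illustration (assuming a kubeadm-provisioned cluster where etcd runs as a static pod named etcd-&lt;node-name&gt; and uses the default certificate paths under /etc/kubernetes/pki/etcd), you can peek at the keys Kubernetes writes:

# list some of the keys Kubernetes stores in etcd under /registry
# (pod name and certificate paths are kubeadm defaults; adjust for your cluster)
kubectl -n kube-system exec etcd-&lt;node-name&gt; -- etcdctl \
  --cacert /etc/kubernetes/pki/etcd/ca.crt \
  --cert /etc/kubernetes/pki/etcd/server.crt \
  --key /etc/kubernetes/pki/etcd/server.key \
  get /registry --prefix --keys-only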

Kube-Controller Manager:

The Kube-Controller Manager runs the controllers that maintain the desired state of the Kubernetes cluster. These controllers include the node controller, the replication controller, and the endpoint controller.

Kube-Scheduler:

The Kube-Scheduler is responsible for scheduling the pods on the nodes in the Kubernetes cluster. It uses information about the node's available resources and the pod's resource requirements to determine the best node to schedule the pod on.
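For example, a pod can declare resource requests, and the scheduler will only place it on a node that still has that much CPU and memory available. A minimal sketch (the name, image, and values are only illustrative):

kind: Pod
apiVersion: v1
metadata:
  name: resource-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "250m"      # the scheduler looks for a node with at least this much free CPU
        memory: "128Mi"  # and at least this much free memory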

Node Components:

Node components are the worker components of Kubernetes that run on each node in the Kubernetes cluster. The node components include:

Kubelet:

The Kubelet is the primary node agent that communicates with the Kubernetes API server and ensures that the containers are running on the node as expected. It is responsible for starting, stopping, and monitoring the containers on the node.

Container Runtime:

The Container Runtime is the software that runs the containers on the node. Kubernetes supports several container runtimes, including Docker, CRI-O, and containerd.

kube-proxy:

The kube-proxy is responsible for providing network connectivity to the pods running on the node. It does this by creating network rules that allow traffic to be forwarded to the pods.

๐Ÿ“ Kubernetes Components:

Kubernetes components can be divided into two categories:

  1. Control Plane Components

  2. Worker Node Components

Control Plane Components:

kube-apiserver:

The kube-apiserver is the main management point for the Kubernetes cluster. It exposes the Kubernetes API, which is used by clients to interact with the Kubernetes cluster.

kube-scheduler:

The kube-scheduler is responsible for scheduling the pods on the nodes in the Kubernetes cluster.

kube-controller-manager:

The kube-controller-manager runs controllers that are responsible for maintaining the desired state of the Kubernetes cluster.

🔹 etcd:

etcd is a distributed key-value store that stores the configuration data of the Kubernetes cluster.

🔹 cloud-controller-manager:

The cloud controller manager is responsible for managing the cloud provider-specific resources in the Kubernetes cluster. It provides a way to integrate with the cloud provider's APIs to manage the cloud resources.

Worker Node Components:

🔹 Nodes:

Nodes are the worker machines that run the containers. They are managed by the Kubernetes master components.

Pods:

Pods are the smallest deployable units in the Kubernetes cluster. They contain one or more containers and are scheduled on nodes.

🔹 Container Runtime Engine:

The Container Runtime Engine is responsible for running the containers on the node. Kubernetes supports several container runtimes, including Docker, CRI-O, and containerd.

kubelet:

Kubelet is one of the main components of Kubernetes responsible for managing individual nodes and their containers. It is essentially an agent that runs on each node in the Kubernetes cluster and communicates with the API server to ensure that the containers running on the node are healthy and running as expected.

The kubelet performs several functions, including:

  1. Fetching container manifests from the Kubernetes API server.

  2. Ensuring that the containers described in the manifest are running and healthy.

  3. Reporting the status of the containers back to the API server.

  4. Mounting and unmounting volumes as necessary.

  5. Executing container health checks.
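On a node where the kubelet was installed as a package (as in the apt-based installation later in this post), it runs as a systemd service, and the status it reports can be read back through the API server. A quick sketch (node names will differ):

# check that the kubelet service is running on the node
systemctl status kubelet

# see the node status the kubelet reports back to the API server
kubectl get nodes
kubectl describe node <node-name>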

kube-proxy

The kube-proxy component is responsible for managing the network proxy between the Kubernetes services and the pods that are running on the worker nodes. The kube-proxy uses various networking modes to ensure that the communication between the pods and services is efficient and reliable.
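For instance, when kube-proxy runs in its iptables mode, every Service is translated into NAT rules on the node. A quick way to peek at them (a sketch; it assumes iptables mode and requires root):

# list the NAT rules kube-proxy maintains for Services
sudo iptables -t nat -L KUBE-SERVICES -n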

Container Networking

The container networking component is responsible for ensuring that all the containers running on the worker nodes can communicate with each other and with the external networks. Kubernetes provides several plugins for container networking, including Flannel, Calico, and Weave.
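The plugin in use is configured on each node and usually runs as pods in the kube-system namespace. A quick sketch to see what a cluster is using (file names vary by plugin):

# CNI configuration files written on the node by the plugin
ls /etc/cni/net.d/

# networking pods (flannel, calico, weave, etc.) typically run in kube-system
kubectl get pods -n kube-system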

kubectl - Client

minikube - Single-node cluster

kubeadm - Multi-node cluster

Installing kubectl and minikube on Ubuntu

Docker can also be installed with: apt-get install docker.io

kubectl is the Kubernetes-specific command-line tool that lets you communicate with and control Kubernetes clusters. Whether you're creating, managing, or deleting resources on your Kubernetes platform, kubectl is an essential tool.

Install kubectl:

apt-get install ca-certificates curl

apt-get install apt-transport-https

sudo curl -fsSLo /etc/apt/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
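Note: the command above writes the key into /etc/apt/keyrings; if that directory does not exist on your system yet, create it first with sudo mkdir -p /etc/apt/keyrings.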

echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

If you face issues with the public key, you can try the command below instead.

echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

apt-get update

apt-get install kubectl

Finally, kubectl is installed successfully.
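To verify the installation, check the client version:

kubectl version --client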

Minikube is a lightweight Kubernetes implementation that creates a VM on your local machine and deploys a simple cluster containing only one node.

Install Minikube

apt-get update

apt-get install apt-transport-https curl

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF

apt-get update

apt-get install kubelet kubeadm kubectl

apt-mark hold kubelet kubeadm kubectl
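Note that the apt commands above install kubelet, kubeadm, and kubectl; the minikube binary itself is usually downloaded separately. A minimal sketch following the minikube documentation (assuming a linux-amd64 machine; check the docs for other architectures):

# download the latest minikube binary and put it on the PATH
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube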

Minikube Drivers

minikube can be deployed as a VM, a container, or bare metal.

https://minikube.sigs.k8s.io/docs/drivers/

minikube start --driver=docker

To check the Minikube version:

minikube version
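Once the cluster is up, you can confirm that the single node is ready:

# check the state of the local cluster components
minikube status

# the single minikube node should show as Ready
kubectl get nodes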

kubectl supports two types of configuration:

  1. Imperative configuration involves creating Kubernetes resources directly at the command line against a Kubernetes cluster, e.g. run, expose, create, delete (see the examples after this list).

  2. Declarative configuration defines resources within manifest files (e.g. YAML files) and then applies those definitions to the cluster.
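For example, the same kind of pod can be created and removed imperatively with one-line commands (a minimal sketch; the nginx name and image are only illustrative), while the rest of this post uses the declarative kubectl apply approach:

# imperative style: create, expose, and delete a pod directly from the CLI
kubectl run nginx --image=nginx
kubectl expose pod nginx --port=80
kubectl delete pod nginx
kubectl delete service nginx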

Create a pod configuration file

vi first-pod.yml

kind: Pod
apiVersion: v1
metadata:
  name: my-pod
spec:
  containers:
  - name: container1
    image: ubuntu
    command: ["/bin/bash","-c","while true; do echo I-am first-pod; sleep 5; done"]
  restartPolicy: Never

  1. kind: This field specifies the type of resources being used. In this case, the resource is a 'Pod', which is the most basic unit of deployment in Kubernetes.

  2. apiVersion: The version of the Kubernetes API that the manifest file is utilizing is specified in this field.

  3. metadata: Information about the resource being defined is contained in this field. In this case, the name field gives the pod a name.

  4. spec: The description of the pod, including the desired state and operating instructions, is contained in this field.

  5. containers: This field specifies the containers that should be run as part of the pod. In this case, there is a single container defined, with the name of container1, an image of ubuntu, and a command to execute when the container starts.

  6. command: This field specifies the command that should be run when the container starts.

  7. restartPolicy: This field specifies the restart policy for the pod. In this case, the restart policy is set to 'Never', which means that the pod will not be restarted if it exits or crashes.

To create the pod from this manifest, run:

kubectl apply -f first-pod.yml

The above command is used to create or update resources in the Kubernetes cluster. The 'apply' subcommand tells 'kubectl' to apply the configuration in the specified file (in our case, first-pod.yml) to the cluster. If the resources defined in the file do not already exist in the cluster, 'kubectl apply' will create them.

If the resources already exist, 'kubectl apply' will update them with the new configuration.

The '-f' flag specifies the path to the file containing the resource configuration to apply. Now, to see the pod, run:

kubectl get pods

If you want to see which node the pod is running on:

kubectl get pods -o wide

To see more detailed information about a pod:

kubectl describe pod <pod-name>

To view the logs generated by the pod in the Kubernetes cluster:

kubectl logs -f my-pod

To list pods across all namespaces:

kubectl get pods --all-namespaces

To delete the pod from the Kubernetes cluster

kubectl delete pod <pod-name>

Create a multi-container pod

vi multi-container.yml

kind: Pod
apiVersion: v1
metadata:
  name: second-pod
spec:
  containers:
  - name: container1
    image: ubuntu
    command: ["/bin/bash","-c","while true; do echo I-am-container-1; sleep 5; done"]
  - name: container2
    image: ubuntu
    command: ["/bin/bash","-c","while true; do echo I-am-container-2; sleep 5; done"]

Create the pod and verify that both containers are running:

kubectl apply -f multi-container.yml

kubectl get pods

kubectl describe pods

kubectl get pods --all-namespaces
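Since this pod runs more than one container, kubectl logs needs the -c flag to choose which container's output to stream:

# follow the logs of a specific container inside the multi-container pod
kubectl logs -f second-pod -c container1
kubectl logs -f second-pod -c container2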
