k8s_concept
Kubernetes
Kubernetes is a software system that lets you easily deploy and manage containerized applications on top of it. It relies on the features of Linux containers to run heterogeneous applications without having to know their internal details and without having to manually deploy each application on each host.
Because these apps run in containers, they don’t affect other apps running on the same server, which is critical when you run applications for completely different organizations on the same hardware. This is of paramount importance for cloud providers, because they strive for the best possible utilization of their hardware while still having to maintain complete isolation of hosted applications.
Kubernetes enables you to run your software applications on thousands of computer nodes as if all those nodes were a single, enormous computer. It abstracts away the underlying infrastructure and, by doing so, simplifies development, deployment, and management for both development and operations teams.
But before we go into Kubernetes, let’s first take a look at microservices, the trend that brought Kubernetes to our attention.
Microservice
Today, big applications are slowly being broken down into smaller, independently running components called microservices. Because microservices are decoupled from each other, they can be developed, deployed, updated, and scaled individually. This enables you to change components quickly and as often as necessary to keep up with today’s rapidly changing business requirements.
Pros:
- developed individually, in whichever language you like (C, C++, Python, Go, etc.)
- deployed individually; deploy multiple replicas of key services for performance and HA
- updated individually; upgrade only the microservices that need it
- scaled individually; scale out the key microservices that are the bottleneck
Cons:
- With a large number of microservices, it takes more effort to deploy, upgrade, scale, and handle failures; we have to do this manually or write our own scripts.
Why we need Kubernetes:
We need automation, which includes automatic scheduling of those components to our servers, automatic configuration, supervision, and failure handling. This is where Kubernetes comes in: with Kubernetes, you do not need to do these things manually or write your own scripts.
Core Concept
A Kubernetes cluster is composed of many nodes, which can be split into two types:
- The master node, which hosts the Kubernetes Control Plane that controls and manages the whole Kubernetes system
- Worker nodes that run the actual applications you deploy
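As a quick illustration, here is a minimal sketch using the official Kubernetes Python client (the kubernetes package) to list a cluster’s nodes and tell masters from workers by their labels; it assumes a reachable cluster and a working local kubeconfig:

```python
from kubernetes import client, config

config.load_kube_config()  # assumes a working kubeconfig for the cluster
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    # Control-plane (master) nodes usually carry a node-role.kubernetes.io/<role> label.
    roles = [key.split("/", 1)[1] for key in (node.metadata.labels or {})
             if key.startswith("node-role.kubernetes.io/")]
    print(node.metadata.name, roles or ["worker"])
```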
THE CONTROL PLANE
The Control Plane is what controls the cluster and makes it function. It consists of multiple components that can run on a single master node or be split across multiple nodes and replicated to ensure high availability. These components are:
- The Kubernetes API Server, which you and the other Control Plane components communicate with
- The Scheduler, which schedules your apps (assigns a worker node to each deployable component of your application)
- The Controller Manager, which performs cluster-level functions, such as replicating components, keeping track of worker nodes, handling node failures, and so on
- etcd, a reliable distributed data store that persistently stores the cluster configuration.
The components of the Control Plane hold and control the state of the cluster, but they don’t run your applications. This is done by the (worker) nodes.
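On many clusters (for example, those set up with kubeadm) the Control Plane components themselves run as pods in the kube-system namespace, so you can inspect them like any other workload. A minimal sketch, again assuming a working kubeconfig:

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# The API server, Scheduler, Controller Manager, and etcd typically show up here.
for pod in v1.list_namespaced_pod(namespace="kube-system").items:
    print(pod.metadata.name)
```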
THE NODES
The worker nodes are the machines that run your containerized applications. The task of running, monitoring, and providing services to your applications is done by the following components:
- Docker, rkt, or another container runtime, which runs your containers
- The Kubelet, which talks to the API server and manages containers on its node
- The Kubernetes Service Proxy (kube-proxy), which load-balances network traffic between application components
The master and worker nodes communicate through the API server: the Kubelet on each worker node is one end of the connection, and the API server is the other.
In fact, each of these components runs in a container as well.
run an application on k8s
To run an application in Kubernetes, you first need to package it up into one or more container images, push those images to an image registry, and then post a description of your app to the Kubernetes API server.
The description includes information such as the container image or images that contain your application components, how those components are related to each other, and which ones need to be run co-located (together on the same node) and which don’t. For each component, you can also specify how many copies (or replicas) you want to run. Additionally, the description also includes which of those components provide a service to either internal or external clients and should be exposed through a single IP address and made discoverable to the other components.
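As a sketch of what such a description can look like when posted through the Python client: the Deployment below asks for three replicas of a single-container component, and the Service exposes them through a single IP address. The app name kubia and the image luksa/kubia are hypothetical placeholders for illustration:

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()
core = client.CoreV1Api()

labels = {"app": "kubia"}  # hypothetical app name

# The "description": which image to run and how many replicas to keep running.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="kubia"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(containers=[client.V1Container(
                name="kubia",
                image="luksa/kubia:latest",  # hypothetical image in a registry
                ports=[client.V1ContainerPort(container_port=8080)],
            )]),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)

# Expose the replicas to clients through a single, stable IP address.
service = client.V1Service(
    metadata=client.V1ObjectMeta(name="kubia"),
    spec=client.V1ServiceSpec(
        selector=labels,
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)
core.create_namespaced_service(namespace="default", body=service)
```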
When the API server processes your app’s description, the Scheduler schedules the specified groups of containers onto the available worker nodes based on the computational resources required by each group and the unallocated resources on each node at that moment. The Kubelet on those nodes then instructs the container runtime (Docker, for example) to pull the required container images and run the containers.
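You can inspect the Scheduler’s decisions after the fact: once a pod is scheduled, its spec.nodeName field records the chosen node. A small sketch, assuming pods in the default namespace:

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# node_name stays empty until the Scheduler has assigned the pod to a node.
for pod in v1.list_namespaced_pod(namespace="default").items:
    print(pod.metadata.name, "->", pod.spec.node_name, pod.status.phase)
```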
keep application running
Once the application is running, Kubernetes continuously makes sure that the deployed state of the application always matches the description you provided. For example, if one of your containers crashes, Kubernetes will restart it automatically.
Similarly, if a whole worker node dies or becomes inaccessible, Kubernetes will select new nodes for all the containers that were running on that node and run them on the newly selected nodes.
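One way to see this reconciliation in action is to watch pod events while deleting a pod that belongs to a Deployment; a replacement should appear almost immediately. A minimal sketch using the client’s watch helper:

```python
from kubernetes import client, config, watch

config.load_kube_config()
v1 = client.CoreV1Api()

# Delete a Deployment-managed pod in another terminal; you should see a
# DELETED event for it, followed by ADDED/MODIFIED events for its replacement.
w = watch.Watch()
for event in w.stream(v1.list_namespaced_pod, namespace="default", timeout_seconds=60):
    pod = event["object"]
    print(event["type"], pod.metadata.name, pod.status.phase)
```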
scale
While the application is running, you can decide you want to increase or decrease the number of copies, and Kubernetes will spin up additional ones or stop the excess ones.
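Scaling is just another change to the desired state. A sketch that patches the scale subresource of the hypothetical kubia Deployment from the earlier example (the programmatic equivalent of kubectl scale deployment kubia --replicas=5):

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Raise the desired replica count; Kubernetes spins up or stops pods to match.
apps.patch_namespaced_deployment_scale(
    name="kubia",
    namespace="default",
    body={"spec": {"replicas": 5}},
)
```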