
Introduction

CRI and OCI

  • The OCI, or Open Container Initiative, is an organization that creates container standards. The OCI runtime spec defines the API of a low-level container runtime, and the OCI image spec defines what a “Docker image” actually is.
  • The Kubernetes project has also defined a number of standards. Relevant for this article is the CRI: the Container Runtime Interface. This interface defines how Kubernetes talks with a high-level container runtime.
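
As a concrete illustration of the CRI boundary, here is a minimal sketch of pointing the kubelet at a CRI-speaking runtime such as containerd. The configuration field assumes Kubernetes 1.27+ (older versions used the --container-runtime-endpoint kubelet flag), and the socket path is an assumption that depends on your runtime:

```yaml
# KubeletConfiguration sketch: tell the kubelet which CRI endpoint to talk to.
# The socket path assumes containerd's default; adjust for your runtime (e.g. CRI-O).
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# gRPC endpoint of the high-level container runtime that implements the CRI
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
```
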
Read more »

Introduction

Autoscaling allows you to dynamically adjust to demand without intervention from the individuals in charge of operating the cluster.

Kubernetes autoscaling helps optimize resource usage and costs by automatically scaling a cluster up and down in line with demand.

Kubernetes enables autoscaling at the cluster/node level as well as at the pod level.
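
At the pod level, the usual entry point is the HorizontalPodAutoscaler. A minimal sketch, assuming a Deployment named web already exists and the metrics pipeline (metrics-server) is installed:

```yaml
# HorizontalPodAutoscaler sketch: scale the (assumed) "web" Deployment between
# 2 and 10 replicas, targeting roughly 50% average CPU utilization across its Pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
```

Cluster/node-level scaling is handled separately, typically by the Cluster Autoscaler provided for your cloud platform.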

Read more »

Introduction

One of the main advantages of Kubernetes is how it brings greater reliability and stability to container-based distributed applications through dynamic scheduling of containers. But how do you make sure Kubernetes itself stays up when a component or its master node goes down?

Kubernetes High Availability is about setting up Kubernetes, along with its supporting components, in such a way that there is no single point of failure. A single-master cluster can easily fail, while a multi-master cluster uses multiple master nodes, each of which has access to the same worker nodes. In a single-master cluster, important components like the API server and controller manager live only on the single master node, and if it fails you cannot create more services, pods, etc. In a Kubernetes HA environment, however, these important components are replicated on multiple masters (usually three), and if any of the masters fail, the other masters keep the cluster up and running.
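
As a minimal sketch of how this is typically expressed with kubeadm, the control-plane endpoint is pointed at a load balancer that fronts all masters, so the API server address itself is not a single point of failure (the DNS name below is a placeholder; 6443 is the default API server port):

```yaml
# kubeadm ClusterConfiguration sketch for an HA control plane.
# "k8s-api.example.com" stands in for a load balancer in front of the
# (usually three) control-plane nodes.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: "k8s-api.example.com:6443"
```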

Read more »

Introduction

Here let’s focus on how Kubernetes controls access to the API server. When a request reaches the API, it goes through several stages, illustrated in the following diagram.

API Access Control
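
The stages are authentication, authorization, and admission control. As an illustration of the authorization stage, here is a minimal RBAC sketch (the namespace, names, and subject are assumptions) that lets one user read Pods in a single namespace:

```yaml
# RBAC sketch: grant read-only access to Pods in the "demo" namespace
# to the (assumed) user "jane"; requests outside these rules are rejected
# at the authorization stage.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: demo
  name: pod-reader
rules:
  - apiGroups: [""]   # "" means the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: demo
  name: read-pods
subjects:
  - kind: User
    name: jane
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```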

Read more »

Overview

When
Consider adding a Custom Resource to Kubernetes if you want to define new controllers, application configuration objects, or another declarative API. Custom resources are mostly used for complex, stateful applications.

How
Custom resources can appear and disappear in a running cluster through dynamic registration, and cluster admins can update custom resources independently of the cluster itself. Once a custom resource is installed, users can create and access its objects using kubectl, just as they do for built-in resources like Pods.

Operator pattern
The combination of a custom resource API and a control loop is called the Operator pattern. The Operator pattern is used to manage specific, usually stateful, applications.

Kubernetes provides two ways to add custom resources to your cluster:

  • CRDs are simple and can be created without any programming.
  • API Aggregation requires programming, but allows more control over API behaviors like how data is stored and conversion between API versions.

CRDs are easier to use. Aggregated APIs are more flexible. Choose the method that best meets your needs.

Typically, CRDs are a good fit if:

  • You have a handful of fields
  • You are using the resource within your company, or as part of a small open-source project (as opposed to a commercial product)
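
For the CRD route described above, here is a minimal sketch of a CustomResourceDefinition (the group, kind, and schema fields are made up for illustration, following the well-known CronTab example):

```yaml
# CustomResourceDefinition sketch: adds a namespaced "CronTab" resource
# under the (assumed) group "stable.example.com", with a small schema.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com   # must be <plural>.<group>
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                cronSpec:
                  type: string
                replicas:
                  type: integer
```

Once this is applied, CronTab objects can be created and listed with kubectl just like built-in resources.
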
Read more »

Introduction

CNI (Container Network Interface), a Cloud Native Computing Foundation project, consists of a specification and libraries for writing plugins to configure network interfaces in Linux containers, along with a number of supported plugins. CNI concerns itself only with network connectivity of containers and removing allocated resources when the container is deleted. Because of this focus, CNI has a wide range of support and the specification is simple to implement.

A CNI plugin is responsible for inserting a network interface into the container network namespace (e.g., one end of a virtual ethernet (veth) pair) and making any necessary changes on the host (e.g., attaching the other end of the veth into a bridge). It then assigns an IP address to the interface and sets up the routes consistent with the IP Address Management section by invoking the appropriate IP Address Management (IPAM) plugin.

Main tasks

  • 🔴 Insert an interface into the container
  • 🔴 Assign an IP address to the container
  • 🔴 Set up routes or iptables rules
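
A minimal sketch of a CNI network configuration that covers the three tasks above, using the reference bridge plugin with host-local IPAM (the network name, bridge name, and subnet are assumptions). CNI configurations are plain JSON, typically placed under /etc/cni/net.d/:

```json
{
  "cniVersion": "0.4.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16",
    "routes": [
      { "dst": "0.0.0.0/0" }
    ]
  }
}
```
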
Read more »

Service

A Kubernetes Service is a resource you create to provide a single, constant point of entry to a group of Pods (selected by a label selector) providing the same service. A Service has an IP address and port that never change while the Service exists, whereas Pod addresses can change during an upgrade, or Pods can be removed or added during scaling. Hence we SHOULD NOT access Pod addresses directly; we need a stable, dedicated IP for the cases mentioned, and that is why Services exist.

For more details about Services, refer to k8s service

Enable source IP persistence for a service
If you want to make sure that connections from a particular client are passed to the same Pod each time, you can select the session affinity based on the client’s IP addresses by setting service.spec.sessionAffinity to "ClientIP" (the default is "None"). You can also set the maximum session sticky time by setting service.spec.sessionAffinityConfig.clientIP.timeoutSeconds appropriately. (the default value is 10800, which works out to be 3 hours).
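
A minimal sketch of a Service with client-IP session affinity (the name, selector label, and ports are assumptions):

```yaml
# Service sketch: stable virtual IP in front of Pods labeled app=web,
# with ClientIP session affinity so a given client keeps reaching the same Pod.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80          # port exposed by the Service
      targetPort: 8080  # port the Pods listen on (assumption)
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800   # default; works out to 3 hours
```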

Read more »

Introduction

Helm is the best way to find, share, and use software built for Kubernetes. A piece of software in Kubernetes may be one deployment or several deployments that work together to provide a service to users. Helm manages these YAML files with the concept of a Chart: a Chart is a bundle of YAML files and other files related to the software, and it helps you define, install, upgrade, and roll back even the most complex Kubernetes application. A Chart is like a deb package, which packages software into an xx.deb file; to create an application for Kubernetes with Helm, you create Chart files with a fixed layout and follow its syntax.
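
A minimal sketch of that fixed layout, with the Chart.yaml metadata file at the chart's root (the chart name and versions are placeholders):

```yaml
# Chart.yaml sketch for a chart laid out as:
#   mychart/
#     Chart.yaml        # this file: chart metadata
#     values.yaml       # default configuration values
#     templates/        # templated Kubernetes manifests
apiVersion: v2
name: mychart
description: A Helm chart for an example Kubernetes application
type: application
version: 0.1.0        # chart version
appVersion: "1.0.0"   # version of the packaged application
```

The chart is then installed, upgraded, or rolled back as a unit with helm install, helm upgrade, and helm rollback.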

Read more »

Overview

Apache ZooKeeper is basically a distributed (cluster) coordination service for managing a large set of hosts. Coordinating and managing services in a distributed environment is a very complicated process; Apache ZooKeeper, with its simple architecture and API, solves this issue. ZooKeeper allows developers to focus on core application logic without worrying about the distributed nature of the application.

Apache ZooKeeper is basically a service used by cluster members to coordinate among themselves and maintain shared data with robust synchronization techniques.

Apache ZooKeeper is itself a distributed application providing services for writing distributed applications.

Read more »