
K8s commands

Kubernetes resources are usually created or edited by posting a JSON or YAML manifest to the Kubernetes REST API endpoint. A manifest gives you full control over the resource, but it is also more complex, so K8S provides specific commands that create/edit a limited set of properties for common resources.
k8s_cheat_sheet

create/delete resource
You can create a resource through the generic API with a YAML file, or with a resource-specific command:

$ kubectl create configmap|deployment|namespace|secret|job $options
$ kubectl create -f xxx.yaml

modify resource
k8s_modify_resource

NOTE

  • apply manages applications through files defining Kubernetes resources. It creates and updates resources in a cluster by running kubectl apply. This is the recommended way of managing Kubernetes applications in production.

  • apply behaves like create if the resource does not exist; otherwise it modifies the resource. The YAML must be the full definition of the resource, so a modification usually means kubectl get deploy kubia -o yaml, edit the field, then apply it back (see the example after the commands below).

$ kubectl apply -f ./my-manifest.yaml            # create resource(s)
$ kubectl apply -f ./my1.yaml -f ./my2.yaml # create from multiple files
$ kubectl apply -f ./dir # create resource(s) in all manifest files in dir
$ kubectl apply -f https://git.io/vPieo # create resource(s) from url
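For example, the round trip described in the note above might look like this (a sketch, assuming a deployment named kubia already exists):

$ kubectl get deploy kubia -o yaml > kubia-deploy.yaml
# edit the file (e.g. change spec.replicas), then push it back
$ kubectl apply -f kubia-deploy.yaml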

creating from json/yaml file
A manifest gives more control over the resource and is a bit more complex, but for each resource type only a few properties are mandatory, so you only need to set that part. Refer to each resource type below to see how to create it from JSON/YAML.

get help for each resource and its properties

$ kubectl explain pods
KIND: Pod
VERSION: v1

DESCRIPTION:
Pod is a collection of containers that can run on a host. This resource is
created by clients and scheduled onto hosts.

FIELDS:
apiVersion <string>
APIVersion defines the versioned schema of this representation of an
object. Servers should convert recognized schemas to the latest internal
value, and may reject unrecognized values. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

kind <string>
Kind is a string value representing the REST resource this object
represents. Servers may infer this from the endpoint the client submits
requests to. Cannot be updated. In CamelCase. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
...
$ kubectl explain pod.metadata
...
$ kubectl explain pod.metadata.labels
...

Most used K8s resources

$ kubectl api-resources
NAME                       SHORTNAMES   NAMESPACED   KIND
componentstatuses          cs           false        ComponentStatus
configmaps                 cm           true         ConfigMap
namespaces                 ns           false        Namespace
nodes                      no           false        Node
persistentvolumeclaims     pvc          true         PersistentVolumeClaim
persistentvolumes          pv           false        PersistentVolume
pods                       po           true         Pod
replicationcontrollers     rc           true         ReplicationController
resourcequotas             quota        true         ResourceQuota
secrets                                 true         Secret
services                   svc          true         Service
daemonsets                 ds           true         DaemonSet
deployments                deploy       true         Deployment
replicasets                rs           true         ReplicaSet
cronjobs                   cj           true         CronJob
jobs                                    true         Job
events                     ev           true         Event
ingresses                  ing          true         Ingress
...

generic options for all resources

--all-namespaces
-n=kube-system
-n kube-system
-o wide
-o json
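For example, combining these options (assuming the kube-system namespace exists, as it does on most clusters):

$ kubectl get pods -n kube-system -o wide
$ kubectl get svc --all-namespaces -o json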

cluster

  • context related

    # a context groups a cluster, a user (authinfo) and a default namespace; it is one level above ns
    $ sudo kubectl config current-context
    $ sudo kubectl config get-contexts
    CURRENT   NAME       CLUSTER    AUTHINFO   NAMESPACE
    *         minikube   minikube   minikube   default

    # create a new context and switch to it, must create test-ns before it
    $ sudo kubectl config set-context test-ctx --namespace=test-ns --user=minikube --cluster=minikube
    $ sudo kubectl config get-contexts
    CURRENT   NAME       CLUSTER    AUTHINFO   NAMESPACE
    *         minikube   minikube   minikube   default
              test-ctx   minikube   minikube   test-ns

    $ sudo kubectl config use-context test-ctx
  • nodes related

    # show all nodes(master and worker)
    $ sudo kubectl get no -o wide
    NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
    dev Ready control-plane,master 17h v1.20.2 10.116.5.201 <none> Ubuntu 18.04.5 LTS 4.15.0-139-generic docker://20.10.5

    # get details about node
    $ kubectl describe nodes

    $ kubectl describe node dev
    Name: dev
    Roles: master
    Labels: beta.kubernetes.io/arch=amd64
    beta.kubernetes.io/os=linux
    kubernetes.io/arch=amd64
    ...
    (CPU info / OS info / running pods ...)

    # label related
    # set and filter with label
    $ kubectl label node dev disk=ssd
    $ kubectl get node -l disk=ssd
  • cluster info

    $ kubectl cluster-info
    Kubernetes master is running at https://192.168.1.1:8443
    KubeDNS is running at https://192.168.1.1:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
  • addon related

    $ minikube addons list
    |-----------------------------|----------|--------------|
    | ADDON NAME | PROFILE | STATUS |
    |-----------------------------|----------|--------------|
    | ambassador | minikube | disabled |
    | dashboard | minikube | enabled ✅ |
    | default-storageclass | minikube | enabled ✅ |
    | efk | minikube | disabled |
    | freshpod | minikube | disabled |
    | gvisor | minikube | disabled |
    | helm-tiller | minikube | disabled |
    | ingress | minikube | disabled |
    | ingress-dns | minikube | disabled |
    | istio | minikube | disabled |

    # enable ingress plugin
    $ minikube addons enable ingress
    The 'ingress' addon is enabled.

    # after ingress is enabled, an ingress pod runs nginx listening on 80/443,
    # and docker-proxy starts as well on the node where the ingress pod runs,
    # so traffic hitting that node on port 80 is proxied to the ingress container,
    # which applies the ingress rules and selects the proper endpoint based on the matching rule

    $ sudo kubectl get po -n kube-system -o wide -l app.kubernetes.io/name=ingress-nginx
    NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
    ingress-nginx-controller-797b4b5ff7-jk8wp 1/1 Running 0 10h 172.17.0.6 dev <none> <none>

    $ ps -ef | grep docker-proxy
    ubuntu 19579 1156 0 01:44 pts/1 00:00:00 grep -i -n --color docker-proxy
    root 26472 6646 0 Mar25 ? 00:00:00 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 443 -container-ip 172.17.0.6 -container-port 443
    root 26485 6646 0 Mar25 ? 00:00:00 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 80 -container-ip 172.17.0.6 -container-port 80

    # docker-proxy just adds iptables rules for the ingress pod based on its parameters.
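    As a quick check (a sketch, where $NODE_IP stands for the node running the ingress pod), a request that matches no ingress rule is normally answered by the controller's default backend:

    $ curl http://$NODE_IP/
    # expect something like a 404 from the nginx default backend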

Namespace

create ns from yaml

apiVersion: v1
kind: Namespace
metadata:
  name: test-ns
$ kubectl create -f ns.yaml

# create a namespace from command line
$ kubectl create namespace test-ns
# show all namespaces
$ kubectl get ns

# deleting a ns deletes the resources in that namespace as well!! (Pod, ReplicaSet, Deployment, etc.)
$ kubectl delete ns test-ns

# delete all pods in default ns
$ kubectl delete pods --all
# delete all resource in default ns, after the delete, the ns is still there!!!
$ kubectl delete all --all
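To work against a specific namespace instead of the default one (a sketch, assuming test-ns exists):

# list pods in a specific namespace
$ kubectl get pods -n test-ns
# or make test-ns the default namespace of the current context
$ kubectl config set-context --current --namespace=test-ns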

POD

create standalone pod (no ReplicaSet, no Deployment involved) from yaml

apiVersion: v1
kind: Pod
metadata:
  name: kubia-mannual
  # specify the namespace of this pod
  namespace: default
  labels:
    run: kubia
spec:
  # without nodeSelector, the pod can be scheduled to any node
  nodeSelector:
    # schedule only onto node(s) with such a label
    # (takes precedence, similar to how a DaemonSet targets selected nodes)
    disk: "ssd"
  containers:
  - image: luksa/kubia
    name: kubia-mannual
    ports:
    # this only documents that the container listens on 8080;
    # even without it, you can still access 8080 on this pod
    - containerPort: 8080
      protocol: TCP

    # add a liveness probe
    livenessProbe:
      httpGet:
        path: /
        port: 8080
      initialDelaySeconds: 10
# this creates a standalone pod (no ReplicationController, no ReplicaSet, no Deployment)
# even for a standalone pod, its container is restarted (per restartPolicy) if it dies
$ kubectl create -f pod.yaml

create standalone pod from cmd

$ kubectl run kubia-t --image=luksa/kubia --port=8080
# get pods from default namespace
$ kubectl get pods

# show ip of a given pod/all pods
$ kubectl get pods -o wide
$ kubectl get pods kubia -o wide

# more detail about pods
$ kubectl describe pods
$ kubectl describe pods kube-apiserver-dev-alokb

# output pod conf with yaml or json
$ kubectl get pod kubia -o yaml
$ kubectl get pod kubia -o json

# get logs of a given pod
$ kubectl logs kubia-mannual
# add the pod/container name as a log prefix (useful when a pod has multiple containers)
$ kubectl logs kubia-mannual --prefix

# get logs of a container(add container name) in a given pod(pod may have multiple containers)
$ kubectl logs kubia-mannual -c kubia-mannual

# access the app in the pod (listening on 8080) without a Service resource
# way 1:
$ curl http://$pod_ip:8080

# way 2:
# forward a local port to the pod port; the command stays in the foreground and tunnels traffic through the API server
$ kubectl port-forward kubia-mannual 8888:8080
# open another terminal
$ curl localhost:8888

# delete pod/pods
$ kubectl delete pods kubia
# delete all pods and services in the current namespace
$ kubectl delete pods,svc --all

# delete pod with label selector
$ kubectl delete pods -l app=kubia

# run cmd in specific container of a pod
$ kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD}

# run bash of a pod (if only one container, container name can be omitted)
# same as: docker exec -it $container bash
$ kubectl exec -it ${POD_NAME} -- bash

# modify any field of POD
$ kubectl edit po kubia

Label related

$ kubectl get po --show-labels
# modify an existing label of a pod
$ kubectl label po kubia-mannual run=jason --overwrite
# add a new label to a pod
$ kubectl label po kubia-mannual release=v1

# show pod with specific labels(filter)
$ kubectl get po -l run=jason
$ kubectl get po -l run!=jason
$ kubectl get po -l 'run in (jason, test)'
$ kubectl get po -l 'run notin (jason, test)'
$ kubectl get po -l run
# not have run label
$ kubectl get po -l '!run'

# similar to a label, but used only for description
$ kubectl annotate pod kubia-mannual author="jason"

ReplicationController, ReplicaSet

create rc from yaml

apiVersion: v1
# old type, deprecated!!!
kind: ReplicationController
metadata:
  name: kubia
spec:
  replicas: 3
  template:
    metadata:
      # create pod with such label
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: luksa/kubia
        ports:
        - containerPort: 8080
# only way to create rc is from yaml
$ kubectl create -f rc.yaml
$ kubectl get rc
$ kubectl describe rc kubia
# scale an rc to run fewer or more pods

$ kubectl scale rc kubia --replicas=3
# changing the label of a pod can move it in or out of the scope of the replication controller (see the sketch after this block)

# modify any field of rc
$ kubectl edit rc kubia
# delete rc, pod controlled by it will be deleted as well
$ kubectl delete rc kubia
# only delete rc, keep pod running
$ kubectl delete rc kubia --cascade=false
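Relabeling can be seen with a short sketch (the pod name kubia-abc12 is hypothetical): overwriting the label the rc selects on detaches the pod, and the rc immediately creates a replacement.

$ kubectl label po kubia-abc12 app=debug --overwrite
# the detached pod keeps running, and a new kubia pod appears
$ kubectl get po -L app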

create rs from yaml

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: kubia
spec:
  replicas: 3
  # label selector
  selector:
    matchLabels:
      app: kubia
    # expression selector
    # matchExpressions:
    # - key: app
    #   operator: In
    #   values:
    #   - kubia
    # In: label's value must match one of the specified values.
    # NotIn: label's value must not match any of the specified values.
    # Exists: pod must include a label with the specified key (the value isn't important). When using this operator, you shouldn't specify the values field.
    # DoesNotExist: pod must not include a label with the specified key. The values property must not be specified.
    # If you specify multiple expressions, all those expressions must evaluate to true for the selector to match a pod.
  template:
    metadata:
      # create pod with such label
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: luksa/kubia
        ports:
        - containerPort: 8080
        # optional, mount the html volume at /var/htdocs
        volumeMounts:
        - name: html
          mountPath: /var/htdocs

      # optional part
      volumes:
      # emptyDir type, shared by containers in the same pod!!!
      - name: html
        emptyDir: {}
$ kubectl create -f rs.yaml
# expose the pods behind a Service (kube-proxy adds the iptables rules)
$ kubectl expose rs kubia --port=80 --target-port=8080
$ kubectl get rs
$ kubectl describe rs kubia
# scale an rs to run fewer or more pods
$ kubectl scale rs kubia --replicas=3
# change the label of a pod can move it in or out of scope of ReplicaSet

# modify any field of rs
$ kubectl edit rs kubia
# delete rs, pod controlled by it will be deleted as well
$ kubectl delete rs kubia
# only delete rs, keep pod running
$ kubectl delete rs kubia --cascade=false

DaemonSet

apiVersion: apps/v1
kind: DaemonSet
metadata:
  # DaemonSet name is kubia
  # pod name is kubia-xxx
  name: kubia
spec:
  # no replicas, as only one such pod runs on each worker node
  # label selector
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      # create pod with such label
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: luksa/kubia
        ports:
        - containerPort: 8080
$ kubectl create -f ds.yaml
$ kubectl get ds
$ sudo kubectl get ds
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kubia 1 1 1 1 1 <none> 10s

$ kubectl describe ds kubia
# change the label of a pod can move it in or out of scope of DaemonSet

# modify any field of ds
$ kubectl edit ds kubia
# delete ds, pod controlled by it will be deleted as well
$ kubectl delete ds kubia
# only delete the ds, keep pods running
$ kubectl delete ds kubia --cascade=false
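To run the daemon pods only on a subset of nodes, a nodeSelector can be added to the pod template; a minimal sketch, assuming nodes are labeled disk=ssd as in the node section above:

spec:
  template:
    spec:
      nodeSelector:
        disk: "ssd"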

Job, CronJob

apiVersion: batch/v1
kind: Job
metadata:
  name: batch-job
spec:
  # these two are optional, by default they are 1
  # (run once, in a single pod)
  completions: 5
  parallelism: 2

  template:
    metadata:
      labels:
        app: batch-job
    spec:
      restartPolicy: OnFailure
      containers:
      - name: main
        image: luksa/batch-job
$ kubectl create -f jb.yaml

$ kubectl get jobs
NAME COMPLETIONS DURATION AGE
batch-job 0/1 11s 11s

# After a while job is done
$ kubectl get job
NAME COMPLETIONS DURATION AGE
batch-job 1/1 2m11s 2m24s

$ kubectl describe jobs batch-job

# modify any field of job
$ kubectl edit jobs batch-job

# after the job runs successfully, the job and its pods are not deleted but show status Completed
$ kubectl delete jobs batch-job
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: batch-job-every-15
spec:
  schedule: "0,15,30,45 * * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: batch-job
        spec:
          restartPolicy: OnFailure
          containers:
          - name: main
            image: luksa/batch-job

schedule has the same format as a Linux cron job

┌───────────── minute (0 - 59)
│ ┌───────────── hour (0 - 23)
│ │ ┌───────────── day of the month (1 - 31)
│ │ │ ┌───────────── month (1 - 12)
│ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday;
│ │ │ │ │              7 is also Sunday on some systems)
│ │ │ │ │
* * * * *

*   any value
,   value list separator
-   range of values
/   step values
$ kubectl create -f cjb.yaml
$ kubectl get cj
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
batch-job-every-15 0,15,30,45 * * * * False 0 <none> 11s

$ kubectl describe cronjobs batch-job-every-15

# modify any field of cjob
$ kubectl edit cronjobs batch-job-every-15

$ kubectl delete cronjobs batch-job-every-15
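A CronJob can also be triggered once on demand by creating a Job from it (the job name manual-run is arbitrary):

$ kubectl create job manual-run --from=cronjob/batch-job-every-15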

Service

ClusterIP service

apiVersion: v1
kind: Service
metadata:
  name: kubia
spec:
  # optional; the default (None) selects a pod randomly,
  # ClientIP pins each client to the same pod
  sessionAffinity: ClientIP

  # ClusterIP is automatically assigned when created, or set it manually like below!!!
  # clusterIP: 10.2.2.1
  ports:
  # service port 80 ---> container port 8080
  - port: 80
    targetPort: 8080

  # if a service has multiple ports, each port must have a name
  # - name: http
  #   port: 80
  #   targetPort: 8080
  # - name: https
  #   port: 443
  #   targetPort: 8443

  # pods (in the same namespace) behind the service
  selector:
    app: kubia

$ kubectl create -f service.yaml

# default service only has cluster ip, no external-ip
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubia ClusterIP 10.102.75.1 <none> 80/TCP 9s

# show the Endpoints object of all services
# the Endpoints object is created automatically when the service is created
$ kubectl get ep
NAME ENDPOINTS AGE
kubia 172.17.0.11:8080,172.17.0.12:8080,172.17.0.7:8080 72s

$ kubectl describe ep kubia
$ kubectl edit ep kubia

$ kubectl describe svc kubia
$ kubectl delete svc kubia
$ kubectl edit svc kubia

$ kubectl get po -o wide
kubia-6555dff44f-h8kfd 1/1 Running 0 4m49s 172.17.0.11 dev <none> <none>
kubia-6555dff44f-k55fd 1/1 Running 0 4m49s 172.17.0.7 dev <none> <none>
kubia-6555dff44f-mvsqj 1/1 Running 0 4m49s 172.17.0.12 dev <none> <none>

let’s check the core (nat) rules generated on the node after a service is created

$ sudo iptables -t nat -L -v
# default/kubia is the service name, 10.105.109.83 is the service cluster IP
Chain KUBE-SERVICES (2 references)
pkts bytes target prot opt in out source destination
11 660 KUBE-SVC-L5EAUEZ74VZL5GSC tcp -- any any anywhere 10.105.109.83 /* default/kubia: cluster IP */ tcp dpt:http

# three pods target with random mode
Chain KUBE-SVC-L5EAUEZ74VZL5GSC (1 references)
pkts bytes target prot opt in out source destination
2 120 KUBE-SEP-PERL6L2Q363TRRKS all -- any any anywhere anywhere /* default/kubia: */ statistic mode random probability 0.33333333349
3 180 KUBE-SEP-5SWG4BFKF7ZYVF5R all -- any any anywhere anywhere /* default/kubia: */ statistic mode random probability 0.50000000000
6 360 KUBE-SEP-5ZBQGZX3PSGHKG5K all -- any any anywhere anywhere /* default/kubia: */

# on pod rule 172.17.0.6 pod ip with port 8080
Chain KUBE-SEP-PERL6L2Q363TRRKS (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ all -- any any 172.17.0.6 anywhere /* default/kubia: */
2 120 DNAT tcp -- any any anywhere anywhere /* default/kubia: */ tcp to:172.17.0.6:8080
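The service can also be reached by name through cluster DNS from any pod (a sketch, assuming curl is available in the image; the pod name is taken from the output above):

$ kubectl exec -it kubia-6555dff44f-h8kfd -- curl -s http://kubia
$ kubectl exec -it kubia-6555dff44f-h8kfd -- curl -s http://kubia.default.svc.cluster.local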

NodePort service

apiVersion: v1
kind: Service
metadata:
  name: kubia
spec:
  type: NodePort
  ports:
  # service port 80 ---> container port 8080
  - port: 80
    targetPort: 8080
    nodePort: 30000
  selector:
    app: kubia
# create service from one command line
$ kubectl expose deployment kubia --type=NodePort --port=80 --target-port=8080

# when you create a NodePort service,
# the ClusterIP part is created automatically
# (this is not true for ingress: there you have to create the ClusterIP service manually),
# and extra iptables rules are added on top of the ClusterIP rules

# if the NodePort service already exists, there is no need to create a ClusterIP service, otherwise: error: services "xxxx" already exists
# a ClusterIP service is also just a set of iptables rules.
$ kubectl create -f nodeport.yaml

# nodePort service
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubia NodePort 10.104.24.124 <none> 80:30000/TCP 8s
# curl the cluster IP on the service port (80), or the node IP on the node port (30000)
$ curl 10.109.176.199:80
$ curl $Node_IP:30000
NodePort service--->ClusterIP service--->Pod
LoadBalancer service--->NodePort service--->ClusterIP service--->Pod

Ingress

The ingress controller must be enabled first

$ minikube addons enable ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubia
  # no ingress port needed:
  # ingress listens on 80 for http
  # ingress listens on 443 for https
spec:
  rules:
  - host: kubia.example.com
    http:
      paths:
      - path: /
        pathType: Exact
        backend:
          service:
            name: kubia # must create the ClusterIP service manually!!!
            port:
              number: 80
$ kubectl create -f in.yaml

$ kubectl get ingress

$ kubectl describe ingress kubia
Name: kubia
Namespace: test-ns
# ingress IP, it's public IP!!!(node IP)
Address: 10.116.5.201
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
kubia.example.com
/ kubia:80 (172.17.0.11:8080,172.17.0.12:8080,172.17.0.7:8080)
# kubia:80 is clusterIP service with three endpoints
# Ingress depends on cluster service to get the endpoints and use it directly!!!
Annotations: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CREATE 81s nginx-ingress-controller Ingress test-ns/kubia
Normal UPDATE 27s nginx-ingress-controller Ingress test-ns/kubia
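To test the rule without DNS, set the Host header explicitly (a sketch using the ingress address shown above; alternatively add kubia.example.com to /etc/hosts):

$ curl -H "Host: kubia.example.com" http://10.116.5.201/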

Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      name: kubia
      labels:
        app: kubia
    spec:
      containers:
      - image: luksa/kubia:v1
        name: nodejs
# create deploy from one command line
$ kubectl create deployment kubia --image=luksa/kubia:v1

# create the deploy with --record so the command is kept in the revision history,
# giving you the chance to roll back if an upgrade fails.
$ kubectl create -f dp.yaml --record
$ sudo kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
kubia 3/3 3 3 21s
$ kubectl get deploy kubia

$ kubectl describe deploy kubia
$ kubectl edit deploy kubia
# deployment will create ReplicaSet automatically
$ sudo kubectl get rs
NAME DESIRED CURRENT READY AGE
kubia-6555dff44f 3 3 3 95s

# trigger an upgrade by changing the image version
$ kubectl set image deploy kubia nodejs=luksa/kubia:v2
# OR
$ kubectl edit deploy kubia
# scale a deploy
$ kubectl scale deploy kubia --replicas=2

# check the status when rolling happens
$ kubectl rollout status deployment kubia

# if the new version has issue, rollback to previous one
$ kubectl rollout undo deployment kubia
$ kubectl rollout undo deployment kubia --to-revision=1

# check the version history of a given deployment when several upgrade happened
$ kubectl rollout history deployment kubia

$ kubectl rollout pause deployment kubia
$ kubectl rollout resume deployment kubia
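The rollout behaviour can be tuned in the deployment spec; a minimal sketch (the values are illustrative):

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      # allow at most one extra pod during the rollout, never drop below the desired count
      maxSurge: 1
      maxUnavailable: 0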

configMap, downwardAPI, secret

create configMap from shell

# create configMap from literal, file, dir
# --from-file without key will use file name as the key
# --from-file with dir will use all files
# kubectl create configmap my-cfg --from-file=test.json --from-file=key1=hi.txt --from-file=test_dir/ --from-literal=key2=hello

$ kubectl create configmap my-cfg --from-file=conf.cfg --from-literal=id=jason
$ kubectl get cm
NAME DATA AGE
my-cfg 2 17s

$ kubectl describe cm my-cfg
Name: my-cfg
Namespace: test-ns
Labels: <none>
Annotations: <none>

Data
====

id:
----
jason

conf.cfg:
----
name = hello

Events: <none>
# creating a configMap from yaml is awkward for file-based config, because you would have to copy the file content into the yaml!!!

use configMap as env vars in a pod's container

# pod definition with a value from a configMap
env:
- name: FIRST_VAR
  valueFrom:
    configMapKeyRef:
      name: my-cfg
      key: foo

# or all entries from the configMap as env vars
envFrom:
- prefix: CONFIG_
  configMapRef:
    name: my-cfg
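To verify the result inside the running container (a sketch; ${POD_NAME} stands for whichever pod uses this definition):

$ kubectl exec ${POD_NAME} -- env | grep CONFIG_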

mount a configMap entry as a file, keeping existing files in the target dir

# mount a configMap entry to a file in the pod
apiVersion: v1
kind: Pod
metadata:
  name: kubia-hc
  labels:
    run: kubia
spec:
  containers:
  - image: luksa/kubia
    name: kubia
    # mount an individual entry under a dir but keep the existing files in that dir
    # e.g. mount into /etc but still keep the existing files under /etc
    volumeMounts:
    - name: config
      mountPath: /etc/new.json
      # subPath says new.json is a single file;
      # without subPath, the mountPath is treated as a directory!!!!
      subPath: new.json
      readOnly: true
  volumes:
  - name: config
    configMap:
      # use the configMap named my-cfg
      # if no items are provided, all entries are used!!!
      name: my-cfg
      items:
      - key: test.json
        # rename it
        path: new.json

mount configMap entries as files, hiding existing files in the target dir

# mount configMap entries to a dir in the pod
apiVersion: v1
kind: Pod
metadata:
  name: kubia-hc
  labels:
    run: kubia
spec:
  containers:
  - image: luksa/kubia
    name: kubia
    volumeMounts:
    - name: config
      # if this path does not exist, it is created;
      # if it exists, all existing files are hidden by the configMap entries
      mountPath: /var/test
      readOnly: true
  volumes:
  - name: config
    configMap:
      # use the configMap named my-cfg
      # if no items are provided, all entries are used!!!
      name: my-cfg
      items:
      - key: test.json
        # rename it
        path: new.json

secrets

# create secrets (note: resource names may not contain underscores)
$ kubectl create secret generic test-sc --from-file=key1=test.json
$ kubectl get secrets
$ kubectl describe secrets
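Secret values are stored base64-encoded in the API object; a single key can be read back and decoded like this (a sketch):

$ kubectl get secret test-sc -o jsonpath='{.data.key1}' | base64 -d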
# a secret is mounted as in-memory files
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
  name: default-token-l5wrk
  readOnly: true
volumes:
- name: default-token-l5wrk
  secret:
    secretName: default-token-l5wrk
# with no items list, all entries in the secret are mounted under mountPath!!!
# use a secret as an env var
env:
- name: FOO_SECRET
  valueFrom:
    secretKeyRef:
      name: fortune-https
      key: foo

downwardAPI

# check the Pod manifest to get the field names
# use as ENV in the container
env:
- name: POD_IP
  valueFrom:
    fieldRef:
      fieldPath: status.podIP
- name: POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
  # under the container, exposed as files
  volumeMounts:
  - name: downward
    mountPath: /etc/downward
volumes:
- name: downward
  downwardAPI:
    items:
    - path: "podName"
      fieldRef:
        fieldPath: metadata.name
    - path: "podIP"
      fieldRef:
        fieldPath: status.podIP

talk to API server

$ kubectl get svc
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 9d

# on the node
$ kubectl proxy
Starting to serve on 127.0.0.1:8001
# the proxy talks to the API server (10.96.0.1) and handles the authentication, cool

$ curl localhost:8001

# talk to the API server from within a pod
# in each pod the following are mounted at /var/run/secrets/kubernetes.io/serviceaccount:
# 1. a token (from a secret) used to authenticate to the API server
# 2. ca.crt, used to verify the API server's certificate

# inside a container, run below command
$ export TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
$ curl -H "Authorization: Bearer $TOKEN" https://kubernetes

# another way: run an extra container in the pod that runs kubectl proxy,
# so the other containers in the same pod can use localhost:8001 as a proxy to the API server.
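A variant of the curl above that also verifies the API server certificate with the mounted CA (a sketch, run inside the container):

$ CA=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
$ curl --cacert $CA -H "Authorization: Bearer $TOKEN" https://kubernetes.default.svc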

Volumes

check default storageClass provided by minikube

$ sudo minikube addons enable storage-provisioner
$ sudo kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
storage-provisioner 1/1 Running 0 20h

$ sudo kubectl get storageClass
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
standard (default) k8s.io/minikube-hostpath Delete Immediate false 19h

declare a pvc using a storageClass

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  # use the standard storageClass which is created by minikube automatically
  storageClassName: standard
  resources:
    requests:
      storage: 100Mi
  accessModes:
  - ReadWriteOnce

use data-pvc from pod

  volumeMounts:
  - name: data
    mountPath: /var/store

volumes:
- name: data
  persistentVolumeClaim:
    claimName: data-pvc
$ kubectl get sc
$ kubectl describe sc
$ kubectl edit sc

# create pvc use storageClass
$ kubectl create -f pvc.yaml
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-pvc Pending standard 38s

# After a while
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-pvc Bound pvc-0289f274-ab70-4a0f-9a6d-8c7d10bac7fe 100Mi RWO standard 35m

# pv is created automatically for pvc when it uses storageClass
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-a59a4fac-5b1e-44e1-a066-ff33cfc2b1bc 100Mi RWO Delete Bound default/data-pvc standard 104s

Ref