Kubernetes¶
Resources¶
- Setting up local Kubernetes Cluster with Kind
- https://cwhu.medium.com/7431c5f96c3e
- https://godleon.github.io/blog/Kubernetes/k8s-Deployment-Overview/
Tools¶
K9s¶
Kubectl TUI
Install
To launch
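The install and launch commands were presumably something like this (assuming Homebrew; other package managers work too):

```shell
# install k9s (assuming Homebrew is available)
brew install k9s

# launch the TUI against your current kubectl context
k9s

# or start directly in a given namespace
k9s -n <namespace>
```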
Commands
- select namespace: `:ns`
- see other resources: `:deployment`, `:service`, `:pods`, etc.
- quit: `:q`
CD¶
install kubectl¶
install kubectl binary
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
install kubectl
test
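The missing steps likely looked like the official install instructions:

```shell
# install the downloaded binary (the official docs' suggested command)
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

# test: print the client version
kubectl version --client
```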
Apply config file¶
Write config files in yaml.
operations
- create
- apply
- replace
- patch
To create
To create/update
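A sketch of the corresponding commands (assuming a config file named `app.yaml`):

```shell
# create resources (errors if they already exist)
kubectl create -f app.yaml

# create or update resources declaratively
kubectl apply -f app.yaml
```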
labels¶
Labels are key-value pair. You can assign labels via command line or define in yaml. See the doc.
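For example, labeling a node from the command line (`disktype=ssd` is a placeholder key/value):

```shell
# assign a label to a node
kubectl label nodes <node_name> disktype=ssd

# overwrite an existing label
kubectl label --overwrite nodes <node_name> disktype=hdd

# remove a label (note the trailing "-")
kubectl label nodes <node_name> disktype-
```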
roles¶
Role is just another label with key = `kubernetes.io/role` and value = `<your_role>`.
commands¶
- get component status
kubectl get cs
(deprecated)
- show info about a pod
kubectl get pod <pod_name>
- show info about a node
kubectl get node <node_name>
node related¶
show nodes¶
# show ip
kubectl get nodes -o wide
# show labels
kubectl get nodes --show-labels
# show nodes with certain label
kubectl get nodes -l <key>=<value>
assign pod to node¶
https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/
Label your node, then specify nodeSelector in your yaml.
Note that in a Deployment, nodeSelector goes under spec.template.spec (the pod template's spec), not the top-level spec.
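A minimal Deployment sketch (assuming a node labeled `name=edge1`; names and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      # nodeSelector lives here, under spec.template.spec
      nodeSelector:
        name: edge1
      containers:
        - name: my-app
          image: <image>
```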
Pulling image¶
pull image from private docker registry¶
log in to Docker if you haven't
check your config.json
create the secret with kubectl create:
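The commands were probably along these lines:

```shell
# log in to the private registry if you haven't
docker login <registry_host>

# inspect the resulting credentials file
cat ~/.docker/config.json
```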
kubectl create secret generic <secret_name> \
--from-file=.dockerconfigjson=<path/to/.docker/config.json> \
--type=kubernetes.io/dockerconfigjson
supply secret in config file
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
    - name: private-reg-container
      image: <your-private-image>
  imagePullSecrets:
    - name: <secret_name>
pod related¶
create pod¶
force recreate pods
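Likely something like this (names and the nginx image are placeholders):

```shell
# create a pod imperatively
kubectl run my-pod --image=nginx

# force recreate the pods managed by a deployment
kubectl rollout restart deployment/<deployment_name>

# or delete the pods; the deployment spins up replacements
kubectl delete pods -l app=<label_value>
```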
delete pod¶
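Presumably:

```shell
kubectl delete pod <pod_name>

# skip the graceful shutdown wait
kubectl delete pod <pod_name> --grace-period=0 --force
```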
limit resources¶
cpu & memory
- https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource
- https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/
- https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/#before-you-begin
port forwarding¶
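The usual form (local port 8080 and pod port 80 are placeholders):

```shell
# forward local port 8080 to port 80 inside the pod
kubectl port-forward pod/<pod_name> 8080:80

# works for services too
kubectl port-forward svc/<service_name> 8080:80
```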
pod log¶
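Presumably:

```shell
kubectl logs <pod_name>

# follow the log stream
kubectl logs -f <pod_name>

# logs of a specific container in the pod
kubectl logs <pod_name> -c <container_name>
```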
show pods¶
more info (like ip & node run on)
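Presumably:

```shell
kubectl get pods

# more info (ip & the node each pod runs on)
kubectl get pods -o wide
```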
Go into a pod¶
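Presumably exec (assuming the image ships a shell at /bin/sh):

```shell
# open an interactive shell inside the pod
kubectl exec -it <pod_name> -- /bin/sh
```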
Namespace¶
For segratating your resources (e.g. different apps)
Get things from all namespaces¶
If you don't supply a namespace, the get commands only show the resources in your current namespace. To get the resources under all namespaces, use the `--all-namespaces` flag.
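For example:

```shell
kubectl get pods --all-namespaces

# -A for short (kubectl >= 1.14)
kubectl get pods -A
```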
View all namespace¶
`ns` for short
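Presumably:

```shell
kubectl get namespaces
# or
kubectl get ns
```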
View currently used namespace¶
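One way, from the official docs (empty output means `default`):

```shell
# print the namespace of the current context
kubectl config view --minify --output 'jsonpath={..namespace}'
```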
Set namespace¶
`-n` for short
To unset
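A sketch of the usual commands:

```shell
# set the namespace for the current context
kubectl config set-context --current --namespace=<namespace>

# or pass -n per command
kubectl get pods -n <namespace>

# unset: set it back to the empty string (falls back to "default")
kubectl config set-context --current --namespace=""
```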
Cronjob Related¶
See your cronjob
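Presumably:

```shell
kubectl get cronjob

# the jobs it has spawned
kubectl get jobs
```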
deployment¶
`spec.template` is the definition of the pod
update deployment¶
update image
modify directly
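Likely something like:

```shell
# update image
kubectl set image deployment/<deployment_name> <container_name>=<new_image>

# modify directly in your editor
kubectl edit deployment <deployment_name>
```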
service¶
handles port forwarding
- targetPort: the port inside the pod
- nodePort: the port on the Kubernetes node
Go to http://<node_ip>:<nodePort> to see your app.
internal service url¶
<service_name>.<namespace>.svc.cluster.local:<service_port>
Ingress¶
Use ingress to map a host/path to your service. Note that the host you define must resolve to an IP that reaches your cluster, e.g. your load balancer or ingress controller.
kind¶
It's an alternative to minikube. Kind uses Docker containers while minikube uses VMs. See the official page and the comparison.
Install¶
Install Docker if you haven't.
Mac¶
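On Mac the install was presumably via Homebrew:

```shell
# assuming Homebrew is available
brew install kind
```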
Install with binary¶
install binary
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.11.1/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
check
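Presumably:

```shell
kind version
```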
create a cluster¶
check if it's running
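Presumably:

```shell
kind create cluster

# check if it's running
kind get clusters
kubectl cluster-info --context kind-kind
```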
kind commands¶
- create cluster: kind create cluster
    - --name: name of the cluster (default: kind)
    - --config: create with a config yaml
- delete cluster: kind delete cluster
    - --name: name of the cluster (default: kind)
- list clusters: kind get clusters
- list nodes: kind get nodes
- cluster details: kubectl cluster-info --context kind-kind
config file¶
Use InitConfiguration to label the 1st control plane node, JoinConfiguration for the others. See Kubeadm Config Patches (related GitHub issue).
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: test
nodes:
  - role: control-plane
    kubeadmConfigPatches:
      - |
        kind: InitConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "name=edge1"
  - role: worker
    kubeadmConfigPatches:
      - |
        kind: JoinConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "name=edge1"
create a local container registry¶
create a local docker registry and a kind cluster
https://kind.sigs.k8s.io/docs/user/local-registry/
troubleshooting¶
error when creating a cluster with many nodes¶
If you can successfully create a cluster with fewer nodes, then the problem might be Pod errors due to "too many open files".
To solve it, go to /etc/sysctl.conf and add/update the following lines:
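The lines are presumably the ones from kind's known-issues page:

```
fs.inotify.max_user_watches = 524288
fs.inotify.max_user_instances = 512
```

Reload with `sudo sysctl -p` afterwards.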
In my case, my fs.inotify.max_user_watches was already 1048576, so I only added the second line.
using webcam¶
containers:
  - name: <name>
    image: <image>
    volumeMounts:
      - mountPath: /dev/video0
        name: dev-video0
    securityContext:
      privileged: true
volumes:
  - name: dev-video0
    hostPath:
      path: /dev/video0
https://stackoverflow.com/a/59291859/15493213
Troubleshooting¶
Use `describe` on the failing node/pod to see more info first.
1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate¶
Scenario: You make a pod run on a control plane / master node but it keeps pending. When you describe it, `1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate` is shown.
Solution: Remove the taint on the node
kubectl taint nodes --all node-role.kubernetes.io/master-
# or
kubectl taint nodes <node_name> node-role.kubernetes.io/master-
The trailing `-` is for removing the taint.
ref:
- https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
- https://stackoverflow.com/q/59484509/
- https://stackoverflow.com/a/59491824/15493213
The connection to the server localhost:8080 was refused - did you specify the right host or port?¶
Happens when you run any `kubectl <xxx>` command.
Easy fix: delete your cluster and start again.
If you're local and using kind: run `kind get clusters`, then `kind delete cluster --name <cluster_name>`.
no matches for kind "CronJob" in version "batch/v1beta1"¶
Your k8s version is >= 1.21, where CronJob graduated to `batch/v1`, so use `apiVersion: batch/v1` instead.
See https://cloud.google.com/kubernetes-engine/docs/how-to/cronjobs