Deploying a Kubernetes cluster
What is Kubernetes?
Kubernetes is an open-source platform for managing containerised workloads and services. It favours both declarative configuration and automation, and has a large, rapidly growing ecosystem.
This procedure will allow you to quickly and easily deploy a three-node Kubernetes (k8s) cluster from three CentOS 7 instances deployed within the same network in an advanced zone.
One of these three instances will be our master node and the other two will be our worker nodes. To summarise, the master node is the node from which the Kubernetes cluster (container orchestrator) is managed via its API, and the worker nodes are the nodes on which the pods, i.e. the containers (Docker in our case), will run.
We will assume that your three CentOS 7 instances are already deployed and that you have SSH access to them to execute the commands that follow.
Here is the configuration that we have in our example and that will be used as an example throughout this procedure:
Master node: "k8s-master" / 10.1.1.16
First worker node: "k8s-worker01" / 10.1.1.169
Second worker node: "k8s-worker02" / 10.1.1.87
System preparation and Kubernetes installation
The following actions must be performed on all instances (master and workers) as root (or with the necessary sudo rights).
Start by populating the /etc/hosts file on each of your instances so that they can resolve their respective hostname (normally already the case in an advanced zone network where the virtual router is a DNS resolver).
In our example this gives the following /etc/hosts file on our three instances (adapt it with the names and IPs of your instances):
cat /etc/hosts
127.0.0.1 localhost
::1 localhost
10.1.1.16 k8s-master
10.1.1.169 k8s-worker01
10.1.1.87 k8s-worker02
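As a quick optional check (not required by the procedure), you can verify that each instance resolves the others by name, for example from the master:
ping -c 1 k8s-worker01
ping -c 1 k8s-worker02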
Enable the bridge module and the iptables rules for it with the following three commands:
modprobe bridge
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
sysctl -p /etc/sysctl.conf
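Note that modprobe alone does not survive a reboot. A minimal sketch to persist the module load, assuming the standard systemd setup of CentOS 7, is to declare it in /etc/modules-load.d:
echo "bridge" > /etc/modules-load.d/bridge.conf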
Add the Docker YUM repository:
cat <<EOF > /etc/yum.repos.d/docker.repo
[docker-ce-stable]
name=Docker CE Stable - \$basearch
baseurl=https://download.docker.com/linux/centos/7/\$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg
EOF
Add the Kubernetes YUM repository:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
Install Docker:
yum install -y docker-ce
Then install the necessary Kubernetes packages:
yum install -y kubeadm kubelet kubectl
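The output shown later in this article corresponds to Kubernetes v1.12.2. If you prefer to pin the packages to a specific version rather than install the latest one, you can, as an optional variant, list the available versions and install them explicitly (the exact version strings depend on the repository):
yum --showduplicates list kubeadm
yum install -y kubeadm-1.12.2 kubelet-1.12.2 kubectl-1.12.2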
Edit the kubelet systemd drop-in configuration file (/etc/systemd/system/kubelet.service.d/10-kubeadm.conf) to add the following line in the "[Service]" section:
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
so that the file looks like this:
cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
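If you prefer to add that line without opening an editor, a possible one-liner (assuming the "[Service]" section header is present in the file, as shown above) is:
sed -i '/^\[Service\]$/a Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf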
Reload the configuration, enable and then start the docker and kubelet services via the following three commands:
systemctl daemon-reload
systemctl enable docker kubelet
systemctl start docker kubelet
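You can then check that Docker is indeed using the cgroupfs driver declared above (the output should report "cgroupfs"):
docker info | grep -i cgroup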
Disable the system swap (kubelet does not support swap memory; you will get an error during the pre-flight checks when initializing your cluster via kubeadm if you do not disable it):
swapoff -a
Please also remember to comment out/remove the swap line in the /etc/fstab file of each of your instances, for example:
#/dev/mapper/vg01-swap swap swap defaults 0 0
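As an optional one-liner, the swap entry can be commented out as sketched below (check the result afterwards with cat /etc/fstab, and adjust the pattern if your fstab uses a different layout):
sed -i '/[[:space:]]swap[[:space:]]/ s/^/#/' /etc/fstab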
Initialization of the Kubernetes cluster
The following actions are only to be performed on the master node instance.
Start the initialization of your Kubernetes cluster via the command below, taking care to replace the value of the "--apiserver-advertise-address=" parameter with the IP address of your master instance.
kubeadm init --apiserver-advertise-address=<IP of your master instance> --pod-network-cidr=10.244.0.0/16
Note: Please do not modify the network CIDR "10.244.0.0/16" given in the "--pod-network-cidr=" parameter, as it indicates that we are going to use the Flannel CNI plugin to manage the network part of our pods.
Here is what the return of this command should look like when the cluster initializes successfully:
[root@k8s-master ~]# kubeadm init --apiserver-advertise-address=10.1.1.16 --pod-network-cidr=10.244.0.0/16
[init] using Kubernetes version: v1.12.2
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [k8s-master.cs437cloud.internal kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.1.1.16]
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [k8s-master.cs437cloud.internal localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [k8s-master.cs437cloud.internal localhost] and IPs [10.1.1.16 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 32.502898 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node k8s-master.cs437cloud.internal as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node k8s-master.cs437cloud.internal as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master.cs437cloud.internal" as an annotation
[bootstraptoken] using token: e83pes.u3igpccj2metetu8
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 10.1.1.16:6443 --token e83pes.u3igpccj2metetu8 --discovery-token-ca-cert-hash sha256:7ea9169bc5ac77b3a2ec37e5129006d9a895ce040e306f3093ce77e7422f7f1c
We perform the requested operations in order to finalize the initialization of our cluster:
We create the .kube directory in our user's home directory (root in our case) and copy the admin configuration file into it:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
We deploy the Flannel pod network for our cluster:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
Note: keep the last command provided in the output of the cluster initialization ("kubeadm join ...") so that it can be run on the worker instances later to join them to our cluster.
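If you lose this output, the join command can usually be regenerated later from the master node, for example with:
kubeadm token create --print-join-command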
We can now do the first checks of our cluster from our master instance:
Type the command "kubectl get nodes" to check the nodes currently present in your cluster:
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master.cs437cloud.internal Ready master 41m v1.12.2
Note: only your master node is listed for now, which is normal as we have not yet added the other nodes to the cluster.
Type the command "kubectl get pods --all-namespaces" to check the pods/containers currently present in your cluster:
[root@k8s-master ~]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-576cbf47c7-fwxj9 1/1 Running 0 41m
kube-system coredns-576cbf47c7-t86s9 1/1 Running 0 41m
kube-system etcd-k8s-master.cs437cloud.internal 1/1 Running 0 41m
kube-system kube-apiserver-k8s-master.cs437cloud.internal 1/1 Running 0 41m
kube-system kube-controller-manager-k8s-master.cs437cloud.internal 1/1 Running 0 41m
kube-system kube-flannel-ds-amd64-wcm7v 1/1 Running 0 84s
kube-system kube-proxy-h94bs 1/1 Running 0 41m
kube-system kube-scheduler-k8s-master.cs437cloud.internal 1/1 Running 0 40m
Note: There are only pods corresponding to the Kubernetes components needed for our master node (kube-apiserver, etcd, kube-scheduler, etc).
We can check the status of these components with the following command:
[root@k8s-master ~]# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health": "true"}
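You can also display the addresses of the API server and of the cluster DNS with:
kubectl cluster-info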
Adding worker nodes to the cluster
Actions to be performed only on worker instances/nodes
On each of your worker instances (do not do it on your master instance), run the "kubeadm join ..." command provided at the end of your cluster initialization above:
[root@k8s-worker01 ~]# kubeadm join 10.1.1.16:6443 --token e83pes.u3igpccj2metetu8 --discovery-token-ca-cert-hash sha256:7ea9169bc5ac77b3a2ec37e5129006d9a895ce040e306f3093ce77e7422f7f1c
[preflight] running pre-flight checks
[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_sh ip_vs ip_vs_rr ip_vs_wrr] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support
[discovery] Trying to connect to API Server "10.1.1.16:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.1.1.16:6443"
[discovery] Requesting info from "https://10.1.1.16:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.1.1.16:6443"
[discovery] Successfully established connection with API Server "10.1.1.16:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-worker01.cs437cloud.internal" as an annotation
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
[root@k8s-worker02 ~]# kubeadm join 10.1.1.16:6443 --token e83pes.u3igpccj2metetu8 --discovery-token-ca-cert-hash sha256:7ea9169bc5ac77b3a2ec37e5129006d9a895ce040e306f3093ce77e7422f7f1c
[preflight] running pre-flight checks
[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_wrr ip_vs_sh ip_vs ip_vs_rr] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support
[discovery] Trying to connect to API Server "10.1.1.16:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.1.1.16:6443"
[discovery] Requesting info from "https://10.1.1.16:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.1.1.16:6443"
[discovery] Successfully established connection with API Server "10.1.1.16:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-worker02.cs437cloud.internal" as an annotation
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
Checking the status of the cluster
Actions to be performed from the master instance/node
Check that your worker nodes have been added to your cluster by re-executing the "kubectl get nodes" command:
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master.cs437cloud.internal Ready master 46m v1.12.2
k8s-worker01.cs437cloud.internal Ready <none> 103s v1.12.2
k8s-worker02.cs437cloud.internal Ready <none> 48s v1.12.2
Remark: We can see our two worker nodes (k8s-worker01 and k8s-worker02), so they have indeed been added to our cluster.
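The ROLES column shows "<none>" for the workers because kubeadm only labels the master node. If you would like the workers to display a role as well, you can optionally label them yourself (purely cosmetic), for example:
kubectl label node k8s-worker01.cs437cloud.internal node-role.kubernetes.io/worker=worker
kubectl label node k8s-worker02.cs437cloud.internal node-role.kubernetes.io/worker=worker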
Let's now run the "kubectl get pods --all-namespaces" command again:
[root@k8s-master ~]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-576cbf47c7-fwxj9 1/1 Running 0 46m
kube-system coredns-576cbf47c7-t86s9 1/1 Running 0 46m
kube-system etcd-k8s-master.cs437cloud.internal 1/1 Running 0 46m
kube-system kube-apiserver-k8s-master.cs437cloud.internal 1/1 Running 0 46m
kube-system kube-controller-manager-k8s-master.cs437cloud.internal 1/1 Running 0 46m
kube-system kube-flannel-ds-amd64-724nl 1/1 Running 0 2m6s
kube-system kube-flannel-ds-amd64-wcm7v 1/1 Running 0 6m31s
kube-system kube-flannel-ds-amd64-z7mwg 1/1 Running 3 70s
kube-system kube-proxy-8r7wg 1/1 Running 0 2m6s
kube-system kube-proxy-h94bs 1/1 Running 0 46m
kube-system kube-proxy-m2f5r 1/1 Running 0 70s
kube-system kube-scheduler-k8s-master.cs437cloud.internal 1/1 Running 0 46m
Note: You can see that there are as many "kube-flannel" and "kube-proxy" pods/containers as we have nodes in our cluster.
Deployment of a first pod
We will deploy our first pod in our Kubernetes cluster.
For simplicity, we choose to deploy a pod (without replicas) named "nginx" and using the "nginx" image:
[root@k8s-master ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
If we check, it does indeed appear in the output of the command listing the pods of our cluster:
[root@k8s-master ~]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default nginx-55bd7c9fd-5bghl 1/1 Running 0 104s
kube-system coredns-576cbf47c7-fwxj9 1/1 Running 0 57m
kube-system coredns-576cbf47c7-t86s9 1/1 Running 0 57m
kube-system etcd-k8s-master.cs437cloud.internal 1/1 Running 0 57m
kube-system kube-apiserver-k8s-master.cs437cloud.internal 1/1 Running 0 57m
kube-system kube-controller-manager-k8s-master.cs437cloud.internal 1/1 Running 0 57m
kube-system kube-flannel-ds-amd64-724nl 1/1 Running 0 13m
kube-system kube-flannel-ds-amd64-wcm7v 1/1 Running 0 17m
kube-system kube-flannel-ds-amd64-z7mwg 1/1 Running 3 12m
kube-system kube-proxy-8r7wg 1/1 Running 0 13m
kube-system kube-proxy-h94bs 1/1 Running 0 57m
kube-system kube-proxy-m2f5r 1/1 Running 0 12m
kube-system kube-scheduler-k8s-master.cs437cloud.internal 1/1 Running 0 57m
It appears at the top of the list, in a namespace different from "kube-system", since it is not a component specific to the operation of Kubernetes.
It is also possible to avoid displaying the pods of the kube-system namespace by running the same command without the "--all-namespaces" parameter:
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-55bd7c9fd-vs4fq 1/1 Running 0 3d2h
To display the labels:
[root@k8s-master ~]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx-55bd7c9fd-ckltn 1/1 Running 0 8m2s app=nginx,pod-template-hash=55bd7c9fd
We can also check our deployments with the following command:
[root@k8s-master ~]# kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx 1 1 1 1 93m
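Although we deployed this pod without replicas, the deployment could later be scaled if needed, for example to three replicas (and back down again with --replicas=1):
kubectl scale deployment nginx --replicas=3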
So we have an nginx pod deployed and started, but it is not yet accessible from the outside. To make it externally accessible, we need to expose the port of our pod by creating a service (of type NodePort) via the following command:
[root@k8s-master ~]# kubectl create service nodeport nginx --tcp=80:80
service/nginx created
Our service is thus created:
[root@k8s-master ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 147m
nginx NodePort 10.108.251.178 <none> 80:30566/TCP 20s
Note: it listens on port 80/tcp and will be available/exposed from outside on port 30566/tcp.
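The NodePort (30566 here) is allocated automatically in the default 30000-32767 range, so it will differ on your cluster; it can be retrieved programmatically, for example with a jsonpath query:
kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}'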
We can get the Flannel IP of our pod and the name of the node it is currently running on via the following command:
[root@k8s-master ~]# kubectl get pods --selector="app=nginx" --output=wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
nginx-55bd7c9fd-vs4fq 1/1 Running 0 174m 10.244.2.2 k8s-worker02.cs437cloud.internal <none>
Here our nginx pod has the IP 10.244.2.2 and is running on our node k8s-worker02.
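Before creating the load-balancing rule, you can optionally check from within the network that the NodePort answers on every node of the cluster, for example from the master (replace 30566 with the port allocated for your own service):
curl -I http://k8s-worker01:30566
curl -I http://k8s-worker02:30566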
You can also run a command or open a shell in our nginx pod via the following command (very similar to the equivalent docker command):
[root@k8s-master ~]# kubectl exec -it nginx-55bd7c9fd-vs4fq -- /bin/bash
root@nginx-55bd7c9fd-vs4fq:/#
All you have to do now is create your load-balancing rule on your Ikoula One Cloud network to make your web server (nginx pod) publicly accessible:
- Connect to the Ikoula One Cloud
- Go to "Network" in the left vertical menu
- Click on the network in which you have deployed your Kubernetes instances, then on "View IP Addresses", then on your Source NAT IP, and go to the "Configuration" tab
- Click on "Load Balancing" and create your rule by specifying a name, the public port ("80" in our case), the private port ("30566" in our case, see above), and an LB algorithm (e.g. round-robin)
- Tick all your worker instances
Test access to your web server / nginx pod from your browser (via the public IP of your network on which you created the LB rule).
The fact that your nginx pod can be accessed from any of your nodes is made possible by the "kube-proxy" component, which is responsible for routing connections to the node(s) on which the pod is running (in the case of replicas).
You have just deployed a basic three-node Kubernetes cluster with one master and two workers.
Going further
You can go further by deploying the Kubernetes dashboard, creating persistent volumes for your pods, increasing the number of worker nodes, making the master role redundant for high availability, or dedicating nodes to certain components such as etcd.
Here are some useful links:
https://kubernetes.io/docs/reference/kubectl/cheatsheet/
https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/
https://kubernetes.io/docs/concepts/storage/volumes/
https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/
https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/