Quickly building a k8s cluster with kubeadm

Environment

master01: 192.168.1.110 (at least 2 CPU cores)

node01: 192.168.1.100

Planning

Service network: 10.96.0.0/12

Pod network: 10.244.0.0/16

  1. Configure /etc/hosts so the hosts resolve each other

vim /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.110 master01
192.168.1.100 node01

  2. Synchronize the time on each host

yum install -y ntpdate
ntpdate time.windows.com

14 Mar 16:51:32 ntpdate[46363]: adjust time server 13.65.88.161 offset -0.001108 sec
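ntpdate performs a one-shot correction; if the clocks drift apart again later, TLS handshakes between cluster components can start failing. A minimal sketch of keeping the clocks in sync with cron, reusing the time server from above:

```shell
# Append a cron entry that re-syncs the clock every 30 minutes.
# Assumes ntpdate was installed as above; any reachable NTP server works.
(crontab -l 2>/dev/null; echo '*/30 * * * * /usr/sbin/ntpdate time.windows.com >/dev/null 2>&1') | crontab -
```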

  3. Disable swap and SELinux

swapoff -a
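`swapoff -a` only disables swap until the next reboot, and the kubelet refuses to start while swap is active. To make the change permanent, also comment out the swap entry in /etc/fstab (a sketch; keep a backup of the file first):

```shell
cp /etc/fstab /etc/fstab.bak           # keep a backup copy
sed -i '/\sswap\s/ s/^/#/' /etc/fstab  # comment out any swap mount line
```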

vim /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled

  4. Install docker-ce

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache fast
yum -y install docker-ce

After installing Docker, the following warning may appear: WARNING: bridge-nf-call-iptables is disabled

vim /etc/sysctl.conf

# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same name in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-arptables=1
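Note that the `net.bridge.*` keys only exist while the br_netfilter kernel module is loaded, and editing sysctl.conf by itself does not apply the values to the running system. A sketch of applying them immediately:

```shell
modprobe br_netfilter                        # make the bridge-nf sysctl keys available
sysctl -p                                    # reload /etc/sysctl.conf
sysctl net.bridge.bridge-nf-call-iptables    # should now report "... = 1"
```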

systemctl enable docker && systemctl start docker

  5. Install kubernetes

cat <<EOF > /etc/yum.repos.d/kubernetes.repo

[kubernetes]

name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
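A bare `yum install kubelet kubeadm kubectl` pulls whatever is newest in the repo, which may be newer than the version passed to `kubeadm init` below. To keep the package versions and the cluster version aligned, pin them explicitly (the version number here is an assumption matching the `--kubernetes-version` used in the next step):

```shell
# Pin the packages to the cluster version being initialized below.
yum install -y kubelet-1.13.1 kubeadm-1.13.1 kubectl-1.13.1
```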

  6. Initialize the cluster

kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.13.1 --pod-network-cidr=10.244.0.0/16

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join 192.168.1.110:6443 --token wgrs62.vy0trlpuwtm5jd75 --discovery-token-ca-cert-hash sha256:6e947e63b176acf976899483d41148609a6e109067ed6970b9fbca8d9261c8d0
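The bootstrap token printed by `kubeadm init` expires after 24 hours by default. If a node joins later than that, generate a fresh join command on the master:

```shell
# Creates a new token and prints the full join command,
# including the current CA certificate hash.
kubeadm token create --print-join-command
```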

  7. Deploy flannel manually

Flannel URL: https://github.com/coreos/flannel

for Kubernetes v1.7+

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

podsecuritypolicy.extensions/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

  8. Deploy the nodes

Install docker, kubelet, and kubeadm

Docker installation is the same as step 4; kubelet and kubeadm installation is the same as step 5.

  9. Join the nodes to the master

kubeadm join 192.168.1.110:6443 --token wgrs62.vy0trlpuwtm5jd75 --discovery-token-ca-cert-hash sha256:6e947e63b176acf976899483d41148609a6e109067ed6970b9fbca8d9261c8d0

kubectl get nodes    # view node status

NAME                    STATUS     ROLES    AGE     VERSION
localhost.localdomain   NotReady   <none>   130m    v1.13.4
master01                Ready      master   4h47m   v1.13.4
node01                  Ready      <none>   94m     v1.13.4

kubectl get cs    # view component status

NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}

kubectl get ns    # view namespaces

NAME          STATUS   AGE
default       Active   4h41m
kube-public   Active   4h41m
kube-system   Active   4h41m

kubectl get pods -n kube-system    # view pod status

NAME                               READY   STATUS    RESTARTS   AGE
coredns-78d4cf999f-bszbk           1/1     Running   0          4h44m
coredns-78d4cf999f-j68hb           1/1     Running   0          4h44m
etcd-master01                      1/1     Running   0          4h43m
kube-apiserver-master01            1/1     Running   1          4h43m
kube-controller-manager-master01   1/1     Running   2          4h43m
kube-flannel-ds-amd64-27x59        1/1     Running   1          126m
kube-flannel-ds-amd64-5sxgk        1/1     Running   0          140m
kube-flannel-ds-amd64-xvrbw        1/1     Running   0          91m
kube-proxy-4pbdf                   1/1     Running   0          91m
kube-proxy-9fmrl                   1/1     Running   0          4h44m
kube-proxy-nwkl9                   1/1     Running   0          126m
kube-scheduler-master01            1/1     Running   2          4h43m

Alternative: yum-based deployment (no kubeadm)

Environment preparation: master01, node01, and node02 are connected to the network; modify the hosts file and confirm that the three hosts resolve each other.

vim /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.201 master01
192.168.1.202 node01
192.168.1.203 node02

Configure the Aliyun YUM repo on each host

mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup && curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

Start deploying kubernetes

  1. Install etcd on master01

yum install -y etcd

After the installation is complete, modify the etcd configuration file /etc/etcd/etcd.conf

vim /etc/etcd/etcd.conf

ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"   # listen on all addresses (or use the host's own address, http://192.168.1.201:2379) so the nodes can reach etcd

Enable and start the service:

systemctl start etcd && systemctl enable etcd
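Before moving on, it is worth confirming that etcd answers on the advertised address; a quick check using the v2-API etcdctl shipped with the CentOS package:

```shell
# Query etcd on the address the nodes will use.
# Expect "cluster is healthy" in the output.
etcdctl --endpoints http://192.168.1.201:2379 cluster-health
```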

  2. Install kubernetes on all hosts

yum install -y kubernetes

  3. Configure the master

vim /etc/kubernetes/config

KUBE_MASTER="--master=http://192.168.1.201:8080"   # point KUBE_MASTER at the master's address

vim /etc/kubernetes/apiserver

KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"   # listen on all addresses
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.1.201:2379"   # point at the etcd server
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"   # ServiceAccount removed from the admission-control list

Enable and start the services; start kube-apiserver first, then kube-scheduler and kube-controller-manager (their order relative to each other does not matter):

systemctl start docker && systemctl enable docker
systemctl start kube-apiserver && systemctl enable kube-apiserver
systemctl start kube-scheduler && systemctl enable kube-scheduler
systemctl start kube-controller-manager && systemctl enable kube-controller-manager
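With the three control-plane services up, you can verify them against the insecure port before configuring any node (the `-s` flag points kubectl at the apiserver, the same way it is used later in this guide):

```shell
# scheduler, controller-manager, and etcd-0 should all report Healthy.
kubectl -s http://192.168.1.201:8080 get componentstatuses
```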

  4. Configure the nodes

vim /etc/kubernetes/config

KUBE_MASTER="--master=http://192.168.1.201:8080"   # point at the master's address

vim /etc/kubernetes/kubelet

KUBELET_ADDRESS="--address=192.168.1.202"   # kubelet listen address (this node's IP)
KUBELET_HOSTNAME="--hostname-override=192.168.1.202"   # kubelet hostname
KUBELET_API_SERVER="--api-servers=http://192.168.1.201:8080"   # apiserver address

Enable and start the services:

systemctl start docker && systemctl enable docker
systemctl start kubelet && systemctl enable kubelet
systemctl start kube-proxy && systemctl enable kube-proxy

  5. Deployment is complete; check the cluster status

kubectl get nodes    # view node status

[root@node02 kubernetes]# kubectl -s http://192.168.1.201:8080 get nodes -o wide
NAME            STATUS   AGE   EXTERNAL-IP
192.168.1.202   Ready    29s
192.168.1.203   Ready    16m

  6. Install flannel on all hosts

yum install -y flannel

vim /etc/sysconfig/flanneld

FLANNEL_ETCD_ENDPOINTS="http://192.168.1.201:2379"   # point at the etcd server

etcdctl mk /atomic.io/network/config '{ "Network": "172.16.0.0/16" }'   # set the container network in etcd (run on the etcd host)
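flanneld reads this key at startup, so a typo here silently breaks pod networking. Verify what was actually written:

```shell
# Read the key back; it should print the JSON stored above.
etcdctl --endpoints http://192.168.1.201:2379 get /atomic.io/network/config
```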

Restart the services on the master host:

systemctl start flanneld && systemctl enable flanneld
systemctl restart docker
systemctl restart kube-apiserver
systemctl restart kube-scheduler
systemctl restart kube-controller-manager

Restart the services on the node hosts:

systemctl start flanneld && systemctl enable flanneld
systemctl restart docker
systemctl restart kubelet
systemctl restart kube-proxy
