K8s in Action
1. Environment
All materials:
Link: https://pan.baidu.com/s/12_ukeH_MG3kwT-qSOJtVmA  Extraction code: nqsg
1.1 Hosts
| IP | Hostname | Installed software |
|---|---|---|
| 192.168.71.133 | master | |
| 192.168.71.134 | node1 | |
| 192.168.71.135 | node2 | |
The OS is CentOS Linux release 7.7.1908 (Core).
1.2 Set the Timezone
timedatectl set-timezone Asia/Shanghai   # run on every node
[root@node01 ~]# timedatectl set-timezone Asia/Shanghai
[root@node02 ~]# timedatectl set-timezone Asia/Shanghai
[root@node03 ~]# timedatectl set-timezone Asia/Shanghai
1.3 Add Hosts Entries and Set Hostnames
[root@node01 ~]# hostnamectl set-hostname master
[root@node01 ~]# bash
[root@master ~]#
[root@node02 ~]# hostnamectl set-hostname node1
[root@node02 ~]# bash
[root@node1 ~]#
[root@node03 ~]# hostnamectl set-hostname node2
[root@node03 ~]# bash
[root@node2 ~]#
cat >> /etc/hosts <<EOF
192.168.71.133 master
192.168.71.134 node1
192.168.71.135 node2
EOF
Do this on all three machines; from now on the nodes communicate with each other by hostname.
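The append above can also be made idempotent, so running the setup script twice does not create duplicate entries. A minimal sketch, writing to a temp copy so it is safe to dry-run (point `HOSTS` at `/etc/hosts`, as root, for real use):

```shell
# Idempotently add the three cluster entries to a hosts file.
HOSTS=$(mktemp)
add_host() {
  # $1 = IP, $2 = hostname; append only if the hostname is not already there
  grep -qw "$2" "$HOSTS" || echo "$1 $2" >> "$HOSTS"
}
add_host 192.168.71.133 master
add_host 192.168.71.134 node1
add_host 192.168.71.135 node2
add_host 192.168.71.133 master   # repeat call is a no-op
grep -c . "$HOSTS"               # prints 3: no duplicate entries
```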
Test:
[root@master ~]# ping -c1 master
PING master (192.168.71.133) 56(84) bytes of data.
64 bytes from master (192.168.71.133): icmp_seq=1 ttl=64 time=0.029 ms

--- master ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms
[root@master ~]# ping -c1 node1
PING node1 (192.168.71.134) 56(84) bytes of data.
64 bytes from node1 (192.168.71.134): icmp_seq=1 ttl=64 time=0.227 ms

--- node1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms
[root@master ~]# ping -c1 node2
PING node2 (192.168.71.135) 56(84) bytes of data.
64 bytes from node2 (192.168.71.135): icmp_seq=1 ttl=64 time=0.180 ms

--- node2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms
The above is the master pinging the two nodes; the same tests from the two nodes are omitted. Every host must be able to reach every other.
1.4 Disable the Firewall
Disable the firewall on all three VMs. Skip this step in a production environment.
[root@master ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
[root@master ~]# setenforce 0
setenforce: SELinux is disabled
[root@master ~]# systemctl disable firewalld
[root@master ~]# systemctl stop firewalld
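The sed one-liner above can be dry-run against a temp copy before touching `/etc/selinux/config`. A minimal sketch:

```shell
# Dry-run the SELinux config edit on a temp copy (the real target is
# /etc/selinux/config; this only demonstrates the substitution).
CFG=$(mktemp)
echo 'SELINUX=enforcing' > "$CFG"
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' "$CFG"
cat "$CFG"   # prints: SELINUX=disabled
```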
2. Kubeadm
2.1 Prepare the Packages
Upload the image packages to every node:
[root@master ~]# mkdir /usr/local/k8s-install
[root@master ~]# cd /usr/local/k8s-install/
[root@master k8s-install]# rz -E
rz waiting to receive.
[root@master k8s-install]# ls
[root@master k8s-install]# mkdir kubernetes-1.14
[root@master k8s-install]# cd kubernetes-1.14/
[root@master kubernetes-1.14]# rz -E
rz waiting to receive.
[root@master kubernetes-1.14]# ls
admin-role.yaml           flannel-dashboard.tar.gz  k8s.conf             kubernetes-dashboard-admin.rbac.yaml
daemon.json               init.sh                   kube114-rpm.tar.gz   kubernetes-dashboard.yaml
docker-ce-18.09.tar.gz    k8s-114-images.tar.gz     kube-flannel.yml     worker-node.sh
Upload the installation package files via XFTP.
2.2 Install Docker
Install Docker on each CentOS host:
tar -zxvf docker-ce-18.09.tar.gz
cd docker
yum localinstall -y *.rpm
systemctl start docker
systemctl enable docker
systemctl status docker
[root@master kubernetes-1.14]# tar -zxvf docker-ce-18.09.tar.gz
[root@master docker]# ls
audit-2.8.4-4.el7.x86_64.rpm               libselinux-python-2.5-14.1.el7.x86_64.rpm
audit-libs-2.8.4-4.el7.x86_64.rpm          libselinux-utils-2.5-14.1.el7.x86_64.rpm
audit-libs-python-2.8.4-4.el7.x86_64.rpm   libsemanage-2.5-14.el7.x86_64.rpm
checkpolicy-2.5-8.el7.x86_64.rpm           libsemanage-python-2.5-14.el7.x86_64.rpm
containerd.io-1.2.5-3.1.el7.x86_64.rpm     libsepol-2.5-10.el7.x86_64.rpm
container-selinux-2.74-1.el7.noarch.rpm    policycoreutils-2.5-29.el7_6.1.x86_64.rpm
docker-ce-18.09.5-3.el7.x86_64.rpm         policycoreutils-python-2.5-29.el7_6.1.x86_64.rpm
docker-ce-cli-18.09.5-3.el7.x86_64.rpm     python-IPy-0.75-6.el7.noarch.rpm
libcgroup-0.41-20.el7.x86_64.rpm           selinux-policy-3.13.1-229.el7_6.9.noarch.rpm
libseccomp-2.3.1-3.el7.x86_64.rpm          selinux-policy-targeted-3.13.1-229.el7_6.9.noarch.rpm
libselinux-2.5-14.1.el7.x86_64.rpm         setools-libs-3.3.8-4.el7.x86_64.rpm
[root@master docker]# yum localinstall -y *.rpm
[root@master docker]# systemctl start docker
[root@master docker]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@master docker]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2020-04-18 21:11:58 CST; 8s ago
     Docs: https://docs.docker.com
 Main PID: 46098 (dockerd)
   CGroup: /system.slice/docker.service
           └─46098 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
The install may report missing packages; install whatever is missing. Do this on all three machines.
2.3 Check the Environment
Make sure the Docker cgroup driver is the same (cgroupfs) on all nodes.

cgroups is short for control groups, a Linux kernel mechanism for aggregating and partitioning tasks: a set of parameters organizes tasks into one or more subsystems. cgroups is the low-level foundation of resource management and control for IaaS virtualization (KVM, LXC, etc.) and PaaS container sandboxes (Docker, etc.).

- A subsystem is a group of tasks partitioned by a specified attribute according to cgroup's task-division function; it is mainly used to implement resource control.
- Within cgroups, task groups are organized hierarchically; the multiple subsystems together form a structure similar to a multi-rooted tree. cgroups contains multiple isolated subsystems, each representing a single resource.
[root@master docker]# docker info | grep cgroup
Cgroup Driver: cgroupfs
[root@node1 docker]# docker info | grep cgroup
Cgroup Driver: cgroupfs
[root@node2 docker]# docker info | grep cgroup
Cgroup Driver: cgroupfs
If the driver is not cgroupfs, run the following:
cat << EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
EOF
systemctl daemon-reload && systemctl restart docker
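A malformed `daemon.json` stops the Docker daemon from starting, so it is worth validating the file before the restart. A sketch writing to a temp path (it assumes `python3` is available for the JSON check):

```shell
# Write the config to a temp file and validate it before any restart.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
EOF
# json.tool exits non-zero on invalid JSON, catching a broken file
# before `systemctl restart docker` would take the daemon down
python3 -m json.tool "$CONF" > /dev/null && echo "daemon.json OK"
```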
You may also see `WARNING: bridge-nf-call-iptables is disabled`; the bridge sysctl settings in section 3.3 resolve it.
3. Install kubeadm
3.1 Extract and Install
kubeadm is the cluster deployment tool.
cd /usr/local/k8s-install/kubernetes-1.14
tar -zxvf kube114-rpm.tar.gz
cd kube114-rpm
yum localinstall -y *.rpm
Detailed steps:
[root@master docker]# cd /usr/local/k8s-install/kubernetes-1.14
[root@master kubernetes-1.14]# tar -zxvf kube114-rpm.tar.gz
kube114-rpm/
kube114-rpm/conntrack-tools-1.4.4-4.el7.x86_64.rpm
kube114-rpm/libnetfilter_cttimeout-1.0.0-6.el7.x86_64.rpm
kube114-rpm/libnetfilter_conntrack-1.0.6-1.el7_3.x86_64.rpm
kube114-rpm/socat-1.7.3.2-2.el7.x86_64.rpm
kube114-rpm/libnetfilter_cthelper-1.0.0-9.el7.x86_64.rpm
kube114-rpm/libnetfilter_queue-1.0.2-2.el7_2.x86_64.rpm
kube114-rpm/cri-tools-1.12.0-0.x86_64.rpm
kube114-rpm/kubernetes-cni-0.7.5-0.x86_64.rpm
kube114-rpm/kubectl-1.14.1-0.x86_64.rpm
kube114-rpm/kubeadm-1.14.1-0.x86_64.rpm
kube114-rpm/kubelet-1.14.1-0.x86_64.rpm
[root@master kubernetes-1.14]# cd kube114-rpm
[root@master kube114-rpm]# yum localinstall -y *.rpm
Do this on all three machines.
3.2 Disable Swap
[root@master kube114-rpm]# swapoff -a
vi /etc/fstab   # comment out the swap line
[root@master kube114-rpm]# vim /etc/fstab
[root@master kube114-rpm]# grep -irn swap
Binary file kubernetes-cni-0.7.5-0.x86_64.rpm matches
[root@master kube114-rpm]# grep -irn swap /etc/fstab
12:#/dev/mapper/centos-swap swap swap defaults 0 0
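The swap line can also be commented out non-interactively with sed instead of editing by hand. A sketch against a temp copy (swap the temp file for `/etc/fstab`, as root, for real use):

```shell
# Comment out the swap entry in an fstab-style file.
FSTAB=$(mktemp)
cat > "$FSTAB" <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF
# prefix '#' on any line that has a whitespace-delimited swap field
sed -i '/\sswap\s/s/^/#/' "$FSTAB"
grep swap "$FSTAB"   # the swap line is now commented out
```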
3.3 Configure the Bridge
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
3.4 Load the k8s Images
cd /usr/local/k8s-install/kubernetes-1.14
docker load -i k8s-114-images.tar.gz
docker load -i flannel-dashboard.tar.gz
docker images
[root@master kube114-rpm]# cd /usr/local/k8s-install/kubernetes-1.14
[root@master kubernetes-1.14]# docker load -i k8s-114-images.tar.gz
5ba3be777c2d: Loading layer 43.88MB/43.88MB
e04ef32df86e: Loading layer 39.26MB/39.26MB
Loaded image: k8s.gcr.io/kube-scheduler:v1.14.1
0b8d2e946c93: Loading layer 3.403MB/3.403MB
8b9a8fc88f0d: Loading layer 36.69MB/36.69MB
Loaded image: k8s.gcr.io/kube-proxy:v1.14.1
e17133b79956: Loading layer 744.4kB/744.4kB
Loaded image: k8s.gcr.io/pause:3.1
8a788232037e: Loading layer 1.37MB/1.37MB
30796113fb51: Loading layer 232MB/232MB
6fbfb277289f: Loading layer 24.98MB/24.98MB
Loaded image: k8s.gcr.io/etcd:3.3.10
fb61a074724d: Loading layer 479.7kB/479.7kB
c6a5fc8a3f01: Loading layer 40.05MB/40.05MB
Loaded image: k8s.gcr.io/coredns:1.3.1
97f70f3a7a0c: Loading layer 167.6MB/167.6MB
Loaded image: k8s.gcr.io/kube-apiserver:v1.14.1
d8ca6e1aa16e: Loading layer 115.6MB/115.6MB
Loaded image: k8s.gcr.io/kube-controller-manager:v1.14.1
[root@master kubernetes-1.14]# docker load -i flannel-dashboard.tar.gz
7bff100f35cb: Loading layer 4.672MB/4.672MB
5d3f68f6da8f: Loading layer 9.526MB/9.526MB
9b48060f404d: Loading layer 5.912MB/5.912MB
3f3a4ce2b719: Loading layer 35.25MB/35.25MB
9ce0bb155166: Loading layer 5.12kB/5.12kB
Loaded image: quay.io/coreos/flannel:v0.11.0-amd64
fbdfe08b001c: Loading layer 122.3MB/122.3MB
Loaded image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
[root@master kubernetes-1.14]# docker images
REPOSITORY                              TAG             IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-proxy                   v1.14.1         20a2d7035165   12 months ago   82.1MB
k8s.gcr.io/kube-apiserver               v1.14.1         cfaa4ad74c37   12 months ago   210MB
k8s.gcr.io/kube-scheduler               v1.14.1         8931473d5bdb   12 months ago   81.6MB
k8s.gcr.io/kube-controller-manager      v1.14.1         efb3887b411d   12 months ago   158MB
quay.io/coreos/flannel                  v0.11.0-amd64   ff281650a721   14 months ago   52.6MB
k8s.gcr.io/coredns                      1.3.1           eb516548c180   15 months ago   40.3MB
k8s.gcr.io/kubernetes-dashboard-amd64   v1.10.1         f9aed6605b81   16 months ago   122MB
k8s.gcr.io/etcd                         3.3.10          2c4adeb21b4f   16 months ago   258MB
k8s.gcr.io/pause                        3.1             da86e6ba6ca1   2 years ago     742kB
4. Install Kubernetes
4.1 Initialize
Configure the master server:
kubeadm init --kubernetes-version=v1.14.1 --pod-network-cidr=10.244.0.0/16
Pod virtual IPs are allocated from the 10.244.0.0/16 range.
Each VM must have at least 2 CPUs, otherwise init fails with an error:
[root@master kubernetes-1.14]# kubeadm init --kubernetes-version=v1.14.1 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
    [ERROR NumCPU]: the number of available CPUs 1 is less than the required 2
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
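That NumCPU preflight failure can be anticipated with a quick local check before running `kubeadm init`. A minimal sketch:

```shell
# Mirror kubeadm's NumCPU preflight check: init aborts with fewer than
# 2 CPUs unless --ignore-preflight-errors=NumCPU is passed.
cpus=$(nproc)
if [ "$cpus" -lt 2 ]; then
  echo "preflight would FAIL: only $cpus CPU(s), kubeadm needs 2"
else
  echo "preflight CPU check OK: $cpus CPUs"
fi
```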
Detailed installation process:
[root@master ~]# kubeadm init --kubernetes-version=v1.14.1 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.71.133]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [192.168.71.133 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [192.168.71.133 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 16.504730 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: fi5c4i.i7ph6eci4dwgv62n
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.71.133:6443 --token fi5c4i.i7ph6eci4dwgv62n \
    --discovery-token-ca-cert-hash sha256:4600c78e66fc207b5cd2911e08ddb8a16d038abbfeb54cd89a549ee2fe1842c2
As the output above instructs, run the following manually:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Steps:
[root@master k8s-install]# mkdir -p $HOME/.kube
[root@master k8s-install]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master k8s-install]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
admin.conf is the core Kubernetes configuration file (the kubeconfig).
4.2 Install the Flannel Network
kubectl get nodes                    # check node status
kubectl get pod --all-namespaces     # list pods in every namespace
# install the flannel network component
kubectl create -f kube-flannel.yml
[root@master k8s-install]# kubectl get nodes
NAME     STATUS     ROLES    AGE     VERSION
master   NotReady   master   5m13s   v1.14.1
[root@master k8s-install]# kubectl get pod --all-namespaces
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-fb8b8dccf-jrjxv          0/1     Pending   0          6m2s
kube-system   coredns-fb8b8dccf-xhvt2          0/1     Pending   0          6m2s
kube-system   etcd-master                      1/1     Running   0          4m53s
kube-system   kube-apiserver-master            1/1     Running   0          5m4s
kube-system   kube-controller-manager-master   1/1     Running   0          4m55s
kube-system   kube-proxy-ltqgq                 1/1     Running   0          6m2s
kube-system   kube-scheduler-master            1/1     Running   0          5m4s
Once flannel is installed, coredns switches to Running. Install it:
[root@master k8s-install]# ls
admin-role.yaml  docker-ce-18.09.tar.gz    k8s-114-images.tar.gz  kube114-rpm.tar.gz                    kubernetes-dashboard.yaml
daemon.json      flannel-dashboard.tar.gz  k8s.conf               kube-flannel.yml                      worker-node.sh
docker           init.sh                   kube114-rpm            kubernetes-dashboard-admin.rbac.yaml
[root@master k8s-install]# kubectl create -f kube-flannel.yml
podsecuritypolicy.extensions/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
[root@master k8s-install]# kubectl get pod --all-namespaces
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-fb8b8dccf-jrjxv          1/1     Running   0          8m49s
kube-system   coredns-fb8b8dccf-xhvt2          1/1     Running   0          8m49s
kube-system   etcd-master                      1/1     Running   0          7m40s
kube-system   kube-apiserver-master            1/1     Running   0          7m51s
kube-system   kube-controller-manager-master   1/1     Running   0          7m42s
kube-system   kube-flannel-ds-amd64-7cs4l      1/1     Running   0          27s
kube-system   kube-proxy-ltqgq                 1/1     Running   0          8m49s
kube-system   kube-scheduler-master            1/1     Running   0          7m51s
4.3 Join Nodes to the Cluster
The master's init output includes this line:
kubeadm join 192.168.71.133:6443 --token fi5c4i.i7ph6eci4dwgv62n \
    --discovery-token-ca-cert-hash sha256:4600c78e66fc207b5cd2911e08ddb8a16d038abbfeb54cd89a549ee2fe1842c2
It is the command a node runs to join the master's cluster; make sure to save it.
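If the sha256 hash itself is lost, it can be recomputed from the cluster CA certificate (on a real master that is `/etc/kubernetes/pki/ca.crt`). The sketch below generates a throwaway self-signed certificate so it can run anywhere; only the pipeline matters:

```shell
# Stand-in CA cert; on the master, use /etc/kubernetes/pki/ca.crt instead.
CA=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=kubernetes" \
  -keyout "$CA/ca.key" -out "$CA/ca.crt" 2>/dev/null
# SHA-256 of the DER-encoded public key: the value kubeadm join expects
# after --discovery-token-ca-cert-hash sha256:
hash=$(openssl x509 -pubkey -in "$CA/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | awk '{print $NF}')
echo "sha256:$hash"
```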
[root@node2 k8s-install]# kubeadm join 192.168.71.133:6443 --token upobd4.jocvrwd5jpl7x8n6 \
>     --discovery-token-ca-cert-hash sha256:9c3e4af2ec2bc4daf72b7a29300490f28105ba065d698a465f06f240975de81a
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Check now:
[root@master k8s-install]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   13m   v1.14.1
node2    Ready    <none>   19s   v1.14.1
The first node has joined the cluster.
Now join the second one. (Forgot the join command? Run `kubeadm token list` on the master to look up the token, then run the join on the node.)
[root@master k8s-install]# kubeadm token list
TOKEN                     TTL   EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
upobd4.jocvrwd5jpl7x8n6   23h   2020-04-19T23:26:06+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
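When scripting the lookup, the token value can be pulled out of that output with awk. A sketch with the sample output inlined, so it runs without a cluster:

```shell
# NR==2 skips the header row of `kubeadm token list`
sample='TOKEN                     TTL   EXPIRES                     USAGES
upobd4.jocvrwd5jpl7x8n6   23h   2020-04-19T23:26:06+08:00   authentication,signing'
token=$(echo "$sample" | awk 'NR==2 {print $1}')
echo "$token"   # prints: upobd4.jocvrwd5jpl7x8n6
```

On a live master the same idea is `kubeadm token list | awk 'NR==2 {print $1}'`.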
First, find the token on the master: upobd4.jocvrwd5jpl7x8n6
[root@node2 ~]# kubeadm join master:6443 --token upobd4.jocvrwd5jpl7x8n6 --discovery-token-unsafe-skip-ca-verification
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Later I discovered my machine's clock was out of sync: when the new node tried to join, the token kept failing as expired, so I had to generate a new token and join again.
Generate a token:
[root@master k8s-install]# kubeadm token create
ljsplz.2sqcfecxc96rjhti
[root@master k8s-install]# kubeadm token list
TOKEN                     TTL   EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
fi5c4i.i7ph6eci4dwgv62n   23h   2020-04-20T00:01:52+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
ljsplz.2sqcfecxc96rjhti   23h   2020-04-20T00:11:58+08:00   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token
px8gno.m5155igdlhcg5led   23h   2020-04-20T00:09:27+08:00   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token
Join:
[root@node1 ~]# kubeadm join master:6443 --token ljsplz.2sqcfecxc96rjhti --discovery-token-unsafe-skip-ca-verification
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Check again:
[root@master k8s-install]# kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   13m     v1.14.1
node1    Ready    <none>   2m35s   v1.14.1
node2    Ready    <none>   2m18s   v1.14.1
[root@master k8s-install]# kubectl get pod --all-namespaces
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-fb8b8dccf-6plpm          1/1     Running   0          13m
kube-system   coredns-fb8b8dccf-qm2t7          1/1     Running   0          13m
kube-system   etcd-master                      1/1     Running   0          12m
kube-system   kube-apiserver-master            1/1     Running   0          12m
kube-system   kube-controller-manager-master   1/1     Running   0          12m
kube-system   kube-flannel-ds-amd64-dsbph      1/1     Running   0          11m
kube-system   kube-flannel-ds-amd64-frkjb      1/1     Running   0          2m40s
kube-system   kube-flannel-ds-amd64-kcntw      1/1     Running   0          2m23s
kube-system   kube-proxy-cxpx5                 1/1     Running   0          2m40s
kube-system   kube-proxy-rdtxc                 1/1     Running   0          13m
kube-system   kube-proxy-v4tdg                 1/1     Running   0          2m23s
kube-system   kube-scheduler-master            1/1     Running   0          12m
The one-master, two-node cluster is now set up.
4.4 Restart the Cluster
systemctl start kubelet
systemctl enable kubelet
5. Enable the Dashboard
On the master, enable the dashboard:

kubectl apply -f kubernetes-dashboard.yaml
kubectl apply -f admin-role.yaml
kubectl apply -f kubernetes-dashboard-admin.rbac.yaml
kubectl -n kube-system get svc

Then visit http://master:32000, i.e. 192.168.71.133:32000.
Steps:
[root@master k8s-install]# ls
admin-role.yaml  docker-ce-18.09.tar.gz    k8s-114-images.tar.gz  kube114-rpm.tar.gz                    kubernetes-dashboard.yaml
daemon.json      flannel-dashboard.tar.gz  k8s.conf               kube-flannel.yml                      worker-node.sh
docker           init.sh                   kube114-rpm            kubernetes-dashboard-admin.rbac.yaml
[root@master k8s-install]# kubectl apply -f kubernetes-dashboard.yaml
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
[root@master k8s-install]# kubectl apply -f admin-role.yaml
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
[root@master k8s-install]# kubectl apply -f kubernetes-dashboard-admin.rbac.yaml
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
[root@master k8s-install]# kubectl -n kube-system get svc
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
kube-dns               ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP       19m
kubernetes-dashboard   NodePort    10.99.140.239   <none>        443:32001/TCP,80:32000/TCP   29s
6. Enable a Docker Registry Mirror
cat >> /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://****.mirror.aliyuncs.com"]
}
EOF
# restart the service
systemctl daemon-reload
systemctl restart docker

Note that `>>` appends: if daemon.json already has content (e.g. the cgroupfs exec-opts from section 2.3), merge the keys into a single JSON object instead of appending a second one.
Steps (do this on both node1 and node2):
[root@node1 ~]# cat >> /etc/docker/daemon.json <<EOF
> {
>   "registry-mirrors": ["https://****.mirror.aliyuncs.com"]
> }
> EOF
[root@node1 ~]# # restart the service
[root@node1 ~]# systemctl daemon-reload
[root@node1 ~]# systemctl restart docker
Verify:
[root@node1 ~]# docker info
Containers: 13
 Running: 6
 Paused: 0
 Stopped: 7
Images: 9
Server Version: 18.09.5
Storage Driver: overlay2
 Backing Filesystem: xfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: bb71b10fd8f58240ca47fbb579b9d1028eea7c84
runc version: 2b18fe1d885ee5083ef9f0838fee39b62d653e30
init version: fec3683
Security Options:
 seccomp
  Profile: default
Kernel Version: 3.10.0-1062.18.1.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 2.761GiB
Name: node1
ID: 5KYP:ECPA:XN2K:GU4Z:REWK:4ALQ:J4U3:QQH2:XN2S:IM6N:J2O4:BBDI
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Registry Mirrors:
 https://****.mirror.aliyuncs.com/
Live Restore Enabled: false
Under Registry Mirrors you can see it is now configured to use the Aliyun mirror.
7. Deployment
7.1 Deploy a Tomcat on k8s
7.1.1 Create via the Dashboard
Wait a moment; Docker will pull the image.
7.1.2 Access Test
7.2 Deploy a Tomcat Cluster from the Command Line
- Deployment means Kubernetes sends instructions to the Node machines to create containers
- Kubernetes supports deployment scripts in YAML format
- kubectl create -f <deploy-yml>   # create a deployment
An example, tomcat-deploy.yml:
apiVersion: extensions/v1beta1   # API version used to parse the YAML; usually fixed
kind: Deployment                 # resource kind; Deployment is used for deployments
metadata:                        # metadata
  name: tomcat-deploy
spec:                            # detailed settings
  replicas: 2                    # deploy two replicas
  template:                      # pod template
    metadata:
      labels:                    # labels
        app: tomcat-cluster      # the pod's label
    spec:
      volumes:
      - name: web-app
        hostPath:
          path: /mnt
      containers:                # containers
      - name: tomcat-cluster     # container name, usually kept the same as the app label
        image: tomcat:latest     # image used by the container
        resources:               # resources the container may use
          requests:
            cpu: 0.5
            memory: 200Mi
          limits:
            cpu: 1
            memory: 512Mi
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: web-app
          mountPath: /usr/local/tomcat/webapps
7.2.1 Common Deployment Commands
- kubectl create -f <deploy-yml>     # create a deployment
- kubectl apply -f <deploy-yml>      # update a deployment's configuration
- kubectl get pod [-o wide]          # list deployed pods
- kubectl describe pod <pod-name>    # show detailed pod information
- kubectl logs [-f] <pod-name>       # view a pod's log output
Detailed steps:
[root@master k8s-install]# cd /usr/local/
[root@master local]# mkdir k8s
[root@master local]# cd k8s
[root@master k8s]# mkdir tomcat-deploy
[root@master k8s]# cd tomcat-deploy/
[root@master tomcat-deploy]# rz -E
rz waiting to receive.
[root@master tomcat-deploy]# ls
tomcat-deploy.yml
[root@master tomcat-deploy]# cat tomcat-deploy.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: tomcat-deploy
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: tomcat-cluster
    spec:
      volumes:
      - name: web-app
        hostPath:
          path: /mnt
      containers:
      - name: tomcat-cluster
        image: tomcat:latest
        resources:
          requests:
            cpu: 0.5
            memory: 200Mi
          limits:
            cpu: 1
            memory: 512Mi
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: web-app
          mountPath: /usr/local/tomcat/webapps
[root@master tomcat-deploy]# vim tomcat-basic-deploy.yml
[root@master tomcat-deploy]# cat tomcat-basic-deploy.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: tomcat-deploy
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: tomcat-cluster
    spec:
      containers:
      - name: tomcat-cluster
        image: tomcat:latest
        ports:
        - containerPort: 8080
[root@master tomcat-deploy]# kubectl create -f tomcat-basic-deploy.yml
deployment.extensions/tomcat-deploy created
# check
[root@master tomcat-deploy]# kubectl get deployment tomcat-deploy
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
tomcat-deploy   2/2     2            2           2m19s
[root@master tomcat-deploy]# kubectl get pod
NAME                             READY   STATUS    RESTARTS   AGE
my-tomcat-6d9867c99d-9rwx5       1/1     Running   0          9h
my-tomcat-6d9867c99d-xrpkc       1/1     Running   0          9h
tomcat-deploy-5fd4fc7ddb-b9668   1/1     Running   0          2m53s
tomcat-deploy-5fd4fc7ddb-l82zn   1/1     Running   0          2m53s
[root@master tomcat-deploy]# kubectl get pod -o wide
NAME                             READY   STATUS    RESTARTS   AGE    IP           NODE    NOMINATED NODE   READINESS GATES
my-tomcat-6d9867c99d-9rwx5       1/1     Running   0          9h     10.244.1.5   node1   <none>           <none>
my-tomcat-6d9867c99d-xrpkc       1/1     Running   0          9h     10.244.2.2   node2   <none>           <none>
tomcat-deploy-5fd4fc7ddb-b9668   1/1     Running   0          3m7s   10.244.2.3   node2   <none>           <none>
tomcat-deploy-5fd4fc7ddb-l82zn   1/1     Running   0          3m7s   10.244.1.6   node1   <none>           <none>
Explanation:
NAME                             READY   STATUS    RESTARTS   AGE    IP           NODE    NOMINATED NODE   READINESS GATES
tomcat-deploy-5fd4fc7ddb-b9668   1/1     Running   0          3m7s   10.244.2.3   node2   <none>           <none>
- NAME: each pod's unique name
- READY: how many of the pod's containers are ready
- STATUS: the pod's status
- AGE: time since the pod started
- IP: the pod's internal virtual IP; not directly reachable from outside the cluster
- NODE: which machine the pod is currently running on
- NOMINATED NODE
- READINESS GATES
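Those columns are easy to slice with awk when scripting. A sketch mapping a pod line to its name, virtual IP, and node (the sample line is inlined so it runs without a cluster):

```shell
# Columns 1, 6 and 7 of `kubectl get pod -o wide` are NAME, IP and NODE.
line='tomcat-deploy-5fd4fc7ddb-b9668   1/1   Running   0   3m7s   10.244.2.3   node2   <none>   <none>'
out=$(echo "$line" | awk '{printf "name=%s ip=%s node=%s", $1, $6, $7}')
echo "$out"   # prints: name=tomcat-deploy-5fd4fc7ddb-b9668 ip=10.244.2.3 node=node2
```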
View detailed information for a specific pod:
[root@master tomcat-deploy]# kubectl describe pod tomcat-deploy-5fd4fc7ddb-b9668
Name:               tomcat-deploy-5fd4fc7ddb-b9668
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               node2/192.168.71.135
Start Time:         Sun, 19 Apr 2020 10:01:59 +0800
Labels:             app=tomcat-cluster
                    pod-template-hash=5fd4fc7ddb
Annotations:        <none>
Status:             Running
IP:                 10.244.2.3
Controlled By:      ReplicaSet/tomcat-deploy-5fd4fc7ddb
Containers:
  tomcat-cluster:
    Container ID:   docker://ca5bad543fe64824c4fa5a069afeda444496bba7e04c0888d70248f68bd5a48a
    Image:          tomcat:latest
    Image ID:       docker-pullable://tomcat@sha256:1458fb9bb9d3d46d3129f95849233e31d82391723830ebd61441c24635460b84
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sun, 19 Apr 2020 10:02:07 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-8nk95 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-8nk95:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-8nk95
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  8m11s  default-scheduler  Successfully assigned default/tomcat-deploy-5fd4fc7ddb-b9668 to node2
  Normal  Pulling    8m9s   kubelet, node2     Pulling image "tomcat:latest"
  Normal  Pulled     8m3s   kubelet, node2     Successfully pulled image "tomcat:latest"
  Normal  Created    8m3s   kubelet, node2     Created container tomcat-cluster
  Normal  Started    8m3s   kubelet, node2     Started container tomcat-cluster
View the pod's log output:
```
[root@master tomcat-deploy]# kubectl logs -f tomcat-deploy-5fd4fc7ddb-b9668
19-Apr-2020 02:02:08.062 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version name:   Apache Tomcat/8.5.54
19-Apr-2020 02:02:08.065 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server built:          Apr 3 2020 14:06:10 UTC
19-Apr-2020 02:02:08.066 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version number: 8.5.54.0
19-Apr-2020 02:02:08.066 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Name:               Linux
19-Apr-2020 02:02:08.066 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Version:            3.10.0-1062.18.1.el7.x86_64
19-Apr-2020 02:02:08.066 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Architecture:          amd64
19-Apr-2020 02:02:08.066 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Java Home:             /usr/local/openjdk-8/jre
19-Apr-2020 02:02:08.067 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log JVM Version:           1.8.0_252-b09
19-Apr-2020 02:02:08.067 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log JVM Vendor:            Oracle Corporation
19-Apr-2020 02:02:08.067 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log CATALINA_BASE:         /usr/local/tomcat
19-Apr-2020 02:02:08.067 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log CATALINA_HOME:         /usr/local/tomcat
19-Apr-2020 02:02:08.067 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.util.logging.config.file=/usr/local/tomcat/conf/logging.properties
19-Apr-2020 02:02:08.068 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
19-Apr-2020 02:02:08.068 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djdk.tls.ephemeralDHKeySize=2048
19-Apr-2020 02:02:08.068 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.protocol.handler.pkgs=org.apache.catalina.webresources
19-Apr-2020 02:02:08.068 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dorg.apache.catalina.security.SecurityListener.UMASK=0027
19-Apr-2020 02:02:08.068 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dignore.endorsed.dirs=
19-Apr-2020 02:02:08.068 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcatalina.base=/usr/local/tomcat
19-Apr-2020 02:02:08.068 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcatalina.home=/usr/local/tomcat
19-Apr-2020 02:02:08.069 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.io.tmpdir=/usr/local/tomcat/temp
19-Apr-2020 02:02:08.069 INFO [main] org.apache.catalina.core.AprLifecycleListener.lifecycleEvent Loaded APR based Apache Tomcat Native library [1.2.23] using APR version [1.6.5].
19-Apr-2020 02:02:08.069 INFO [main] org.apache.catalina.core.AprLifecycleListener.lifecycleEvent APR capabilities: IPv6 [true], sendfile [true], accept filters [false], random [true].
19-Apr-2020 02:02:08.069 INFO [main] org.apache.catalina.core.AprLifecycleListener.lifecycleEvent APR/OpenSSL configuration: useAprConnector [false], useOpenSSL [true]
19-Apr-2020 02:02:08.074 INFO [main] org.apache.catalina.core.AprLifecycleListener.initializeSSL OpenSSL successfully initialized [OpenSSL 1.1.1d 10 Sep 2019]
19-Apr-2020 02:02:08.219 INFO [main] org.apache.coyote.AbstractProtocol.init Initializing ProtocolHandler ["http-nio-8080"]
19-Apr-2020 02:02:08.234 INFO [main] org.apache.tomcat.util.net.NioSelectorPool.getSharedSelector Using a shared selector for servlet write/read
19-Apr-2020 02:02:08.251 INFO [main] org.apache.catalina.startup.Catalina.load Initialization processed in 763 ms
19-Apr-2020 02:02:08.288 INFO [main] org.apache.catalina.core.StandardService.startInternal Starting service [Catalina]
19-Apr-2020 02:02:08.288 INFO [main] org.apache.catalina.core.StandardEngine.startInternal Starting Servlet Engine: Apache Tomcat/8.5.54
19-Apr-2020 02:02:08.303 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]
19-Apr-2020 02:02:08.323 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 71 ms
```
7.2.2 External access¶
```
[root@master tomcat-service]# touch tomcat-service.yml
[root@master tomcat-service]# mv tomcat-service.yml tomcat-basic-service.yml
[root@master tomcat-service]# rz -E
rz waiting to receive.
[root@master tomcat-service]# ls
tomcat-basic-service.yml  tomcat-service.yml
[root@master tomcat-service]# vim tomcat-basic-service.yml
[root@master tomcat-service]# cp tomcat-service.yml tomcat-basic-service.yml
cp: overwrite ‘tomcat-basic-service.yml’? y
[root@master tomcat-service]# cat tomcat-basic-service.yml
apiVersion: v1
kind: Service
metadata:
  name: tomcat-service
  labels:
    app: tomcat-service
spec:
#  type: NodePort
  selector:
    app: tomcat-cluster
  ports:
  - port: 8000
    targetPort: 8080
#    nodePort: 32500
```
Explanation:
```
[root@master tomcat-service]# cat tomcat-basic-service.yml
apiVersion: v1
kind: Service
metadata:
  name: tomcat-service     # display name of the Service
  labels:
    app: tomcat-service    # label used later for selection
spec:
  type: NodePort
  selector:
    app: tomcat-cluster    # selects the pods labeled app=tomcat-cluster
  ports:                   # port settings
  - port: 8000             # the Service's own (cluster-internal) port
    targetPort: 8080       # the port exposed inside the container
    nodePort: 32500        # port exposed on every node; the container's 8080 is mapped to it
```
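One thing to keep in mind when choosing a `nodePort` value: by default the API server only allows NodePorts in the range 30000-32767 (controlled by its `--service-node-port-range` flag). A quick sanity-check helper (our own sketch, not a kubectl feature):

```shell
#!/bin/sh
# in_nodeport_range: returns success if the port falls inside the
# default Kubernetes NodePort range (30000-32767).
in_nodeport_range() {
    [ "$1" -ge 30000 ] && [ "$1" -le 32767 ]
}

in_nodeport_range 32500 && echo "32500 is a valid nodePort"
in_nodeport_range 8080  || echo "8080 is outside the NodePort range"
```

This is why 32500 works above, while a value like 8080 would be rejected by `kubectl create`.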
```
[root@master tomcat-service]# kubectl get pod -o wide
NAME                             READY   STATUS    RESTARTS   AGE   IP           NODE    NOMINATED NODE   READINESS GATES
my-tomcat-6d9867c99d-9rwx5       1/1     Running   0          9h    10.244.1.5   node1   <none>           <none>
my-tomcat-6d9867c99d-xrpkc       1/1     Running   0          9h    10.244.2.2   node2   <none>           <none>
tomcat-deploy-5fd4fc7ddb-b9668   1/1     Running   0          28m   10.244.2.3   node2   <none>           <none>
tomcat-deploy-5fd4fc7ddb-l82zn   1/1     Running   0          28m   10.244.1.6   node1   <none>           <none>
```
Detailed steps:
```
[root@master tomcat-service]# cat tomcat-basic-service.yml
apiVersion: v1
kind: Service
metadata:
  name: tomcat-service
  labels:
    app: tomcat-service
spec:
  type: NodePort
  selector:
    app: tomcat-cluster
  ports:
  - port: 8000
    targetPort: 8080
    nodePort: 32500
[root@master tomcat-service]# ls
tomcat-basic-service.yml  tomcat-service.yml
[root@master tomcat-service]# kubectl create -f tomcat-basic-service.yml
service/tomcat-service created
# Check
[root@master tomcat-service]# kubectl get service
NAME             TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes       ClusterIP      10.96.0.1       <none>        443/TCP          10h
my-tomcat        LoadBalancer   10.110.154.31   <pending>     8000:32589/TCP   9h
tomcat-service   NodePort       10.110.214.82   <none>        8000:32500/TCP   65s
                                                                 |     |______ port exposed externally on every node
                                                                 |____________ Service port inside the cluster
[root@master tomcat-service]# kubectl describe service tomcat-service
Name:                     tomcat-service
Namespace:                default
Labels:                   app=tomcat-service
Annotations:              <none>
Selector:                 app=tomcat-cluster
Type:                     NodePort
IP:                       10.110.214.82
Port:                     <unset>  8000/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32500/TCP
Endpoints:                10.244.1.6:8080,10.244.2.3:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
# Test
[root@master tomcat-service]# curl 192.168.71.134:32500
<!doctype html><html lang="en"><head><title>HTTP Status 404 – Not Found</title><style type="text/css">body {font-family:Tahoma,Arial,sans-serif;} h1, h2, h3, b {color:white;background-color:#525D76;} h1 {font-size:22px;} h2 {font-size:16px;} h3 {font-size:14px;} p {font-size:12px;} a {color:black;} .line {height:1px;background-color:#525D76;border:none;}</style></head><body><h1>HTTP Status 404 – Not Found</h1><hr class="line" /><p><b>Type</b> Status Report</p><p><b>Message</b> Not found</p><p><b>Description</b> The origin server did not find a current representation for the target resource or is not willing to disclose that one exists.</p><hr class="line" /><h3>Apache Tomcat/8.5.54</h3></body></html>
[root@master tomcat-service]# curl 192.168.71.135:32500
<!doctype html><html lang="en"><head><title>HTTP Status 404 – Not Found</title><style type="text/css">body {font-family:Tahoma,Arial,sans-serif;} h1, h2, h3, b {color:white;background-color:#525D76;} h1 {font-size:22px;} h2 {font-size:16px;} h3 {font-size:14px;} p {font-size:12px;} a {color:black;} .line {height:1px;background-color:#525D76;border:none;}</style></head><body><h1>HTTP Status 404 – Not Found</h1><hr class="line" /><p><b>Type</b> Status Report</p><p><b>Message</b> Not found</p><p><b>Description</b> The origin server did not find a current representation for the target resource or is not willing to disclose that one exists.</p><hr class="line" /><h3>Apache Tomcat/8.5.54</h3></body></html>
```

The 404 page is expected here: Tomcat is up and answering on the NodePort, there is simply no application deployed yet.
7.3 Cluster file sharing¶
7.3.1 Deploying NFS¶
- NFS (Network File System) is a file-sharing protocol originally developed by Sun Microsystems
- NFS transfers files using the remote procedure call (RPC) mechanism
```
yum install -y nfs-utils rpcbind
```
Install NFS first; we use the master as the NFS server.
Create the shared directory and configure NFS:
```
[root@master tomcat-service]# cd /usr/local/
[root@master local]# mkdir data
[root@master local]# cd data/
[root@master data]# mkdir www-data
[root@master data]# cd www-data/
[root@master www-data]# vim /etc/exports
[root@master www-data]# cat /etc/exports
/usr/local/data/www-data 192.168.71.133/24(rw,sync)
[root@master www-data]# ls /usr/local/data/www-data
# Start the services
[root@master www-data]# systemctl start nfs.service
[root@master www-data]# systemctl start rpcbind.service
# Enable them at boot
[root@master www-data]# systemctl enable rpcbind.service
[root@master www-data]# systemctl enable nfs.service
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.
# Verify
[root@master ~]# exportfs
/usr/local/data/www-data
                192.168.71.133/24
```
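A note on the export line: `192.168.71.133/24` is a host address with a /24 mask, which NFS interprets as the whole 192.168.71.0/24 network, so all three machines from section 1.1 are covered. An annotated version of the entry (same effect, just commented):

```
# /etc/exports — <directory> <allowed-clients>(<options>)
# rw   = clients may read and write
# sync = the server commits writes to disk before replying
/usr/local/data/www-data 192.168.71.133/24(rw,sync)
```

After editing `/etc/exports`, `exportfs -r` re-reads it without restarting the NFS service.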
Be sure to start rpcbind first:
```
[root@master ~]# systemctl restart rpcbind
[root@master ~]# systemctl restart nfs
[root@master ~]# showmount -e localhost
Export list for localhost:
/usr/local/data/www-data 192.168.71.133/24
```
Otherwise you will get an error:
```
[root@master ~]# showmount -e localhost
clnt_create: RPC: Program not registered
```
That completes the NFS server. Next, set up the clients — the nodes that will mount the NFS export — which only need a single package installed.
```
# On the client, only nfs-utils is needed
[root@node1 ~]# yum install nfs-utils -y
# Enable at boot
[root@node1 ~]# systemctl enable nfs.service
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.
# Check the export before mounting
[root@node1 ~]# showmount -e 192.168.71.133
Export list for 192.168.71.133:
/usr/local/data/www-data 192.168.71.133/24
# Mount on the client
[root@node1 ~]# mount master:/usr/local/data/www-data /mnt/
# Test with a file
[root@master ~]# echo 'hello caimengzhi'> /usr/local/data/www-data/test.txt
[root@node1 ~]# ls /mnt/
test.txt
[root@node1 ~]# cat /mnt/test.txt
hello caimengzhi
```
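A manual `mount` like the one above does not survive a reboot. One way to make it persistent is an `/etc/fstab` entry on each node (a sketch; `_netdev` tells the boot process to wait for the network before mounting):

```
# /etc/fstab — mount the NFS export at boot on node1/node2
master:/usr/local/data/www-data  /mnt  nfs  defaults,_netdev  0 0
```

With this in place, `mount -a` (or a reboot) re-establishes the mount automatically.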
Repeat the same steps on node2.
7.3.2 Sharing files across the cluster¶
First clean up the Deployments and Services left over from the earlier sections.
```
[root@master ~]# kubectl get deployment
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
my-tomcat       2/2     2            2           10h
tomcat-deploy   2/2     2            2           71m
[root@master ~]# kubectl delete deployment tomcat-deploy
deployment.extensions "tomcat-deploy" deleted
[root@master ~]# kubectl get deployment
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
my-tomcat   2/2     2            2           10h
[root@master ~]# kubectl delete deployment my-tomcat
deployment.extensions "my-tomcat" deleted
[root@master ~]# kubectl get service
NAME             TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes       ClusterIP      10.96.0.1       <none>        443/TCP          11h
my-tomcat        LoadBalancer   10.110.154.31   <pending>     8000:32589/TCP   10h
tomcat-service   NodePort       10.110.214.82   <none>        8000:32500/TCP   44m
[root@master ~]# kubectl delete service my-tomcat
service "my-tomcat" deleted
[root@master ~]# kubectl delete service tomcat-service
service "tomcat-service" deleted
[root@master ~]# kubectl get service
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   11h
```
Preparation is done.
```
[root@master tomcat-deploy]# pwd
/usr/local/k8s/tomcat-deploy
[root@master tomcat-deploy]# cat tomcat-deploy.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: tomcat-deploy
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: tomcat-cluster
    spec:
      volumes:
      - name: web-app
        hostPath:
          path: /mnt
      containers:
      - name: tomcat-cluster
        image: tomcat:latest
        resources:
          requests:
            cpu: 0.5
            memory: 200Mi
          limits:
            cpu: 1
            memory: 512Mi
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: web-app
          mountPath: /usr/local/tomcat/webapps
[root@master tomcat-deploy]# kubectl create -f tomcat-deploy.yml
deployment.extensions/tomcat-deploy created
[root@master tomcat-deploy]# kubectl get deployment
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
tomcat-deploy   1/2     2            1           10s
[root@master tomcat-deploy]# kubectl get deployment
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
tomcat-deploy   2/2     2            2           15s
[root@master tomcat-deploy]# kubectl get pod -o wide
NAME                             READY   STATUS    RESTARTS   AGE   IP           NODE    NOMINATED NODE   READINESS GATES
tomcat-deploy-6765889cd7-78v8s   1/1     Running   0          59s   10.244.1.8   node1   <none>           <none>
tomcat-deploy-6765889cd7-s6s5z   1/1     Running   0          59s   10.244.2.5   node2   <none>           <none>
```
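Note that the `hostPath: /mnt` volume only works because every node has pre-mounted the NFS export at `/mnt` in the previous section. Kubernetes can also mount the export directly, removing the dependency on each node's mount setup. A hedged alternative sketch for the `volumes` section (the server name `master` is assumed resolvable via the `/etc/hosts` entries from section 1.3, and `nfs-utils` must still be installed on each node):

```yaml
# Alternative: let Kubernetes mount the NFS export itself
volumes:
- name: web-app
  nfs:
    server: master
    path: /usr/local/data/www-data
```

With this variant, the pod works even on a node where nothing was manually mounted at `/mnt`.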
Check whether the volume really is mounted. Two straightforward ways:

- Method 1: inspect the container directly with Docker on the node.
```
[root@node1 ~]# docker ps
CONTAINER ID   IMAGE                  COMMAND                  CREATED              STATUS              NAMES
3be9e96f29cf   tomcat                 "catalina.sh run"        About a minute ago   Up About a minute   k8s_tomcat-cluster_tomcat-deploy-6765889cd7-78v8s_default_266f7fc6-81ed-11ea-8994-000c29f9577f_0
293d5cbeeb4f   k8s.gcr.io/pause:3.1   "/pause"                 About a minute ago   Up About a minute   k8s_POD_tomcat-deploy-6765889cd7-78v8s_default_266f7fc6-81ed-11ea-8994-000c29f9577f_0
f5f971dd2d83   f9aed6605b81           "/dashboard --insecu…"   11 hours ago         Up 11 hours         k8s_kubernetes-dashboard_kubernetes-dashboard-6647f9f49-tkzf6_kube-system_a1354f64-8190-11ea-8994-000c29f9577f_2
aaf49053919a   20a2d7035165           "/usr/local/bin/kube…"   11 hours ago         Up 11 hours         k8s_kube-proxy_kube-proxy-cxpx5_kube-system_693bdb1e-818f-11ea-8994-000c29f9577f_2
1439e3b8a497   ff281650a721           "/opt/bin/flanneld -…"   11 hours ago         Up 11 hours         k8s_kube-flannel_kube-flannel-ds-amd64-frkjb_kube-system_693ced1f-818f-11ea-8994-000c29f9577f_1
7c442fa4d4c0   k8s.gcr.io/pause:3.1   "/pause"                 11 hours ago         Up 11 hours         k8s_POD_kubernetes-dashboard-6647f9f49-tkzf6_kube-system_a1354f64-8190-11ea-8994-000c29f9577f_2
89247e963369   k8s.gcr.io/pause:3.1   "/pause"                 11 hours ago         Up 11 hours         k8s_POD_kube-proxy-cxpx5_kube-system_693bdb1e-818f-11ea-8994-000c29f9577f_2
03d23db5c502   k8s.gcr.io/pause:3.1   "/pause"                 11 hours ago         Up 11 hours         k8s_POD_kube-flannel-ds-amd64-frkjb_kube-system_693ced1f-818f-11ea-8994-000c29f9577f_2
[root@node1 ~]# docker exec -it 3be9e96f29cf /bin/bash
root@tomcat-deploy-6765889cd7-78v8s:/usr/local/tomcat# ls
BUILDING.txt     LICENSE  README.md      RUNNING.txt  conf     lib   native-jni-lib  webapps  work
CONTRIBUTING.md  NOTICE   RELEASE-NOTES  bin          include  logs  temp            webapps.dist
root@tomcat-deploy-6765889cd7-78v8s:/usr/local/tomcat# cd webapps
root@tomcat-deploy-6765889cd7-78v8s:/usr/local/tomcat/webapps# ls
test.txt
root@tomcat-deploy-6765889cd7-78v8s:/usr/local/tomcat/webapps# cat test.txt
hello caimengzhi
# Change the file on the NFS server and look again
[root@master tomcat-deploy]# echo 'hello caimengzhi66666'> /usr/local/data/www-data/test.txt
root@tomcat-deploy-6765889cd7-78v8s:/usr/local/tomcat/webapps# cat test.txt
hello caimengzhi66666
# The change shows up inside the container — test OK
```
- Method 2: exec into the pod directly from the master.
```
[root@master tomcat-deploy]# kubectl get pod -o wide
NAME                             READY   STATUS    RESTARTS   AGE     IP           NODE    NOMINATED NODE   READINESS GATES
tomcat-deploy-6765889cd7-78v8s   1/1     Running   0          4m30s   10.244.1.8   node1   <none>           <none>
tomcat-deploy-6765889cd7-s6s5z   1/1     Running   0          4m30s   10.244.2.5   node2   <none>           <none>
[root@master tomcat-deploy]# kubectl exec -it tomcat-deploy-6765889cd7-78v8s /bin/bash
root@tomcat-deploy-6765889cd7-78v8s:/usr/local/tomcat# cd webapps
root@tomcat-deploy-6765889cd7-78v8s:/usr/local/tomcat/webapps# ls
test.txt
root@tomcat-deploy-6765889cd7-78v8s:/usr/local/tomcat/webapps# cat test.txt
hello caimengzhi66666
```
8. Rinetd¶
8.1 Preparation¶
Use rinetd to expose the Service's load balancing outside the cluster.
```
[root@master tomcat-service]# ls
tomcat-basic-service.yml  tomcat-service.yml
[root@master tomcat-service]# cat tomcat-service.yml
apiVersion: v1
kind: Service
metadata:
  name: tomcat-service
  labels:
    app: tomcat-service
spec:
#  type: NodePort
  selector:
    app: tomcat-cluster
  ports:
  - port: 8000
    targetPort: 8080
#    nodePort: 32500
[root@master tomcat-service]# kubectl create -f tomcat-service.yml
service/tomcat-service created
[root@master tomcat-service]# kubectl get service
NAME             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
kubernetes       ClusterIP   10.96.0.1        <none>        443/TCP    11h
tomcat-service   ClusterIP   10.100.169.124   <none>        8000/TCP   115s
[root@master tomcat-service]# kubectl describe service tomcat-service
Name:              tomcat-service
Namespace:         default
Labels:            app=tomcat-service
Annotations:       <none>
Selector:          app=tomcat-cluster
Type:              ClusterIP
IP:                10.100.169.124
Port:              <unset>  8000/TCP
TargetPort:        8080/TCP
Endpoints:         10.244.1.10:8080,10.244.2.7:8080
Session Affinity:  None
Events:            <none>
```
Test:
```
# Test pod 1
[root@master tomcat-service]# curl 10.244.1.10:8080
<!doctype html><html lang="en"><head><title>HTTP Status 404 – Not Found</title><style type="text/css">body {font-family:Tahoma,Arial,sans-serif;} h1, h2, h3, b {color:white;background-color:#525D76;} h1 {font-size:22px;} h2 {font-size:16px;} h3 {font-size:14px;} p {font-size:12px;} a {color:black;} .line {height:1px;background-color:#525D76;border:none;}</style></head><body><h1>HTTP Status 404 – Not Found</h1><hr class="line" /><p><b>Type</b> Status Report</p><p><b>Message</b> Not found</p><p><b>Description</b> The origin server did not find a current representation for the target resource or is not willing to disclose that one exists.</p><hr class="line" /><h3>Apache Tomcat/8.5.54</h3></body></html>
# Test pod 2
[root@master tomcat-service]# curl 10.244.2.7:8080
<!doctype html><html lang="en"><head><title>HTTP Status 404 – Not Found</title><style type="text/css">body {font-family:Tahoma,Arial,sans-serif;} h1, h2, h3, b {color:white;background-color:#525D76;} h1 {font-size:22px;} h2 {font-size:16px;} h3 {font-size:14px;} p {font-size:12px;} a {color:black;} .line {height:1px;background-color:#525D76;border:none;}</style></head><body><h1>HTTP Status 404 – Not Found</h1><hr class="line" /><p><b>Type</b> Status Report</p><p><b>Message</b> Not found</p><p><b>Description</b> The origin server did not find a current representation for the target resource or is not willing to disclose that one exists.</p><hr class="line" /><h3>Apache Tomcat/8.5.54</h3></body></html>
# Test the Service
[root@master tomcat-service]# curl 10.100.169.124:8000
<!doctype html><html lang="en"><head><title>HTTP Status 404 – Not Found</title><style type="text/css">body {font-family:Tahoma,Arial,sans-serif;} h1, h2, h3, b {color:white;background-color:#525D76;} h1 {font-size:22px;} h2 {font-size:16px;} h3 {font-size:14px;} p {font-size:12px;} a {color:black;} .line {height:1px;background-color:#525D76;border:none;}</style></head><body><h1>HTTP Status 404 – Not Found</h1><hr class="line" /><p><b>Type</b> Status Report</p><p><b>Message</b> Not found</p><p><b>Description</b> The origin server did not find a current representation for the target resource or is not willing to disclose that one exists.</p><hr class="line" /><h3>Apache Tomcat/8.5.54</h3></body></html>
# Deploy a page that prints the local IP
[root@master tomcat-service]# cd /usr/local/data/www-data/
[root@master www-data]# mkdir test
[root@master www-data]# cd test/
[root@master test]# ls
[root@master test]# vim index.jsp
[root@master test]# cat index.jsp
<%=request.getLocalAddr()%>
```

Test again. By default Kubernetes spreads requests across the two backend pods at random:

```
[root@master test]# curl 10.100.169.124:8000/test/index.jsp
10.244.1.10
[root@master test]# curl 10.100.169.124:8000/test/index.jsp
10.244.1.10
[root@master test]# curl 10.100.169.124:8000/test/index.jsp
10.244.1.10
[root@master test]# curl 10.100.169.124:8000/test/index.jsp
10.244.1.10
[root@master test]# curl 10.100.169.124:8000/test/index.jsp
10.244.2.7
[root@master test]# curl 10.100.169.124:8000/test/index.jsp
10.244.2.7
[root@master test]# curl 10.100.169.124:8000/test/index.jsp
10.244.2.7
# Test from the other machines — all of them can reach the Service
[root@node1 ~]# curl 10.100.169.124:8000/test/index.jsp
10.244.1.10
[root@node2 ~]# curl 10.100.169.124:8000/test/index.jsp
10.244.1.10
```
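The uneven-looking but roughly balanced split above comes from kube-proxy handing each connection to a randomly chosen endpoint. A toy simulation of that idea (illustrative only — this is our own sketch, not kube-proxy's actual code):

```shell
#!/bin/sh
# Simulate random endpoint selection across the two pod IPs seen above.
backends="10.244.1.10 10.244.2.7"

pick_backend() {
    # choose one backend at random from the list
    set -- $backends
    shift $(( RANDOM % $# ))
    printf '%s\n' "$1"
}

# tally how 100 simulated requests are distributed
for i in $(seq 1 100); do pick_backend; done | sort | uniq -c
```

Over many requests the counts converge toward an even split, just like the repeated `curl` calls above.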
Machines inside the cluster can reach 10.100.169.124 without trouble, but how do clients outside the cluster get in? They have to go through the host's own IP, 192.168.71.133, which we can arrange with simple port forwarding.
8.2 Port forwarding with rinetd¶
- rinetd is a TCP redirection tool for Linux
- It forwards traffic from a source IP and port to a destination IP and port
- Here it is used to expose a Kubernetes Service outside the cluster
```
cd /usr/local/src
wget http://www.boutell.com/rinetd/http/rinetd.tar.gz
tar xf rinetd.tar.gz
cd rinetd
sed -i 's/65536/65535/g' rinetd.c
mkdir -p /usr/man/
yum install -y gcc
make && make install
echo $?
cat>>/etc/rinetd.conf<<EOF
0.0.0.0 8000 10.100.169.124 8000
EOF
rinetd -c /etc/rinetd.conf
```
The line `0.0.0.0 8000 10.100.169.124 8000` means: forward connections arriving on this host's port 8000 to port 8000 of 10.100.169.124. Replace 10.100.169.124 with the cluster IP of your own K8S Service.
Detailed steps:
```
[root@master src]# ls
rinetd.tar.gz
[root@master src]# tar xf rinetd.tar.gz
[root@master src]# cd rinetd
[root@master rinetd]# sed -i 's/65536/65535/g' rinetd.c
[root@master rinetd]# mkdir -p /usr/man/
[root@master rinetd]# yum install -y gcc
[root@master rinetd]# make && make install
cc -DLINUX -g -c -o rinetd.o rinetd.c
rinetd.c:176:6: warning: conflicting types for built-in function ‘log’ [enabled by default]
 void log(int i, int coSe, int result);
      ^
cc -DLINUX -g -c -o match.o match.c
gcc rinetd.o match.o -o rinetd
install -m 700 rinetd /usr/sbin
install -m 644 rinetd.8 /usr/man/man8
[root@master rinetd]# echo $?
0
[root@master rinetd]# cat>>/etc/rinetd.conf<<EOF
> 0.0.0.0 8000 10.100.169.124 8000
> EOF
[root@master rinetd]# cat /etc/rinetd.conf
0.0.0.0 8000 10.100.169.124 8000
[root@master rinetd]# rinetd -c /etc/rinetd.conf
[root@master rinetd]# netstat -lnp|grep 8000
tcp        0      0 0.0.0.0:8000      0.0.0.0:*      LISTEN      111559/rinetd
```
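Started this way, rinetd will not come back after a reboot. A hedged sketch of a systemd unit to manage it (this file is not shipped with the rinetd tarball; it assumes the binary was installed to `/usr/sbin/rinetd` as above):

```
# /etc/systemd/system/rinetd.service
[Unit]
Description=rinetd port forwarder
After=network.target

[Service]
Type=forking
ExecStart=/usr/sbin/rinetd -c /etc/rinetd.conf
ExecReload=/bin/kill -HUP $MAINPID

[Install]
WantedBy=multi-user.target
```

After creating the file, `systemctl daemon-reload && systemctl enable --now rinetd` would start it and keep it enabled at boot.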
Test access from outside the cluster — it works.
To summarize, there are two ways to expose a K8S service to the outside world:
- Map a port directly on the nodes (NodePort) — generally fine for small applications.
- Forward a port with rinetd to a ClusterIP Service — better suited to larger, cluster-backed applications; this Service-based load balancing is the recommended approach.
9. Adjusting cluster configuration and resource limits¶
9.1 Commands for adjusting deployments¶
```
# Update a deployed configuration in place
kubectl apply -f <path-to-yml>
# Delete a Deployment or a Service
kubectl delete deployment <deployment-name>
kubectl delete service <service-name>
```
9.2 Resource requests and limits¶
```
containers:
- name: tomcat-cluster
  image: tomcat:latest
  resources:
    requests:        # resources required for scheduling
      cpu: 1         # the pod is only scheduled on a node with 1 CPU core available
      memory: 500Mi
    limits:          # hard caps — the container may not use more than this
      cpu: 2
      memory: 1024Mi
```
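The `Mi` suffix is a binary unit: 1Mi = 1024×1024 bytes, which is what the kubelet ultimately enforces. A small helper of our own (not a kubectl feature) to expand the quantities used above:

```shell
#!/bin/sh
# mi_to_bytes: convert a Kubernetes "Mi" memory quantity to plain bytes
mi_to_bytes() {
    echo $(( ${1%Mi} * 1024 * 1024 ))
}

mi_to_bytes 500Mi    # the request above  -> 524288000
mi_to_bytes 1024Mi   # the limit above    -> 1073741824
```

CPU quantities work differently: `cpu: 0.5` means half a core (equivalently `500m`, i.e. 500 millicores).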
Detailed steps:
```
[root@master tomcat-deploy]# cat tomcat-deploy.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: tomcat-deploy
spec:
  replicas: 4
  template:
    metadata:
      labels:
        app: tomcat-cluster
    spec:
      volumes:
      - name: web-app
        hostPath:
          path: /mnt
      containers:
      - name: tomcat-cluster
        image: tomcat:latest
        resources:
          requests:
            cpu: 0.5
            memory: 500Mi
          limits:
            cpu: 2
            memory: 800Mi
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: web-app
          mountPath: /usr/local/tomcat/webapps
[root@master tomcat-deploy]# kubectl get deployment
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
tomcat-deploy   1/2     2            1           57m
[root@master tomcat-deploy]# vim tomcat-deploy.yml
[root@master tomcat-deploy]# kubectl apply -f tomcat-deploy.yml
deployment.extensions/tomcat-deploy configured
[root@master tomcat-deploy]# kubectl get deployment
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
tomcat-deploy   3/4     4            3           57m
[root@master tomcat-deploy]# kubectl get deployment
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
tomcat-deploy   4/4     4            4           58m
[root@master tomcat-deploy]# kubectl get pod -o wide
NAME                             READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
tomcat-deploy-86d7f5d577-47zht   1/1     Running   0          26s   10.244.2.9    node2   <none>           <none>
tomcat-deploy-86d7f5d577-76k4c   1/1     Running   0          26s   10.244.1.12   node1   <none>           <none>
tomcat-deploy-86d7f5d577-bgdnk   1/1     Running   0          68s   10.244.2.8    node2   <none>           <none>
tomcat-deploy-86d7f5d577-rr8zx   1/1     Running   0          68s   10.244.1.11   node1   <none>           <none>
```
By default, the K8S scheduler places new pods on the least-loaded nodes.
10. Project: building a shopping-mall application on K8S¶
10.1 Environment preparation¶
```
[root@master beiqin]# pwd
/usr/local/beiqin
[root@master beiqin]# ll
total 16
-rw-r--r-- 1 root root 568 Apr 18 17:59 beiqin-app-deploy.yml
-rw-r--r-- 1 root root 191 Apr 18 17:59 beiqin-app-service.yml
-rw-r--r-- 1 root root 576 Apr 18 17:59 beiqin-db-deploy.yml
-rw-r--r-- 1 root root 192 Apr 18 17:59 beiqin-db-service.yml
drwxr-xr-x 2 root root  51 Apr 19 13:37 dist
drwxr-xr-x 2 root root  24 Apr 19 13:37 sql
```
```
[root@master beiqin]# vim /etc/exports
[root@master beiqin]# cat /etc/exports
/usr/local/data/www-data 192.168.71.133/24(rw,sync)
/usr/local/beiqin/dist 192.168.71.133/24(rw,sync)
/usr/local/beiqin/sql 192.168.71.133/24(rw,sync)
[root@master beiqin]# systemctl restart nfs
[root@master beiqin]# systemctl restart rpcbind
[root@master beiqin]# exportfs
/usr/local/data/www-data
                192.168.71.133/24
/usr/local/beiqin/dist
                192.168.71.133/24
/usr/local/beiqin/sql
                192.168.71.133/24
[root@master beiqin]# showmount -e localhost
clnt_create: RPC: Program not registered
[root@master beiqin]# systemctl restart rpcbind
[root@master beiqin]# systemctl restart nfs
[root@master beiqin]# showmount -e localhost
Export list for localhost:
/usr/local/beiqin/sql      192.168.71.133/24
/usr/local/beiqin/dist     192.168.71.133/24
/usr/local/data/www-data   192.168.71.133/24
# Mount on the clients
[root@node1 ~]# mkdir /usr/local/beiqin-dist
[root@node1 ~]# mkdir /usr/local/beiqin-sql
[root@node1 ~]# mount master:/usr/local/beiqin/dist /usr/local/beiqin-dist
[root@node1 ~]# mount master:/usr/local/beiqin/sql /usr/local/beiqin-sql
[root@node1 ~]# ls /usr/local/beiqin-dist/
application.yml  beiqin-app.jar
[root@node1 ~]# ls /usr/local/beiqin-sql
beiqin.sql
```
10.2 Deploying MySQL¶
beiqin-db-deploy.yml — note the hostPath volume mounted at /docker-entrypoint-initdb.d: the mysql image runs any .sql files found in that directory on first startup, which is how beiqin.sql gets imported automatically.
```
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: beiqin-db-deploy
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: beiqin-db-deploy
    spec:
      volumes:
      - name: beiqin-db-volume
        hostPath:
          path: /usr/local/beiqin-sql
      containers:
      - name: beiqin-db-deploy
        image: mysql:5.7
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "root"
        volumeMounts:
        - name: beiqin-db-volume
          mountPath: /docker-entrypoint-initdb.d
```

```
[root@master beiqin]# kubectl create -f beiqin-db-deploy.yml
deployment.apps/beiqin-db-deploy created
[root@master beiqin]# kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
beiqin-db-deploy-c7785f9d4-bhmc9   1/1     Running   0          3s
# Verify
[root@master beiqin]# kubectl exec -it beiqin-db-deploy-c7785f9d4-bhmc9 bash
root@beiqin-db-deploy-c7785f9d4-bhmc9:/# mysql -uroot -proot
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.29 MySQL Community Server (GPL)

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| beiqin             |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.00 sec)

mysql> use beiqin;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> show tables;
+---------------------+
| Tables_in_beiqin    |
+---------------------+
| t_category          |
| t_evaluate          |
| t_goods             |
| t_goods_cover       |
| t_goods_detail      |
| t_goods_param       |
| t_promotion_seckill |
+---------------------+
7 rows in set (0.00 sec)
```
Deploy the database Service:
```
[root@master beiqin]# cat beiqin-db-service.yml
apiVersion: v1
kind: Service
metadata:
  name: beiqin-db-service
  labels:
    app: beiqin-db-service
spec:
  selector:
    app: beiqin-db-deploy
  ports:
  - port: 3310
    targetPort: 3306
```
Check the Service:
```
[root@master beiqin]# kubectl get service
NAME                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
beiqin-db-service   ClusterIP   10.104.83.21     <none>        3310/TCP   9s
kubernetes          ClusterIP   10.96.0.1        <none>        443/TCP    16h
tomcat-service      ClusterIP   10.100.169.124   <none>        8000/TCP   5h7m
[root@master beiqin]# kubectl get svc
NAME                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
beiqin-db-service   ClusterIP   10.104.83.21     <none>        3310/TCP   13s
kubernetes          ClusterIP   10.96.0.1        <none>        443/TCP    16h
tomcat-service      ClusterIP   10.100.169.124   <none>        8000/TCP   5h7m
[root@master beiqin]# kubectl get svc beiqin-db-service
NAME                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
beiqin-db-service   ClusterIP   10.104.83.21   <none>        3310/TCP   3m31s
```
`kubectl get service` can be abbreviated to `kubectl get svc`.
```
[root@master beiqin]# kubectl describe service beiqin-db-service
Name:              beiqin-db-service
Namespace:         default
Labels:            app=beiqin-db-service
Annotations:       kubectl.kubernetes.io/last-applied-configuration:
                     {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"beiqin-db-service"},"name":"beiqin-db-service","namespac...
Selector:          app=beiqin-db-deploy
Type:              ClusterIP
IP:                10.104.83.21
Port:              <unset>  3310/TCP
TargetPort:        3306/TCP
Endpoints:         10.244.1.20:3306
Session Affinity:  None
Events:            <none>
# Test the port
[root@master beiqin]# telnet 10.104.83.21 3310
Trying 10.104.83.21...
Connected to 10.104.83.21.
Escape character is '^]'.
J
5.7.29dQtCU`xu !u2o o}mSmysql_native_password^CConnection closed by foreign host
```

The unreadable bytes are the raw MySQL handshake packet — proof that port 3310 on the Service really reaches MySQL's 3306.
10.3 Deploying the application (embedded Tomcat)¶
- The Deployment file
```
[root@master beiqin]# cat beiqin-app-deploy.yml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: beiqin-app-deploy
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: beiqin-app-deploy
    spec:
      volumes:
      - name: beqin-app-volume
        hostPath:
          path: /usr/local/beiqin-dist
      containers:
      - name: beiqin-app-deploy
        image: openjdk:8u222-jre
        command: ["/bin/sh"]
        args: ["-c","cd /usr/local/beiqin-dist;java -jar beiqin-app.jar"]
        volumeMounts:
        - name: beqin-app-volume
          mountPath: /usr/local/beiqin-dist
```
- The Service file
```
[root@master beiqin]# cat beiqin-app-service.yml
apiVersion: v1
kind: Service
metadata:
  name: beiqin-app-service
  labels:
    app: beiqin-app-service
spec:
  selector:
    app: beiqin-app-deploy
  ports:
  - port: 80
    targetPort: 80
```
Deploy:
```
[root@master beiqin]# kubectl apply -f beiqin-app-deploy.yml
[root@master beiqin]# kubectl get pod -o wide
NAME                                 READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
beiqin-app-deploy-596fd7c4dd-9bvmf   1/1     Running   0          19m   10.244.1.21   node1   <none>           <none>
beiqin-app-deploy-596fd7c4dd-v6gcw   1/1     Running   0          54s   10.244.1.22   node1   <none>           <none>
beiqin-db-deploy-c7785f9d4-bhmc9     1/1     Running   0          32m   10.244.1.20   node1   <none>           <none>
```
Check the logs:
```
[root@master beiqin]# kubectl logs -f beiqin-app-deploy-596fd7c4dd-9bvmf

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v2.0.4.RELEASE)

2020-04-19 09:06:36.605  INFO 6 --- [           main] com.itlaoqi.babytun.BabytunApplication   : Starting BabytunApplication v0.0.1-SNAPSHOT on beiqin-app-deploy-596fd7c4dd-9bvmf with PID 6 (/usr/local/beiqin-dist/beiqin-app.jar started by root in /usr/local/beiqin-dist)
2020-04-19 09:06:36.622  INFO 6 --- [           main] com.itlaoqi.babytun.BabytunApplication   : No active profile set, falling back to default profiles: default
2020-04-19 09:06:36.742  INFO 6 --- [           main] ConfigServletWebServerApplicationContext : Refreshing org.springframework.boot.web.servlet.context.AnnotationConfigServletWebServerApplicationContext@50675690: startup date [Sun Apr 19 09:06:36 UTC 2020]; root of context hierarchy
2020-04-19 09:06:39.300  INFO 6 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat initialized with port(s): 80 (http)
2020-04-19 09:06:39.373  INFO 6 --- [           main] o.apache.catalina.core.StandardService   : Starting service [Tomcat]
2020-04-19 09:06:39.373  INFO 6 --- [           main] org.apache.catalina.core.StandardEngine  : Starting Servlet Engine: Apache Tomcat/8.5.32
2020-04-19 09:06:39.399  INFO 6 --- [ost-startStop-1] o.a.catalina.core.AprLifecycleListener   : The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: [/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib]
2020-04-19 09:06:39.578  INFO 6 --- [ost-startStop-1] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring embedded WebApplicationContext
2020-04-19 09:06:39.579  INFO 6 --- [ost-startStop-1] o.s.web.context.ContextLoader            : Root WebApplicationContext: initialization completed in 2840 ms
2020-04-19 09:06:39.709  INFO 6 --- [ost-startStop-1] o.s.b.w.servlet.ServletRegistrationBean  : Servlet dispatcherServlet mapped to [/]
2020-04-19 09:06:39.717  INFO 6 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'characterEncodingFilter' to: [/*]
2020-04-19 09:06:39.717  INFO 6 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'hiddenHttpMethodFilter' to: [/*]
2020-04-19 09:06:39.717  INFO 6 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'httpPutFormContentFilter' to: [/*]
2020-04-19 09:06:39.717  INFO 6 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'requestContextFilter' to: [/*]
2020-04-19 09:06:40.831  INFO 6 --- [           main] s.w.s.m.m.a.RequestMappingHandlerAdapter : Looking for @ControllerAdvice: org.springframework.boot.web.servlet.context.AnnotationConfigServletWebServerApplicationContext@50675690: startup date [Sun Apr 19 09:06:36 UTC 2020]; root of context hierarchy
2020-04-19 09:06:40.990  INFO 6 --- [           main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/goods],methods=[GET]}" onto public org.springframework.web.servlet.ModelAndView com.itlaoqi.babytun.controller.GoodsController.showGoods(java.lang.Long)
2020-04-19 09:06:40.998  INFO 6 --- [           main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/error],produces=[text/html]}" onto public org.springframework.web.servlet.ModelAndView org.springframework.boot.autoconfigure.web.servlet.error.BasicErrorController.errorHtml(javax.servlet.http.HttpServletRequest,javax.servlet.http.HttpServletResponse)
2020-04-19 09:06:40.998  INFO 6 --- [           main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/error]}" onto public org.springframework.http.ResponseEntity<java.util.Map<java.lang.String, java.lang.Object>> org.springframework.boot.autoconfigure.web.servlet.error.BasicErrorController.error(javax.servlet.http.HttpServletRequest)
2020-04-19 09:06:41.065  INFO 6 --- [           main] o.s.w.s.handler.SimpleUrlHandlerMapping  : Mapped URL path [/webjars/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
2020-04-19 09:06:41.065  INFO 6 --- [           main] o.s.w.s.handler.SimpleUrlHandlerMapping  : Mapped URL path [/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
2020-04-19 09:06:41.432  INFO 6 --- [           main] o.s.ui.freemarker.SpringTemplateLoader   : SpringTemplateLoader for FreeMarker: using resource loader [org.springframework.boot.web.servlet.context.AnnotationConfigServletWebServerApplicationContext@50675690: startup date [Sun Apr 19 09:06:36 UTC 2020]; root of context hierarchy] and template loader path [classpath:/templates/]
2020-04-19 09:06:41.434  INFO 6 --- [           main] o.s.w.s.v.f.FreeMarkerConfigurer         : ClassTemplateLoader for Spring macros added to FreeMarker configuration
2020-04-19 09:06:41.750  INFO 6 --- [           main] o.s.j.e.a.AnnotationMBeanExporter        : Registering beans for JMX exposure on startup
2020-04-19 09:06:41.753  INFO 6 --- [           main] o.s.j.e.a.AnnotationMBeanExporter        : Bean with name 'dataSource' has been autodetected for JMX exposure
2020-04-19 09:06:41.766  INFO 6 --- [           main] o.s.j.e.a.AnnotationMBeanExporter        : Located MBean 'dataSource': registering with JMX server as MBean [com.zaxxer.hikari:name=dataSource,type=HikariDataSource]
2020-04-19 09:06:41.845  INFO 6 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 80 (http) with context path ''
2020-04-19 09:06:41.851  INFO 6 --- [           main] com.itlaoqi.babytun.BabytunApplication   : Started BabytunApplication in 6.005 seconds (JVM running for 6.859)
```

Open a second terminal on the master and test:

```
[root@master ~]# curl 10.244.1.21:80/goods?gid=1788 -I
HTTP/1.1 200
Content-Type: text/html;charset=UTF-8
Content-Language: en
Content-Length: 8009
Date: Sun, 19 Apr 2020 09:27:50 GMT
```

The pod's log then prints:

```
2020-04-19 09:27:23.334  INFO 6 --- [p-nio-80-exec-1] c.i.babytun.controller.GoodsController   : gid:1788
2020-04-19 09:27:23.377  INFO 6 --- [p-nio-80-exec-1] com.zaxxer.hikari.HikariDataSource       : HikariPool-1 - Starting...
2020-04-19 09:27:23.717  INFO 6 --- [p-nio-80-exec-1] com.zaxxer.hikari.HikariDataSource       : HikariPool-1 - Start completed.
2020-04-19 09:27:50.020  INFO 6 --- [p-nio-80-exec-3] c.i.babytun.controller.GoodsController   : gid:1788
```
Overall test
```
[root@master beiqin]# kubectl get pods -o wide
NAME                                 READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
beiqin-app-deploy-596fd7c4dd-9bvmf   1/1     Running   0          30m   10.244.1.21   node1   <none>           <none>
beiqin-app-deploy-596fd7c4dd-v6gcw   1/1     Running   0          11m   10.244.1.22   node1   <none>           <none>
beiqin-db-deploy-c7785f9d4-bhmc9     1/1     Running   0          42m   10.244.1.20   node1   <none>           <none>
[root@master beiqin]# curl 10.244.1.21:80/goods?gid=1788 -I
HTTP/1.1 200
Content-Type: text/html;charset=UTF-8
Content-Language: en
Content-Length: 8009
Date: Sun, 19 Apr 2020 09:34:34 GMT
[root@master beiqin]# curl 10.244.1.22:80/goods?gid=1788 -I
HTTP/1.1 200
Content-Type: text/html;charset=UTF-8
Content-Language: en
Content-Length: 8009
Date: Sun, 19 Apr 2020 09:34:37 GMT
```
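The per-pod curl checks above can also be scripted in one loop. A minimal sketch, assuming `kubectl` access on the master and that the app pods carry the Deployment's pod label `app: beiqin-app-deploy` (an assumption based on the Deployment name):

```shell
# Print the HTTP status code returned by each app pod's /goods endpoint.
# -l selects pods by label; jsonpath extracts their pod IPs space-separated.
for ip in $(kubectl get pods -l app=beiqin-app-deploy \
    -o jsonpath='{.items[*].status.podIP}'); do
  curl -s -o /dev/null -w "%{http_code} ${ip}\n" "http://${ip}:80/goods?gid=1788"
done
```

Every line should report `200`; a non-200 code pinpoints the broken pod by IP.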
- Create a Service
```
[root@master beiqin]# cat beiqin-app-service.yml
apiVersion: v1
kind: Service
metadata:
  name: beiqin-app-service
  labels:
    app: beiqin-app-service
spec:
  selector:
    app: beiqin-app-deploy
  ports:
  - port: 80
    targetPort: 80
[root@master beiqin]# kubectl apply -f beiqin-app-service.yml
service/beiqin-app-service created
[root@master beiqin]# kubectl get pod
NAME                                 READY   STATUS    RESTARTS   AGE
beiqin-app-deploy-596fd7c4dd-9bvmf   1/1     Running   0          32m
beiqin-app-deploy-596fd7c4dd-v6gcw   1/1     Running   0          13m
beiqin-db-deploy-c7785f9d4-bhmc9     1/1     Running   0          44m
[root@master beiqin]# kubectl describe service beiqin-app-service
Name:              beiqin-app-service
Namespace:         default
Labels:            app=beiqin-app-service
Annotations:       kubectl.kubernetes.io/last-applied-configuration:
                     {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"beiqin-app-service"},"name":"beiqin-app-service","namesp...
Selector:          app=beiqin-app-deploy
Type:              ClusterIP
IP:                10.110.131.111
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.21:80,10.244.1.22:80
Session Affinity:  None
Events:            <none>
# Access test (cluster-internal)
[root@master beiqin]# curl 10.110.131.111:80/goods?gid=1788 -I
HTTP/1.1 200
Content-Type: text/html;charset=UTF-8
Content-Language: en
Content-Length: 8009
Date: Sun, 19 Apr 2020 09:38:08 GMT
```

Note that `Endpoints` lists both app pod IPs: the Service selector `app: beiqin-app-deploy` matched the two Deployment pods, so traffic to the ClusterIP 10.110.131.111 is load-balanced across them.
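Besides curling the ClusterIP directly, the Service can be reached by its cluster DNS name from any pod. A sketch using a throwaway pod (the `busybox:1.31` image tag and the `default` namespace are assumptions):

```shell
# Hit the Service through cluster DNS from a temporary busybox pod;
# --rm deletes the pod automatically when the command exits.
kubectl run dns-test --rm -it --image=busybox:1.31 --restart=Never -- \
  wget -qO- "http://beiqin-app-service.default.svc.cluster.local/goods?gid=1788"
```

If this prints the goods page HTML, both kube-dns/CoreDNS resolution and the Service's endpoint routing are working.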
- External access
```
[root@master beiqin]# cat /etc/rinetd.conf
0.0.0.0 80 10.110.131.111 80
[root@master beiqin]# rinetd -c /etc/rinetd.conf
```

rinetd forwards port 80 on all of the master host's interfaces to port 80 of the Service's ClusterIP (10.110.131.111), so the app becomes reachable from outside the cluster via the master's IP.
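rinetd is one way to expose the ClusterIP; a more Kubernetes-native alternative (a sketch, not part of the original setup) is to change the Service to type `NodePort`, which opens the same port on every node. The `nodePort` value 30080 below is an assumed choice within the default 30000-32767 range:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: beiqin-app-service
  labels:
    app: beiqin-app-service
spec:
  type: NodePort            # expose the Service on every node's IP
  selector:
    app: beiqin-app-deploy
  ports:
  - port: 80                # ClusterIP port
    targetPort: 80          # container port
    nodePort: 30080         # assumed external port
```

After `kubectl apply`, the app would be reachable at any node IP on port 30080, e.g. `http://192.168.71.133:30080/goods?gid=1788`, with no extra forwarding daemon on the host.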