These are my notes from testing done during Gasida's Database Operator In Kubernetes (DOIK) study.
- Architecture summary
- Installation script walkthrough
- Deployment with the AWS CLI (Windows CMD)
- Post-install verification
- Deploying and verifying sample pods
1. Architecture Summary
- The user network consists of two public subnets and two private subnets.
- Vanilla Kubernetes is installed through the EC2 UserData section of the CloudFormation template (script execution); no managed service such as AWS EKS is used.
- Kubernetes v1.23.6, Flannel CNI (VXLAN mode, ENI source/destination check disabled), CRI: containerd, StorageClasses: local-path (hostPath) and nfs-subdir (EFS).
- The test cluster consists of one master node and three worker nodes; the topology can be confirmed after deployment with the check below.
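A minimal sanity check of this topology, assuming the cluster has already been fully deployed and you are logged in to k8s-m:
$ kubectl get nodes -o wide     # expect k8s-m plus k8s-w1~w3, all Ready, VERSION v1.23.6, CONTAINER-RUNTIME containerd
$ kubectl version --short       # client and server should both report v1.23.x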
2. Installation Script Walkthrough
▶ init2.sh : applied to every EC2 instance; base configuration such as CRI installation, account setup (root / Pa55W0rd), and tools such as bat
#!/bin/bash -xe
echo ">>>> Initial Config Start <<<<"
echo "[TASK 1] Setting Root Password"
printf "Pa55W0rd\nPa55W0rd\n" | passwd
echo "[TASK 2] Setting Sshd Config"
sed -i "s/^PasswordAuthentication no/PasswordAuthentication yes/g" /etc/ssh/sshd_config
sed -i "s/^#PermitRootLogin prohibit-password/PermitRootLogin yes/g" /etc/ssh/sshd_config
systemctl restart sshd
echo > .ssh/authorized_keys
echo "[TASK 3] Change Timezone & Setting Profile & Bashrc"
# Change Timezone
ln -sf /usr/share/zoneinfo/Asia/Seoul /etc/localtime
# Setting Profile & Bashrc
echo 'alias vi=vim' >> /etc/profile
echo "sudo su -" >> /home/ubuntu/.bashrc
echo "[TASK 4] Disable ufw & AppArmor"
systemctl stop ufw && systemctl disable ufw
systemctl stop apparmor && systemctl disable apparmor
echo "[TASK 5] Install Packages"
apt update && apt install -y tree jq sshpass bridge-utils net-tools bat exa duf nfs-common sysstat
echo "alias cat='batcat --paging=never'" >> /etc/profile
echo "[TASK 6] Setting Local DNS Using Hosts file"
echo "192.168.10.10 k8s-m" >> /etc/hosts
echo "192.168.10.101 k8s-w1" >> /etc/hosts
echo "192.168.10.102 k8s-w2" >> /etc/hosts
echo "192.168.20.103 k8s-w3" >> /etc/hosts
#echo "192.168.20.104 k8s-w4" >> /etc/hosts
echo "[TASK 7] Install containerd.io"
# Install Runtime - Containerd https://kubernetes.io/docs/setup/production-environment/container-runtimes/
cat <<EOF > /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
modprobe overlay
modprobe br_netfilter
cat <<EOF > /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl -p
sysctl --system
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list
apt-get update
apt-get install containerd.io -y
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
echo "[TASK 8] Using the systemd cgroup driver"
#sed -i'' -r -e "/runc.options/a\ SystemdCgroup = true" /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
systemctl restart containerd
echo "[TASK 9] Install Kubernetes components (kubeadm, kubelet and kubectl)"
curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
apt-get update && apt-get install -y kubelet=$KUBERNETES_VERSION-00 kubectl=$KUBERNETES_VERSION-00 kubeadm=$KUBERNETES_VERSION-00
apt-mark hold kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
echo "[TASK 10] Git Clone"
git clone https://github.com/gasida/DOIK.git /root/DOIK
find /root/DOIK -regex ".*\.\(sh\)" -exec chmod 700 {} \;
cp /root/DOIK/1/final2.sh /root/final.sh
echo ">>>> Initial Config End <<<<"
▶ master.sh : Kubernetes cluster initialization and convenience settings; Pod CIDR 172.16.0.0/16, Service CIDR 10.200.1.0/24
#!/bin/bash -xe
echo ">>>> K8S Controlplane config Start <<<<"
echo "[TASK 1] Initial Kubernetes - Pod CIDR 172.16.0.0/16 , Service CIDR 10.200.1.0/24 , API Server 192.168.10.10"
kubeadm init --token 123456.1234567890123456 --token-ttl 0 --pod-network-cidr=172.16.0.0/16 --apiserver-advertise-address=192.168.10.10 --service-cidr 10.200.1.0/24
echo "[TASK 2] Setting kube config file"
mkdir -p /root/.kube
cp -i /etc/kubernetes/admin.conf /root/.kube/config
chown $(id -u):$(id -g) /root/.kube/config
echo "[TASK 3] Source the completion"
echo 'source <(kubectl completion bash)' >> /etc/profile
echo "[TASK 4] Alias kubectl to k"
echo 'alias k=kubectl' >> /etc/profile
echo 'complete -F __start_kubectl k' >> /etc/profile
echo "[TASK 5] Install Kubectx & Kubens"
git clone https://github.com/ahmetb/kubectx /opt/kubectx
ln -s /opt/kubectx/kubens /usr/local/bin/kubens
ln -s /opt/kubectx/kubectx /usr/local/bin/kubectx
echo "[TASK 6] Install Kubeps"
git clone https://github.com/jonmosco/kube-ps1.git /root/kube-ps1
cat <<"EOT" >> /root/.bash_profile
source /root/kube-ps1/kube-ps1.sh
KUBE_PS1_SYMBOL_ENABLE=true
KUBE_PS1_SYMBOL_DEFAULT=🐱
function get_cluster_short() {
echo "$1" | cut -d . -f1
}
KUBE_PS1_CLUSTER_FUNCTION=get_cluster_short
KUBE_PS1_SUFFIX=') '
PS1='$(kube_ps1)'$PS1
EOT
echo "[TASK 7] Install Packages"
apt install kubetail etcd-client -y
echo "[TASK 8] Install Helm"
curl -s https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
# echo "[TASK 9] Config NFS Server"
# apt install -y nfs-kernel-server
# mkdir /nfs4-share
# echo '/nfs4-share *(rw,sync,no_root_squash,no_subtree_check)' >> /etc/exports
# systemctl enable nfs-server
# exportfs -r && exportfs -v
echo ">>>> K8S Controlplane Config End <<<<"
▶ worker.sh : join the master node and fetch the kubeconfig file
#!/bin/bash -xe
echo ">>>> K8S Node config Start <<<<"
echo "[TASK 1] K8S Controlplane Join - API Server 192.168.10.10"
kubeadm join --token 123456.1234567890123456 --discovery-token-unsafe-skip-ca-verification 192.168.10.10:6443
echo "[TASK 2] Config kubeconfig"
mkdir -p /root/.kube
sshpass -p "Pa55W0rd" scp -o StrictHostKeyChecking=no root@k8s-m:/etc/kubernetes/admin.conf /root/.kube/config
echo "[TASK 3] Source the completion"
echo 'source <(kubectl completion bash)' >> /etc/profile
echo "[TASK 4] Alias kubectl to k"
echo 'alias k=kubectl' >> /etc/profile
echo 'complete -F __start_kubectl k' >> /etc/profile
echo "[TASK 5] Install Helm"
curl -s https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
echo "[TASK 5] Install calicoctl Tool"
curl -L https://github.com/projectcalico/calico/releases/download/v3.22.2/calicoctl-linux-amd64 -o calicoctl
chmod +x calicoctl && mv calicoctl /usr/bin
echo ">>>> K8S Node config End <<<<"
▶ final2.sh : installs the CNI/CSI and other add-ons; tolerations are set so that the critical pods can run on the k8s-m node, which is needed for worker-node failover testing
#!/bin/bash -xe
echo ">>>> K8S Final config Start <<<<"
echo "[TASK 9] Install Flannel CNI"
#kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/gasida/DOIK/main/1/kube-flannel-v0.18.0.yml
echo "sleep 3"
sleep 3
echo "[TASK 10] Setting PS1"
kubectl config rename-context "kubernetes-admin@kubernetes" "DOIK-Lab"
echo "[TASK 11] Install Metrics server on k8s-m node - v0.6.1"
kubectl apply -f https://raw.githubusercontent.com/gasida/DOIK/main/1/metrics-server.yaml
echo "[TASK 12] Dynamically provisioning persistent local storage with Kubernetes on k8s-m node - v0.0.22"
kubectl apply -f https://raw.githubusercontent.com/gasida/DOIK/main/1/local-path-storage.yaml
kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
echo "[TASK 13] NFS External Provisioner on AWS EFS - v4.0.16"
# https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner
printf 'tolerations: [{key: node-role.kubernetes.io/master, operator: Exists, effect: NoSchedule}]\n' | \
helm install nfs-provisioner -n kube-system nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
--set nfs.server=$(cat /root/efs.txt) --set nfs.path=/ --set nodeSelector."kubernetes\.io/hostname"=k8s-m \
--values /dev/stdin
echo ">>>> K8S Final Config End <<<<"
3. Deployment with the AWS CLI (Windows CMD)
▶ Download the YAML template
$ curl -O https://s3.ap-northeast-2.amazonaws.com/cloudformation.cloudneta.net/K8S/cloudneta-k8s-3.yaml
▶ Deploy the CloudFormation stack
$ aws cloudformation deploy --template-file cloudneta-k8s-3.yaml --stack-name k8s-test --parameter-overrides KeyName=K8S-KEY SgIngressCidr=xxx.xxx.xxx.xxx(your IP)/32
Waiting for changeset to be created..
Waiting for stack create/update to complete
Successfully created/updated stack - k8s-test
▶ (Optional) Change the EC2 instance type (WorkerNodeInstanceType=t3.xlarge) and the EC2 EBS volume size
$ aws cloudformation deploy --template-file cloudneta-k8s-3.yaml --stack-name myk8s --parameter-overrides KeyName=kp-gasida SgIngressCidr=$(curl -s ipinfo.io/ip)/32 WorkerNodeInstanceType=t3.xlarge Ec2EbsVolumeSize=200
▶ Print the EC2 public IPs once the CloudFormation stack deployment is complete
$ aws cloudformation describe-stacks --stack-name myk8s --query "Stacks[*].Outputs[*].OutputValue" --output text
3.34.50.102 13.125.223.50 13.125.10.57 3.35.22.98 # master node and worker nodes 1~3, respectively (the order may vary)
$ aws cloudformation describe-stacks --stack-name myk8s --query "Stacks[*].Outputs[0].OutputValue" --output text
3.34.50.102 # master node
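Since the first Output is the master node's public IP, the SSH login in the next step can be scripted in one line; this assumes the key file K8S-KEY.pem matching the KeyName parameter is in the current directory (root password login with Pa55W0rd is also enabled by init2.sh):
$ ssh -i K8S-KEY.pem ubuntu@$(aws cloudformation describe-stacks --stack-name myk8s --query "Stacks[*].Outputs[0].OutputValue" --output text)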
▶ SSH into the server and confirm that the initial deployment has finished
$ sudo tail -f /var/log/cloud-init-output.log
No VM guests are running outdated hypervisor (qemu) binaries on this host.
[TASK 8] Install Helm
Downloading https://get.helm.sh/helm-v3.9.0-linux-amd64.tar.gz
Verifying checksum... Done.
Preparing to install helm into /usr/local/bin
helm installed into /usr/local/bin/helm
[TASK 9] Create Directory
>>>> K8S Controlplane Config End <<<<
Cloud-init v. 22.2-0ubuntu1~22.04.2 finished at Fri, 24 Jun 2022 11:13:54 +0000. Datasource DataSourceEc2Local. Up 166.08 seconds
▶ Run the final script
$ ./final.sh
+ echo '>>>> K8S Final config Start <<<<'
>>>> K8S Final config Start <<<<
+ echo '[TASK 9] Install Flannel CNI'
[TASK 9] Install Flannel CNI
+ kubectl apply -f https://raw.githubusercontent.com/gasida/DOIK/main/1/kube-flannel-v0.18.0.yml
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
+ echo 'sleep 3'
sleep 3
+ sleep 3
+ echo '[TASK 10] Setting PS1'
[TASK 10] Setting PS1
+ kubectl config rename-context kubernetes-admin@kubernetes DOIK-Lab
Context "kubernetes-admin@kubernetes" renamed to "DOIK-Lab".
+ echo '[TASK 11] Install Metrics server on k8s-m node - v0.6.1'
[TASK 11] Install Metrics server on k8s-m node - v0.6.1
+ kubectl apply -f https://raw.githubusercontent.com/gasida/DOIK/main/1/metrics-server.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
+ echo '[TASK 12] Dynamically provisioning persistent local storage with Kubernetes on k8s-m node - v0.0.22'
[TASK 12] Dynamically provisioning persistent local storage with Kubernetes on k8s-m node - v0.0.22
+ kubectl apply -f https://raw.githubusercontent.com/gasida/DOIK/main/1/local-path-storage.yaml
namespace/local-path-storage created
serviceaccount/local-path-provisioner-service-account created
clusterrole.rbac.authorization.k8s.io/local-path-provisioner-role created
clusterrolebinding.rbac.authorization.k8s.io/local-path-provisioner-bind created
deployment.apps/local-path-provisioner created
storageclass.storage.k8s.io/local-path created
configmap/local-path-config created
+ kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
storageclass.storage.k8s.io/local-path patched
+ echo '[TASK 13] NFS External Provisioner on AWS EFS - v4.0.16'
[TASK 13] NFS External Provisioner on AWS EFS - v4.0.16
+ helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner
"nfs-subdir-external-provisioner" has been added to your repositories
+ printf 'tolerations: [{key: node-role.kubernetes.io/master, operator: Exists, effect: NoSchedule}]\n'
++ cat /root/efs.txt
+ helm install nfs-provisioner -n kube-system nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --set nfs.server=fs-055a50dfd99cce432.efs.ap-northeast-2.amazonaws.com --set nfs.path=/ --set 'nodeSelector.kubernetes\.io/hostname=k8s-m' --values /dev/stdin
NAME: nfs-provisioner
LAST DEPLOYED: Fri Jun 24 20:18:32 2022
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
+ echo '[TASK 14] K8S v1.24 : k8s-m node config taint & label'
[TASK 14] K8S v1.24 : k8s-m node config taint & label
+ kubectl taint node k8s-m node-role.kubernetes.io/control-plane-
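Whether any scheduling taint still remains on k8s-m can be checked directly (the taint key differs across kubeadm versions — node-role.kubernetes.io/master in 1.23, node-role.kubernetes.io/control-plane from 1.24 — so inspecting the node spec covers both):
$ kubectl describe node k8s-m | grep -i taint
$ kubectl get node k8s-m -o jsonpath='{.spec.taints}'    # empty output means no remaining NoSchedule taints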
4. Post-Install Verification
▶ Check basic EC2 information
$ hostnamectl
Static hostname: k8s-m
Icon name: computer-vm
Chassis: vm
Machine ID: ec2bdde774be2746f8e85918a3e2cbff
Boot ID: a6ed2b08c9b741228034fe6900740df8
Virtualization: amazon
Operating System: Ubuntu 22.04 LTS
Kernel: Linux 5.15.0-1013-aws
Architecture: x86-64
Hardware Vendor: Amazon EC2
Hardware Model: t3.large
$ cat /etc/hosts
───────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
│ File: /etc/hosts
───────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
1 │ 127.0.0.1 localhost
2 │
3 │ # The following lines are desirable for IPv6 capable hosts
4 │ ::1 ip6-localhost ip6-loopback
5 │ fe00::0 ip6-localnet
6 │ ff00::0 ip6-mcastprefix
7 │ ff02::1 ip6-allnodes
8 │ ff02::2 ip6-allrouters
9 │ ff02::3 ip6-allhosts
10 │ 192.168.10.10 k8s-m
11 │ 192.168.10.101 k8s-w1
12 │ 192.168.10.102 k8s-w2
13 │ 192.168.20.103 k8s-w3
───────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
$ ip -br -c -4 addr
lo UNKNOWN 127.0.0.1/8
ens5 UP 192.168.10.10/24 metric 100
flannel.1 UNKNOWN 172.16.0.0/32
cni0 UP 172.16.0.1/24
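flannel.1 is the VXLAN tunnel interface and cni0 is the bridge the pods attach to; the VXLAN parameters and the forwarding entries toward the other nodes can be inspected with standard iproute2 tools:
$ ip -d link show flannel.1      # vxlan id 1, local 192.168.10.10, dstport 8472
$ bridge fdb show dev flannel.1  # one permanent entry per remote node's VTEP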
▶ Check the StorageClasses
$ kubectl get storageclasses
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
local-path (default) rancher.io/local-path Delete WaitForFirstConsumer false 3m37s
nfs-client cluster.local/nfs-provisioner-nfs-subdir-external-provisioner Delete Immediate true 3m35s
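A throwaway PVC is an easy way to exercise the default local-path class (a minimal sketch; the name testpvc is arbitrary):
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: testpvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
EOF
$ kubectl get pvc testpvc      # stays Pending (WaitForFirstConsumer) until a pod mounts it
$ kubectl delete pvc testpvc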
▶ Check the volumes
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
loop0 7:0 0 25.1M 1 loop /snap/amazon-ssm-agent/5656
loop1 7:1 0 55.5M 1 loop /snap/core18/2409
loop2 7:2 0 61.9M 1 loop /snap/core20/1518
loop3 7:3 0 79.9M 1 loop /snap/lxd/22923
loop4 7:4 0 47M 1 loop /snap/snapd/16010
nvme0n1 259:0 0 50G 0 disk
├─nvme0n1p1 259:1 0 49.9G 0 part /
├─nvme0n1p14 259:2 0 4M 0 part
└─nvme0n1p15 259:3 0 106M 0 part /boot/efi
▶ Check the AWS EFS volume mount: the EFS filesystem ID will differ per lab environment
$ duf -only local,network
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ 2 local devices │
├────────────┬────────┬──────┬───────┬───────────────────────────────┬──────┬─────────────────┤
│ MOUNTED ON │ SIZE │ USED │ AVAIL │ USE% │ TYPE │ FILESYSTEM │
├────────────┼────────┼──────┼───────┼───────────────────────────────┼──────┼─────────────────┤
│ / │ 48.3G │ 3.6G │ 44.7G │ [#...................] 7.4% │ ext4 │ /dev/root │
│ /boot/efi │ 104.4M │ 5.2M │ 99.1M │ [....................] 5.0% │ vfat │ /dev/nvme0n1p15 │
╰────────────┴────────┴──────┴───────┴───────────────────────────────┴──────┴─────────────────╯
╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ 2 network devices │
├────────────────────────────────────────────────────────────────────────────┬──────┬──────┬───────┬──────┬──────┬─────────────────────────────────────────────────────────┤
│ MOUNTED ON │ SIZE │ USED │ AVAIL │ USE% │ TYPE │ FILESYSTEM │
├────────────────────────────────────────────────────────────────────────────┼──────┼──────┼───────┼──────┼──────┼─────────────────────────────────────────────────────────┤
│ /nfs4-share │ 8.0E │ 0B │ 8.0E │ │ nfs4 │ fs-055a50dfd99cce432.efs.ap-northeast-2.amazonaws.com:/ │
│ /var/lib/kubelet/pods/a73c4722-bbcc-40c0-97a4-71e0ad7f9dc1/volumes/kuberne │ 8.0E │ 0B │ 8.0E │ │ nfs4 │ fs-055a50dfd99cce432.efs.ap-northeast-2.amazonaws.com:/ │
│ tes.io~nfs/nfs-subdir-external-provisioner-root │ │ │ │ │ │ │
╰────────────────────────────────────────────────────────────────────────────┴──────┴──────┴───────┴──────┴──────┴─────────────────────────────────────────────────────────╯
5. Deploying and Verifying Sample Pods
▶ Review the YAML file
$ cat /root/DOIK/1/1-3pods.yaml
───────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
│ File: /root/DOIK/1/1-3pods.yaml
───────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
1 │ apiVersion: v1
2 │ kind: Pod
3 │ metadata:
4 │ name: testpod1
5 │ labels:
6 │ app: testpod1
7 │ spec:
8 │ nodeName: k8s-w1
9 │ containers:
10 │ - name: netshoot-pod
11 │ image: nicolaka/netshoot
12 │ command: ["tail"]
13 │ args: ["-f", "/dev/null"]
14 │ terminationGracePeriodSeconds: 0
15 │ ---
16 │ apiVersion: v1
17 │ kind: Pod
18 │ metadata:
19 │ name: testpod2
20 │ labels:
21 │ app: testpod2
22 │ spec:
23 │ nodeName: k8s-w2
24 │ containers:
25 │ - name: netshoot-pod
26 │ image: nicolaka/netshoot
27 │ command: ["tail"]
28 │ args: ["-f", "/dev/null"]
29 │ terminationGracePeriodSeconds: 0
30 │ ---
31 │ apiVersion: v1
32 │ kind: Pod
33 │ metadata:
34 │ name: testpod3
35 │ labels:
36 │ app: testpod3
37 │ spec:
38 │ nodeName: k8s-w3
39 │ containers:
40 │ - name: netshoot-pod
41 │ image: nicolaka/netshoot
42 │ command: ["tail"]
43 │ args: ["-f", "/dev/null"]
44 │ terminationGracePeriodSeconds: 0
───────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
▶ Deploy the pods
$ kubectl apply -f /root/DOIK/1/1-3pods.yaml
pod/testpod1 created
pod/testpod2 created
pod/testpod3 created
▶ Check the pods: pod IPs will differ per lab environment
$ kubectl get pod -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
testpod1 1/1 Running 0 29s 172.16.1.2 k8s-w1 <none> <none>
testpod2 1/1 Running 0 29s 172.16.2.3 k8s-w2 <none> <none>
testpod3 1/1 Running 0 29s 172.16.3.3 k8s-w3 <none> <none>
▶ Assign the pod IPs to variables: do this only after the pods have reached the Running state
$ POD1=$(kubectl get pod testpod1 -o jsonpath={.status.podIP})
$ POD2=$(kubectl get pod testpod2 -o jsonpath={.status.podIP})
$ POD3=$(kubectl get pod testpod3 -o jsonpath={.status.podIP})
▶ Ping testpod2 and testpod3 from testpod1
$ kubectl exec -it testpod1 -- ping -c 1 $POD2
PING 172.16.2.3 (172.16.2.3) 56(84) bytes of data.
64 bytes from 172.16.2.3: icmp_seq=1 ttl=62 time=1.16 ms
--- 172.16.2.3 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.160/1.160/1.160/0.000 ms
$ kubectl exec -it testpod1 -- ping -c 1 $POD3
PING 172.16.3.3 (172.16.3.3) 56(84) bytes of data.
64 bytes from 172.16.3.3: icmp_seq=1 ttl=62 time=1.27 ms
--- 172.16.3.3 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.266/1.266/1.266/0.000 ms
▶ Open a zsh shell in testpod1
$ kubectl exec -it testpod1 -- zsh
dP dP dP
88 88 88
88d888b. .d8888b. d8888P .d8888b. 88d888b. .d8888b. .d8888b. d8888P
88' `88 88ooood8 88 Y8ooooo. 88' `88 88' `88 88' `88 88
88 88 88. ... 88 88 88 88 88. .88 88. .88 88
dP dP `88888P' dP `88888P' dP dP `88888P' `88888P' dP
Welcome to Netshoot! (github.com/nicolaka/netshoot)
testpod1 ~
▶ Check external connectivity
~ ping -c 1 www.google.com
PING www.google.com (142.250.207.4) 56(84) bytes of data.
64 bytes from nrt13s54-in-f4.1e100.net (142.250.207.4): icmp_seq=1 ttl=103 time=31.5 ms
--- www.google.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 31.505/31.505/31.505/0.000 ms
~ curl -s ipinfo.io ## public IP used for outbound internet access
{
"ip": "3.38.178.162",
"hostname": "ec2-3-38-178-162.ap-northeast-2.compute.amazonaws.com",
"city": "Seoul",
"region": "Seoul",
"country": "KR",
"loc": "37.5660,126.9784",
"org": "AS16509 Amazon.com, Inc.",
"postal": "03141",
"timezone": "Asia/Seoul",
"readme": "https://ipinfo.io/missingauth"
~ curl wttr.in/seoul ## check the weather
Weather report: seoul
\ / Partly cloudy
_ /"".-. +26(27) °C
\_( ). ↗ 13 km/h
/(___(__) 10 km
0.0 mm
┌─────────────┐
┌──────────────────────────────┬───────────────────────┤ Fri 24 Jun ├───────────────────────┬──────────────────────────────┐
│ Morning │ Noon └──────┬──────┘ Evening │ Night │
├──────────────────────────────┼──────────────────────────────┼──────────────────────────────┼──────────────────────────────┤
│ _`/"".-. Patchy rain po…│ \ / Partly cloudy │ \ / Sunny │ \ / Clear │
│ ,\_( ). +23(25) °C │ _ /"".-. +28(29) °C │ .-. +28(29) °C │ .-. +22(25) °C │
│ /(___(__) ↗ 12-14 km/h │ \_( ). ↗ 13-15 km/h │ ― ( ) ― ↗ 24-28 km/h │ ― ( ) ― ↗ 14-19 km/h │
│ ‘ ‘ ‘ ‘ 10 km │ /(___(__) 10 km │ `-’ 10 km │ `-’ 10 km │
│ ‘ ‘ ‘ ‘ 0.1 mm | 65% │ 0.0 mm | 0% │ / \ 0.0 mm | 0% │ / \ 0.0 mm | 0% │
└──────────────────────────────┴──────────────────────────────┴──────────────────────────────┴──────────────────────────────┘
┌─────────────┐
┌──────────────────────────────┬───────────────────────┤ Sat 25 Jun ├───────────────────────┬──────────────────────────────┐
│ Morning │ Noon └──────┬──────┘ Evening │ Night │
├──────────────────────────────┼──────────────────────────────┼──────────────────────────────┼──────────────────────────────┤
│ \ / Sunny │ \ / Sunny │ \ / Sunny │ \ / Partly cloudy │
│ .-. +25(26) °C │ .-. +32(34) °C │ .-. +29(31) °C │ _ /"".-. +25(28) °C │
│ ― ( ) ― ↑ 7-8 km/h │ ― ( ) ― ↑ 15-17 km/h │ ― ( ) ― ↗ 17-19 km/h │ \_( ). ↑ 10-14 km/h │
│ `-’ 10 km │ `-’ 10 km │ `-’ 10 km │ /(___(__) 10 km │
│ / \ 0.0 mm | 0% │ / \ 0.0 mm | 0% │ / \ 0.0 mm | 0% │ 0.0 mm | 0% │
└──────────────────────────────┴──────────────────────────────┴──────────────────────────────┴──────────────────────────────┘
┌─────────────┐
┌──────────────────────────────┬───────────────────────┤ Sun 26 Jun ├───────────────────────┬──────────────────────────────┐
│ Morning │ Noon └──────┬──────┘ Evening │ Night │
├──────────────────────────────┼──────────────────────────────┼──────────────────────────────┼──────────────────────────────┤
│ _`/"".-. Moderate or he…│ _`/"".-. Light rain sho…│ \ / Sunny │ Overcast │
│ ,\_( ). 22 °C │ ,\_( ). +28(31) °C │ .-. +29(32) °C │ .--. +26(28) °C │
│ /(___(__) ← 10-15 km/h │ /(___(__) ↖ 14-18 km/h │ ― ( ) ― ↗ 18-21 km/h │ .-( ). ↑ 13-17 km/h │
│ ‚‘‚‘‚‘‚‘ 7 km │ ‘ ‘ ‘ ‘ 10 km │ `-’ 10 km │ (___.__)__) 10 km │
│ ‚’‚’‚’‚’ 3.2 mm | 66% │ ‘ ‘ ‘ ‘ 1.0 mm | 53% │ / \ 0.0 mm | 0% │ 0.0 mm | 0% │
└──────────────────────────────┴──────────────────────────────┴──────────────────────────────┴──────────────────────────────┘
Location: 서울특별시, 대한민국 [37.5666791,126.9782914]
Follow @igor_chubin for wttr.in updates
▶ Delete the pods
$ kubectl delete pod --all
pod "testpod1" deleted
pod "testpod2" deleted
pod "testpod3" deleted
6. Deleting the Test Environment (run with the AWS CLI)
$ aws cloudformation delete-stack --stack-name k8s-test
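Stack deletion is asynchronous; the CLI can block until it actually finishes:
$ aws cloudformation wait stack-delete-complete --stack-name k8s-test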