
Now it's time to set up the master node of the Kubernetes cluster.
I'm going to build the cluster with kubeadm, and in the previous post I finished the base configuration and installed the required packages on each node.

Summary: Kubernetes master node setup

1. Starting Kubernetes

systemctl enable --now kubelet

2. Initializing the cluster with kubeadm

kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.0.2.10

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

1. Starting Kubernetes

Kubernetes has only been installed so far, so now it's time to start it.

[root@k8s-master ~]# systemctl enable --now kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

Checking the status, though, the service had not started properly and was showing a failed state.

[root@k8s-master ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since 일 2021-08-29 02:16:05 KST; 1s ago
     Docs: https://kubernetes.io/docs/
  Process: 1682 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
 Main PID: 1682 (code=exited, status=1/FAILURE)

 8월 29 02:16:05 k8s-master systemd[1]: kubelet.service: main process exited, code=exited, status=1/FAILURE
 8월 29 02:16:05 k8s-master systemd[1]: Unit kubelet.service entered failed state.
 8월 29 02:16:05 k8s-master systemd[1]: kubelet.service failed.

Looking into why this happens, it turns out to be expected behavior: until kubeadm init runs, the kubelet has no configuration file (/var/lib/kubelet/config.yaml) to start from, so it keeps exiting and restarting. The service should come up on its own once the cluster is initialized, so I went ahead and ran kubeadm init.

 

2. Initializing the cluster with kubeadm

--pod-network-cidr specifies the address range for the pod network; each pod is automatically assigned an IP from this range when it is created.
--apiserver-advertise-address is the IP address the API server advertises and listens on.
If it is not set, the address of the default network interface is used.

I had to look these options up a couple of times before they fully made sense.
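Which address kubeadm would pick when --apiserver-advertise-address is omitted can be checked ahead of time: it uses the address of the interface that holds the default route. A quick sketch for seeing that address (the exact `ip route get` output format can vary slightly between distributions, so the awk parsing is best-effort):

```shell
# Ask the kernel which source address it would use for outbound
# traffic, i.e. the address on the default-route interface
ip -4 route get 8.8.8.8 | awk '{for (i = 1; i < NF; i++) if ($i == "src") print $(i + 1)}'
```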

[root@k8s-master ~]# kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.0.2.10
[init] Using Kubernetes version: v1.22.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.2.10]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [10.0.2.10 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [10.0.2.10 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 10.002892 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: n93a9u.gt377l7fuxiaeiri
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.2.10:6443 --token n93a9u.gt377l7fuxiaeiri \
        --discovery-token-ca-cert-hash sha256:d5952e815ffe5c7a23dbf147d35b45b4dc0a06a4220c746f1868018d3a4450d9

The log shows that initialization completed, along with the follow-up steps and commands to run next.
And the kubelet service that had failed to start earlier was now running normally after the kubeadm init.

[root@k8s-master ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since 일 2021-08-29 02:59:16 KST; 2min 21s ago
     Docs: https://kubernetes.io/docs/
 Main PID: 5076 (kubelet)
    Tasks: 13
   Memory: 79.9M
   CGroup: /system.slice/kubelet.service
           └─5076 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernete...

 8월 29 03:01:12 k8s-master kubelet[5076]: E0829 03:01:12.258660    5076 kubelet.go:2332] "Container runtime network ...lized"
 8월 29 03:01:16 k8s-master kubelet[5076]: I0829 03:01:16.392021    5076 cni.go:239] "Unable to update cni config" er...net.d"
 8월 29 03:01:17 k8s-master kubelet[5076]: E0829 03:01:17.282869    5076 kubelet.go:2332] "Container runtime network ...lized"
 8월 29 03:01:21 k8s-master kubelet[5076]: I0829 03:01:21.393246    5076 cni.go:239] "Unable to update cni config" er...net.d"
 8월 29 03:01:22 k8s-master kubelet[5076]: E0829 03:01:22.313679    5076 kubelet.go:2332] "Container runtime network ...lized"
 8월 29 03:01:26 k8s-master kubelet[5076]: I0829 03:01:26.399095    5076 cni.go:239] "Unable to update cni config" er...net.d"
 8월 29 03:01:27 k8s-master kubelet[5076]: E0829 03:01:27.351435    5076 kubelet.go:2332] "Container runtime network ...lized"
 8월 29 03:01:31 k8s-master kubelet[5076]: I0829 03:01:31.400451    5076 cni.go:239] "Unable to update cni config" er...net.d"
 8월 29 03:01:32 k8s-master kubelet[5076]: E0829 03:01:32.394727    5076 kubelet.go:2332] "Container runtime network ...lized"
 8월 29 03:01:36 k8s-master kubelet[5076]: I0829 03:01:36.402156    5076 cni.go:239] "Unable to update cni config" er...net.d"
Hint: Some lines were ellipsized, use -l to show in full.
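The repeated "Unable to update cni config" and "Container runtime network not ready" messages in the log above are expected at this point: no pod network (CNI) add-on has been installed yet, which is exactly the "deploy a pod network" step the init output mentions. They should stop once a network add-on matching the pod CIDR is applied; for example, Flannel's default configuration uses the same 10.244.0.0/16 range passed to kubeadm init:

```shell
# Install the Flannel CNI add-on (its default network matches
# --pod-network-cidr=10.244.0.0/16); run on the master once
# kubectl is configured
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
```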

After that, I followed the remaining steps from the output.

[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master ~]# ls -rlt ~/.kube/config
-rw-------. 1 root root 5633  8월 29 03:07 /root/.kube/config

These commands create a .kube directory under the home directory, copy admin.conf into it as config, and change its ownership to the current user.
This is done so that kubectl can authenticate to the cluster: kubectl reads $HOME/.kube/config by default, and admin.conf holds the cluster's admin credentials.

For the root user, simply exporting the KUBECONFIG environment variable also works.
Note, though, that a plain export disappears when you log back in...

[root@k8s-master ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
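To make the variable survive re-login, the export can be appended to the login shell's profile (a sketch assuming bash; the profile filename differs for other shells):

```shell
# Persist KUBECONFIG across logins by adding it to root's profile
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bash_profile

# Load it into the current session as well
. ~/.bash_profile
echo "$KUBECONFIG"
```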

And now the important part!!

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.2.10:6443 --token n93a9u.gt377l7fuxiaeiri \
        --discovery-token-ca-cert-hash sha256:d5952e815ffe5c7a23dbf147d35b45b4dc0a06a4220c746f1868018d3a4450d9

The join command printed at the end, including the token, has to be run on the worker nodes, so keep it handy.
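That said, losing the join command is not fatal. Tokens expire after 24 hours by default, and a fresh join command can be printed on the master at any time; the --discovery-token-ca-cert-hash value can also be recomputed from the cluster's CA certificate with the standard openssl pipeline from the kubeadm docs:

```shell
# Print a ready-to-use join command with a newly created token
kubeadm token create --print-join-command

# Recompute the --discovery-token-ca-cert-hash value (sha256 of the
# CA's DER-encoded public key) from the cluster CA certificate
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex
```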

Now, let's go set up the worker nodes!
