
[cka] Backup and Restore Methods 2

by Geunny 2024. 7. 12.

1. In this lab environment, you will get to work with multiple Kubernetes clusters, where we will practice backing up and restoring the ETCD database.

 

You will notice that you are logged in to the student-node (instead of the controlplane).
The student-node has the kubectl client and has access to all the Kubernetes clusters that are configured in this lab environment.
Before proceeding to the next question, explore the student-node and the clusters it has access to.

 

student-node ~ ➜  k get nodes
NAME                    STATUS   ROLES           AGE   VERSION
cluster1-controlplane   Ready    control-plane   25m   v1.24.0
cluster1-node01         Ready    <none>          25m   v1.24.0

 

The current context for this account is student-node.

 

3. How many clusters are defined in the kubeconfig on the student-node?
You can make use of the kubectl config command.

 

student-node ~ ➜  k get nodes
NAME                    STATUS   ROLES           AGE   VERSION
cluster1-controlplane   Ready    control-plane   25m   v1.24.0
cluster1-node01         Ready    <none>          25m   v1.24.0
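
Note that the node listing above only shows the nodes of the currently selected cluster. To count the clusters defined in the kubeconfig itself, the kubectl config subcommands are more direct (a minimal sketch using standard kubectl subcommands):

# List every context defined in the kubeconfig; the CLUSTER column
# shows which cluster each context points at
kubectl config get-contexts

# Or print just the cluster entries
kubectl config view -o jsonpath='{.clusters[*].name}'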

 

answer : 2

 

4. How many nodes (both controlplane and worker) are part of cluster1?
Make sure to switch the context to cluster1:

student-node ~ ➜  kubectl config use-context cluster1 # switch context
Switched to context "cluster1".

student-node ~ ➜  k get nodes
NAME                    STATUS   ROLES           AGE   VERSION
cluster1-controlplane   Ready    control-plane   28m   v1.24.0
cluster1-node01         Ready    <none>          27m   v1.24.0

 

answer : 2

 

5. What is the name of the controlplane node in cluster2?
Make sure to switch the context to cluster2:

student-node ~ ➜  kubectl config use-context cluster2
Switched to context "cluster2".

student-node ~ ➜  k get nodes
NAME                    STATUS   ROLES           AGE   VERSION
cluster2-controlplane   Ready    control-plane   29m   v1.24.0
cluster2-node01         Ready    <none>          29m   v1.24.0

 

answer : cluster2-controlplane

 

6. You can SSH to all the nodes (of both clusters) from the student-node. - This is how you access each cluster.

For example:

student-node ~ ➜  ssh cluster1-controlplane
Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 5.4.0-1086-gcp x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage
This system has been minimized by removing packages and content that are
not required on a system that users do not log into.

To restore this content, you can run the 'unminimize' command.

cluster1-controlplane ~ ➜


To get back to the student node, use the logout or exit command, or hit Control+D

cluster1-controlplane ~ ➜  logout
Connection to cluster1-controlplane closed.

student-node ~ ➜

 

7. How is ETCD configured for cluster1?

Remember, you can access the clusters from student-node using the kubectl tool. You can also ssh to the cluster nodes from the student-node.

Make sure to switch the context to cluster1:

student-node ~ ✖ k get nodes
NAME                    STATUS   ROLES           AGE   VERSION
cluster1-controlplane   Ready    control-plane   32m   v1.24.0
cluster1-node01         Ready    <none>          32m   v1.24.0

student-node ~ ➜  ssh cluster1-controlplane
Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 5.4.0-1106-gcp x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage
This system has been minimized by removing packages and content that are
not required on a system that users do not log into.

To restore this content, you can run the 'unminimize' command.

cluster1-controlplane ~ ➜  k get pods
No resources found in default namespace.

cluster1-controlplane ~ ➜  k get pods -A
NAMESPACE     NAME                                            READY   STATUS    RESTARTS      AGE
kube-system   coredns-6d4b75cb6d-794kb                        1/1     Running   0             32m
kube-system   coredns-6d4b75cb6d-n6nxs                        1/1     Running   0             32m
kube-system   etcd-cluster1-controlplane                      1/1     Running   0             32m
kube-system   kube-apiserver-cluster1-controlplane            1/1     Running   0             32m
kube-system   kube-controller-manager-cluster1-controlplane   1/1     Running   0             32m
kube-system   kube-proxy-pfclj                                1/1     Running   0             32m
kube-system   kube-proxy-zbnvt                                1/1     Running   0             32m
kube-system   kube-scheduler-cluster1-controlplane            1/1     Running   0             33m
kube-system   weave-net-lcs89                                 2/2     Running   0             32m
kube-system   weave-net-tjjhc                                 2/2     Running   1 (32m ago)   32m


Connecting to cluster1-controlplane and checking the kube-system pods, we can see that etcd is managed as a pod, which means ETCD is running in a Stacked ETCD topology.

 

answer : Stacked ETCD

 

In a Kubernetes cluster, etcd is the critical component that stores the cluster state. There are several ways to deploy etcd, but two are most common: Stacked etcd and External etcd. It is important to understand the difference between the two and when each is appropriate.

 

The difference between Stacked etcd and External etcd

Stacked etcd

Stacked etcd is a deployment model in which etcd runs on the same nodes as the Kubernetes control plane. In other words, etcd and the Kubernetes API server run on the same physical or virtual machine.

Advantages:

Simple setup: the control plane and etcd live on the same node, so configuration is straightforward.

Better performance: because they are co-located, network latency is minimal.

Disadvantages:

Single point of failure: if a controlplane node fails, etcd is affected along with it.

Resource contention: the control plane and etcd share the same node's resources and can compete for them.

Use cases:

Suitable for small clusters

When a simple, fast deployment is needed

External etcd

External etcd is a deployment model in which etcd runs outside the Kubernetes cluster, on dedicated nodes or as a separate cluster.

Advantages:

High availability: etcd runs independently, so a controlplane node failure does not affect etcd.

Flexible scalability: the etcd cluster can be scaled and maintained independently.

Disadvantages:

Setup complexity: separate nodes or a separate cluster must be provisioned and managed.

Network latency: communication crosses the network, which can introduce latency.

Use cases:

Suitable for large clusters

When high availability and independent management are required
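
In practice, the topology can be identified with two quick checks from the controlplane, which is exactly what questions 7 and 8 do (a minimal sketch, assuming a kubeadm-style setup):

# Stacked topology: etcd shows up as a pod in kube-system
kubectl -n kube-system get pods | grep etcd

# External topology: no etcd pod, but the kube-apiserver process
# points at a remote endpoint via --etcd-servers
ps -ef | grep kube-apiserver | tr ' ' '\n' | grep -- '--etcd-servers'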

 

8. How is ETCD configured for cluster2?
Remember, you can access the clusters from student-node using the kubectl tool. You can also ssh to the cluster nodes from the student-node.
Make sure to switch the context to cluster2:

student-node ~ ➜  kubectl config use-context cluster2
Switched to context "cluster2".

student-node ~ ➜  k get nodes
NAME                    STATUS   ROLES           AGE   VERSION
cluster2-controlplane   Ready    control-plane   38m   v1.24.0
cluster2-node01         Ready    <none>          38m   v1.24.0

student-node ~ ➜  ssh cluster2-controlplane
Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 5.4.0-1106-gcp x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage
This system has been minimized by removing packages and content that are
not required on a system that users do not log into.

To restore this content, you can run the 'unminimize' command.

cluster2-controlplane ~ ➜  k get pods -A | grep etcd

cluster2-controlplane ~ ✖ ps -ef | grep etcd
root        1762    1340  0 13:23 ?        00:01:38 kube-apiserver --advertise-address=192.26.193.9 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.pem --etcd-certfile=/etc/kubernetes/pki/etcd/etcd.pem --etcd-keyfile=/etc/kubernetes/pki/etcd/etcd-key.pem --etcd-servers=https://192.26.193.21:2379 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-account-signing-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
root        7422    7253  0 14:03 pts/0    00:00:00 grep etcd

 

etcd is not managed by the kubelet here (there is no etcd pod in kube-system), but ps shows the kube-apiserver configured with --etcd-servers pointing at a separate host, so cluster2 is running an External ETCD topology.

answer : External ETCD

 

9. What is the IP address of the External ETCD datastore used in cluster2?

cluster2-controlplane ~ ✖ ps -ef | grep etcd
root        1762    1340  0 13:23 ?        00:01:38 kube-apiserver --advertise-address=192.26.193.9 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.pem --etcd-certfile=/etc/kubernetes/pki/etcd/etcd.pem --etcd-keyfile=/etc/kubernetes/pki/etcd/etcd-key.pem --etcd-servers=https://192.26.193.21:2379 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-account-signing-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key

# --etcd-servers=https://192.26.193.21:2379 #

answer : 192.26.193.21:2379

 

10. What is the default data directory used for the ETCD datastore in cluster1?
Remember, this cluster uses a Stacked ETCD topology.

student-node ~ ✖ kubectl config use-context cluster1
Switched to context "cluster1".

student-node ~ ➜  kubectl -n kube-system describe pod etcd-cluster1-controlplane | grep data-dir
      --data-dir=/var/lib/etcd


answer :  /var/lib/etcd
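
The same value can also be read straight from the static pod manifest on the controlplane (a sketch; /etc/kubernetes/manifests/etcd.yaml is the default location on kubeadm clusters):

ssh cluster1-controlplane
grep data-dir /etc/kubernetes/manifests/etcd.yaml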

 

11. From the new terminal you can now SSH from the student-node to either the IP of the ETCD datastore (that you identified in the previous questions) OR the hostname etcd-server:

 

-> The etcd-server running outside the cluster can be reached via ssh.

 

12. What is the default data directory used for the ETCD datastore in cluster2?
Remember, this cluster uses an External ETCD topology.

 

student-node ~ ➜  ssh etcd-server
Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 5.4.0-1106-gcp x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage
This system has been minimized by removing packages and content that are
not required on a system that users do not log into.

To restore this content, you can run the 'unminimize' command.
Last login: Fri Jul 12 14:09:29 2024 from 192.26.193.4

etcd-server ~ ➜  ps -ef | grep etcd | grep data-dir
etcd         807       1  0 13:23 ?        00:00:40 /usr/local/bin/etcd --name etcd-server --data-dir=/var/lib/etcd-data --cert-file=/etc/etcd/pki/etcd.pem --key-file=/etc/etcd/pki/etcd-key.pem --peer-cert-file=/etc/etcd/pki/etcd.pem --peer-key-file=/etc/etcd/pki/etcd-key.pem --trusted-ca-file=/etc/etcd/pki/ca.pem --peer-trusted-ca-file=/etc/etcd/pki/ca.pem --peer-client-cert-auth --client-cert-auth --initial-advertise-peer-urls https://192.26.193.21:2380 --listen-peer-urls https://192.26.193.21:2380 --advertise-client-urls https://192.26.193.21:2379 --listen-client-urls https://192.26.193.21:2379,https://127.0.0.1:2379 --initial-cluster-token etcd-cluster-1 --initial-cluster etcd-server=https://192.26.193.21:2380 --initial-cluster-state new

 

answer : /var/lib/etcd-data

 

13. How many nodes are part of the ETCD cluster that etcd-server is a part of?

 

https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#securing-communication

 


 

etcd-server ~ ✖ ETCDCTL_API=3 etcdctl \
>  --endpoints=https://127.0.0.1:2379 \
>  --cacert=/etc/etcd/pki/ca.pem \
>  --cert=/etc/etcd/pki/etcd.pem \
>  --key=/etc/etcd/pki/etcd-key.pem \
>   member list
7f088452a37de24b, started, etcd-server, https://192.26.193.21:2380, https://192.26.193.21:2379, false

 

answer : 1

 

14. Take a backup of etcd on cluster1 and save it on the student-node at the path /opt/cluster1.db.
If needed, make sure to set the context to cluster1 (on the student-node):

 

https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#snapshot-using-etcdctl-options

 


 

student-node ~ ✖ kubectl config use-context cluster1
Switched to context "cluster1".

student-node ~ ➜  k describe po -n kube-system etcd-cluster1-controlplane 
Name:                 etcd-cluster1-controlplane
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 cluster1-controlplane/192.26.193.6
Start Time:           Fri, 12 Jul 2024 13:24:34 +0000
Labels:               component=etcd
                      tier=control-plane
Annotations:          kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.26.193.6:2379
                      kubernetes.io/config.hash: 67de76649aeaeec7db64fd5485736131
                      kubernetes.io/config.mirror: 67de76649aeaeec7db64fd5485736131
                      kubernetes.io/config.seen: 2024-07-12T13:24:33.376327935Z
                      kubernetes.io/config.source: file
                      seccomp.security.alpha.kubernetes.io/pod: runtime/default
Status:               Running
IP:                   192.26.193.6
IPs:
  IP:           192.26.193.6
Controlled By:  Node/cluster1-controlplane
Containers:
  etcd:
    Container ID:  containerd://11292d80eb83a4cf246c1dbb31ec761d94abccbd5f850d194b57eb1d0c93537b
    Image:         k8s.gcr.io/etcd:3.5.3-0
    Image ID:      k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5
    Port:          <none>
    Host Port:     <none>
    Command:
      etcd
      --advertise-client-urls=https://192.26.193.6:2379
      --cert-file=/etc/kubernetes/pki/etcd/server.crt
      --client-cert-auth=true
      --data-dir=/var/lib/etcd
      --experimental-initial-corrupt-check=true
      --initial-advertise-peer-urls=https://192.26.193.6:2380
      --initial-cluster=cluster1-controlplane=https://192.26.193.6:2380
      --key-file=/etc/kubernetes/pki/etcd/server.key
      --listen-client-urls=https://127.0.0.1:2379,https://192.26.193.6:2379
      --listen-metrics-urls=http://127.0.0.1:2381
      --listen-peer-urls=https://192.26.193.6:2380
      --name=cluster1-controlplane
      --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
      --peer-client-cert-auth=true
      --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
      --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
      --snapshot-count=10000
      --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt

 

Information needed for the etcd backup:

ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=<trusted-ca-file> \
  --cert=<cert-file> --key=<key-file> \
  snapshot save <backup-file-location>

 

trusted-ca-file : /etc/kubernetes/pki/etcd/ca.crt

cert-file : /etc/kubernetes/pki/etcd/server.crt

key-file : /etc/kubernetes/pki/etcd/server.key

backup-file-location : /opt/cluster1.db (given in the question)

student-node ~ ✖ k get nodes
NAME                    STATUS   ROLES           AGE   VERSION
cluster1-controlplane   Ready    control-plane   55m   v1.24.0
cluster1-node01         Ready    <none>          55m   v1.24.0

student-node ~ ➜  ssh cluster1-controlplane
Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 5.4.0-1106-gcp x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage
This system has been minimized by removing packages and content that are
not required on a system that users do not log into.

To restore this content, you can run the 'unminimize' command.
Last login: Fri Jul 12 14:07:40 2024 from 192.26.193.3

cluster1-controlplane ~ ➜  ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
>   --cacert=/etc/kubernetes/pki/etcd/ca.crt \
>   --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key \
>   snapshot save /opt/cluster1.db
Snapshot saved at /opt/cluster1.db

 

After exiting the node, copy the snapshot file from the controlplane to the student-node (the current context) using scp.

 

cluster1-controlplane ~ ➜  exit
logout
Connection to cluster1-controlplane closed.

student-node ~ ➜   scp cluster1-controlplane:/opt/cluster1.db /opt
cluster1.db                                  100% 2100KB 114.2MB/s   00:00
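
Optionally, the copied snapshot can be sanity-checked before it is ever needed (a minimal sketch, assuming etcdctl is also installed on the student-node; snapshot status reports the hash, revision, total keys, and size):

ETCDCTL_API=3 etcdctl snapshot status /opt/cluster1.db --write-out=table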

 

15. An ETCD backup for cluster2 is stored at /opt/cluster2.db. Use this snapshot file to carry out a restore on cluster2 to a new path /var/lib/etcd-data-new.
Once the restore is complete, ensure that the controlplane components on cluster2 are running.
The snapshot was taken when there were objects created in the critical namespace on cluster2. These objects should be available post restore.

If needed, make sure to set the context to cluster2 (on the student-node):

 

15-1: Copy the snapshot file stored at /opt/cluster2.db to the external ETCD server using scp.

student-node ~ ➜  scp /opt/cluster2.db etcd-server:/root
cluster2.db                                  100% 2036KB 141.3MB/s   00:00

 

15-2. SSH into etcd-server and run the etcd restore with that .db file. Note that snapshot restore is a local operation that writes a fresh data directory from the snapshot; it does not talk to the running etcd, so the endpoint and certificate flags below are not strictly required.

student-node ~ ➜  ssh etcd-server
Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 5.4.0-1106-gcp x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage
This system has been minimized by removing packages and content that are
not required on a system that users do not log into.

To restore this content, you can run the 'unminimize' command.
Last login: Fri Jul 12 14:10:45 2024 from 192.26.193.3

etcd-server ~ ➜  ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/etcd/pki/ca.pem --cert=/etc/etcd/pki/etcd.pem --key=/etc/etcd/pki/etcd-key.pem snapshot restore /root/cluster2.db --data-dir /var/lib/etcd-data-new
{"level":"info","ts":1720794362.2835734,"caller":"snapshot/v3_snapshot.go:296","msg":"restoring snapshot","path":"/root/cluster2.db","wal-dir":"/var/lib/etcd-data-new/member/wal","data-dir":"/var/lib/etcd-data-new","snap-dir":"/var/lib/etcd-data-new/member/snap"}
{"level":"info","ts":1720794362.3015418,"caller":"mvcc/kvstore.go:388","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":4407}
{"level":"info","ts":1720794362.3081691,"caller":"membership/cluster.go:392","msg":"added member","cluster-id":"cdf818194e3a8c32","local-member-id":"0","added-peer-id":"8e9e05c52164694d","added-peer-peer-urls":["http://localhost:2380"]}
{"level":"info","ts":1720794362.4015892,"caller":"snapshot/v3_snapshot.go:309","msg":"restored snapshot","path":"/root/cluster2.db","wal-dir":"/var/lib/etcd-data-new/member/wal","data-dir":"/var/lib/etcd-data-new","snap-dir":"/var/lib/etcd-data-new/member/snap"}

 

15-3. Edit the etcd service so it points at the new data directory. On this external server, etcd runs as a systemd service (not a static pod), so the unit file is what needs to change.

 

etcd-server ~ ✖ cd /etc
etcd-server /etc ➜  find . -type f -exec grep -l "etcd" {} + # locate the etcd config file
./group
./passwd
./gshadow
./hosts
./shadow
./systemd/system/etcd.service # this is the file we need
./systemd/system/einit.service
./hostname
./mime.types

etcd-server /etc ✖ vi /etc/systemd/system/etcd.service

[Unit]
Description=etcd key-value store
Documentation=https://github.com/etcd-io/etcd
After=network.target

[Service]
User=etcd
Type=notify
ExecStart=/usr/local/bin/etcd \
  --name etcd-server \
  --data-dir=/var/lib/etcd-data \ # change this path -> /var/lib/etcd-data-new
  --cert-file=/etc/etcd/pki/etcd.pem \
  --key-file=/etc/etcd/pki/etcd-key.pem \
  --peer-cert-file=/etc/etcd/pki/etcd.pem \
  --peer-key-file=/etc/etcd/pki/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/pki/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/pki/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth \

 

15-4. Give the etcd user ownership of the new data directory so the service can read it:

etcd-server /etc ➜  chown -R etcd:etcd /var/lib/etcd-data-new

 

15-5. Reload systemd and restart the etcd service:

etcd-server /etc ➜  systemctl daemon-reload
etcd-server /etc ➜  systemctl restart etcd
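
Finally, verify that etcd came back up with the new data directory and that the restored objects are visible (a minimal sketch; the critical namespace comes from the question text):

# On etcd-server: should be active (running) with --data-dir=/var/lib/etcd-data-new
systemctl status etcd

# Back on the student-node: confirm the restored objects exist
kubectl config use-context cluster2
kubectl get all -n critical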
