
Deleting and Creating Clusters with the Tanzu CLI in Tanzu Kubernetes Grid

In this post, I use the Tanzu CLI to delete and then re-create the Kubernetes clusters of VMware Tanzu Kubernetes Grid (TKG).

 

Contents of this post:

  • Environment
  • 1. Deleting the Kubernetes Clusters
  • 2. Creating the Kubernetes Clusters

 

For the parameters of the cluster-definition YAML file, the corresponding sections of the TKG documentation are a useful reference.
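For orientation, the config file is a flat set of key/value pairs. A minimal sketch, not a complete config, using values that appear in this post's examples (written as a shell here-document; the file name is arbitrary):

# write a minimal cluster-config excerpt (values taken from this post's examples)
cat <<'EOF' > sample-cluster-config.yml
CLUSTER_NAME: tkg16mc01
CLUSTER_PLAN: dev
VSPHERE_CONTROL_PLANE_ENDPOINT: 192.168.11.201
VSPHERE_FOLDER: /infra-dc-01/vm/05-Lab-k8s/k8s_lab-tkg-02_demo-01/vm_tkg16mc01
DEPLOY_TKG_ON_VSPHERE7: "true"
EOF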

 

Environment

I take the TKG Management Cluster / Workload Cluster built in an earlier series of posts, delete them with the CLI, and then re-create them.

 

At the start, the Management Cluster and the Workload Cluster already exist.

demo-01 [ ~ ]$ tanzu cluster list --include-management-cluster
  NAME       NAMESPACE   STATUS   CONTROLPLANE  WORKERS  KUBERNETES         ROLES       PLAN  TKR
  tkg16wc01  default     running  1/1           1/1      v1.23.10+vmware.1  <none>      dev   v1.23.10---vmware.1-tkg.1
  tkg16mc01  tkg-system  running  1/1           1/1      v1.23.10+vmware.1  management  dev   v1.23.10---vmware.1-tkg.1

 

In the vSphere Client, you can also see that virtual machines have been created as the Kubernetes nodes.

 

1. Deleting the Kubernetes Clusters

To delete the clusters, first delete all of the Workload Clusters, and then delete the Management Cluster.

 

1-1. Deleting the Workload Cluster

Delete the Workload Cluster "tkg16wc01".

demo-01 [ ~ ]$ tanzu cluster delete tkg16wc01 -y
Workload cluster 'tkg16wc01' is being deleted

 

The Workload Cluster deletion runs asynchronously: the prompt returns immediately, and the cluster's STATUS changes to deleting.

demo-01 [ ~ ]$ tanzu cluster list --include-management-cluster
  NAME       NAMESPACE   STATUS    CONTROLPLANE  WORKERS  KUBERNETES         ROLES       PLAN  TKR
  tkg16wc01  default     deleting  1/1                    v1.23.10+vmware.1  <none>      dev   v1.23.10---vmware.1-tkg.1
  tkg16mc01  tkg-system  running   1/1           1/1      v1.23.10+vmware.1  management  dev   v1.23.10---vmware.1-tkg.1
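To block until the deletion finishes, one simple approach is to poll the cluster list until the cluster disappears; a sketch (the 60-second interval is arbitrary):

# poll until tkg16wc01 no longer appears in the cluster list
while tanzu cluster list | grep -q tkg16wc01; do
  sleep 60
done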

 

Once the deletion completes, the Workload Cluster no longer appears in the list.

demo-01 [ ~ ]$ tanzu cluster list --include-management-cluster
  NAME       NAMESPACE   STATUS   CONTROLPLANE  WORKERS  KUBERNETES         ROLES       PLAN  TKR
  tkg16mc01  tkg-system  running  1/1           1/1      v1.23.10+vmware.1  management  dev   v1.23.10---vmware.1-tkg.1

 

In the vSphere Client, you can confirm that the two virtual machines that made up the Workload Cluster (1 Control Plane, 1 Worker) have been deleted.

 

Also delete the Workload Cluster's kubectl context, "tkg16wc01-admin@tkg16wc01".

demo-01 [ ~ ]$ kubectl config delete-context tkg16wc01-admin@tkg16wc01
warning: this removed your active context, use "kubectl config use-context" to select a different one
deleted context tkg16wc01-admin@tkg16wc01 from /home/demo-01/.kube/config
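Note that delete-context removes only the context entry. To scrub the kubeconfig completely, the cluster and user entries can be removed as well; a sketch, assuming the default entry names TKG writes (the cluster name and "<cluster>-admin"):

# also remove the cluster and credential entries from the kubeconfig
kubectl config delete-cluster tkg16wc01
kubectl config delete-user tkg16wc01-admin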

 

1-2. Deleting the Management Cluster

Delete the Management Cluster "tkg16mc01". Unlike a Workload Cluster deletion, the prompt does not return until the processing completes. As with the Bootstrap Cluster at creation time, a kind-based Cleanup Cluster is created, so this takes a fair amount of time. The Management Cluster's kubectl context is also removed automatically.

demo-01 [ ~ ]$ tanzu management-cluster delete tkg16mc01 -y
Verifying management cluster...
Setting up cleanup cluster...
Installing providers to cleanup cluster...
Fetching providers
Installing cert-manager Version="v1.7.2"
Waiting for cert-manager to be available...
Installing Provider="cluster-api" Version="v1.1.5" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v1.1.5" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="control-plane-kubeadm" Version="v1.1.5" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="infrastructure-vsphere" Version="v1.3.5" TargetNamespace="capv-system"
Moving Cluster API objects from management cluster to cleanup cluster...
Performing move...
Discovering Cluster API objects
Moving Cluster API objects Clusters=1
Moving Cluster API objects ClusterClasses=0
Creating objects in the target cluster
Deleting objects from the source cluster
Waiting for the Cluster API objects to get ready after move...
Deleting management cluster...
Management cluster 'tkg16mc01' deleted.
Deleting the management cluster context from the kubeconfig file '/home/demo-01/.kube/config'

Management cluster deleted!
demo-01 [ ~ ]$

 

Checking in the vSphere Client, the virtual machines have been deleted as well.

 

2. Creating the Kubernetes Clusters

This time I create the Management Cluster with the CLI instead of the Web UI, and then create the Workload Cluster.

 

2-1. Creating the Management Cluster

From the YAML file previously generated via the Web UI (tkg16mc01.yml), create a copy that excludes the parameters left undefined (empty strings).

demo-01 [ ~ ]$ cat tkg16mc01.yml | grep -v ': ""' > tkg16mc01_simple.yml

 

Append DEPLOY_TKG_ON_VSPHERE7: "true" to skip the confirmation prompt about deploying TKG to a vSphere 7 environment without using the Supervisor Cluster.

demo-01 [ ~ ]$ echo 'DEPLOY_TKG_ON_VSPHERE7: "true"' >> tkg16mc01_simple.yml

 

The resulting YAML, tkg16mc01_simple.yml, is published as a gist (gist.github.com). Two notes on its contents:

  • The value Vk13YXJlMSE set for VSPHERE_PASSWORD is VMware1! encoded in Base64.
  • The SSH public key set for VSPHERE_SSH_AUTHORIZED_KEY comes from a key pair generated beforehand with ssh-keygen or similar; handle the corresponding private key with care.
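For reference, a sketch of how those two values can be produced (the key path and empty passphrase are lab-only choices):

# Base64-encode the vSphere password for VSPHERE_PASSWORD
echo -n 'VMware1!' | base64

# generate a key pair; the .pub contents go into VSPHERE_SSH_AUTHORIZED_KEY
ssh-keygen -t rsa -b 4096 -f ~/.ssh/tkg_id_rsa -N ''
cat ~/.ssh/tkg_id_rsa.pub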

 

Create the Management Cluster.

demo-01 [ ~ ]$ tanzu management-cluster create -f ./tkg16mc01_simple.yml

 

After running the command, the prompt returns once the Management Cluster creation completes.

demo-01 [ ~ ]$ tanzu management-cluster create -f ./tkg16mc01_simple.yml

Validating the pre-requisites...

vSphere 7.0 Environment Detected.

You have connected to a vSphere 7.0 environment which does not have vSphere with Tanzu enabled. vSphere with Tanzu includes
an integrated Tanzu Kubernetes Grid Service which turns a vSphere cluster into a platform for running Kubernetes workloads in dedicated
resource pools. Configuring Tanzu Kubernetes Grid Service is done through vSphere HTML5 client.

Tanzu Kubernetes Grid Service is the preferred way to consume Tanzu Kubernetes Grid in vSphere 7.0 environments. Alternatively you may
deploy a non-integrated Tanzu Kubernetes Grid instance on vSphere 7.0.
Deploying TKG management cluster on vSphere 7.0 ...
Identity Provider not configured. Some authentication features won't work.
Using default value for CONTROL_PLANE_MACHINE_COUNT = 1. Reason: CONTROL_PLANE_MACHINE_COUNT variable is not set
Using default value for WORKER_MACHINE_COUNT = 1. Reason: WORKER_MACHINE_COUNT variable is not set

Setting up management cluster...
Validating configuration...
Using infrastructure provider vsphere:v1.3.5
Generating cluster configuration...
Setting up bootstrapper...
Bootstrapper created. Kubeconfig: /home/demo-01/.kube-tkg/tmp/config_F1W0Q2ky
Installing providers on bootstrapper...
Fetching providers
Installing cert-manager Version="v1.7.2"
Waiting for cert-manager to be available...
Installing Provider="cluster-api" Version="v1.1.5" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v1.1.5" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="control-plane-kubeadm" Version="v1.1.5" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="infrastructure-vsphere" Version="v1.3.5" TargetNamespace="capv-system"
Using default value for CONTROL_PLANE_MACHINE_COUNT = 1. Reason: CONTROL_PLANE_MACHINE_COUNT variable is not set
Using default value for WORKER_MACHINE_COUNT = 1. Reason: WORKER_MACHINE_COUNT variable is not set
Management cluster config file has been generated and stored at: '/home/demo-01/.config/tanzu/tkg/clusterconfigs/tkg16mc01.yaml'
Start creating management cluster...
cluster control plane is still being initialized: WaitingForControlPlane
cluster control plane is still being initialized: WaitingForKubeadmInit
cluster control plane is still being initialized: Cloning @ Machine/tkg16mc01-control-plane-xnvk2
cluster control plane is still being initialized: PoweringOn @ Machine/tkg16mc01-control-plane-xnvk2
Saving management cluster kubeconfig into /home/demo-01/.kube/config
Installing providers on management cluster...
Fetching providers
Installing cert-manager Version="v1.7.2"
Waiting for cert-manager to be available...
Installing Provider="cluster-api" Version="v1.1.5" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v1.1.5" TargetNamespace="capi-kubeadm-bootstrap-system"
I0109 16:31:50.281994   16991 request.go:665] Waited for 1.046041936s due to client-side throttling, not priority and fairness, request: GET:https://192.168.11.201:6443/apis/flowcontrol.apiserver.k8s.io/v1beta2?timeout=30s
Installing Provider="control-plane-kubeadm" Version="v1.1.5" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="infrastructure-vsphere" Version="v1.3.5" TargetNamespace="capv-system"
Waiting for the management cluster to get ready for move...
Waiting for addons installation...
Moving all Cluster API objects from bootstrap cluster to management cluster...
Performing move...
Discovering Cluster API objects
Moving Cluster API objects Clusters=1
Moving Cluster API objects ClusterClasses=0
Creating objects in the target cluster
Deleting objects from the source cluster
I0109 16:33:06.972827   16991 request.go:665] Waited for 1.029509781s due to client-side throttling, not priority and fairness, request: GET:https://192.168.11.201:6443/apis/core.antrea.tanzu.vmware.com/v1alpha2?timeout=30s
Waiting for additional components to be up and running...
Waiting for packages to be up and running...
You can now access the management cluster tkg16mc01 by running 'kubectl config use-context tkg16mc01-admin@tkg16mc01'

Management cluster created!


You can now create your first workload cluster by running the following:

  tanzu cluster create [name] -f [file]


Some addons might be getting installed! Check their status by running the following:

  kubectl get apps -A

Checking for required plugins...
All required plugins are already installed and up-to-date
demo-01 [ ~ ]$
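Following the hints at the end of the log, the new Management Cluster can be checked like this (tanzu management-cluster get summarizes the cluster and its components):

# show the management cluster and the health of its components
tanzu management-cluster get

# confirm the add-on packages have reconciled
kubectl get apps -A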

 

2-2. Creating the Workload Cluster

Likewise, from the YAML file previously generated via the Web UI (tkg16wc01.yml), create a copy that excludes the undefined parameters.

demo-01 [ ~ ]$ cat tkg16wc01.yml | grep -v ': ""' > tkg16wc01_simple.yml

 

The resulting YAML, tkg16wc01_simple.yml, is also published as a gist (gist.github.com).

 

The differences between the Management Cluster and Workload Cluster YAML files are as follows.

demo-01 [ ~ ]$ diff tkg16mc01_simple.yml tkg16wc01_simple.yml
5c5
< CLUSTER_NAME: tkg16mc01
---
> CLUSTER_NAME: tkg16wc01
21c21
< VSPHERE_CONTROL_PLANE_ENDPOINT: 192.168.11.201
---
> VSPHERE_CONTROL_PLANE_ENDPOINT: 192.168.11.202
26c26
< VSPHERE_FOLDER: /infra-dc-01/vm/05-Lab-k8s/k8s_lab-tkg-02_demo-01/vm_tkg16mc01
---
> VSPHERE_FOLDER: /infra-dc-01/vm/05-Lab-k8s/k8s_lab-tkg-02_demo-01/vm_tkg16wc01
37d36
< DEPLOY_TKG_ON_VSPHERE7: "true"

 

Create the Workload Cluster.

demo-01 [ ~ ]$ tanzu cluster create -f ./tkg16wc01_simple.yml

 

The prompt returns once the Workload Cluster creation completes.

demo-01 [ ~ ]$ tanzu cluster create -f ./tkg16wc01_simple.yml
Validating configuration...
Warning: Pinniped configuration not found; Authentication via Pinniped will not be set up in this cluster. If you wish to set up Pinniped after the cluster is created, please refer to the documentation.
creating workload cluster 'tkg16wc01'...
waiting for cluster to be initialized...
cluster control plane is still being initialized: WaitingForControlPlane
cluster control plane is still being initialized: Cloning @ Machine/tkg16wc01-control-plane-78gjk
cluster control plane is still being initialized: PoweringOn @ Machine/tkg16wc01-control-plane-78gjk
cluster control plane is still being initialized: WaitingForKubeadmInit
waiting for cluster nodes to be available...
waiting for addons installation...
waiting for packages to be up and running...

Workload cluster 'tkg16wc01' created

demo-01 [ ~ ]$

 

The Management Cluster and the Workload Cluster have now been created.

demo-01 [ ~ ]$ tanzu cluster list --include-management-cluster
  NAME       NAMESPACE   STATUS   CONTROLPLANE  WORKERS  KUBERNETES         ROLES       PLAN  TKR
  tkg16wc01  default     running  1/1           1/1      v1.23.10+vmware.1  <none>      dev   v1.23.10---vmware.1-tkg.1
  tkg16mc01  tkg-system  running  1/1           1/1      v1.23.10+vmware.1  management  dev   v1.23.10---vmware.1-tkg.1

 

Retrieve the Workload Cluster's kubeconfig and switch to its context.

demo-01 [ ~ ]$ tanzu cluster kubeconfig get tkg16wc01 --admin
demo-01 [ ~ ]$ kubectl config use-context tkg16wc01-admin@tkg16wc01
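By default, tanzu cluster kubeconfig get --admin merges the credentials into ~/.kube/config. To keep them in a standalone file instead, the command also takes an export option; a sketch (the file name is arbitrary):

# write the admin kubeconfig to its own file instead of merging it
tanzu cluster kubeconfig get tkg16wc01 --admin --export-file ./tkg16wc01.kubeconfig
kubectl --kubeconfig ./tkg16wc01.kubeconfig get nodes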

 

Now kubectl can access the Workload Cluster.

demo-01 [ ~ ]$ kubectl get nodes
NAME                              STATUS   ROLES                  AGE     VERSION
tkg16wc01-control-plane-78gjk     Ready    control-plane,master   12m     v1.23.10+vmware.1
tkg16wc01-md-0-55b9fbdcdb-h4qp7   Ready    <none>                 9m34s   v1.23.10+vmware.1
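As a quick smoke test of the new cluster (the deployment name and image here are arbitrary choices, assuming default pod security settings permit it):

# run a throwaway nginx Deployment, check it, then clean up
kubectl create deployment nginx-test --image=nginx
kubectl get pods -l app=nginx-test
kubectl delete deployment nginx-test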

 

That wraps up deleting and creating Kubernetes clusters with the Tanzu CLI.