Deploying YugabyteDB to Kubernetes containers built with OKE on OCI.

These are my notes from the work.

Here I am experimenting with a Pod cluster on Docker Desktop.

Multi-node clusters are not covered.

helm repo add yugabytedb https://charts.yugabyte.com

helm repo update

kubectl create namespace yugabyte

helm install yugabyte yugabytedb/yugabyte --set resource.master.requests.cpu=0.5,resource.master.requests.memory=0.5Gi,resource.tserver.requests.cpu=0.5,resource.tserver.requests.memory=0.5Gi,replicas.master=3,replicas.tserver=3,Image.tag=2024.2.3.2-b6 --namespace yugabyte

1. Get YugabyteDB Pods by running this command:

kubectl --namespace yugabyte get pods

2. Get list of YugabyteDB services that are running:

kubectl --namespace yugabyte get services

kubectl exec --namespace yugabyte -it yb-tserver-0 -- /home/yugabyte/bin/ysqlsh -h yb-tserver-0.yb-tservers.yugabyte

6. Cleanup YugabyteDB Pods

For helm 2:

helm delete yugabyte --purge

For helm 3:

helm delete yugabyte -n yugabyte

NOTE: You need to manually delete the persistent volume

kubectl delete pvc --namespace yugabyte -l app=yb-master

kubectl delete pvc --namespace yugabyte -l app=yb-tserver

NOTE: The yugabyted UI is now available and is enabled by default. It requires version 2.21.0 or greater.

If you are using a custom image of YugabyteDB that is older than 2.21.0, please disable the UI by setting yugabytedUi.enabled to false.
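Instead of a long --set string, the same settings can be kept in a values file. A minimal sketch (the file name yugabyte-values.yaml is my own choice; the keys mirror the --set flags above, including the chart's capital-I Image key):

yugabyte-values.yaml:

resource:
  master:
    requests:
      cpu: 0.5
      memory: 0.5Gi
  tserver:
    requests:
      cpu: 0.5
      memory: 0.5Gi
replicas:
  master: 3
  tserver: 3
Image:
  tag: 2024.2.3.2-b6

helm install yugabyte yugabytedb/yugabyte -f yugabyte-values.yaml --namespace yugabyte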

PS C:\Users\mazin> kubectl --namespace yugabyte get pods

NAME READY STATUS RESTARTS AGE

yb-master-0 0/3 Pending 0 30s

yb-master-1 0/3 Pending 0 30s

yb-master-2 0/3 Pending 0 30s

yb-tserver-0 0/3 Pending 0 30s

yb-tserver-1 0/3 Pending 0 30s

yb-tserver-2 0/3 Pending 0 30s

PS C:\Users\mazin> kubectl --namespace yugabyte get services

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

yb-master-ui LoadBalancer 10.107.228.137 localhost 7000:32531/TCP 50s

yb-masters ClusterIP None <none> 7000/TCP,7100/TCP,15433/TCP 50s

yb-tserver-service LoadBalancer 10.97.190.11 localhost 6379:30979/TCP,9042:30606/TCP,5433:30517/TCP 50s

yb-tservers ClusterIP None <none> 9000/TCP,12000/TCP,11000/TCP,13000/TCP,9100/TCP,6379/TCP,9042/TCP,5433/TCP,15433/TCP 50s

yugabyted-ui-service LoadBalancer 10.98.72.106 localhost 15433:30725/TCP 50s
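Since yb-tserver-service is a LoadBalancer with EXTERNAL-IP localhost, YSQL (port 5433) should also be reachable straight from the host once the pods are Running. A sketch assuming a local psql client (YSQL speaks the PostgreSQL wire protocol; the default user and database are both yugabyte):

psql -h localhost -p 5433 -U yugabyte -d yugabyte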

PS C:\Users\mazin> kubectl exec --namespace yugabyte -it yb-tserver-0 -- /home/yugabyte/bin/ysqlsh -h yb-tserver-0.yb-tservers.yugabyte

Defaulted container "yb-tserver" out of: yb-tserver, yb-cleanup, yugabyted-ui

Error from server (BadRequest): pod yb-tserver-0 does not have a host assigned

・Checking logs

kubectl logs yb-tserver-0 -f -n=yugabyte
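The tserver pod runs three containers (yb-tserver, yb-cleanup, yugabyted-ui, as the "Defaulted container" messages below show), so -c pins the log stream to one of them:

kubectl logs yb-tserver-0 -f -n=yugabyte -c yb-tserver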

・Logging in to a Pod

PS C:\Users\mazin> kubectl exec -n=yugabyte -it yb-tserver-0 -- /bin/bash --login

Defaulted container "yb-tserver" out of: yb-tserver, yb-cleanup, yugabyted-ui

[root@yb-tserver-0 cores]#

kubectl --namespace yugabyte port-forward svc/yb-master-ui 7000:7000

→ Not accessible (the pods were still Pending at this point, so the Service had no ready endpoints)

・SQL command mode

kubectl --namespace yugabyte exec -it yb-tserver-0 -- sh -c "cd /home/yugabyte && ysqlsh -h yb-tserver-0 --echo-queries"

Defaulted container "yb-tserver" out of: yb-tserver, yb-cleanup, yugabyted-ui

ysqlsh (11.2-YB-2024.2.3.2-b0)

Type "help" for help.

・List all tablet servers

[root@yb-tserver-0 cores]# yb-admin -master_addresses 10.1.0.64:7100,10.1.0.58:7100,10.1.0.65:7100 list_all_tablet_servers

Tablet Server UUID RPC Host/Port Heartbeat delay Status Reads/s Writes/s Uptime SST total size SST uncomp size SST #files Memory Broadcast Host/Port

f2b008ac01564be8b2d61efba0ece54c yb-tserver-0.yb-tservers.yugabyte.svc.cluster.local:9100 0.55s ALIVE 0.00 0.00 7482 0 B 0 B 0 53.85 MB yb-tserver-0.yb-tservers.yugabyte.svc.cluster.local:9100

711e5f72311b4bf49bcef1d25d6ca91b yb-tserver-2.yb-tservers.yugabyte.svc.cluster.local:9100 0.50s ALIVE 0.00 0.60 7484 0 B 0 B 0 55.84 MB yb-tserver-2.yb-tservers.yugabyte.svc.cluster.local:9100

10e2fae4437c44bfba0d1b2baff5c57a yb-tserver-1.yb-tservers.yugabyte.svc.cluster.local:9100 0.55s ALIVE 0.00 0.00 7489 0 B 0 B 0 53.46 MB yb-tserver-1.yb-tservers.yugabyte.svc.cluster.local:9100

・List the YB master servers

[root@yb-tserver-0 cores]# yb-admin -master_addresses 10.1.0.64:7100,10.1.0.58:7100,10.1.0.65:7100 list_all_masters

Master UUID RPC Host/Port State Role Broadcast Host/Port

6fccb607e7e941daa9ad9e64e9768dda yb-master-0.yb-masters.yugabyte.svc.cluster.local:7100 ALIVE FOLLOWER yb-master-0.yb-masters.yugabyte.svc.cluster.local:7100

8852847c45294d2283ae50c4b88af4ca yb-master-1.yb-masters.yugabyte.svc.cluster.local:7100 ALIVE FOLLOWER yb-master-1.yb-masters.yugabyte.svc.cluster.local:7100

93e3ac21117e49bcbd1661183554ff72 yb-master-2.yb-masters.yugabyte.svc.cluster.local:7100 ALIVE LEADER yb-master-2.yb-masters.yugabyte.svc.cluster.local:7100

・snapshot list

yb-admin -master_addresses ip1:7100,ip2:7100,ip3:7100 list_snapshots

yb-admin -master_addresses 10.1.0.64:7100,10.1.0.58:7100,10.1.0.65:7100 list_snapshots

PS C:\Users\mazin> kubectl describe pod yb-master-0 -n=yugabyte | findstr IP

IP: 10.1.0.64

IPs:

IP: 10.1.0.64

POD_IP: (v1:status.podIP)

PS C:\Users\mazin> kubectl describe pod yb-master-1 -n=yugabyte | findstr IP

IP: 10.1.0.58

IPs:

IP: 10.1.0.58

POD_IP: (v1:status.podIP)

PS C:\Users\mazin> kubectl describe pod yb-master-2 -n=yugabyte | findstr IP

IP: 10.1.0.65

IPs:

IP: 10.1.0.65

POD_IP: (v1:status.podIP)
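Collecting each pod IP with findstr works, but a jsonpath query can build the whole -master_addresses list at once (a sketch; app=yb-master is the chart's label, as seen in the pod labels and the PVC cleanup commands):

kubectl get pods -n yugabyte -l app=yb-master -o jsonpath="{range .items[*]}{.status.podIP}:7100,{end}"

The stable DNS names (yb-master-0.yb-masters.yugabyte.svc.cluster.local:7100, ...) from list_all_masters should also work and survive pod restarts.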

PS C:\Users\mazin> kubectl exec -n=yugabyte -it yb-tserver-0 -- /bin/bash --login

Defaulted container "yb-tserver" out of: yb-tserver, yb-cleanup, yugabyted-ui

[root@yb-tserver-0 cores]# yb-admin -master_addresses 10.1.0.64:7100,10.1.0.58:7100,10.1.0.65:7100 list_snapshots

No snapshots

No snapshot restorations
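No snapshots exist yet; one can be taken by hand. create_database_snapshot takes a ysql.<database> argument, here the default yugabyte database (a sketch):

yb-admin -master_addresses 10.1.0.64:7100,10.1.0.58:7100,10.1.0.65:7100 create_database_snapshot ysql.yugabyte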

・Checking snapshot schedules

yb-admin -master_addresses 10.1.0.64:7100,10.1.0.58:7100,10.1.0.65:7100 list_snapshot_schedules

[root@yb-tserver-0 cores]# yb-admin -master_addresses 10.1.0.64:7100,10.1.0.58:7100,10.1.0.65:7100 list_snapshot_schedules

{

"schedules": []

}
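The list is empty. create_snapshot_schedule takes an interval and a retention period, both in minutes, plus the same kind of filter. For example, to snapshot the yugabyte YSQL database every 60 minutes and keep each snapshot for 600 minutes:

yb-admin -master_addresses 10.1.0.64:7100,10.1.0.58:7100,10.1.0.65:7100 create_snapshot_schedule 60 600 ysql.yugabyte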

・Commands for checking tablet status

yb-ts-cli [ --server_address=<host>:<port> ] list_tablets

・Show server status

yb-ts-cli [ --server_address=<host>:<port> ] status

yb-ts-cli --server_address=10.1.0.62:9100,10.1.0.63:9100,10.1.0.57:9100 status → the command failed (--server_address takes a single host:port, not a list)

yb-ts-cli --server_address=10.1.0.62:9100 status

PS C:\Users\mazin> kubectl describe pod yb-tserver-0 -n=yugabyte | findstr IP

IP: 10.1.0.62

IPs:

IP: 10.1.0.62

POD_IP: (v1:status.podIP)

PS C:\Users\mazin> kubectl describe pod yb-tserver-1 -n=yugabyte | findstr IP

IP: 10.1.0.63

IPs:

IP: 10.1.0.63

POD_IP: (v1:status.podIP)

PS C:\Users\mazin> kubectl describe pod yb-tserver-2 -n=yugabyte | findstr IP

IP: 10.1.0.57

IPs:

IP: 10.1.0.57

POD_IP: (v1:status.podIP)

[root@yb-tserver-0 cores]# yb-ts-cli --server_address=10.1.0.62:9100 status

node_instance {

permanent_uuid: "f2b008ac01564be8b2d61efba0ece54c"

instance_seqno: 1751773831706082

start_time_us: 1751773831706082

}

bound_rpc_addresses {

host: "10.1.0.62"

port: 9100

}

bound_http_addresses {

host: "0.0.0.0"

port: 9000

}

version_info {

git_hash: "1d80005a21157bd42050615aeca130ff3ff140ef"

build_hostname: "alma8-gcp-jenkins-builder-5yfh0t"

build_timestamp: "18 Jun 2025 22:03:25 UTC"

build_username: "jenkins"

build_clean_repo: true

build_id: "3640"

build_type: "RELEASE"

version_number: "2024.2.3.2"

build_number: "6"

}

[root@yb-tserver-0 cores]# yb-ts-cli --server_address=10.1.0.63:9100 status

node_instance {

permanent_uuid: "10e2fae4437c44bfba0d1b2baff5c57a"

instance_seqno: 1751773825394600

start_time_us: 1751773825394600

}

bound_rpc_addresses {

host: "10.1.0.63"

port: 9100

}

bound_http_addresses {

host: "0.0.0.0"

port: 9000

}

version_info {

git_hash: "1d80005a21157bd42050615aeca130ff3ff140ef"

build_hostname: "alma8-gcp-jenkins-builder-5yfh0t"

build_timestamp: "18 Jun 2025 22:03:25 UTC"

build_username: "jenkins"

build_clean_repo: true

build_id: "3640"

build_type: "RELEASE"

version_number: "2024.2.3.2"

build_number: "6"

}

[root@yb-tserver-0 cores]# yb-ts-cli --server_address=10.1.0.57:9100 status

node_instance {

permanent_uuid: "711e5f72311b4bf49bcef1d25d6ca91b"

instance_seqno: 1751773829747668

start_time_us: 1751773829747668

}

bound_rpc_addresses {

host: "10.1.0.57"

port: 9100

}

bound_http_addresses {

host: "0.0.0.0"

port: 9000

}

version_info {

git_hash: "1d80005a21157bd42050615aeca130ff3ff140ef"

build_hostname: "alma8-gcp-jenkins-builder-5yfh0t"

build_timestamp: "18 Jun 2025 22:03:25 UTC"

build_username: "jenkins"

build_clean_repo: true

build_id: "3640"

build_type: "RELEASE"

version_number: "2024.2.3.2"

build_number: "6"

}
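Instead of checking each tserver by hand, a small loop over the headless-service DNS names from list_all_tablet_servers covers all three (a sketch to run inside a pod, assuming the names resolve there):

for i in 0 1 2; do
  yb-ts-cli --server_address=yb-tserver-$i.yb-tservers.yugabyte.svc.cluster.local:9100 status
done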

PS C:\Users\mazin> kubectl get pvc -n=yugabyte

NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE

datadir0-yb-master-0 Bound pvc-01ecb834-fde5-4eed-9d6e-adf72cbddcb8 10Gi RWO hostpath <unset> 47d

datadir0-yb-master-1 Bound pvc-8fed3ead-ff6e-427d-9fee-d35a42d9a6b9 10Gi RWO hostpath <unset> 47d

datadir0-yb-master-2 Bound pvc-84f69219-42e0-4f12-ad86-776a8560e6b0 10Gi RWO hostpath <unset> 47d

datadir0-yb-tserver-0 Bound pvc-924493ff-5751-487e-9654-1b6dc8381a78 10Gi RWO hostpath <unset> 47d

datadir0-yb-tserver-1 Bound pvc-ce7697d7-d999-49df-9f0a-9157ef97f963 10Gi RWO hostpath <unset> 47d

datadir0-yb-tserver-2 Bound pvc-90064a25-fd48-4f4d-8645-e82ee8158da9 10Gi RWO hostpath <unset> 47d

datadir1-yb-master-0 Bound pvc-a177c311-dfb1-45f1-8754-14dd76325c4d 10Gi RWO hostpath <unset> 47d

datadir1-yb-master-1 Bound pvc-10cc9e06-98de-4a8a-bcb6-29c7eccd6839 10Gi RWO hostpath <unset> 47d

datadir1-yb-master-2 Bound pvc-c4190753-340c-4a60-9e75-8f411cb6f27d 10Gi RWO hostpath <unset> 47d

datadir1-yb-tserver-0 Bound pvc-477c59a9-7dbe-400a-a8dc-0f5c1fe0949f 10Gi RWO hostpath <unset> 47d

datadir1-yb-tserver-1 Bound pvc-0755fa34-836c-4ad6-a471-0048908e187b 10Gi RWO hostpath <unset> 47d

datadir1-yb-tserver-2 Bound pvc-ef81c5fe-f3b5-46ee-a341-3803f849c02b 10Gi RWO hostpath <unset> 47d

・Pod deletion command

kubectl delete pod <pod-name> -n=yugabyte

PS C:\Users\mazin> kubectl delete pod yb-master-0 -n=yugabyte

pod "yb-master-0" deleted

PS C:\Users\mazin>

PS C:\Users\mazin> kubectl get pod -n=yugabyte

NAME READY STATUS RESTARTS AGE

yb-master-0 3/3 Running 0 12s

yb-master-1 3/3 Running 47 (7d22h ago) 55d

yb-master-2 3/3 Running 47 (7d22h ago) 55d

yb-tserver-0 3/3 Running 48 (7d22h ago) 55d

yb-tserver-1 3/3 Running 48 (7d22h ago) 55d

yb-tserver-2 3/3 Running 48 (7d22h ago) 55d

PS C:\Users\mazin> kubectl get secret -n=yugabyte

NAME TYPE DATA AGE

sh.helm.release.v1.yugabyte.v1 helm.sh/release.v1 1 60d

yugabyte-master-gflags Opaque 1 60d

yugabyte-tserver-gflags Opaque 1 60d

PS C:\Users\mazin> kubectl get service -n=yugabyte

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

yb-master-ui LoadBalancer 10.108.152.174 localhost 7000:30282/TCP 60d

yb-masters ClusterIP None <none> 7000/TCP,7100/TCP,15433/TCP 60d

yb-tserver-service LoadBalancer 10.111.156.2 localhost 6379:32580/TCP,9042:30878/TCP,5433:30284/TCP 60d

yb-tservers ClusterIP None <none> 9000/TCP,12000/TCP,11000/TCP,13000/TCP,9100/TCP,6379/TCP,9042/TCP,5433/TCP,15433/TCP 60d

yugabyted-ui-service LoadBalancer 10.99.41.42 localhost 15433:31540/TCP 60d

PS C:\Users\mazin> kubectl get configmap -n=yugabyte

NAME DATA AGE

kube-root-ca.crt 1 60d

yugabyte-master-hooks 6 60d

yugabyte-tserver-hooks 6 60d

・Pod details

PS C:\Users\mazin> kubectl describe pod yb-master-0 -n=yugabyte

Name: yb-master-0

Namespace: yugabyte

Priority: 0

Service Account: default

Node: docker-desktop/192.168.65.3

Start Time: Sun, 24 Aug 2025 11:22:43 +0900

Labels: app=yb-master

apps.kubernetes.io/pod-index=0

chart=yugabyte

component=yugabytedb

controller-revision-hash=yb-master-788f5cb89c

heritage=Helm

release=yugabyte

statefulset.kubernetes.io/pod-name=yb-master-0

yugabytedUi=true

Annotations: checksum/gflags: 34b8186db404bc89508076fb2808b55bf1c379c4958176b39b7a596fc1aa36cc

Status: Running

IP: 10.1.0.151

IPs:

IP: 10.1.0.151

Controlled By: StatefulSet/yb-master

Containers:

yb-master:

Container ID: docker://b33e7b16b88fa56c3bfb5c49abdf609fbe0ffc45b25512cbed0878e952bb33a9

Image: yugabytedb/yugabyte:2024.2.3.2-b6

Image ID: docker-pullable://yugabytedb/yugabyte@sha256:b153366d21408e2e83c8a388b245279b9d72f56800145c6d07b5c8fdaee0e08d

Ports: 7000/TCP, 7100/TCP, 15433/TCP

Host Ports: 0/TCP, 0/TCP, 0/TCP

Command:

/sbin/tini

--

Args:

/bin/bash

-c

if [ -f /home/yugabyte/tools/k8s_preflight.py ]; then

/home/yugabyte/tools/k8s_preflight.py all

fi && \

echo "disk check at: $(date)" \

| tee "/mnt/disk0/disk.check" "/mnt/disk1/disk.check" \

&& sync "/mnt/disk0/disk.check" "/mnt/disk1/disk.check" && \

if [ -f /home/yugabyte/tools/k8s_preflight.py ]; then

PYTHONUNBUFFERED="true" /home/yugabyte/tools/k8s_preflight.py \

dnscheck \

--addr="${HOSTNAME}.yb-masters.${NAMESPACE}.svc.cluster.local" \

--port="7100"

fi && \

if [ -f /home/yugabyte/tools/k8s_preflight.py ]; then

PYTHONUNBUFFERED="true" /home/yugabyte/tools/k8s_preflight.py \

dnscheck \

--addr="${HOSTNAME}.yb-masters.${NAMESPACE}.svc.cluster.local:7100" \

--port="7100"

fi && \

if [ -f /home/yugabyte/tools/k8s_preflight.py ]; then

PYTHONUNBUFFERED="true" /home/yugabyte/tools/k8s_preflight.py \

dnscheck \

--addr="0.0.0.0" \

--port="7000"

fi && \

if [[ -f /home/yugabyte/tools/k8s_parent.py ]]; then

k8s_parent="/home/yugabyte/tools/k8s_parent.py"

else

k8s_parent=""

fi && \

mkdir -p /tmp/yugabyte/master/conf && \

envsubst < /opt/master/conf/server.conf.template > /tmp/yugabyte/master/conf/server.conf && \

exec ${k8s_parent} /home/yugabyte/bin/yb-master \

--flagfile /tmp/yugabyte/master/conf/server.conf

State: Running

Started: Mon, 01 Sep 2025 20:59:45 +0900

Last State: Terminated

Reason: Error

Exit Code: 137

Started: Thu, 28 Aug 2025 21:49:45 +0900

Finished: Thu, 28 Aug 2025 22:15:25 +0900

Ready: True

Restart Count: 2

Limits:

cpu: 2

memory: 2Gi

Requests:

cpu: 500m

memory: 512Mi

Liveness: exec [bash -v -c echo "disk check at: $(date)" \

| tee "/mnt/disk0/disk.check" "/mnt/disk1/disk.check" \

&& sync "/mnt/disk0/disk.check" "/mnt/disk1/disk.check";

exit_code="$?";

echo "disk check exited with: ${exit_code}";

exit "${exit_code}"

] delay=0s timeout=1s period=10s #success=1 #failure=3

Environment:

POD_IP: (v1:status.podIP)

HOSTNAME: yb-master-0 (v1:metadata.name)

NAMESPACE: yugabyte (v1:metadata.namespace)

YBDEVOPS_CORECOPY_DIR: /mnt/disk0/cores

Mounts:

/mnt/disk0 from datadir0 (rw)

/mnt/disk1 from datadir1 (rw)

/opt/debug_hooks_config from debug-hooks-volume (rw)

/opt/master/conf from master-gflags (rw)

/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qx8lv (ro)

yb-cleanup:

Container ID: docker://0ca3c3efd73a4797d9be1445bb479a147d39e00347da3188bdbdcff8aea5c3da

Image: yugabytedb/yugabyte:2024.2.3.2-b6

Image ID: docker-pullable://yugabytedb/yugabyte@sha256:b153366d21408e2e83c8a388b245279b9d72f56800145c6d07b5c8fdaee0e08d

Port: <none>

Host Port: <none>

Command:

/sbin/tini

--

Args:

/bin/bash

-c

while true; do

sleep 3600;

/home/yugabyte/scripts/log_cleanup.sh;

done

State: Running

Started: Mon, 01 Sep 2025 20:59:48 +0900

Last State: Terminated

Reason: Error

Exit Code: 143

Started: Thu, 28 Aug 2025 21:49:48 +0900

Finished: Thu, 28 Aug 2025 22:15:24 +0900

Ready: True

Restart Count: 2

Environment:

USER: yugabyte

Mounts:

/home/yugabyte/ from datadir0 (rw,path="yb-data")

/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qx8lv (ro)

/var/yugabyte/cores from datadir0 (rw,path="cores")

yugabyted-ui:

Container ID: docker://2999dec1949e6ea1e72f7c7ba69512efd1fea06e9816afe375f1e4476e9ee03b

Image: yugabytedb/yugabyte:2024.2.3.2-b6

Image ID: docker-pullable://yugabytedb/yugabyte@sha256:b153366d21408e2e83c8a388b245279b9d72f56800145c6d07b5c8fdaee0e08d

Port: <none>

Host Port: <none>

Command:

/sbin/tini

--

Args:

/bin/bash

-c

while true; do

/home/yugabyte/bin/yugabyted-ui \

-database_host=${HOSTNAME}.yb-masters.${NAMESPACE}.svc.cluster.local \

-bind_address=0.0.0.0 \

-ysql_port=5433 \

-ycql_port=9042 \

-master_ui_port=7000 \

-tserver_ui_port=9000 \

|| echo "ERROR: yugabyted-ui failed. This might be because your yugabyte \

version is older than 2.21.0. If this is the case, set yugabytedUi.enabled to false \

in helm to disable yugabyted-ui, or upgrade to a version 2.21.0 or newer."; \

echo "Attempting restart in 30s."

trap break TERM INT; \

sleep 30s & wait; \

trap - TERM INT;

done \

State: Running

Started: Mon, 01 Sep 2025 20:59:51 +0900

Last State: Terminated

Reason: Error

Exit Code: 143

Started: Thu, 28 Aug 2025 21:49:50 +0900

Finished: Thu, 28 Aug 2025 22:15:24 +0900

Ready: True

Restart Count: 2

Environment:

HOSTNAME: yb-master-0 (v1:metadata.name)

NAMESPACE: yugabyte (v1:metadata.namespace)

Mounts:

/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qx8lv (ro)

Conditions:

Type Status

PodReadyToStartContainers True

Initialized True

Ready True

ContainersReady True

TokenExpirationSeconds: 3607

ConfigMapName: kube-root-ca.crt

ConfigMapOptional: <nil>

DownwardAPI: true

QoS Class: Burstable

Node-Selectors: <none>

Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s

node.kubernetes.io/unreachable:NoExecute op=Exists for 300s

Events:

Type Reason Age From Message

---- ------ ---- ---- -------

Warning Unhealthy 32m (x3 over 148m) kubelet Liveness probe failed: command timed out: "bash -v -c echo \"disk check at: $(date)\" \\\n | tee \"/mnt/disk0/disk.check\" \"/mnt/disk1/disk.check\" \\\n && sync \"/mnt/disk0/disk.check\" \"/mnt/disk1/disk.check\";\nexit_code=\"$?\";\necho \"disk check exited with: ${exit_code}\";\nexit \"${exit_code}\"\n" timed out after 1s

・Monitoring

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml

namespace/kubernetes-dashboard created

serviceaccount/kubernetes-dashboard created

service/kubernetes-dashboard created

secret/kubernetes-dashboard-certs created

secret/kubernetes-dashboard-csrf created

secret/kubernetes-dashboard-key-holder created

configmap/kubernetes-dashboard-settings created

role.rbac.authorization.k8s.io/kubernetes-dashboard created

clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created

rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created

clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created

deployment.apps/kubernetes-dashboard created

service/dashboard-metrics-scraper created

Warning: spec.template.metadata.annotations[seccomp.security.alpha.kubernetes.io/pod]: non-functional in v1.27+; use the "seccompProfile" field instead

deployment.apps/dashboard-metrics-scraper created

PS C:\Users\mazin> kubectl proxy

Starting to serve on 127.0.0.1:8001

→ Accessing it returned the following response:

http://localhost:8001/

{

"paths": [

"/.well-known/openid-configuration",

"/api",

"/api/v1",

"/apis",

"/apis/",

"/apis/admissionregistration.k8s.io",

"/apis/admissionregistration.k8s.io/v1",

"/apis/apiextensions.k8s.io",

"/apis/apiextensions.k8s.io/v1",

"/apis/apiregistration.k8s.io",

"/apis/apiregistration.k8s.io/v1",

"/apis/apps",

"/apis/apps/v1",

"/apis/authentication.k8s.io",

"/apis/authentication.k8s.io/v1",

"/apis/authorization.k8s.io",

"/apis/authorization.k8s.io/v1",

"/apis/autoscaling",

"/apis/autoscaling/v1",

"/apis/autoscaling/v2",

"/apis/batch",

"/apis/batch/v1",

"/apis/certificates.k8s.io",

"/apis/certificates.k8s.io/v1",

"/apis/coordination.k8s.io",

"/apis/coordination.k8s.io/v1",

"/apis/discovery.k8s.io",

"/apis/discovery.k8s.io/v1",

"/apis/events.k8s.io",

"/apis/events.k8s.io/v1",

"/apis/flowcontrol.apiserver.k8s.io",

"/apis/flowcontrol.apiserver.k8s.io/v1",

"/apis/networking.k8s.io",

"/apis/networking.k8s.io/v1",

"/apis/node.k8s.io",

"/apis/node.k8s.io/v1",

"/apis/policy",

"/apis/policy/v1",

"/apis/rbac.authorization.k8s.io",

"/apis/rbac.authorization.k8s.io/v1",

"/apis/resource.k8s.io",

"/apis/resource.k8s.io/v1",

"/apis/scheduling.k8s.io",

"/apis/scheduling.k8s.io/v1",

"/apis/storage.k8s.io",

"/apis/storage.k8s.io/v1",

"/healthz",

"/healthz/autoregister-completion",

"/healthz/etcd",

"/healthz/log",

"/healthz/ping",

"/healthz/poststarthook/aggregator-reload-proxy-client-cert",

"/healthz/poststarthook/apiservice-discovery-controller",

"/healthz/poststarthook/apiservice-openapi-controller",

"/healthz/poststarthook/apiservice-openapiv3-controller",

"/healthz/poststarthook/apiservice-registration-controller",

"/healthz/poststarthook/apiservice-status-local-available-controller",

"/healthz/poststarthook/apiservice-status-remote-available-controller",

"/healthz/poststarthook/bootstrap-controller",

"/healthz/poststarthook/crd-informer-synced",

"/healthz/poststarthook/generic-apiserver-start-informers",

"/healthz/poststarthook/kube-apiserver-autoregistration",

"/healthz/poststarthook/priority-and-fairness-config-consumer",

"/healthz/poststarthook/priority-and-fairness-config-producer",

"/healthz/poststarthook/priority-and-fairness-filter",

"/healthz/poststarthook/rbac/bootstrap-roles",

"/healthz/poststarthook/scheduling/bootstrap-system-priority-classes",

"/healthz/poststarthook/start-apiextensions-controllers",

"/healthz/poststarthook/start-apiextensions-informers",

"/healthz/poststarthook/start-apiserver-admission-initializer",

"/healthz/poststarthook/start-cluster-authentication-info-controller",

"/healthz/poststarthook/start-kube-aggregator-informers",

"/healthz/poststarthook/start-kube-apiserver-identity-lease-controller",

"/healthz/poststarthook/start-kube-apiserver-identity-lease-garbage-collector",

"/healthz/poststarthook/start-kubernetes-service-cidr-controller",

"/healthz/poststarthook/start-legacy-token-tracking-controller",

"/healthz/poststarthook/start-service-ip-repair-controllers",

"/healthz/poststarthook/start-system-namespaces-controller",

"/healthz/poststarthook/storage-object-count-tracker-hook",

"/livez",

"/livez/autoregister-completion",

"/livez/etcd",

"/livez/log",

"/livez/ping",

"/livez/poststarthook/aggregator-reload-proxy-client-cert",

"/livez/poststarthook/apiservice-discovery-controller",

"/livez/poststarthook/apiservice-openapi-controller",

"/livez/poststarthook/apiservice-openapiv3-controller",

"/livez/poststarthook/apiservice-registration-controller",

"/livez/poststarthook/apiservice-status-local-available-controller",

"/livez/poststarthook/apiservice-status-remote-available-controller",

"/livez/poststarthook/bootstrap-controller",

"/livez/poststarthook/crd-informer-synced",

"/livez/poststarthook/generic-apiserver-start-informers",

"/livez/poststarthook/kube-apiserver-autoregistration",

"/livez/poststarthook/priority-and-fairness-config-consumer",

"/livez/poststarthook/priority-and-fairness-config-producer",

"/livez/poststarthook/priority-and-fairness-filter",

"/livez/poststarthook/rbac/bootstrap-roles",

"/livez/poststarthook/scheduling/bootstrap-system-priority-classes",

"/livez/poststarthook/start-apiextensions-controllers",

"/livez/poststarthook/start-apiextensions-informers",

"/livez/poststarthook/start-apiserver-admission-initializer",

"/livez/poststarthook/start-cluster-authentication-info-controller",

"/livez/poststarthook/start-kube-aggregator-informers",

"/livez/poststarthook/start-kube-apiserver-identity-lease-controller",

"/livez/poststarthook/start-kube-apiserver-identity-lease-garbage-collector",

"/livez/poststarthook/start-kubernetes-service-cidr-controller",

"/livez/poststarthook/start-legacy-token-tracking-controller",

"/livez/poststarthook/start-service-ip-repair-controllers",

"/livez/poststarthook/start-system-namespaces-controller",

"/livez/poststarthook/storage-object-count-tracker-hook",

"/metrics",

"/metrics/slis",

"/openapi/v2",

"/openapi/v3",

"/openapi/v3/",

"/openid/v1/jwks",

"/readyz",

"/readyz/autoregister-completion",

"/readyz/etcd",

"/readyz/etcd-readiness",

"/readyz/informer-sync",

"/readyz/log",

"/readyz/ping",

"/readyz/poststarthook/aggregator-reload-proxy-client-cert",

"/readyz/poststarthook/apiservice-discovery-controller",

"/readyz/poststarthook/apiservice-openapi-controller",

"/readyz/poststarthook/apiservice-openapiv3-controller",

"/readyz/poststarthook/apiservice-registration-controller",

"/readyz/poststarthook/apiservice-status-local-available-controller",

"/readyz/poststarthook/apiservice-status-remote-available-controller",

"/readyz/poststarthook/bootstrap-controller",

"/readyz/poststarthook/crd-informer-synced",

"/readyz/poststarthook/generic-apiserver-start-informers",

"/readyz/poststarthook/kube-apiserver-autoregistration",

"/readyz/poststarthook/priority-and-fairness-config-consumer",

"/readyz/poststarthook/priority-and-fairness-config-producer",

"/readyz/poststarthook/priority-and-fairness-filter",

"/readyz/poststarthook/rbac/bootstrap-roles",

"/readyz/poststarthook/scheduling/bootstrap-system-priority-classes",

"/readyz/poststarthook/start-apiextensions-controllers",

"/readyz/poststarthook/start-apiextensions-informers",

"/readyz/poststarthook/start-apiserver-admission-initializer",

"/readyz/poststarthook/start-cluster-authentication-info-controller",

"/readyz/poststarthook/start-kube-aggregator-informers",

"/readyz/poststarthook/start-kube-apiserver-identity-lease-controller",

"/readyz/poststarthook/start-kube-apiserver-identity-lease-garbage-collector",

"/readyz/poststarthook/start-kubernetes-service-cidr-controller",

"/readyz/poststarthook/start-legacy-token-tracking-controller",

"/readyz/poststarthook/start-service-ip-repair-controllers",

"/readyz/poststarthook/start-system-namespaces-controller",

"/readyz/poststarthook/storage-object-count-tracker-hook",

"/readyz/shutdown",

"/version"

]

}
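The API root above just confirms the proxy works; the dashboard UI itself is served under its service proxy path, and login wants a bearer token (kubectl create token needs Kubernetes v1.24+; the default kubernetes-dashboard service account can see very little, so a more privileged account is usually bound for real use):

http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

kubectl -n kubernetes-dashboard create token kubernetes-dashboard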

・The YugabyteDB admin UI can be reached as follows:

kubectl --namespace yugabyte port-forward svc/yb-master-ui 7000:7000

http://localhost:7000/

※※※ Commands for verifying a YugabyteDB install with helm ※※※

helm install yb-demo yugabytedb/yugabyte --set resource.master.requests.cpu=0.5,resource.master.requests.memory=0.5Gi,resource.tserver.requests.cpu=0.5,resource.tserver.requests.memory=0.5Gi,replicas.master=1,replicas.tserver=1,Image.tag=2024.2.2.3-b1 --namespace yb-demo

PS C:\Windows\system32> kubectl --namespace yb-demo get pods

NAME READY STATUS RESTARTS AGE

yb-master-0 3/3 Running 0 5m46s

yb-tserver-0 3/3 Running 0 5m46s

PS C:\Windows\system32> kubectl --namespace yb-demo get services

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

yb-master-ui LoadBalancer 10.106.219.231 localhost 7000:31094/TCP 5m57s

yb-masters ClusterIP None <none> 7000/TCP,7100/TCP,15433/TCP 5m57s

yb-tserver-service LoadBalancer 10.108.132.0 localhost 6379:32223/TCP,9042:30514/TCP,5433:31413/TCP 5m57s

yb-tservers ClusterIP None <none> 9000/TCP,12000/TCP,11000/TCP,13000/TCP,9100/TCP,6379/TCP,9042/TCP,5433/TCP,15433/TCP 5m57s

yugabyted-ui-service LoadBalancer 10.109.88.69 localhost 15433:31677/TCP 5m57s

PS C:\Windows\system32> kubectl get svc --namespace yb-demo

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

yb-master-ui LoadBalancer 10.106.219.231 localhost 7000:31094/TCP 6m3s

yb-masters ClusterIP None <none> 7000/TCP,7100/TCP,15433/TCP 6m3s

yb-tserver-service LoadBalancer 10.108.132.0 localhost 6379:32223/TCP,9042:30514/TCP,5433:31413/TCP 6m3s

yb-tservers ClusterIP None <none> 9000/TCP,12000/TCP,11000/TCP,13000/TCP,9100/TCP,6379/TCP,9042/TCP,5433/TCP,15433/TCP 6m3s

yugabyted-ui-service LoadBalancer 10.109.88.69 localhost 15433:31677/TCP 6m3s

kubectl exec --namespace yb-demo -it yb-tserver-0 -- bash

PS C:\Windows\system32> kubectl --namespace yb-demo exec -it yb-tserver-0 -- sh -c "cd /home/yugabyte && ysqlsh -h yb-tserver-0 --echo-queries"

Defaulted container "yb-tserver" out of: yb-tserver, yb-cleanup, yugabyted-ui

ysqlsh (11.2-YB-2024.2.2.3-b0)

Type "help" for help.

yugabyte=# \l

List of databases

Name | Owner | Encoding | Collate | Ctype | Access privileges

-----------------+----------+----------+---------+-------------+-----------------------

postgres | postgres | UTF8 | C | en_US.UTF-8 |

system_platform | postgres | UTF8 | C | en_US.UTF-8 |

template0 | postgres | UTF8 | C | en_US.UTF-8 | =c/postgres +

| | | | | postgres=CTc/postgres

template1 | postgres | UTF8 | C | en_US.UTF-8 | =c/postgres +

| | | | | postgres=CTc/postgres

yugabyte | postgres | UTF8 | C | en_US.UTF-8 |

(5 rows)

yugabyte=# \du

List of roles

Role name | Attributes | Member of

--------------+------------------------------------------------------------+-----------

postgres | Superuser, Create role, Create DB, Replication, Bypass RLS | {}

yb_db_admin | No inheritance, Cannot login | {}

yb_extension | Cannot login | {}

yb_fdw | Cannot login | {}

yugabyte | Superuser, Create role, Create DB, Replication, Bypass RLS | {}

yugabyte=# \q

PS C:\Windows\system32>

kubectl --namespace yb-demo port-forward svc/yb-master-ui 7000:7000
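Cleanup for this verification release follows the chart NOTES shown earlier (Helm 3), plus the manual PVC deletion:

helm delete yb-demo -n yb-demo

kubectl delete pvc --namespace yb-demo -l app=yb-master

kubectl delete pvc --namespace yb-demo -l app=yb-tserver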
