
Multi-Cluster Ops Manager Without a Service Mesh

On this page

  • Prerequisites
  • Source Code
  • Procedure

MongoDB Ops Manager facilitates workloads such as backing up data and monitoring database performance. To make a multi-cluster deployment of Ops Manager and the Application Database resilient to the failure of an entire data center or zone, deploy the Ops Manager application and the Application Database across multiple Kubernetes clusters.

Prerequisites

Before you begin the following procedure, perform the following actions:

  • Install kubectl.

  • Complete the GKE Clusters procedure or the equivalent.

  • Complete the TLS Certificates procedure or the equivalent.

  • Complete the External DNS procedure or the equivalent.

  • Complete the Deploy the MongoDB Operator procedure.

  • Set the required environment variables as follows:

# This script builds on top of the environment configured in the setup guides.
# It depends on (uses) the following env variables defined there to work correctly.
# If you don't use the setup guides to bootstrap the environment, define them here.
# ${K8S_CLUSTER_0_CONTEXT_NAME}
# ${K8S_CLUSTER_1_CONTEXT_NAME}
# ${K8S_CLUSTER_2_CONTEXT_NAME}
# ${OM_NAMESPACE}
export S3_OPLOG_BUCKET_NAME=s3-oplog-store
export S3_SNAPSHOT_BUCKET_NAME=s3-snapshot-store
# If you use your own S3 storage, set these values accordingly.
# By default we install MinIO to handle S3 storage; these are its default credentials.
export S3_ENDPOINT="minio.tenant-tiny.svc.cluster.local"
export S3_ACCESS_KEY="console"
export S3_SECRET_KEY="console123"
export OPS_MANAGER_VERSION="8.0.5"
export APPDB_VERSION="8.0.5-ent"
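
If you bootstrap the environment without the setup guides, it helps to fail fast on anything missing. The following bash check is an optional convenience sketch, not part of the official procedure:

# Optional: verify that the variables the remaining steps rely on are set.
for var in K8S_CLUSTER_0_CONTEXT_NAME K8S_CLUSTER_1_CONTEXT_NAME K8S_CLUSTER_2_CONTEXT_NAME \
  OM_NAMESPACE OPS_MANAGER_EXTERNAL_DOMAIN S3_OPLOG_BUCKET_NAME S3_SNAPSHOT_BUCKET_NAME \
  S3_ENDPOINT S3_ACCESS_KEY S3_SECRET_KEY OPS_MANAGER_VERSION APPDB_VERSION; do
  [ -n "${!var}" ] || echo "required variable ${var} is not set"
done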

Source Code

You can find all the included source code in the MongoDB Kubernetes Operator repository.

Procedure

1

Use cert-manager to issue TLS certificates for Ops Manager and the Application Database.

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: om-cert
spec:
  dnsNames:
    - ${OPS_MANAGER_EXTERNAL_DOMAIN}
  duration: 240h0m0s
  issuerRef:
    name: my-ca-issuer
    kind: ClusterIssuer
  renewBefore: 120h0m0s
  secretName: cert-prefix-om-cert
  usages:
    - server auth
    - client auth
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: om-db-cert
spec:
  dnsNames:
    - "*.${APPDB_CLUSTER_0_EXTERNAL_DOMAIN}"
    - "*.${APPDB_CLUSTER_1_EXTERNAL_DOMAIN}"
    - "*.${APPDB_CLUSTER_2_EXTERNAL_DOMAIN}"
  duration: 240h0m0s
  issuerRef:
    name: my-ca-issuer
    kind: ClusterIssuer
  renewBefore: 120h0m0s
  secretName: cert-prefix-om-db-cert
  usages:
    - server auth
    - client auth
EOF
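
Optionally, you can confirm that cert-manager has issued both certificates before continuing. cert-manager marks issued Certificate resources with a Ready condition, so a wait like the following (a convenience check, not part of the official procedure) works:

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" wait --for=condition=Ready certificate/om-cert certificate/om-db-cert --timeout=120s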
2

Extract the Ops Manager certificate and key from the cluster and upload them to Google Cloud as an SSL certificate resource.

mkdir -p certs
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get secret cert-prefix-om-cert -o jsonpath="{.data['tls\.crt']}" | base64 --decode > certs/tls.crt
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get secret cert-prefix-om-cert -o jsonpath="{.data['tls\.key']}" | base64 --decode > certs/tls.key
gcloud compute ssl-certificates create om-certificate --certificate=certs/tls.crt --private-key=certs/tls.key
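
As an optional check that the upload succeeded, you can describe the new SSL certificate resource:

gcloud compute ssl-certificates describe om-certificate --format="get(name,expireTime)"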
3

Create a global load balancer for Ops Manager. This load balancer distributes traffic across all of the Ops Manager replicas in your clusters.

gcloud compute firewall-rules create fw-ops-manager-hc \
  --action=allow \
  --direction=ingress \
  --target-tags=mongodb \
  --source-ranges=130.211.0.0/22,35.191.0.0/16 \
  --rules=tcp:8443
gcloud compute health-checks create https om-healthcheck \
  --use-serving-port \
  --request-path=/monitor/health
gcloud compute backend-services create om-backend-service \
  --protocol HTTPS \
  --health-checks om-healthcheck \
  --global
gcloud compute url-maps create om-url-map \
  --default-service om-backend-service
gcloud compute target-https-proxies create om-lb-proxy \
  --url-map om-url-map \
  --ssl-certificates=om-certificate
gcloud compute forwarding-rules create om-forwarding-rule \
  --global \
  --target-https-proxy=om-lb-proxy \
  --ports=443

NAME               NETWORK  DIRECTION  PRIORITY  ALLOW     DENY  DISABLED
fw-ops-manager-hc  default  INGRESS    1000      tcp:8443        False

NAME            PROTOCOL
om-healthcheck  HTTPS

NAME                BACKENDS  PROTOCOL
om-backend-service            HTTPS

NAME        DEFAULT_SERVICE
om-url-map  backendServices/om-backend-service

NAME         SSL_CERTIFICATES  URL_MAP     REGION  CERTIFICATE_MAP
om-lb-proxy  om-certificate    om-url-map
4

Create a DNS record that points the Ops Manager external domain at the load balancer's IP address.

ip_address=$(gcloud compute forwarding-rules describe om-forwarding-rule --global --format="get(IPAddress)")
gcloud dns record-sets create "${OPS_MANAGER_EXTERNAL_DOMAIN}" --zone="${DNS_ZONE}" --type="A" --ttl="300" --rrdatas="${ip_address}"
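
DNS propagation can take a few minutes. As an optional check, list the record in the zone or resolve the name directly:

gcloud dns record-sets list --zone="${DNS_ZONE}" --name="${OPS_MANAGER_EXTERNAL_DOMAIN}."
nslookup "${OPS_MANAGER_EXTERNAL_DOMAIN}"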
5

Create a secret containing the Ops Manager admin user's credentials.

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" --namespace "${OM_NAMESPACE}" create secret generic om-admin-user-credentials \
  --from-literal=Username="admin" \
  --from-literal=Password="Passw0rd@" \
  --from-literal=FirstName="Jane" \
  --from-literal=LastName="Doe"
6

Deploy the MongoDBOpsManager resource.

kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" -f - <<EOF
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: om
spec:
  topology: MultiCluster
  version: "${OPS_MANAGER_VERSION}"
  adminCredentials: om-admin-user-credentials
  externalConnectivity:
    type: ClusterIP
    annotations:
      cloud.google.com/neg: '{"exposed_ports": {"8443":{}}}'
  opsManagerURL: "https://${OPS_MANAGER_EXTERNAL_DOMAIN}"
  security:
    certsSecretPrefix: cert-prefix
    tls:
      ca: ca-issuer
  clusterSpecList:
    - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
      members: 1
    - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
      members: 2
  applicationDatabase:
    version: "${APPDB_VERSION}"
    topology: MultiCluster
    security:
      certsSecretPrefix: cert-prefix
      tls:
        ca: ca-issuer
    clusterSpecList:
      - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
        members: 2
        externalAccess:
          externalDomain: "${APPDB_CLUSTER_0_EXTERNAL_DOMAIN}"
          externalService:
            annotations:
              external-dns.alpha.kubernetes.io/hostname: "{podName}.${APPDB_CLUSTER_0_EXTERNAL_DOMAIN}"
      - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
        members: 2
        externalAccess:
          externalDomain: "${APPDB_CLUSTER_1_EXTERNAL_DOMAIN}"
          externalService:
            annotations:
              external-dns.alpha.kubernetes.io/hostname: "{podName}.${APPDB_CLUSTER_1_EXTERNAL_DOMAIN}"
      - clusterName: "${K8S_CLUSTER_2_CONTEXT_NAME}"
        members: 1
        externalAccess:
          externalDomain: "${APPDB_CLUSTER_2_EXTERNAL_DOMAIN}"
          externalService:
            annotations:
              external-dns.alpha.kubernetes.io/hostname: "{podName}.${APPDB_CLUSTER_2_EXTERNAL_DOMAIN}"
  backup:
    enabled: false
EOF
7

Wait for both the Application Database and the Ops Manager deployment to reach the Pending phase. The load balancer's backends are added in the next step, so Ops Manager cannot progress further yet.

echo "Waiting for Application Database to reach Pending phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Pending opsmanager/om --timeout=30s
echo "Waiting for Ops Manager to reach Pending phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" wait --for=jsonpath='{.status.opsManager.phase}'=Pending opsmanager/om --timeout=600s
Waiting for Application Database to reach Pending phase...
mongodbopsmanager.mongodb.com/om condition met
Waiting for Ops Manager to reach Pending phase...
mongodbopsmanager.mongodb.com/om condition met
8

Register each cluster's service network endpoint group (NEG) as a backend of the load balancer's backend service.

svcneg0=$(kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get svcneg -o=jsonpath='{.items[0].metadata.name}')
gcloud compute backend-services add-backend om-backend-service \
  --global \
  --network-endpoint-group="${svcneg0}" \
  --network-endpoint-group-zone="${K8S_CLUSTER_0_ZONE}" \
  --balancing-mode RATE --max-rate-per-endpoint 5
svcneg1=$(kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get svcneg -o=jsonpath='{.items[0].metadata.name}')
gcloud compute backend-services add-backend om-backend-service \
  --global \
  --network-endpoint-group="${svcneg1}" \
  --network-endpoint-group-zone="${K8S_CLUSTER_1_ZONE}" \
  --balancing-mode RATE --max-rate-per-endpoint 5
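
With the backends attached, you can optionally inspect their health. Endpoints report healthy only after the Ops Manager pods are up and passing the /monitor/health check, so it is normal to see them unhealthy at first:

gcloud compute backend-services get-health om-backend-service --global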
9

Wait for the Application Database and Ops Manager to reach the Running phase, then review the deployed resource and pods.

echo "Waiting for Application Database to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Running opsmanager/om --timeout=900s
echo; echo "Waiting for Ops Manager to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" wait --for=jsonpath='{.status.opsManager.phase}'=Running opsmanager/om --timeout=900s
echo; echo "MongoDBOpsManager resource"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get opsmanager/om
echo; echo "Pods running in cluster ${K8S_CLUSTER_0_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get pods
echo; echo "Pods running in cluster ${K8S_CLUSTER_1_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get pods
Waiting for Application Database to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met
Waiting for Ops Manager to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met
MongoDBOpsManager resource
NAME  REPLICAS  VERSION  STATE (OPSMANAGER)  STATE (APPDB)  STATE (BACKUP)  AGE  WARNINGS
om              8.0.5    Running             Running        Disabled        15m

Pods running in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0-lucian
NAME       READY  STATUS   RESTARTS  AGE
om-0-0     1/1    Running  0         12m
om-db-0-0  3/3    Running  0         3m50s
om-db-0-1  3/3    Running  0         4m38s

Pods running in cluster gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1-lucian
NAME       READY  STATUS   RESTARTS  AGE
om-1-0     1/1    Running  0         12m
om-1-1     1/1    Running  0         8m46s
om-db-1-0  3/3    Running  0         2m2s
om-db-1-1  3/3    Running  0         2m54s
10

Install the MinIO Operator and deploy an S3-compatible MinIO tenant for backups, then add the oplog and snapshot buckets to the tenant.

kubectl kustomize "github.com/minio/operator/resources/?timeout=120&ref=v5.0.12" | \
  kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" apply -f -
kubectl kustomize "github.com/minio/operator/examples/kustomization/tenant-tiny?timeout=120&ref=v5.0.12" | \
  kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" apply -f -
# Add the two backup buckets to the tenant config.
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "tenant-tiny" patch tenant/myminio \
  --type='json' \
  -p="[{\"op\": \"add\", \"path\": \"/spec/buckets\", \"value\": [{\"name\": \"${S3_OPLOG_BUCKET_NAME}\"}, {\"name\": \"${S3_SNAPSHOT_BUCKET_NAME}\"}]}]"
11

Create the secrets that Ops Manager needs to access S3: one with the access credentials and one with the CA certificate used to validate MinIO's TLS certificates.

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" create secret generic s3-access-secret \
  --from-literal=accessKey="${S3_ACCESS_KEY}" \
  --from-literal=secretKey="${S3_SECRET_KEY}"
# MinIO's TLS certificates are signed by the default Kubernetes root CA.
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" create secret generic s3-ca-cert \
  --from-literal=ca.crt="$(kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n kube-system get configmap kube-root-ca.crt -o jsonpath="{.data.ca\.crt}")"
12

Update the MongoDBOpsManager resource to enable backup: run a Backup Daemon member in the third cluster and configure the S3 snapshot and oplog stores.

kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" -f - <<EOF
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: om
spec:
  topology: MultiCluster
  version: "${OPS_MANAGER_VERSION}"
  adminCredentials: om-admin-user-credentials
  externalConnectivity:
    type: ClusterIP
    annotations:
      cloud.google.com/neg: '{"exposed_ports": {"8443":{}}}'
  opsManagerURL: "https://${OPS_MANAGER_EXTERNAL_DOMAIN}"
  security:
    certsSecretPrefix: cert-prefix
    tls:
      ca: ca-issuer
  clusterSpecList:
    - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
      members: 1
      backup:
        members: 0
    - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
      members: 2
      backup:
        members: 0
    - clusterName: "${K8S_CLUSTER_2_CONTEXT_NAME}"
      members: 0
      backup:
        members: 1
  applicationDatabase:
    version: "${APPDB_VERSION}"
    topology: MultiCluster
    security:
      certsSecretPrefix: cert-prefix
      tls:
        ca: ca-issuer
    clusterSpecList:
      - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
        members: 2
        externalAccess:
          externalDomain: "${APPDB_CLUSTER_0_EXTERNAL_DOMAIN}"
          externalService:
            annotations:
              external-dns.alpha.kubernetes.io/hostname: "{podName}.${APPDB_CLUSTER_0_EXTERNAL_DOMAIN}"
      - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
        members: 2
        externalAccess:
          externalDomain: "${APPDB_CLUSTER_1_EXTERNAL_DOMAIN}"
          externalService:
            annotations:
              external-dns.alpha.kubernetes.io/hostname: "{podName}.${APPDB_CLUSTER_1_EXTERNAL_DOMAIN}"
      - clusterName: "${K8S_CLUSTER_2_CONTEXT_NAME}"
        members: 1
        externalAccess:
          externalDomain: "${APPDB_CLUSTER_2_EXTERNAL_DOMAIN}"
          externalService:
            annotations:
              external-dns.alpha.kubernetes.io/hostname: "{podName}.${APPDB_CLUSTER_2_EXTERNAL_DOMAIN}"
  backup:
    enabled: true
    s3Stores:
      - name: my-s3-block-store
        s3SecretRef:
          name: "s3-access-secret"
        pathStyleAccessEnabled: true
        s3BucketEndpoint: "${S3_ENDPOINT}"
        s3BucketName: "${S3_SNAPSHOT_BUCKET_NAME}"
        customCertificateSecretRefs:
          - name: s3-ca-cert
            key: ca.crt
    s3OpLogStores:
      - name: my-s3-oplog-store
        s3SecretRef:
          name: "s3-access-secret"
        s3BucketEndpoint: "${S3_ENDPOINT}"
        s3BucketName: "${S3_OPLOG_BUCKET_NAME}"
        pathStyleAccessEnabled: true
        customCertificateSecretRefs:
          - name: s3-ca-cert
            key: ca.crt
EOF
13

Wait for Backup, the Application Database, and Ops Manager to reach the Running phase, then review the pods across all three clusters.

echo; echo "Waiting for Backup to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" wait --for=jsonpath='{.status.backup.phase}'=Running opsmanager/om --timeout=1200s
echo "Waiting for Application Database to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Running opsmanager/om --timeout=600s
echo; echo "Waiting for Ops Manager to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" wait --for=jsonpath='{.status.opsManager.phase}'=Running opsmanager/om --timeout=600s
echo; echo "MongoDBOpsManager resource"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get opsmanager/om
echo; echo "Pods running in cluster ${K8S_CLUSTER_0_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get pods
echo; echo "Pods running in cluster ${K8S_CLUSTER_1_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get pods
echo; echo "Pods running in cluster ${K8S_CLUSTER_2_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get pods
Waiting for Backup to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met
Waiting for Application Database to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met
Waiting for Ops Manager to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met
MongoDBOpsManager resource
NAME  REPLICAS  VERSION  STATE (OPSMANAGER)  STATE (APPDB)  STATE (BACKUP)  AGE  WARNINGS
om              8.0.5    Running             Running        Running         14m

Pods running in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0-lucian
NAME       READY  STATUS   RESTARTS  AGE
om-0-0     1/1    Running  0         11m
om-db-0-0  3/3    Running  0         5m35s
om-db-0-1  3/3    Running  0         6m20s

Pods running in cluster gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1-lucian
NAME       READY  STATUS   RESTARTS  AGE
om-1-0     1/1    Running  0         11m
om-1-1     1/1    Running  0         8m28s
om-db-1-0  3/3    Running  0         3m52s
om-db-1-1  3/3    Running  0         4m48s

Pods running in cluster gke_scratch-kubernetes-team_europe-central2-c_k8s-mdb-2-lucian
NAME                  READY  STATUS   RESTARTS  AGE
om-2-backup-daemon-0  1/1    Running  0         2m
om-db-2-0             3/3    Running  0         2m55s
14

To configure the credentials, you must create an Ops Manager organization, generate a programmatic API key in the Ops Manager UI, and create a secret with your load balancer's IP address. To learn more, see Create Credentials for the Kubernetes Operator.
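
For reference, the Operator reads a programmatic API key from a secret that stores the publicKey under user and the privateKey under publicApiKey. A minimal sketch with a hypothetical secret name and placeholder key values:

# Hypothetical example; replace the secret name and placeholders with your own values.
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" create secret generic om-org-credentials \
  --from-literal="user=<publicKey>" \
  --from-literal="publicApiKey=<privateKey>"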
