
Multi-Cluster Ops Manager Without a Service Mesh

On this page

  • Prerequisites
  • Source Code
  • Procedure

The Ops Manager Application is responsible for handling workloads such as backing up MongoDB data, monitoring database performance, and more. To make a multi-cluster Ops Manager and Application Database deployment resilient to entire data center or zone failures, deploy the Ops Manager Application and the Application Database on multiple Kubernetes clusters.

Prerequisites

Before you begin the following procedure, perform the following actions:

  • Install kubectl.

  • Complete the GKE Clusters procedure or the equivalent.

  • Complete the TLS Certificates procedure or the equivalent.

  • Complete the ExternalDNS procedure or the equivalent.

  • Complete the Deploy the MongoDB Operator procedure.

  • Set the required environment variables as follows:

# This script builds on top of the environment configured in the setup guides.
# It depends on the following env variables defined there to work correctly.
# If you don't use the setup guides to bootstrap the environment, define them here.
# ${K8S_CLUSTER_0_CONTEXT_NAME}
# ${K8S_CLUSTER_1_CONTEXT_NAME}
# ${K8S_CLUSTER_2_CONTEXT_NAME}
# ${OM_NAMESPACE}
export S3_OPLOG_BUCKET_NAME=s3-oplog-store
export S3_SNAPSHOT_BUCKET_NAME=s3-snapshot-store
# If you use your own S3 storage, set these values accordingly.
# By default, MinIO is installed to handle S3 storage, and its default credentials are set here.
export S3_ENDPOINT="minio.tenant-tiny.svc.cluster.local"
export S3_ACCESS_KEY="console"
export S3_SECRET_KEY="console123"
export OPS_MANAGER_VERSION="8.0.5"
export APPDB_VERSION="8.0.5-ent"
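
If you bootstrap the environment manually rather than through the setup guides, it can help to fail fast on unset variables before running the steps below. A minimal sketch, not part of the official procedure:

# Illustrative check: abort early if any variable expected from the setup guides is unset.
for var in K8S_CLUSTER_0_CONTEXT_NAME K8S_CLUSTER_1_CONTEXT_NAME K8S_CLUSTER_2_CONTEXT_NAME OM_NAMESPACE; do
  if [ -z "${!var:-}" ]; then
    echo "Missing required env variable: ${var}" >&2
    exit 1
  fi
done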

Source Code

You can find all included source code in the MongoDB Kubernetes Operator repository.

Procedure

1

Generate TLS certificates for Ops Manager and the Application Database using cert-manager.

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: om-cert
spec:
  dnsNames:
    - ${OPS_MANAGER_EXTERNAL_DOMAIN}
  duration: 240h0m0s
  issuerRef:
    name: my-ca-issuer
    kind: ClusterIssuer
  renewBefore: 120h0m0s
  secretName: cert-prefix-om-cert
  usages:
    - server auth
    - client auth
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: om-db-cert
spec:
  dnsNames:
    - "*.${APPDB_CLUSTER_0_EXTERNAL_DOMAIN}"
    - "*.${APPDB_CLUSTER_1_EXTERNAL_DOMAIN}"
    - "*.${APPDB_CLUSTER_2_EXTERNAL_DOMAIN}"
  duration: 240h0m0s
  issuerRef:
    name: my-ca-issuer
    kind: ClusterIssuer
  renewBefore: 120h0m0s
  secretName: cert-prefix-om-db-cert
  usages:
    - server auth
    - client auth
EOF
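
cert-manager issues the certificates asynchronously, so before exporting them in the next step you can optionally wait for the Ready condition on each Certificate. An optional check, not part of the procedure above:

# Optional: wait until cert-manager reports both certificates as issued.
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" \
  wait --for=condition=Ready certificate/om-cert certificate/om-db-cert --timeout=120s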
2

Export the Ops Manager TLS certificate and key, and upload them to Google Cloud as a self-managed SSL certificate for the load balancer.

mkdir -p certs
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get secret cert-prefix-om-cert -o jsonpath="{.data['tls\.crt']}" | base64 --decode > certs/tls.crt
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get secret cert-prefix-om-cert -o jsonpath="{.data['tls\.key']}" | base64 --decode > certs/tls.key
gcloud compute ssl-certificates create om-certificate --certificate=certs/tls.crt --private-key=certs/tls.key
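
Optionally, confirm that the self-managed certificate was uploaded. An illustrative check:

# Optional: list the uploaded self-managed SSL certificate.
gcloud compute ssl-certificates list --filter="name=om-certificate"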
3

Create the firewall rule for health checks, the health check, the backend service, the URL map, the target HTTPS proxy, and the forwarding rule. This load balancer distributes traffic across all replicas of Ops Manager in all three clusters.

gcloud compute firewall-rules create fw-ops-manager-hc \
  --action=allow \
  --direction=ingress \
  --target-tags=mongodb \
  --source-ranges=130.211.0.0/22,35.191.0.0/16 \
  --rules=tcp:8443
gcloud compute health-checks create https om-healthcheck \
  --use-serving-port \
  --request-path=/monitor/health
gcloud compute backend-services create om-backend-service \
  --protocol HTTPS \
  --health-checks om-healthcheck \
  --global
gcloud compute url-maps create om-url-map \
  --default-service om-backend-service
gcloud compute target-https-proxies create om-lb-proxy \
  --url-map om-url-map \
  --ssl-certificates=om-certificate
gcloud compute forwarding-rules create om-forwarding-rule \
  --global \
  --target-https-proxy=om-lb-proxy \
  --ports=443
NAME               NETWORK  DIRECTION  PRIORITY  ALLOW     DENY  DISABLED
fw-ops-manager-hc  default  INGRESS    1000      tcp:8443        False
NAME            PROTOCOL
om-healthcheck  HTTPS
NAME                BACKENDS  PROTOCOL
om-backend-service            HTTPS
NAME        DEFAULT_SERVICE
om-url-map  backendServices/om-backend-service
NAME         SSL_CERTIFICATES  URL_MAP     REGION  CERTIFICATE_MAP
om-lb-proxy  om-certificate    om-url-map
4

Create a DNS A record that points the Ops Manager external domain at the load balancer's IP address.

ip_address=$(gcloud compute forwarding-rules describe om-forwarding-rule --global --format="get(IPAddress)")
gcloud dns record-sets create "${OPS_MANAGER_EXTERNAL_DOMAIN}" --zone="${DNS_ZONE}" --type="A" --ttl="300" --rrdatas="${ip_address}"
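
Optionally, verify that the A record now exists in the zone. An illustrative check:

# Optional: confirm the A record points at the load balancer IP.
gcloud dns record-sets list --zone="${DNS_ZONE}" --name="${OPS_MANAGER_EXTERNAL_DOMAIN}" --type="A"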
5

Create the secret containing the Ops Manager admin user credentials.

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" --namespace "${OM_NAMESPACE}" create secret generic om-admin-user-credentials \
  --from-literal=Username="admin" \
  --from-literal=Password="Passw0rd@" \
  --from-literal=FirstName="Jane" \
  --from-literal=LastName="Doe"
6

Deploy the MongoDBOpsManager resource across the clusters, with backup disabled for now.

kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" -f - <<EOF
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: om
spec:
  topology: MultiCluster
  version: "${OPS_MANAGER_VERSION}"
  adminCredentials: om-admin-user-credentials
  externalConnectivity:
    type: ClusterIP
    annotations:
      cloud.google.com/neg: '{"exposed_ports": {"8443":{}}}'
  opsManagerURL: "https://${OPS_MANAGER_EXTERNAL_DOMAIN}"
  security:
    certsSecretPrefix: cert-prefix
    tls:
      ca: ca-issuer
  clusterSpecList:
    - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
      members: 1
    - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
      members: 2
  applicationDatabase:
    version: "${APPDB_VERSION}"
    topology: MultiCluster
    security:
      certsSecretPrefix: cert-prefix
      tls:
        ca: ca-issuer
    clusterSpecList:
      - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
        members: 2
        externalAccess:
          externalDomain: "${APPDB_CLUSTER_0_EXTERNAL_DOMAIN}"
          externalService:
            annotations:
              external-dns.alpha.kubernetes.io/hostname: "{podName}.${APPDB_CLUSTER_0_EXTERNAL_DOMAIN}"
      - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
        members: 2
        externalAccess:
          externalDomain: "${APPDB_CLUSTER_1_EXTERNAL_DOMAIN}"
          externalService:
            annotations:
              external-dns.alpha.kubernetes.io/hostname: "{podName}.${APPDB_CLUSTER_1_EXTERNAL_DOMAIN}"
      - clusterName: "${K8S_CLUSTER_2_CONTEXT_NAME}"
        members: 1
        externalAccess:
          externalDomain: "${APPDB_CLUSTER_2_EXTERNAL_DOMAIN}"
          externalService:
            annotations:
              external-dns.alpha.kubernetes.io/hostname: "{podName}.${APPDB_CLUSTER_2_EXTERNAL_DOMAIN}"
  backup:
    enabled: false
EOF
7

Wait for both the Application Database and Ops Manager to reach the Pending phase.

echo "Waiting for Application Database to reach Pending phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Pending opsmanager/om --timeout=30s
echo "Waiting for Ops Manager to reach Pending phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" wait --for=jsonpath='{.status.opsManager.phase}'=Pending opsmanager/om --timeout=600s
Waiting for Application Database to reach Pending phase...
mongodbopsmanager.mongodb.com/om condition met
Waiting for Ops Manager to reach Pending phase...
mongodbopsmanager.mongodb.com/om condition met
8

Add the network endpoint groups (NEGs) created for the Ops Manager services in each cluster as backends of the load balancer's backend service.

svcneg0=$(kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get svcneg -o=jsonpath='{.items[0].metadata.name}')
gcloud compute backend-services add-backend om-backend-service \
  --global \
  --network-endpoint-group="${svcneg0}" \
  --network-endpoint-group-zone="${K8S_CLUSTER_0_ZONE}" \
  --balancing-mode RATE --max-rate-per-endpoint 5
svcneg1=$(kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get svcneg -o=jsonpath='{.items[0].metadata.name}')
gcloud compute backend-services add-backend om-backend-service \
  --global \
  --network-endpoint-group="${svcneg1}" \
  --network-endpoint-group-zone="${K8S_CLUSTER_1_ZONE}" \
  --balancing-mode RATE --max-rate-per-endpoint 5
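
Optionally, confirm that both network endpoint groups are now attached to the backend service. An illustrative check:

# Optional: list the NEG backends attached to the backend service.
gcloud compute backend-services describe om-backend-service --global --format="value(backends[].group)"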
9

Wait for Ops Manager and the Application Database to reach the Running phase, then verify the deployed pods in each cluster.

echo "Waiting for Application Database to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Running opsmanager/om --timeout=900s
echo; echo "Waiting for Ops Manager to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" wait --for=jsonpath='{.status.opsManager.phase}'=Running opsmanager/om --timeout=900s
echo; echo "MongoDBOpsManager resource"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get opsmanager/om
echo; echo "Pods running in cluster ${K8S_CLUSTER_0_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get pods
echo; echo "Pods running in cluster ${K8S_CLUSTER_1_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get pods
Waiting for Application Database to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met
Waiting for Ops Manager to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met
MongoDBOpsManager resource
NAME  REPLICAS  VERSION  STATE (OPSMANAGER)  STATE (APPDB)  STATE (BACKUP)  AGE  WARNINGS
om              8.0.5    Running             Running        Disabled        15m
Pods running in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0-lucian
NAME       READY  STATUS   RESTARTS  AGE
om-0-0     1/1    Running  0         12m
om-db-0-0  3/3    Running  0         3m50s
om-db-0-1  3/3    Running  0         4m38s
Pods running in cluster gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1-lucian
NAME       READY  STATUS   RESTARTS  AGE
om-1-0     1/1    Running  0         12m
om-1-1     1/1    Running  0         8m46s
om-db-1-0  3/3    Running  0         2m2s
om-db-1-1  3/3    Running  0         2m54s
10

Deploy the MinIO Operator and a MinIO tenant to provide S3-compatible storage for backups, then add the oplog and snapshot buckets to the tenant.

kubectl kustomize "github.com/minio/operator/resources/?timeout=120&ref=v5.0.12" | \
  kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" apply -f -
kubectl kustomize "github.com/minio/operator/examples/kustomization/tenant-tiny?timeout=120&ref=v5.0.12" | \
  kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" apply -f -
# add two buckets to the tenant config
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "tenant-tiny" patch tenant/myminio \
  --type='json' \
  -p="[{\"op\": \"add\", \"path\": \"/spec/buckets\", \"value\": [{\"name\": \"${S3_OPLOG_BUCKET_NAME}\"}, {\"name\": \"${S3_SNAPSHOT_BUCKET_NAME}\"}]}]"
11

Create the secrets holding the S3 access credentials and the CA certificate used by the backup S3 stores.

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" create secret generic s3-access-secret \
  --from-literal=accessKey="${S3_ACCESS_KEY}" \
  --from-literal=secretKey="${S3_SECRET_KEY}"
# MinIO TLS secrets are signed with the default k8s root CA
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" create secret generic s3-ca-cert \
  --from-literal=ca.crt="$(kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n kube-system get configmap kube-root-ca.crt -o jsonpath="{.data.ca\.crt}")"
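
Optionally, verify that both secrets exist in the Ops Manager namespace. An illustrative check:

# Optional: confirm both backup-related secrets were created.
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get secret s3-access-secret s3-ca-cert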
12

Re-deploy the MongoDBOpsManager resource with backup enabled, configuring the S3 snapshot and oplog stores and placing a Backup Daemon member in the third cluster.

kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" -f - <<EOF
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: om
spec:
  topology: MultiCluster
  version: "${OPS_MANAGER_VERSION}"
  adminCredentials: om-admin-user-credentials
  externalConnectivity:
    type: ClusterIP
    annotations:
      cloud.google.com/neg: '{"exposed_ports": {"8443":{}}}'
  opsManagerURL: "https://${OPS_MANAGER_EXTERNAL_DOMAIN}"
  security:
    certsSecretPrefix: cert-prefix
    tls:
      ca: ca-issuer
  clusterSpecList:
    - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
      members: 1
      backup:
        members: 0
    - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
      members: 2
      backup:
        members: 0
    - clusterName: "${K8S_CLUSTER_2_CONTEXT_NAME}"
      members: 0
      backup:
        members: 1
  applicationDatabase:
    version: "${APPDB_VERSION}"
    topology: MultiCluster
    security:
      certsSecretPrefix: cert-prefix
      tls:
        ca: ca-issuer
    clusterSpecList:
      - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
        members: 2
        externalAccess:
          externalDomain: "${APPDB_CLUSTER_0_EXTERNAL_DOMAIN}"
          externalService:
            annotations:
              external-dns.alpha.kubernetes.io/hostname: "{podName}.${APPDB_CLUSTER_0_EXTERNAL_DOMAIN}"
      - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
        members: 2
        externalAccess:
          externalDomain: "${APPDB_CLUSTER_1_EXTERNAL_DOMAIN}"
          externalService:
            annotations:
              external-dns.alpha.kubernetes.io/hostname: "{podName}.${APPDB_CLUSTER_1_EXTERNAL_DOMAIN}"
      - clusterName: "${K8S_CLUSTER_2_CONTEXT_NAME}"
        members: 1
        externalAccess:
          externalDomain: "${APPDB_CLUSTER_2_EXTERNAL_DOMAIN}"
          externalService:
            annotations:
              external-dns.alpha.kubernetes.io/hostname: "{podName}.${APPDB_CLUSTER_2_EXTERNAL_DOMAIN}"
  backup:
    enabled: true
    s3Stores:
      - name: my-s3-block-store
        s3SecretRef:
          name: "s3-access-secret"
        pathStyleAccessEnabled: true
        s3BucketEndpoint: "${S3_ENDPOINT}"
        s3BucketName: "${S3_SNAPSHOT_BUCKET_NAME}"
        customCertificateSecretRefs:
          - name: s3-ca-cert
            key: ca.crt
    s3OpLogStores:
      - name: my-s3-oplog-store
        s3SecretRef:
          name: "s3-access-secret"
        s3BucketEndpoint: "${S3_ENDPOINT}"
        s3BucketName: "${S3_OPLOG_BUCKET_NAME}"
        pathStyleAccessEnabled: true
        customCertificateSecretRefs:
          - name: s3-ca-cert
            key: ca.crt
EOF
13

Wait for Backup, the Application Database, and Ops Manager to reach the Running phase, then verify the pods in all three clusters.

echo; echo "Waiting for Backup to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" wait --for=jsonpath='{.status.backup.phase}'=Running opsmanager/om --timeout=1200s
echo "Waiting for Application Database to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Running opsmanager/om --timeout=600s
echo; echo "Waiting for Ops Manager to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" wait --for=jsonpath='{.status.opsManager.phase}'=Running opsmanager/om --timeout=600s
echo; echo "MongoDBOpsManager resource"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get opsmanager/om
echo; echo "Pods running in cluster ${K8S_CLUSTER_0_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get pods
echo; echo "Pods running in cluster ${K8S_CLUSTER_1_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get pods
echo; echo "Pods running in cluster ${K8S_CLUSTER_2_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${OM_NAMESPACE}" get pods
Waiting for Backup to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met
Waiting for Application Database to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met
Waiting for Ops Manager to reach Running phase...
mongodbopsmanager.mongodb.com/om condition met
MongoDBOpsManager resource
NAME  REPLICAS  VERSION  STATE (OPSMANAGER)  STATE (APPDB)  STATE (BACKUP)  AGE  WARNINGS
om              8.0.5    Running             Running        Running         14m
Pods running in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0-lucian
NAME       READY  STATUS   RESTARTS  AGE
om-0-0     1/1    Running  0         11m
om-db-0-0  3/3    Running  0         5m35s
om-db-0-1  3/3    Running  0         6m20s
Pods running in cluster gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1-lucian
NAME       READY  STATUS   RESTARTS  AGE
om-1-0     1/1    Running  0         11m
om-1-1     1/1    Running  0         8m28s
om-db-1-0  3/3    Running  0         3m52s
om-db-1-1  3/3    Running  0         4m48s
Pods running in cluster gke_scratch-kubernetes-team_europe-central2-c_k8s-mdb-2-lucian
NAME                  READY  STATUS   RESTARTS  AGE
om-2-backup-daemon-0  1/1    Running  0         2m
om-db-2-0             3/3    Running  0         2m55s
14

To configure credentials, you must create a MongoDB Ops Manager organization, generate programmatic API keys in the Ops Manager UI, and create a secret with your load balancer IP. To learn more, see Create Credentials for the Kubernetes Operator.
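
As a rough sketch of the shape these credentials take (the resource names om-org-credentials and om-org-project and all placeholder values below are hypothetical, not part of this procedure), the Kubernetes Operator typically reads the API key from a secret and the connection details from a ConfigMap:

# Illustrative sketch only: replace the placeholder values with the organization ID
# and programmatic API key generated in your Ops Manager UI.
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" create secret generic om-org-credentials \
  --from-literal=publicKey="<public-api-key>" \
  --from-literal=privateKey="<private-api-key>"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OM_NAMESPACE}" create configmap om-org-project \
  --from-literal=baseUrl="https://${OPS_MANAGER_EXTERNAL_DOMAIN}" \
  --from-literal=orgId="<organization-id>"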
