
Multi-Cluster Sharded Cluster Without a Service Mesh

On this page

  • Prerequisites
  • Source Code
  • Procedure

You can distribute a MongoDB sharded cluster across multiple Kubernetes clusters. With multi-cluster functionality, you can:

  • Improve the resilience of your deployment by distributing it across multiple Kubernetes clusters, each in a different geographic region.

  • Configure a geographically sharded deployment by deploying the primary node of a given shard's replica set in a Kubernetes cluster that is closer to the application or clients that depend on that data, which reduces latency.

  • Tune your deployment for improved performance. For example, you can deploy read-only analytics nodes for all shards, or for selected shards, in different Kubernetes clusters, and you can customize resource allocation.

Before you begin the following procedure, perform the following actions:

  • Install kubectl.

  • Install mongosh.

  • Complete the GKE clusters procedure or an equivalent.

  • Complete the external DNS procedure or an equivalent.

  • Complete the TLS certificates procedure or an equivalent.

  • Complete the procedure to deploy the MongoDB Operator.

  • Complete the multi-cluster Ops Manager procedure. You can skip this step if you use Cloud Manager instead of Ops Manager.

  • Set the required environment variables as follows:

# This script builds on top of the environment configured in the setup guides.
# It depends on the following env variables defined there to work correctly.
# If you don't use the setup guides to bootstrap the environment, define them here.
# ${K8S_CLUSTER_0_CONTEXT_NAME}
# ${K8S_CLUSTER_1_CONTEXT_NAME}
# ${K8S_CLUSTER_2_CONTEXT_NAME}
# ${MDB_NAMESPACE}
export SC_RESOURCE_NAME=mdb-sh
export MONGODB_VERSION="8.0.5-ent"
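As a defensive addition (a sketch, not part of the original guide), you can fail fast if any of the variables inherited from the setup guides are still unset:

```shell
# Sanity check (assumption: these variables come from the prerequisite guides).
# Collect the names of any that are still unset so you can fix them before
# running the kubectl commands below.
missing=""
for var in K8S_CLUSTER_0_CONTEXT_NAME K8S_CLUSTER_1_CONTEXT_NAME \
           K8S_CLUSTER_2_CONTEXT_NAME MDB_NAMESPACE; do
  eval "value=\${${var}:-}"
  if [ -z "${value}" ]; then
    missing="${missing} ${var}"
  fi
done
if [ -n "${missing}" ]; then
  echo "Unset variables:${missing}"
fi
```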

You can find all of the included source code in the MongoDB Kubernetes Operator repository.

1

Run the following command to generate the required TLS certificates for each shard, your mongos instances, and the config servers.

kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${MDB_NAMESPACE}" -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: mdb-sh-cert
spec:
  dnsNames:
    - "*.${MDB_NAMESPACE}.svc.cluster.local"
  duration: 240h0m0s
  issuerRef:
    name: my-ca-issuer
    kind: ClusterIssuer
  renewBefore: 120h0m0s
  secretName: cert-prefix-mdb-sh-cert
  usages:
    - server auth
    - client auth
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: mdb-sh-0-cert
spec:
  dnsNames:
    - "*.${MDB_NAMESPACE}.svc.cluster.local"
  duration: 240h0m0s
  issuerRef:
    name: my-ca-issuer
    kind: ClusterIssuer
  renewBefore: 120h0m0s
  secretName: cert-prefix-mdb-sh-0-cert
  usages:
    - server auth
    - client auth
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: mdb-sh-1-cert
spec:
  dnsNames:
    - "*.${MDB_NAMESPACE}.svc.cluster.local"
  duration: 240h0m0s
  issuerRef:
    name: my-ca-issuer
    kind: ClusterIssuer
  renewBefore: 120h0m0s
  secretName: cert-prefix-mdb-sh-1-cert
  usages:
    - server auth
    - client auth
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: mdb-sh-2-cert
spec:
  dnsNames:
    - "*.${MDB_NAMESPACE}.svc.cluster.local"
  duration: 240h0m0s
  issuerRef:
    name: my-ca-issuer
    kind: ClusterIssuer
  renewBefore: 120h0m0s
  secretName: cert-prefix-mdb-sh-2-cert
  usages:
    - server auth
    - client auth
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: mdb-sh-config-cert
spec:
  dnsNames:
    - "*.${MDB_NAMESPACE}.svc.cluster.local"
  duration: 240h0m0s
  issuerRef:
    name: my-ca-issuer
    kind: ClusterIssuer
  renewBefore: 120h0m0s
  secretName: cert-prefix-mdb-sh-config-cert
  usages:
    - server auth
    - client auth
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: mdb-sh-mongos-cert
spec:
  dnsNames:
    - "*.${MDB_NAMESPACE}.svc.cluster.local"
  duration: 240h0m0s
  issuerRef:
    name: my-ca-issuer
    kind: ClusterIssuer
  renewBefore: 120h0m0s
  secretName: cert-prefix-mdb-sh-mongos-cert
  usages:
    - server auth
    - client auth
EOF
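Before deploying the MongoDB resource, you may want to confirm that cert-manager has issued all six certificates. This verification loop is an assumption, not part of the original guide; it relies on cert-manager's standard Ready condition on Certificate resources.

```shell
# Hypothetical verification step (not in the original guide): list the expected
# Certificate names and, when kubectl is available, wait for each to be Ready.
certs="mdb-sh-cert mdb-sh-0-cert mdb-sh-1-cert mdb-sh-2-cert mdb-sh-config-cert mdb-sh-mongos-cert"
for cert in ${certs}; do
  echo "Checking certificate: ${cert}"
  if command -v kubectl >/dev/null 2>&1; then
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${MDB_NAMESPACE}" \
      wait --for=condition=Ready "certificate/${cert}" --timeout=120s \
      || echo "Certificate ${cert} is not Ready yet"
  fi
done
```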
2

Run the following command to deploy the custom resource.

kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${MDB_NAMESPACE}" -f - <<EOF
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: ${SC_RESOURCE_NAME}
spec:
  shardCount: 3
  # we don't specify mongodsPerShardCount, mongosCount and configServerCount as they don't make sense for multi-cluster
  topology: MultiCluster
  type: ShardedCluster
  version: ${MONGODB_VERSION}
  opsManager:
    configMapRef:
      name: mdb-org-project-config
  credentials: mdb-org-owner-credentials
  persistent: true
  backup:
    mode: enabled
  externalAccess: {}
  security:
    certsSecretPrefix: cert-prefix
    tls:
      ca: ca-issuer
    authentication:
      enabled: true
      modes: ["SCRAM"]
  mongos:
    clusterSpecList:
      - clusterName: ${K8S_CLUSTER_0_CONTEXT_NAME}
        members: 2
  configSrv:
    clusterSpecList:
      - clusterName: ${K8S_CLUSTER_0_CONTEXT_NAME}
        members: 3 # config server will have 3 members in main cluster
      - clusterName: ${K8S_CLUSTER_1_CONTEXT_NAME}
        members: 1 # config server will have additional non-voting, read-only member in this cluster
        memberConfig:
          - votes: 0
            priority: "0"
  shard:
    clusterSpecList:
      - clusterName: ${K8S_CLUSTER_0_CONTEXT_NAME}
        members: 3 # each shard will have 3 members in this cluster
      - clusterName: ${K8S_CLUSTER_1_CONTEXT_NAME}
        members: 1 # each shard will have additional non-voting, read-only member in this cluster
        memberConfig:
          - votes: 0
            priority: "0"
EOF
3

Run the following commands to confirm that all resources are up and running.

echo; echo "Waiting for MongoDB to reach Running phase..."
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${MDB_NAMESPACE}" wait --for=jsonpath='{.status.phase}'=Running "mdb/${SC_RESOURCE_NAME}" --timeout=900s
echo; echo "Pods running in cluster ${K8S_CLUSTER_0_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${MDB_NAMESPACE}" get pods
echo; echo "Pods running in cluster ${K8S_CLUSTER_1_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${MDB_NAMESPACE}" get pods
echo; echo "Pods running in cluster ${K8S_CLUSTER_2_CONTEXT_NAME}"
kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${MDB_NAMESPACE}" get pods
4

Run the following command to create a user and credentials in the sharded cluster.

kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${MDB_NAMESPACE}" -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: sc-user-password
type: Opaque
stringData:
  password: password
---
apiVersion: mongodb.com/v1
kind: MongoDBUser
metadata:
  name: sc-user
spec:
  passwordSecretKeyRef:
    name: sc-user-password
    key: password
  username: "sc-user"
  db: "admin"
  mongodbResourceRef:
    name: ${SC_RESOURCE_NAME}
  roles:
    - db: "admin"
      name: "root"
EOF
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" wait --for=jsonpath='{.status.phase}'=Updated -n "${MDB_NAMESPACE}" mdbu/sc-user --timeout=300s
5

Run the following commands to verify that the MongoDB resource in the sharded cluster is accessible.

external_ip="$(kubectl get --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${MDB_NAMESPACE}" svc "${SC_RESOURCE_NAME}-mongos-0-0-svc-external" -o=jsonpath="{.status.loadBalancer.ingress[0].ip}")"
mkdir -p certs
kubectl get --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${MDB_NAMESPACE}" cm/ca-issuer -o=jsonpath='{.data.ca-pem}' > certs/ca.crt
mongosh --host "${external_ip}" --username sc-user --password password --tls --tlsCAFile certs/ca.crt --tlsAllowInvalidHostnames --eval "db.runCommand({connectionStatus : 1})"
{
  authInfo: {
    authenticatedUsers: [ { user: 'sc-user', db: 'admin' } ],
    authenticatedUserRoles: [ { role: 'root', db: 'admin' } ]
  },
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1741702735, i: 1 }),
    signature: {
      hash: Binary.createFromBase64('kVqqNDHTI1zxYrPsU0QaYqyksJA=', 0),
      keyId: Long('7480555706358169606')
    }
  },
  operationTime: Timestamp({ t: 1741702735, i: 1 })
}
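As an optional follow-up (an assumption, not part of the original procedure), you can also confirm that all three shards registered with the cluster by running sh.status() through the same mongos endpoint. The shard names below assume the operator's <resource name>-<index> naming convention:

```shell
# Optional check (assumption, not in the original guide): the operator names
# shards <resource name>-<index>, so a three-shard cluster should report these:
for i in 0 1 2; do
  echo "expected shard: ${SC_RESOURCE_NAME:-mdb-sh}-${i}"
done
# When mongosh is available, list the shards actually registered with mongos.
if command -v mongosh >/dev/null 2>&1; then
  mongosh --host "${external_ip}" --username sc-user --password password \
    --tls --tlsCAFile certs/ca.crt --tlsAllowInvalidHostnames \
    --eval "sh.status()" || echo "sh.status() check failed"
fi
```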
