Elastic: Monitoring Kubernetes with the Elastic Stack (Part 3)

Metricbeat is a lightweight shipper installed on servers to periodically collect metrics from the host and from running services. It represents the first pillar of observability for monitoring our stack.

By default, Metricbeat captures system metrics, but it also includes a large number of modules for capturing service-specific metrics, such as proxies (NGINX), message buses (RabbitMQ, Kafka), databases (MongoDB, MySQL, Redis), and many others (find the complete list here).


Prerequisite: kube-state-metrics


First, we need to install kube-state-metrics, a service that listens to the Kubernetes API in order to expose a set of useful metrics about the state of every object.

To install kube-state-metrics, simply run the following command:

$ kubectl apply -f https://raw.githubusercontent.com/gjeanmart/kauri-content/master/spring-boot-simple/k8s/kube-state-metrics.yml
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics created
clusterrole.rbac.authorization.k8s.io/kube-state-metrics created
deployment.apps/kube-state-metrics created
serviceaccount/kube-state-metrics created
service/kube-state-metrics created

For developers in China: if you run into problems with the command above (for example, the URL is unreachable), you can first download the yaml file locally and then run the same command against it:

kubectl apply -f kube-state-metrics.yml
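kube-state-metrics serves plain-text Prometheus-format metrics on port 8080 (the same endpoint the kubernetes module polls later in this article). As a rough idea of what that output looks like, here is a minimal Python sketch that parses a few sample lines; the sample values are illustrative, not real cluster output:

```python
# Minimal parser for Prometheus text-format metrics, as exposed by
# kube-state-metrics on :8080/metrics. SAMPLE is illustrative only.
import re

SAMPLE = """\
# HELP kube_pod_status_phase The pods current phase.
# TYPE kube_pod_status_phase gauge
kube_pod_status_phase{namespace="default",pod="mongo-0",phase="Running"} 1
kube_pod_status_phase{namespace="default",pod="mongo-0",phase="Pending"} 0
"""

LINE = re.compile(r'^(\w+)\{([^}]*)\}\s+(\S+)$')

def parse_metrics(text):
    """Return a list of (name, labels-dict, float value) tuples."""
    out = []
    for line in text.splitlines():
        if line.startswith('#') or not line.strip():
            continue  # skip HELP/TYPE comments and blank lines
        m = LINE.match(line)
        if not m:
            continue
        name, raw_labels, value = m.groups()
        labels = dict(kv.split('=', 1) for kv in raw_labels.split(','))
        labels = {k: v.strip('"') for k, v in labels.items()}
        out.append((name, labels, float(value)))
    return out

metrics = parse_metrics(SAMPLE)
running = [m for m in metrics
           if m[1].get('phase') == 'Running' and m[2] == 1.0]
print(len(running))  # 1 pod reported Running
```

This toy parser ignores edge cases (escaped quotes, commas inside label values); in practice Metricbeat's kubernetes module does this collection for us.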

Configuration

To install Metricbeat in a Kubernetes environment, we need to deploy a DaemonSet (so that the shipper runs on every node) and configure its settings.

First, we write the Metricbeat configuration into a metricbeat.yml file, which will be mounted at /etc/metricbeat.yml inside the DaemonSet pod containers.

This file contains our Metricbeat settings: the Elasticsearch connection (endpoint, username, password) as the output, the Kibana connection (to import pre-existing dashboards), the modules to enable during collection, and the index lifecycle file (rollover, retention).

metricbeat.settings.configmap.yml

# metricbeat.settings.configmap.yml
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: monitoring
  name: metricbeat-config
  labels:
    app: metricbeat
data:
  metricbeat.yml: |-

    # Configure modules
    metricbeat.modules:
      - module: system
        period: ${PERIOD}
        metricsets: ["cpu", "load", "memory", "network", "process", "process_summary", "core", "diskio", "socket"]
        processes: ['.*']
        process.include_top_n:
          by_cpu: 5      # include top 5 processes by CPU
          by_memory: 5   # include top 5 processes by memory

      - module: system
        period: ${PERIOD}
        metricsets:  ["filesystem", "fsstat"]
        processors:
        - drop_event.when.regexp:
            system.filesystem.mount_point: '^/(sys|cgroup|proc|dev|etc|host|lib)($|/)'

      - module: docker
        period: ${PERIOD}
        hosts: ["unix:///var/run/docker.sock"]
        metricsets: ["container", "cpu", "diskio", "healthcheck", "info", "memory", "network"]

      - module: kubernetes
        period: ${PERIOD}
        host: ${NODE_NAME}
        hosts: ["localhost:10255"]
        metricsets: ["node", "system", "pod", "container", "volume"]

      - module: kubernetes
        period: ${PERIOD}
        host: ${NODE_NAME}
        metricsets: ["state_node", "state_deployment", "state_replicaset", "state_pod", "state_container"]
        hosts: ["kube-state-metrics.kube-system.svc.cluster.local:8080"]

    # Configure specific service module based on k8s deployment
    metricbeat.autodiscover:
      providers:
        - type: kubernetes
          host: ${NODE_NAME}
          templates:
            - condition.equals:
                kubernetes.labels.app: mongo
              config:
                - module: mongodb
                  period: ${PERIOD}
                  hosts: ["mongo.default:27017"]
                  metricsets: ["dbstats", "status", "collstats", "metrics", "replstatus"]

    # Connection to ElasticSearch
    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}

    # Connection to Kibana to import pre-existing dashboards
    setup.kibana:
      host: '${KIBANA_HOST:kibana}:${KIBANA_PORT:5601}'

    # Import pre-existing dashboards
    setup.dashboards.enabled: true

    # Configure indice lifecycle
    setup.ilm:
      policy_file: /etc/indice-lifecycle.json
---
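The drop_event processor in the system/filesystem module above relies on the regexp `^/(sys|cgroup|proc|dev|etc|host|lib)($|/)` to discard filesystem events for system mount points. A quick Python sketch shows which mount points it matches (the sample mount points are illustrative):

```python
# Check which mount points the drop_event regexp above would discard.
import re

pattern = re.compile(r'^/(sys|cgroup|proc|dev|etc|host|lib)($|/)')

mount_points = ['/', '/proc', '/sys/fs/cgroup', '/dev/shm', '/data', '/var/lib']
dropped = [mp for mp in mount_points if pattern.search(mp)]
print(dropped)  # ['/proc', '/sys/fs/cgroup', '/dev/shm']
```

Note that `/` itself and application mounts like `/data` are kept, so we still get disk usage for the volumes we actually care about.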

An Elasticsearch index lifecycle applies a set of rules to an index based on its size or age. For example, it is possible to roll over the index (create a new one) every day or every time it exceeds 1 GB, and we can also configure different phases based on rules (hot for the active read/write index, cold for read-only, delete to remove the index). Monitoring can generate a large volume of data, perhaps more than 10 GB per day, so to avoid spending a fortune on cloud storage we can easily configure data retention with an index lifecycle policy.

In the file below, we configure the index to roll over every day, or whenever it exceeds 2 GB, and to delete all indices older than 30 days. In other words, we keep only 30 days of monitoring data.

metricbeat.indice-lifecycle.configmap.yml

# metricbeat.indice-lifecycle.configmap.yml
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: monitoring
  name: metricbeat-indice-lifecycle
  labels:
    app: metricbeat
data:
  indice-lifecycle.json: |-
    {
      "policy": {
        "phases": {
          "hot": {
            "actions": {
              "rollover": {
                "max_size": "2GB" ,
                "max_age": "1d"
              }
            }
          },
          "delete": {
            "min_age": "30d",
            "actions": {
              "delete": {}
            }
          }
        }
      }
    }
---
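The policy above boils down to two rules: roll over the active index once it is a day old or larger than 2 GB, and delete any index older than 30 days. A small Python sketch of that decision logic (the thresholds mirror indice-lifecycle.json; the function names are illustrative, not an Elasticsearch API):

```python
# Illustrative model of the ILM policy above: rollover in the hot phase,
# deletion after min_age. Thresholds mirror indice-lifecycle.json.
GB = 1024 ** 3

def should_rollover(size_bytes, age_days, max_size_bytes=2 * GB, max_age_days=1):
    """Hot phase: roll over when EITHER threshold is exceeded."""
    return size_bytes > max_size_bytes or age_days > max_age_days

def should_delete(age_days, min_age_days=30):
    """Delete phase: remove indices older than min_age."""
    return age_days > min_age_days

print(should_rollover(size_bytes=3 * GB, age_days=0))    # True: over 2GB
print(should_rollover(size_bytes=1 * GB, age_days=0.5))  # False: both under
print(should_delete(age_days=45))                        # True: past retention
```

The key point is that rollover conditions are OR-ed: hitting either max_size or max_age triggers a new index, which keeps any single index small enough to manage.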

The next part is the DaemonSet, which describes the Metricbeat agent deployed on every node of the k8s cluster. Note in particular the environment variables and the volumes used to access the ConfigMaps.

metricbeat.daemonset.yml

# metricbeat.daemonset.yml
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  namespace: monitoring
  name: metricbeat
  labels:
    app: metricbeat
spec:
  selector:
    matchLabels:
      app: metricbeat  
  template:
    metadata:
      labels:
        app: metricbeat
    spec:
      serviceAccountName: metricbeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: metricbeat
        image: docker.elastic.co/beats/metricbeat:7.6.2
        args: [
          "-c", "/etc/metricbeat.yml",
          "-e",
          "-system.hostfs=/hostfs",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch-client.monitoring.svc.cluster.local
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          valueFrom:
            secretKeyRef:
              name: elasticsearch-pw-elastic
              key: password
        - name: KIBANA_HOST
          value: kibana.monitoring.svc.cluster.local
        - name: KIBANA_PORT
          value: "5601"
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: PERIOD
          value: "10s"
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/metricbeat.yml
          readOnly: true
          subPath: metricbeat.yml
        - name: indice-lifecycle
          mountPath: /etc/indice-lifecycle.json
          readOnly: true
          subPath: indice-lifecycle.json
        - name: dockersock
          mountPath: /var/run/docker.sock
        - name: proc
          mountPath: /hostfs/proc
          readOnly: true
        - name: cgroup
          mountPath: /hostfs/sys/fs/cgroup
          readOnly: true
      volumes:
      - name: proc
        hostPath:
          path: /proc
      - name: cgroup
        hostPath:
          path: /sys/fs/cgroup
      - name: dockersock
        hostPath:
          path: /var/run/docker.sock
      - name: config
        configMap:
          defaultMode: 0600
          name: metricbeat-config
      - name: indice-lifecycle
        configMap:
          defaultMode: 0600
          name: metricbeat-indice-lifecycle
      - name: data
        hostPath:
          path: /var/lib/metricbeat-data
          type: DirectoryOrCreate
---
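The metricbeat.yml shown earlier references these environment variables with Beats' `${VAR:default}` syntax; for example, `${ELASTICSEARCH_HOST:elasticsearch}` falls back to `elasticsearch` when the DaemonSet does not set the variable. A minimal Python sketch of that fallback resolution (the resolver here is illustrative, not Beats' actual parser):

```python
# Illustrative resolver for Beats-style ${VAR:default} placeholders,
# as used in metricbeat.yml above. Not Beats' actual implementation.
import re

def resolve(template, env):
    """Replace ${NAME:default} / ${NAME} using env, falling back to default."""
    def sub(m):
        name, default = m.group(1), m.group(2)
        return env.get(name, default if default is not None else '')
    return re.sub(r'\$\{(\w+)(?::([^}]*))?\}', sub, template)

env = {'ELASTICSEARCH_PORT': '9200'}  # HOST deliberately left unset
line = '${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}'
print(resolve(line, env))  # elasticsearch:9200
```

This is why the DaemonSet can omit a variable without breaking the config: the inline default after the colon takes over.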

The last part is a more generic one: it grants the Metricbeat agent access to the k8s resources it needs.

metricbeat.permissions.yml

# metricbeat.permissions.yml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metricbeat
subjects:
- kind: ServiceAccount
  name: metricbeat
  namespace: monitoring
roleRef:
  kind: ClusterRole
  name: metricbeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: metricbeat
  labels:
    app: metricbeat
rules:
- apiGroups: [""]
  resources:
  - nodes
  - namespaces
  - events
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups: ["extensions"]
  resources:
  - replicasets
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources:
  - statefulsets
  - deployments
  verbs: ["get", "list", "watch"]
- apiGroups:
  - ""
  resources:
  - nodes/stats
  verbs:
  - get
---
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: monitoring
  name: metricbeat
  labels:
    app: metricbeat
---
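The ClusterRole above is just a list of rules, each allowing a set of verbs on a set of resources within an API group. A tiny Python sketch of how such rules are evaluated (a simplification of Kubernetes' real RBAC authorizer; the rule data mirrors metricbeat.permissions.yml):

```python
# Simplified model of ClusterRole rule matching, mirroring the rules
# in metricbeat.permissions.yml. Not the real Kubernetes authorizer.
RULES = [
    {'apiGroups': [''], 'resources': ['nodes', 'namespaces', 'events', 'pods'],
     'verbs': ['get', 'list', 'watch']},
    {'apiGroups': ['extensions'], 'resources': ['replicasets'],
     'verbs': ['get', 'list', 'watch']},
    {'apiGroups': ['apps'], 'resources': ['statefulsets', 'deployments'],
     'verbs': ['get', 'list', 'watch']},
    {'apiGroups': [''], 'resources': ['nodes/stats'], 'verbs': ['get']},
]

def allowed(api_group, resource, verb, rules=RULES):
    """True if any rule grants this verb on this resource (RBAC is allow-only)."""
    return any(api_group in r['apiGroups']
               and resource in r['resources']
               and verb in r['verbs'] for r in rules)

print(allowed('', 'pods', 'list'))        # True
print(allowed('', 'nodes/stats', 'get'))  # True
print(allowed('', 'secrets', 'get'))      # False: never granted
```

Since RBAC has no deny rules, anything not matched by some rule (like secrets here) is simply refused, which is why the kubernetes module fails with authorization errors if this manifest is skipped.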

Installation and results

Now we can deploy Metricbeat:

$ kubectl apply  -f metricbeat.settings.configmap.yml \
>                  -f metricbeat.indice-lifecycle.configmap.yml \
>                  -f metricbeat.daemonset.yml \
>                  -f metricbeat.permissions.yml
configmap/metricbeat-config created
configmap/metricbeat-indice-lifecycle created
clusterrolebinding.rbac.authorization.k8s.io/metricbeat created
clusterrole.rbac.authorization.k8s.io/metricbeat created
serviceaccount/metricbeat created

Once the metricbeat pod is running, you should be able to observe metrics in Kibana.

$ kubectl get all -n monitoring -l app=metricbeat
NAME                   READY   STATUS    RESTARTS   AGE
pod/metricbeat-gtwsr   1/1     Running   0          118s

NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/metricbeat   1         1         1       1            1           <none>          118s

In the settings, we set the property setup.dashboards.enabled to true in order to import pre-existing dashboards. Go to "Dashboard" in the left-hand menu and you should see a list of about 50 Metricbeat dashboards.

We enabled the kubernetes module, so the dashboard [Metricbeat Kubernetes] Overview ECS should appear in the list:

Click the [Metricbeat Kubernetes] Overview ECS link:

We also enabled the mongodb module, so now take a look at the [Metricbeat MongoDB] Overview ECS dashboard:

 

Next steps

In the next article, we will detail how to install and configure Filebeat. Please read "Monitoring Kubernetes with the Elastic Stack (Part 4)".
