In this article we install Filebeat, a lightweight agent that collects log data from a Kubernetes environment (node and Pod logs) and forwards it to Elasticsearch. In addition, dedicated modules can be configured to parse and visualize the log formats of common applications and systems (databases, message buses, and so on).
Configuration
Like Metricbeat, Filebeat needs a settings file that configures the connection to Elasticsearch (endpoint, username, password), the connection to Kibana (to import the pre-built dashboards), and how logs are collected and parsed from every container in the Kubernetes environment.
The following ConfigMap contains all the settings required to capture the logs (see the Filebeat documentation for more options to customize this configuration).
filebeat.settings.configmap.yml
# filebeat.settings.configmap.yml
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: monitoring
  name: filebeat-config
  labels:
    app: filebeat
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: container
      paths:
        - /var/log/containers/*.log
      processors:
        - add_kubernetes_metadata:
            in_cluster: true
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"

    filebeat.modules:
      - module: system
        syslog:
          enabled: true
        auth:
          enabled: true

    filebeat.autodiscover:
      providers:
        - type: kubernetes
          templates:
            - condition.equals:
                kubernetes.labels.app: mongo
              config:
                - module: mongodb
                  enabled: true
                  log:
                    input:
                      type: docker
                      containers.ids:
                        - ${data.kubernetes.container.id}

    processors:
      - drop_event:
          when.or:
            - and:
                - regexp:
                    message: '^\d+\.\d+\.\d+\.\d+ '
                - equals:
                    fileset.name: error
            - and:
                - not:
                    regexp:
                      message: '^\d+\.\d+\.\d+\.\d+ '
                - equals:
                    fileset.name: access
      - add_cloud_metadata:
      - add_kubernetes_metadata:
      - add_docker_metadata:

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}

    setup.kibana:
      host: '${KIBANA_HOST:kibana}:${KIBANA_PORT:5601}'

    setup.dashboards.enabled: true
    setup.template.enabled: true

    setup.ilm:
      policy_file: /etc/indice-lifecycle.json
---
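In this configuration, the autodiscover provider applies the mongodb module only to containers of Pods labeled app: mongo, while the system module and the generic container input cover everything else. Any other Filebeat module can be wired in the same way. As an illustration only (it assumes nginx Pods labeled app: nginx, which are not part of this setup), an additional template could look like this:

# Hypothetical extra autodiscover template (illustration, not part of the manifests above):
# parse nginx access and error logs from Pods labeled app: nginx.
- condition.equals:
    kubernetes.labels.app: nginx
  config:
    - module: nginx
      access:
        input:
          type: docker
          containers.ids:
            - ${data.kubernetes.container.id}
      error:
        input:
          type: docker
          containers.ids:
            - ${data.kubernetes.container.id}

The access and error fileset names produced by such a template are exactly the ones the drop_event processor in the ConfigMap already filters on.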
We also configure the index lifecycle policy, loaded at startup, so that the index rolls over every day (or once it reaches 5 GB) and indices older than 30 days are deleted.
filebeat.indice-lifecycle.configmap.yml
# filebeat.indice-lifecycle.configmap.yml
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: monitoring
  name: filebeat-indice-lifecycle
  labels:
    app: filebeat
data:
  indice-lifecycle.json: |-
    {
      "policy": {
        "phases": {
          "hot": {
            "actions": {
              "rollover": {
                "max_size": "5GB",
                "max_age": "1d"
              }
            }
          },
          "delete": {
            "min_age": "30d",
            "actions": {
              "delete": {}
            }
          }
        }
      }
    }
---
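Because setup.ilm.policy_file points at this JSON document, Filebeat uploads it to Elasticsearch as its lifecycle policy during setup. As a quick sanity check, once the DaemonSet below is running you can read the policy back through the ILM API; this is a sketch that assumes the default policy name filebeat and the elasticsearch-client service referenced later in the DaemonSet:

# Forward the Elasticsearch client service to your workstation.
kubectl port-forward -n monitoring svc/elasticsearch-client 9200:9200

# In a second terminal, fetch the lifecycle policy Filebeat loaded
# (the policy name "filebeat" is the assumed default).
curl -u elastic:<elastic-password> "http://localhost:9200/_ilm/policy/filebeat?pretty"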
The following DaemonSet deploys the agent on every node of the k8s cluster so that it collects logs according to the settings configured above.
filebeat.daemonset.yml
# filebeat.daemonset.yml
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  namespace: monitoring
  name: filebeat
  labels:
    app: filebeat
spec:
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:7.6.2
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch-client.monitoring.svc.cluster.local
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          valueFrom:
            secretKeyRef:
              name: elasticsearch-pw-elastic
              key: password
        - name: KIBANA_HOST
          value: kibana.monitoring.svc.cluster.local
        - name: KIBANA_PORT
          value: "5601"
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: filebeat-indice-lifecycle
          mountPath: /etc/indice-lifecycle.json
          readOnly: true
          subPath: indice-lifecycle.json
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlog
          mountPath: /var/log
          readOnly: true
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: dockersock
          mountPath: /var/run/docker.sock
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: filebeat-indice-lifecycle
        configMap:
          defaultMode: 0600
          name: filebeat-indice-lifecycle
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: dockersock
        hostPath:
          path: /var/run/docker.sock
      - name: data
        emptyDir: {}
---
Finally, we need to grant Filebeat the permissions it needs to access certain cluster resources.
filebeat.permissions.yml
# filebeat.permissions.yml
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  namespace: monitoring
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: monitoring
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  namespace: monitoring
  name: filebeat
  labels:
    app: filebeat
rules:
- apiGroups: [""]
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: monitoring
  name: filebeat
  labels:
    app: filebeat
---
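After these objects are applied, a quick way to confirm that the binding works is to impersonate the filebeat service account with kubectl auth can-i (a sketch; it assumes your own user is allowed to impersonate service accounts):

# Both commands should print "yes" once the ClusterRoleBinding is in place.
kubectl auth can-i list pods --as=system:serviceaccount:monitoring:filebeat
kubectl auth can-i watch namespaces --as=system:serviceaccount:monitoring:filebeat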
Installation and results
We can now deploy Filebeat:
kubectl apply -f filebeat.settings.configmap.yml \
-f filebeat.indice-lifecycle.configmap.yml \
-f filebeat.daemonset.yml \
-f filebeat.permissions.yml
$ kubectl apply -f filebeat.settings.configmap.yml \
> -f filebeat.indice-lifecycle.configmap.yml \
> -f filebeat.daemonset.yml \
> -f filebeat.permissions.yml
configmap/filebeat-config created
configmap/filebeat-indice-lifecycle created
daemonset.apps/filebeat created
clusterrolebinding.rbac.authorization.k8s.io/filebeat created
clusterrole.rbac.authorization.k8s.io/filebeat created
serviceaccount/filebeat created
Once the filebeat Pods are running, you should be able to see the logs in Kibana.
kubectl get all -n monitoring -l app=filebeat
$ kubectl get all -n monitoring -l app=filebeat
NAME                 READY   STATUS    RESTARTS   AGE
pod/filebeat-k7qqk   1/1     Running   0          50s

NAME                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/filebeat   1         1         1       1            1           <none>          51s
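To confirm on the Elasticsearch side that Filebeat is actually shipping data, you can list its indices through the _cat API. This is only a sketch: it reuses the elasticsearch-pw-elastic secret and the elasticsearch-client service from the manifests above, and the exact index names depend on the Filebeat version and rollover state.

# Read the elastic user's password from the secret used by the DaemonSet.
PASSWORD=$(kubectl get secret elasticsearch-pw-elastic -n monitoring \
  -o jsonpath='{.data.password}' | base64 --decode)

# Forward Elasticsearch locally and list the Filebeat indices.
kubectl port-forward -n monitoring svc/elasticsearch-client 9200:9200 &
curl -s -u "elastic:${PASSWORD}" "http://localhost:9200/_cat/indices/filebeat-*?v"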
Now that Filebeat is up and running, you can explore the logs in several ways. From the left-hand menu in Kibana, click "Logs" to get an aggregated view of all the logs emitted by every node and container. You can filter the logs by any attribute attached to them (for example, Kubernetes labels) and navigate through time.
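The same label-based filtering can also be reproduced directly against the indices, which is handy for scripting. The sketch below reuses the port-forward and PASSWORD variable from the previous snippet and the app: mongo label from the autodiscover template; adjust the field and value to match your own workloads.

# Return the most recent log event from containers of Pods labeled app=mongo.
curl -s -u "elastic:${PASSWORD}" \
  "http://localhost:9200/filebeat-*/_search?q=kubernetes.labels.app:mongo&size=1&sort=@timestamp:desc&pretty"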
Filebeat also ships with pre-built dashboards that are imported into Kibana. Go to Dashboard and you should find a number of Filebeat dashboards available. Since we enabled the mongodb module, the “[Filebeat MongoDB] Overview ECS” dashboard is among them; it gives an overview of the state of MongoDB based on its logs (error rate).
Next steps
In the next step, we will show how to install and configure APM. For details, please read the article “运用 Elastic Stack 对 Kubernetes 进行监控 (五)”.