Solutions: Elastic SIEM - Security Protection for Home and Business (Part 3)

This article walks through installing and configuring Packetbeat and Auditbeat on Ubuntu OS, including how to modify the configuration files to enrich events, and how to view and analyze the collected data in Kibana. It also covers installing Filebeat and other Beats, and how the data is presented in SIEM.


This article is a continuation of our earlier series "Elastic SIEM - Security Protection for Home and Business". In today's article we continue with data ingestion: Auditbeat and Packetbeat. For the setup of our system, please refer to the previous article "Solutions: Elastic SIEM - Security Protection for Home and Business (Part 2)". In today's installation, we will install Auditbeat and Packetbeat on Ubuntu OS.

 

Installing Packetbeat

We can follow the article "Beats: How to Install Packetbeat" to install Packetbeat on Ubuntu OS. For consistency with our previous article "Solutions: Elastic SIEM - Security Protection for Home and Business (Part 2)", we need to add the corresponding processors to our packetbeat.yml file:

processors: 
  - add_host_metadata:
      netinfo.enabled: true
      geo: # These Geo configurations are optional
        location: 39.931854, 116.470528
        continent_name: Asia
        country_iso_code: CN
        region_name: Beijing
        region_iso_code: CN-BJ
        city_name: Beijing city
        name: myLocation
  - add_locale: ~ 
  - add_cloud_metadata: ~ 
  - add_fields: 
      when.network.source.ip: private 
      fields: 
        source.geo.location: 
          lat: 39.931854 
          lon: 116.470528
        source.geo.continent_name: Asia
        source.geo.country_iso_code: CN
        source.geo.region_name: Beijing
        source.geo.region_iso_code: CN-BJ
        source.geo.city_name: Beijing city
        source.geo.name: myLocation
      target: '' 
  - add_fields: 
      when.network.destination.ip: private 
      fields: 
        destination.geo.location: 
          lat: 39.931854 
          lon: 116.470528 
        destination.geo.continent_name: Asia
        destination.geo.country_iso_code: CN
        destination.geo.region_name: Beijing
        destination.geo.region_iso_code: CN-BJ
        destination.geo.city_name: Beijing city
        destination.geo.name: myLocation
      target: ''
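
Before restarting the Packetbeat service, it is worth checking that the added processors do not break the configuration. As a quick sanity check (assuming Packetbeat was installed from the DEB/RPM package, so its configuration lives in /etc/packetbeat), we can run:

sudo packetbeat test config
sudo packetbeat test output

The first command validates packetbeat.yml, and the second verifies that Packetbeat can reach the configured Elasticsearch output.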

I also set the name field in packetbeat.yml to ubuntu. The complete packetbeat.yml file then looks like this:

#################### Packetbeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The packetbeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/packetbeat/index.html

#============================== Network device ================================

# Select the network interface to sniff the data. On Linux, you can use the
# "any" keyword to sniff on all connected interfaces.
packetbeat.interfaces.device: any

#================================== Flows =====================================

# Set `enabled: false` or comment out all options to disable flows reporting.
packetbeat.flows:
  # Set network flow timeout. Flow is killed if no packet is received before being
  # timed out.
  timeout: 30s

  # Configure reporting period. If set to -1, only killed flows will be reported
  period: 10s

#========================== Transaction protocols =============================

packetbeat.protocols:
- type: icmp
  # Enable ICMPv4 and ICMPv6 monitoring. Default: false
  enabled: true

- type: amqp
  # Configure the ports where to listen for AMQP traffic. You can disable
  # the AMQP protocol by commenting out the list of ports.
  ports: [5672]

- type: cassandra
  #Cassandra port for traffic monitoring.
  ports: [9042]

- type: dhcpv4
  # Configure the DHCP for IPv4 ports.
  ports: [67, 68]

- type: dns
  # Configure the ports where to listen for DNS traffic. You can disable
  # the DNS protocol by commenting out the list of ports.
  ports: [53]

- type: http
  # Configure the ports where to listen for HTTP traffic. You can disable
  # the HTTP protocol by commenting out the list of ports.
  ports: [80, 8080, 8000, 5000, 8002]

- type: memcache
  # Configure the ports where to listen for memcache traffic. You can disable
  # the Memcache protocol by commenting out the list of ports.
  ports: [11211]

- type: mysql
  # Configure the ports where to listen for MySQL traffic. You can disable
  # the MySQL protocol by commenting out the list of ports.
  ports: [3306,3307]

- type: pgsql
  # Configure the ports where to listen for Pgsql traffic. You can disable
  # the Pgsql protocol by commenting out the list of ports.
  ports: [5432]

- type: redis
  # Configure the ports where to listen for Redis traffic. You can disable
  # the Redis protocol by commenting out the list of ports.
  ports: [6379]

- type: thrift
  # Configure the ports where to listen for Thrift-RPC traffic. You can disable
  # the Thrift-RPC protocol by commenting out the list of ports.
  ports: [9090]

- type: mongodb
  # Configure the ports where to listen for MongoDB traffic. You can disable
  # the MongoDB protocol by commenting out the list of ports.
  ports: [27017]

- type: nfs
  # Configure the ports where to listen for NFS traffic. You can disable
  # the NFS protocol by commenting out the list of ports.
  ports: [2049]

- type: tls
  # Configure the ports where to listen for TLS traffic. You can disable
  # the TLS protocol by commenting out the list of ports.
  ports:
    - 443   # HTTPS
    - 993   # IMAPS
    - 995   # POP3S
    - 5223  # XMPP over SSL

    - 8443
    - 8883  # Secure MQTT
    - 9243  # Elasticsearch

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
name: "ubuntu"

# The tags of the shipper are included in their own field with each
# transaction published.
tags: ["ubuntu", "HomePC"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging


#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "192.168.43.220:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#============================= Elastic Cloud ==================================

# These settings simplify using Packetbeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["192.168.43.220:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors: 
  - add_host_metadata:
      netinfo.enabled: true
      geo: # These Geo configurations are optional
        location: 39.931854, 116.470528
        continent_name: Asia
        country_iso_code: CN
        region_name: Beijing
        region_iso_code: CN-BJ
        city_name: Beijing city
        name: myLocation
  - add_locale: ~ 
  - add_cloud_metadata: ~ 
  - add_fields: 
      when.network.source.ip: private 
      fields: 
        source.geo.location: 
          lat: 39.931854 
          lon: 116.470528
        source.geo.continent_name: Asia
        source.geo.country_iso_code: CN
        source.geo.region_name: Beijing
        source.geo.region_iso_code: CN-BJ
        source.geo.city_name: Beijing city
        source.geo.name: myLocation
      target: '' 
  - add_fields: 
      when.network.destination.ip: private 
      fields: 
        destination.geo.location: 
          lat: 39.931854 
          lon: 116.470528 
        destination.geo.continent_name: Asia
        destination.geo.country_iso_code: CN
        destination.geo.region_name: Beijing
        destination.geo.region_iso_code: CN-BJ
        destination.geo.city_name: Beijing city
        destination.geo.name: myLocation
      target: ''

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#============================== X-Pack Monitoring ===============================
# packetbeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
monitoring.enabled: true

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Packetbeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

#================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true
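
With the configuration saved, we still need to load the index template and the sample dashboards and then start the service. A minimal sketch, assuming a DEB/RPM install managed as a system service:

sudo packetbeat setup
sudo service packetbeat start

packetbeat setup loads the index template and the Kibana dashboards; the second command starts shipping packet data to Elasticsearch.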

If our Packetbeat is installed correctly, we can see a screen like the following in the Kibana Dashboard:

Installing Auditbeat

We can follow our previous article "Beats: Introduction to Using Auditbeat in Elastic" to install Auditbeat on Ubuntu OS. For the Auditbeat installation we need to make one small change: in the /etc/auditbeat/audit.rules.d directory, enable the default sample rules by renaming the file:

cd /etc/auditbeat/audit.rules.d
sudo mv sample-rules.conf.disabled rules.conf

The contents of these rules are as follows:

## If you are on a 64 bit platform, everything should be running
## in 64 bit mode. This rule will detect any use of the 32 bit syscalls
## because this might be a sign of someone exploiting a hole in the 32
## bit API.
-a always,exit -F arch=b32 -S all -F key=32bit-abi

## Executions.
-a always,exit -F arch=b64 -S execve,execveat -k exec

## Identity changes.
-w /etc/group -p wa -k identity
-w /etc/passwd -p wa -k identity
-w /etc/gshadow -p wa -k identity

## Unauthorized access attempts.
-a always,exit -F arch=b64 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EACCES -k access
-a always,exit -F arch=b64 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EPERM -k access
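
To double-check that Auditbeat will actually pick up these rules, we can ask it to print the auditd rules it resolves from its configuration (this assumes a package install with the configuration under /etc/auditbeat and a recent Auditbeat version that ships the show subcommand):

sudo auditbeat show auditd-rules

If the renamed rules file is found, the rules above should appear in the output.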

For consistency with our previous article "Solutions: Elastic SIEM - Security Protection for Home and Business (Part 2)", we need to add the corresponding processors to our auditbeat.yml file. The modified auditbeat.yml file is:

###################### Auditbeat Configuration Example #########################

# This is an example configuration file highlighting only the most common
# options. The auditbeat.reference.yml file from the same directory contains all
# the supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/auditbeat/index.html

#==========================  Modules configuration =============================
auditbeat.modules:

- module: auditd
  # Load audit rules from separate files. Same format as audit.rules(7).
  audit_rule_files: [ '${path.config}/audit.rules.d/*.conf' ]
  audit_rules: |
    ## Define audit rules here.
    ## Create file watches (-w) or syscall audits (-a or -A). Uncomment these
    ## examples or add your own rules.

    ## If you are on a 64 bit platform, everything should be running
    ## in 64 bit mode. This rule will detect any use of the 32 bit syscalls
    ## because this might be a sign of someone exploiting a hole in the 32
    ## bit API.
    #-a always,exit -F arch=b32 -S all -F key=32bit-abi

    ## Executions.
    #-a always,exit -F arch=b64 -S execve,execveat -k exec

    ## External access (warning: these can be expensive to audit).
    #-a always,exit -F arch=b64 -S accept,bind,connect -F key=external-access

    ## Identity changes.
    #-w /etc/group -p wa -k identity
    #-w /etc/passwd -p wa -k identity
    #-w /etc/gshadow -p wa -k identity

    ## Unauthorized access attempts.
    #-a always,exit -F arch=b64 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EACCES -k access
    #-a always,exit -F arch=b64 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EPERM -k access

- module: file_integrity
  paths:
  - /bin
  - /usr/bin
  - /sbin
  - /usr/sbin
  - /etc

- module: system
  datasets:
    - host    # General host information, e.g. uptime, IPs
    - login   # User logins, logouts, and system boots.
    - package # Installed, updated, and removed packages
    - process # Started and stopped processes
    - socket  # Opened and closed sockets
    - user    # User information

  # How often datasets send state updates with the
  # current state of the system (e.g. all currently
  # running processes, all open sockets).
  state.period: 12h

  # Enabled by default. Auditbeat will read password fields in
  # /etc/passwd and /etc/shadow and store a hash locally to
  # detect any changes.
  user.detect_password_changes: true

  # File patterns of the login record files.
  login.wtmp_file_pattern: /var/log/wtmp*
  login.btmp_file_pattern: /var/log/btmp*

#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
name: ubuntu

# The tags of the shipper are included in their own field with each
# transaction published.
tags: ["ubuntu", "HomePC"] 

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging


#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "192.168.43.220:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#============================= Elastic Cloud ==================================

# These settings simplify using Auditbeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["192.168.43.220:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors: 
  - add_host_metadata:
      netinfo.enabled: true
      geo: # These Geo configurations are optional
        location: 39.931854, 116.470528
        continent_name: Asia
        country_iso_code: CN
        region_name: Beijing
        region_iso_code: CN-BJ
        city_name: Beijing city
        name: myLocation
  - add_locale: ~ 
  - add_cloud_metadata: ~ 
  - add_fields: 
      when.network.source.ip: private 
      fields: 
        source.geo.location: 
          lat: 39.931854 
          lon: 116.470528
        source.geo.continent_name: Asia
        source.geo.country_iso_code: CN
        source.geo.region_name: Beijing
        source.geo.region_iso_code: CN-BJ
        source.geo.city_name: Beijing city
        source.geo.name: myLocation
      target: '' 
  - add_fields: 
      when.network.destination.ip: private 
      fields: 
        destination.geo.location: 
          lat: 39.931854 
          lon: 116.470528 
        destination.geo.continent_name: Asia
        destination.geo.country_iso_code: CN
        destination.geo.region_name: Beijing
        destination.geo.region_iso_code: CN-BJ
        destination.geo.city_name: Beijing city
        destination.geo.name: myLocation
      target: ''

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#============================== X-Pack Monitoring ===============================
# auditbeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
monitoring.enabled: true

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Auditbeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

#================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true
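
As with Packetbeat, once auditbeat.yml is in place we load the dashboards and start the service (again assuming a DEB/RPM install managed as a system service):

sudo auditbeat setup
sudo service auditbeat start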

Once our Auditbeat is up and running, we can see a screen like the following in the Kibana Dashboard:

Installing Filebeat and Other Beats

In many cases we also need to install additional Beats according to our own needs, such as Filebeat. We can open our Kibana:

We click the Add events button:

The page above shows many use cases. We can follow the instructions there and install whatever our own requirements call for. In most cases this means installing Filebeat and enabling the corresponding modules:
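
As a concrete example, shipping the Ubuntu system logs with Filebeat typically boils down to the following sketch. It assumes Filebeat has already been installed from the Elastic repository and that its output.elasticsearch section, like the other Beats above, points at 192.168.43.220:9200:

sudo filebeat modules enable system
sudo filebeat setup
sudo service filebeat start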

 

Displaying Data in SIEM

I open the SIEM app in Kibana:

I can see data from three sources at the same time: Auditbeat, Winlogbeat, and Packetbeat.

We click the Hosts tab:

We see three hosts. The reason is that along the way I changed the name field in the yml configuration files from the default value liuxg to ubuntu:

 

When we click the ubuntu hyperlink above, we can see the following screen:

Above, we can see statistics about this ubuntu host. We can dig in further through the Authentications, Uncommon processes, Events, and External alerts tabs below:

Since we added Packetbeat and Auditbeat, we also get the Network view:

It shows the location of our computer as well as all of the Network events.

If we click "Detections" above, we will see the following error:

If we click the link above, we can see that it places certain requirements on our configuration.

We reconfigure our stack according to those requirements:

  • In elasticsearch.yml, add xpack.security.enabled and set it to true, that is:
    • xpack.security.enabled: true
    • Under a Basic license, we must also set xpack.security.transport.ssl.enabled to true, otherwise Elasticsearch will not start.
  • Since Elasticsearch security is now enabled, we must follow "Elasticsearch: Setting Up Elastic Account Security" to set up the account passwords.
  • We must also configure kibana.yml to enable secure access (see the sketch after this list).
  • We must also configure the Elasticsearch username and password in each of our Beats, and then restart the Beats:
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["192.168.43.220:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  username: "elastic"
  password: "changeme"
  • In kibana.yml, set xpack.encryptedSavedObjects.encryptionKey: 'fhjskloppd678ehkdfdlliverpoolfcr'
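
For reference, here is a minimal sketch of the security-related settings described above. The user name and password are placeholders that must match the accounts created when setting up account security; the host matches the setup used throughout this series.

In elasticsearch.yml:

xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true

In kibana.yml:

elasticsearch.hosts: ["http://192.168.43.220:9200"]
elasticsearch.username: "kibana"
elasticsearch.password: "changeme"
xpack.encryptedSavedObjects.encryptionKey: 'fhjskloppd678ehkdfdlliverpoolfcr'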

After making the changes above, we click "Detections" again and can see the following screen:

That is all for today. In the next article, we will focus on how to create a rule. Please read "Solutions: Elastic SIEM - Security Protection for Home and Business (Part 4)".
