Elastic Stack and Filebeat
This section explains how to log to a Docker installation of the Elastic Stack (Elasticsearch, Logstash and Kibana), using Filebeat to send log contents to the stack. Log data is persisted in a Docker volume called "monitoring-data".
This configuration lets you experiment with local license server logging to the Elastic Stack. It is not recommended for production use, however, because it lacks the high availability, backup, and index-management capabilities that a production deployment requires.
This section is split into the following steps:
1. Preparing the Directory Structure
2. Preparing the Local License Server
3. Creating the “docker-compose” File
4. Creating the Elasticsearch Files
5. Adding a Logstash Configuration
6. Building the Elastic Stack
7. Preparing to Use Filebeat
8. Sending Log Entries to Logstash
Preparing the Directory Structure
This demo uses the following directory structure:
Directory structure

demo/
├── docker-compose.yml
├── elasticsearch/
│   ├── Dockerfile
│   └── elasticsearch.yml
├── filebeat/
├── server/
│   ├── flexnetls.jar
│   ├── producer-settings.xml
│   └── local-configuration.yaml
└── logstash.conf
Preparing the Local License Server
Follow the steps below to prepare the server to log to a Docker installation. These steps assume that Docker is installed.
To prepare the local license server for logging to a Docker installation:
1. Copy the local license server files (flexnetls.jar, producer-settings.xml, and local-configuration.yaml) into the server directory.

2. Configure the server for logging in JSON format. Add the following entry to the top section of local-configuration.yaml:

loggingStyle: JSON_ROLLOVER

This causes logs to be written in JSON format to the default location of $HOME/flexnetls/$PUBLISHER/logs.

3. Start the license server using the following command from the server directory:

$ java -jar flexnetls.jar

4. Confirm that log files with a .json extension are being created.
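A quick way to confirm step 4 is to count the .json files in the log directory. The sketch below assumes the default log location and the producer name “acme” used throughout this demo:

```shell
# Count the JSON log files in the server's log directory. The default
# location and the "acme" producer name are assumptions from this demo.
count_json_logs() {
  ls "$1"/*.json 2>/dev/null | wc -l
}

count_json_logs "$HOME/flexnetls/acme/logs"
```

A non-zero count indicates that JSON_ROLLOVER logging is active.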
Creating the “docker-compose” File
Use the sample below to create the docker-compose.yml file. Note that this demo uses the producer name “acme”.
docker-compose.yml

version: '2.2'
services:
  elasticsearch:
    build:
      context: elasticsearch/
    container_name: elasticsearch
    volumes:
      - monitoring-data:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx512m -Xms512m"
  logstash:
    image: docker.elastic.co/logstash/logstash:7.9.1
    container_name: logstash
    ports:
      - "5000:5000"
      - "5044:5044"
      - "12201:12201/udp"
    expose:
      - "5044/tcp"
      - "12201/udp"
    logging:
      driver: "json-file"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    volumes:
      - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf
    depends_on:
      - elasticsearch
  kibana:
    image: docker.elastic.co/kibana/kibana:7.9.1
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
volumes:
  monitoring-data:
    driver: local
Creating the Elasticsearch Files
Create the following files in the elasticsearch directory.
elasticsearch/Dockerfile

FROM elasticsearch:7.9.1
COPY ./elasticsearch.yml /usr/share/elasticsearch/config/elasticsearch.yml

# FileRealm user account, useful for startup polling.
RUN bin/elasticsearch-users useradd -r superuser -p esuser admin

RUN yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm \
    && yum update -y \
    && yum install -y jq \
    && yum upgrade
elasticsearch/elasticsearch.yml

cluster.name: "docker-cluster"
network.host: 0.0.0.0

# minimum_master_nodes need to be explicitly set when bound on a public IP
# set to 1 to allow single node clusters
# Details: https://github.com/elastic/elasticsearch/pull/17288
discovery.zen.minimum_master_nodes: 1

## Use single node discovery in order to disable production mode and avoid bootstrap checks
## see https://www.elastic.co/guide/en/elasticsearch/reference/current/bootstrap-checks.html
discovery.type: single-node

node.data: true
discovery.seed_hosts: []
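Because the Dockerfile above creates a FileRealm account (admin/esuser), the container can be polled for readiness. A rough sketch that extracts the cluster-health status with sed is shown below; the curl invocation in the comment assumes the stack is running on localhost (jq, installed by the Dockerfile, would work equally well inside the container):

```shell
# Extract the "status" field from an Elasticsearch cluster-health JSON
# document read from stdin (a sed-based sketch; jq would also work).
es_status() {
  sed 's/.*"status":"\([^"]*\)".*/\1/'
}

# Usage against the running container (admin/esuser comes from the Dockerfile):
#   curl -s -u admin:esuser http://localhost:9200/_cluster/health | es_status
```

A status of "yellow" or "green" indicates the single-node cluster is ready to accept data.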
Adding a Logstash Configuration
Add a Logstash configuration such as the following to the demo directory:
logstash.conf

input {
  beats {
    port => 5044
  }
}

output {
  stdout { codec => rubydebug }

  if ( "lls-logs" in [tags] ) {
    elasticsearch {
      hosts => ["elasticsearch:9200"]
      id => "lls-logs"
      index => "lls-logs-%{+YYYY.MM.dd}"
      codec => "json"
    }
  }
}
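The index setting in the elasticsearch output creates one index per day. For ad-hoc queries against Elasticsearch it can help to compute the current day's index name in the shell; the sketch below simply mirrors the %{+YYYY.MM.dd} pattern (Logstash applies the date in UTC, hence date -u):

```shell
# Build the name of the daily index that the Logstash output writes to,
# mirroring the lls-logs-%{+YYYY.MM.dd} pattern above (UTC, as Logstash uses).
today_index() {
  printf 'lls-logs-%s\n' "$(date -u +%Y.%m.%d)"
}

today_index
```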
Building the Elastic Stack
You can now build the Elastic Stack by executing this command in the demo directory:
$ docker-compose build
Preparing to Use Filebeat
Download and expand Filebeat into the filebeat directory. This tool sends JSON log entries to the Elastic Stack. You can obtain a copy from www.elastic.co/downloads/beats/filebeat. On Linux, you can also use your package manager (DEB, RPM) to install Filebeat; this installs it as a system service with systemd bindings, in which case the configuration can be found in /etc/filebeat/.
The Filebeat distribution contains an example filebeat.yml file. Replace it with this:
filebeat/filebeat.yml

filebeat.registry.path: ${HOME}/.filebeat-registry

filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false

filebeat.inputs:
- type: log
  json.keys_under_root: true
  json.overwrite_keys: true
  json.add_error_key: true
  encoding: utf-8
  tags: ["lls-logs"]
  index: "%{[agent.name]}-lls-%{+yyyy.MM.dd}"
  paths:
    - ${HOME}/flexnetls/acme/logs/*.json

output.logstash:
  hosts: ["localhost:5044"]
Note: This sample uses the producer name “acme”. In your file, replace “acme” with your own producer name, as found in producer-settings.xml.
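The producer name appears only in the paths glob, so the substitution can be made explicit with a small helper. This is an illustrative sketch; “acme” is the demo's producer name, and the path layout is the default described earlier:

```shell
# Build the Filebeat input glob for a given producer name, matching the
# paths setting above ("acme" in this demo).
lls_log_glob() {
  printf '%s/flexnetls/%s/logs/*.json\n' "$HOME" "$1"
}

lls_log_glob acme
```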
Sending Log Entries to Logstash
Perform the steps below to send log entries to Logstash. You can then view the entries in Kibana.
To send log entries to Logstash:
1. Bring up the Elastic Stack by executing this command from the demo directory:

$ docker-compose up -d

2. Start the license server (if it is not already running).

3. Start Filebeat to begin sending log entries to Logstash:

$ ./filebeat -e

4. Open Kibana in a browser by going to http://localhost:5601. On Kibana's home page, click Connect to your Elasticsearch index.

5. Create an index pattern of 'lls-logs-*'. When prompted, set '@timestamp' as the primary time field. For information on index patterns, see www.elastic.co/guide/en/kibana/current/tutorial-define-index.html.

6. Click Discover (in the grid menu); license server log entries should be displayed. Consult the Kibana documentation for information about searching the log entries.
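The same entries can also be read directly from Elasticsearch, bypassing Kibana. The sketch below builds the search URL; the curl line in the comment assumes the stack from docker-compose.yml is up and uses the admin/esuser account created in the Dockerfile:

```shell
# Build a search URL for the daily lls-logs indices; the size parameter
# limits the number of returned documents.
search_url() {
  printf 'http://localhost:9200/%s/_search?size=%s\n' "$1" "$2"
}

search_url 'lls-logs-*' 5
# e.g.  curl -s -u admin:esuser "$(search_url 'lls-logs-*' 5)"
```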