ELK stack 5.x [Swipe-SOA]
Jan 20, 2017


1. NGINX

Installation and configuration

sudo apt update
sudo apt install nginx

Use openssl to create an admin user, called kibanaadmin (you should pick a different name), that can access the Kibana web interface:

sudo -v
echo "kibanaadmin:`openssl passwd -apr1`" | sudo tee -a /etc/nginx/htpasswd.users
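You will be prompted to enter and confirm a password; the command appends a kibanaadmin:<hash> entry to /etc/nginx/htpasswd.users. To verify it was written:

sudo cat /etc/nginx/htpasswd.users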

Replace the default Nginx server block in /etc/nginx/sites-available/default:

server {
    listen 80 default_server;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl default_server;

    # Replace with your hostname
    server_name elk.company.com;

    ssl on;

    # Replace with your paths to certs
    ssl_certificate /path/to/keys/cert.crt;
    ssl_certificate_key /path/to/keys/key.key;

    location / {
        auth_basic "Restricted Access";
        auth_basic_user_file /etc/nginx/htpasswd.users;

        proxy_pass http://localhost:5601;
    }

    # Path for letsencrypt temporary files
    location /.well-known {
        root   /var/www/html;
    }
}

Generate certificates

Self-signed [not recommended]:

sudo mkdir /path/to/keys
cd /path/to/keys
sudo openssl req -x509 -nodes -newkey rsa:4096 -keyout key.key -out cert.crt -days 365

sudo nginx -t
sudo service nginx restart
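A quick smoke test of the proxy and basic auth (-k skips validation of the self-signed cert; expect a 502 until Kibana is installed and running):

curl -k -I -u kibanaadmin https://elk.company.com/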

Letsencrypt:

sudo add-apt-repository ppa:certbot/certbot
sudo apt-get update
sudo apt-get install certbot

# Replace with your webroot and hostname
certbot certonly --webroot -w /var/www/html -d elk.company.com

# Certbot will generate certs and show path to them (paste this path to web-server config)

Add a cron task to renew the Let's Encrypt certs automatically:

sudo crontab -e

# Check and renew certs twice per week (at 12:00 on Tuesday and Saturday)
0 12 * * 2,6 certbot renew --post-hook "systemctl reload nginx"
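You can test the renewal procedure without touching the real certificates:

sudo certbot renew --dry-run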

2. Elastic apt-repos

Add the Elastic apt repositories. Run the following commands one by one:

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
sudo apt install apt-transport-https
echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-5.x.list
sudo apt update
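To confirm the repository is now visible to apt (the exact 5.x version listed may differ):

apt-cache policy elasticsearch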

3. Elasticsearch

Elasticsearch (receives input messages from Logstash and stores them).

Installation

Note

Elasticsearch requires Java 8. Java 9 is not supported. Use the official Oracle distribution or an open-source distribution such as OpenJDK.

Install Java 8:

# Ubuntu 16.04
sudo apt install openjdk-8-jre

# Ubuntu 14.04
# use any method of Java 8 installation

Install Elasticsearch from repos:

sudo apt install elasticsearch

Elasticsearch will be installed in /usr/share/elasticsearch.

Note

Add Elasticsearch to autorun – https://www.elastic.co/guide/en/elasticsearch/reference/5.1/deb.html#_sysv_literal_init_literal_vs_literal_systemd_literal

  • systemd [Ubuntu 16.04]:

    sudo /bin/systemctl daemon-reload
    sudo /bin/systemctl enable elasticsearch.service
    
  • init [Ubuntu 14.04]:

    sudo update-rc.d elasticsearch defaults 95 10
    

Configuration

Note

Elasticsearch will assign the entire heap specified in /etc/elasticsearch/jvm.options via the Xms (minimum heap size) and Xmx (maximum heap size) settings.
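For example, to run Elasticsearch with a fixed 2 GB heap (the size is illustrative; Xms and Xmx should be set to the same value), set in /etc/elasticsearch/jvm.options:

-Xms2g
-Xmx2g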

Main config-file /etc/elasticsearch/elasticsearch.yml:

# Path to the directory where data is stored (separate multiple locations with commas)
path.data: /path/to/data

# Use a descriptive name for the node:
node.name: node-1

Start Elasticsearch:

sudo service elasticsearch start
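Check that the node responds (it may take a few seconds to start; the names and version in the reply will differ):

curl http://localhost:9200/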

4. Kibana

Kibana (visualises data from Elasticsearch).

Installation

Note

The full, up-to-date instructions are here – https://www.elastic.co/guide/en/kibana/current/deb.html

Install Kibana from repos:

sudo apt install kibana

Note

Add Kibana to autorun – https://www.elastic.co/guide/en/kibana/current/deb.html#_sysv_literal_init_literal_vs_literal_systemd_literal

  • systemd [Ubuntu 16.04]:

    sudo /bin/systemctl daemon-reload
    sudo /bin/systemctl enable kibana.service
    
  • init [Ubuntu 14.04]:

    sudo update-rc.d kibana defaults 95 10
    

Configuration

Note

Kibana loads its configuration from the /etc/kibana/kibana.yml file by default.

Change main parameters in /etc/kibana/kibana.yml:

# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# Bind to localhost only; remote users connect through the Nginx reverse proxy
server.host: "localhost"

# Elasticsearch URL
elasticsearch.url: "http://localhost:9200"

Start Kibana:

sudo service kibana start
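Verify that Kibana is up (it can take ten seconds or so to start; in 5.x the status API should return JSON):

curl http://localhost:5601/api/status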

5. Logstash

Logstash (receives input messages, filters them, and sends them to Elasticsearch).

Installation

Note

Logstash requires Java 8. Java 9 is not supported. Use the official Oracle distribution or an open-source distribution such as OpenJDK.

Install Logstash from repos:

sudo apt install logstash

Note

Add Logstash to autorun

  • systemd [Ubuntu 16.04]:

    sudo /bin/systemctl daemon-reload
    sudo /bin/systemctl enable logstash.service
    
  • init [Ubuntu 14.04]:

    sudo update-rc.d logstash defaults 95 10
    

Configuration

Note

Logstash will assign the entire heap specified in /etc/logstash/jvm.options via the Xms (minimum heap size) and Xmx (maximum heap size) settings.

Create a config for certificate generation, /etc/logstash/ssl/ssl.conf:

[ req ]
distinguished_name = req_distinguished_name
x509_extensions    = v3_req
prompt             = no

[ req_distinguished_name ]
O = My Organisation

[ v3_req ]
basicConstraints = CA:TRUE
subjectAltName   = @alt_names

[ alt_names ]
DNS.1 = <HOST_DNS>

HOST_DNS – elk.my-company.com, etc. (the host where Logstash is installed and listens for input connections).

Note

In the [ alt_names ] section you can list the host's DNS names or IPs.

Generate SSL cert:

cd /etc/logstash/ssl/
sudo openssl req -x509 -newkey rsa:4096 -keyout logstash.key -out logstash.crt -days 10000 -nodes -batch -config ssl.conf
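You can check that the generated certificate contains the expected subject alternative names:

openssl x509 -in /etc/logstash/ssl/logstash.crt -noout -text | grep -A 1 'Subject Alternative Name'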

Add your config file to the conf.d directory, e.g. /etc/logstash/conf.d/my-conf.conf:

input {
  beats {
    port => "5044"
    ssl => true
    ssl_certificate => "/etc/logstash/ssl/logstash.crt"
    ssl_key => "/etc/logstash/ssl/logstash.key"
  }
}


filter {
  grok {

    # Pattern for the following nginx access log format:
    #
    # log_format  main  '[$time_local] $remote_addr "$request" '
    #                   '$status $body_bytes_sent '
    #                   '"$http_user_agent" "$request_time"';
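    # A line matching this pattern looks roughly like this
    # (hypothetical example, with a syslog-style "name[pid]:" prefix):
    # nginx[1234]: [20/Jan/2017:12:00:00 +0000] 93.184.216.34 "GET /index.html?x=1 HTTP/1.1" 200 512 "Mozilla/5.0" "0.005"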

    match => { "message" => "%{WORD}\[%{NUMBER}\]: \[%{HTTPDATE:time}\] %{IPORHOST:ip} \"(?:%{WORD:verbs} %{NOTSPACE:request}(?: HTTP/%{NUMBER:http})?|%{DATA:rawrequest})\" %{NUMBER:response} (?:%{NUMBER:bytes:integer}|-) %{QS:agent} \"%{NUMBER:duration:float}\"" }
  }

  # Drop messages that do not match the grok pattern.
  if "_grokparsefailure" in [tags] {
    drop { }
  }

  mutate {
    add_field => { "request_clean" => "%{request}" }
  }

  mutate {
    gsub => [
      "request_clean", "\?.*", ""
    ]
  }

  # Set @timestamp from the 'time' field.
  date {
    match => [ "time", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }

  # Add useragent information.
  useragent {
    source => "agent"
    target => "useragent"
  }

  # Remove unnecessary fields.
  mutate {
    remove_field => [
      "[useragent][major]",
      "[useragent][minor]",
      "[useragent][os_major]",
      "[useragent][os_minor]",
      "[useragent][patch]"
    ]
  }

  # Add geoip information.
  geoip {
      source => "ip"
      fields => [
        "city_name",
        "continent_code",
        "country_code2",
        "country_name",
        "location",
        "region_name",
        "timezone"
      ]
  }
}


output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logstash-%{[fields][env]}-%{+YYYY.MM.dd}"
  }

  # Debug mode (output on stdout).
  #stdout {
  #  codec => rubydebug
  #}
}
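Before starting the service, you can ask Logstash to validate the pipeline (the --config.test_and_exit flag exists in 5.x; the binary path below is the default for the deb package):

sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit -f /etc/logstash/conf.d/my-conf.conf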

Note

Logstash will write messages to indices named with the template logstash-<env_field_from_filebeat>-<timestamp>.

Start Logstash:

sudo service logstash start
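Logstash may take half a minute to come up; once it does, it should be listening on the beats port:

sudo netstat -tlnp | grep 5044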

Note

After installation and configuration, Logstash will receive messages, filter them, and send them to Elasticsearch.

6. Filebeat

Filebeat (reads logs and delivers them to Logstash).

Note

Install Filebeat on the host where the logs are located.

Installation

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.1.1-amd64.deb
sudo dpkg -i filebeat-5.1.1-amd64.deb

Add Filebeat to autorun:

  • systemd [Ubuntu 16.04]:

    sudo /bin/systemctl daemon-reload
    sudo /bin/systemctl enable filebeat.service

  • init [Ubuntu 14.04]:

    sudo update-rc.d filebeat defaults 95 10

Configuration

Create the directory /etc/filebeat/ssl and put your Logstash certificate there (generated earlier at /etc/logstash/ssl/logstash.crt on the ELK host).

Create file /etc/filebeat/filebeat.yml:

filebeat.prospectors:

- input_type: log
  paths:
    - <path_to_log(s)>
  fields:
    env: <environment>

# Different environments on the same host.
#- input_type: log
#  paths:
#    - <path_to_log(s)>
#  fields:
#    env: <environment>

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["<logstash_ip>:5044"]
  ssl.certificate_authorities: ["/etc/filebeat/ssl/logstash.crt"]

  • path_to_log(s) – /var/log/nginx/access.log, etc.
  • environment – staging, live, some_env, etc. This field is used to separate environments.
  • logstash_ip – 192.168.10.10, logstash.myhost.com, etc.
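Before starting, you can validate the config and test the TLS connection to Logstash (Filebeat 5.x supports a -configtest flag; replace <logstash_ip> as above):

sudo /usr/share/filebeat/bin/filebeat -configtest -c /etc/filebeat/filebeat.yml

# Verify the TLS handshake against the Logstash certificate
openssl s_client -connect <logstash_ip>:5044 -CAfile /etc/filebeat/ssl/logstash.crt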

Start Filebeat:

# start in background
sudo service filebeat start

Note

After installation and configuration, Filebeat will read log lines and send them to Logstash. Once Filebeat has sent its first message, you can open the Kibana web UI (https://<elk_host_dns>, via the Nginx proxy configured in section 1) and set up an index pattern matching the template logstash-<env_field_from_filebeat>-*
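On the ELK host you can confirm that the indices are being created (names and counts will differ):

curl 'http://localhost:9200/_cat/indices?v'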

7. Deleting old Elasticsearch indices

Install and set up Elasticsearch Curator:

# Ubuntu 16.04

sudo sh -c "echo 'deb http://packages.elastic.co/curator/4/debian stable main' > /etc/apt/sources.list.d/elasticsearch-curator.list"

sudo apt update && sudo apt install elasticsearch-curator

Create curator config file /root/.curator/curator.yml:

---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
client:
 hosts:
 - 127.0.0.1
 port: 9200
 url_prefix:
 use_ssl: False
 certificate:
 client_cert:
 client_key:
 ssl_no_validate: False
 http_auth:
 timeout: 30
 master_only: False

logging:
 loglevel: INFO
 logfile:
 logformat: default
 blacklist: ['elasticsearch', 'urllib3']

Create an action file that describes how to delete indices older than 30 days.

/root/curator/del_old_swipe-staging.yml:

---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True. If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
  1:
    action: delete_indices
    description: >-
      Delete indices older than 30 days (based on index name), for logstash-swipe-staging-
      prefixed indices. Ignore the error if the filter does not result in an
      actionable list of indices (ignore_empty_list) and exit cleanly.
    options:
      ignore_empty_list: True
      timeout_override:
      continue_if_exception: False
      disable_action: False
    filters:
    - filtertype: pattern
      kind: prefix
      value: logstash-swipe-staging-
      exclude:
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 30
      exclude:
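Before scheduling it, you can preview what Curator would delete without removing anything (Curator 4 accepts --config and --dry-run):

curator --config /root/.curator/curator.yml --dry-run /root/curator/del_old_swipe-staging.yml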

Add a cron task to run Curator every day:

sudo crontab -e

# Elasticsearch indices rotation
5 0 * * * /usr/bin/curator /root/curator/del_old_swipe-staging.yml