David Vassallo's Blog

If at first you don't succeed; call it version 1.0


Building a Logging Forensics Platform using ELK (Elasticsearch, Logstash, Kibana)

During a recent project we were required to build a "Logging Forensics Platform", which is in essence a logging platform that can consume data from a variety of sources such as Windows event logs, syslog, flat files and databases. The platform would then be used for queries during forensic investigations and to help follow up on Indicators of Compromise [IoC]. The amount of data generated is quite large, ranging into terabytes of logs and events. This seemed right up elasticsearch's alley, and the more we use the system, the more adept at this sort of use case it turns out to be. This article presents some configuration scripts and research links that were used to provision the system, along with some tips and tricks learned during implementation and R&D.

Helpful Reading

The ELK stack is proving to be a very popular suite of tools, and good documentation abounds on the internet. The official documentation is extremely helpful and is a must read before starting anything. There are some additional links which are most definitely useful when using ELK for logging:

https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-4-on-ubuntu-14-04

http://edgeofsanity.net/article/2012/12/26/elasticsearch-for-logging.html

General Architecture

One of the main advantages of ELK is its flexibility. There are multiple ways to achieve the desired result, so the rest of this blog post should be taken in context and adapted to your own environment where appropriate. To give some context to the configuration files which follow, below is the high level architecture which was implemented:

High Level Architecture for ELK forensic logging platform

There were a couple of design considerations that led to the above architecture:

1. In this particular environment, there were major discussions around reliability vs performance during log transport. As most of you will know, this roughly translates into a discussion around TCP vs UDP transport. UDP is a lot faster since there is less overhead, but that same overhead is what allows TCP to be far more reliable and prevent loss of events/logs when there is a disconnection or problem. The resulting architecture uses both (a sample logstash input illustrating this is sketched after this list). The reasoning is that most syslog clients are network nodes like routers, switches and firewalls, which are very verbose, so performance is more of an issue there. On the actual servers, however, we opted for TCP to ensure absolutely no events are lost; servers tend to be less verbose anyway, so the reliability gains are worth it for high value assets such as domain controllers.

2. The logstash defaults of creating a separate daily index in elasticsearch are actually the sanest settings we found, especially for backup and performance purposes, so we didn't change these.

3. One of the nicest features of elasticsearch is its analyzers [1], which allow you to do "fuzzy searches" and return results with a "relevance score" [2]. However, when running queries against log data this can actually be a drawback: log messages are more often than not very similar to each other, so keeping the analyzers on returned too many results and made forensic analysis needlessly difficult. The analyzers were therefore switched off, as can be seen in the configuration files below. Add to this a slight performance bonus for switching off analyzers, and the solution was a very good fit.
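To illustrate the dual transport mentioned in point 1, a minimal logstash input section along these lines might look as follows. This is a sketch only; the port numbers and type labels are assumptions to be adapted to your own environment:

input {
  # UDP syslog from verbose network devices (routers, switches, firewalls)
  udp {
    port => 514
    type => "syslog"
  }
  # TCP syslog from high value servers, where reliability matters more than speed
  tcp {
    port => 514
    type => "syslog"
  }
}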

NXLog Windows Configuration:

We ran across an issue when installing the NXLog client on Windows servers. After restarting the NXLog service we would see an error in the logs along the lines of:

apr_sockaddr_info_get() failed

I never figured out the root cause of this error; even inserting the appropriate hostnames and DNS entries did not help. However, using the below configuration and re-installing the client got rid of the error.
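A minimal NXLog (Community Edition) configuration along these lines might look as follows. The destination host, port and the extra event log channel are assumptions; adapt them to your environment:

define ROOT C:\Program Files (x86)\nxlog

Moduledir %ROOT%\modules
CacheDir  %ROOT%\data
Pidfile   %ROOT%\data\nxlog.pid
SpoolDir  %ROOT%\data
LogFile   %ROOT%\data\nxlog.log

<Extension json>
    Module xm_json
</Extension>

<Input eventlog>
    Module im_msvistalog
    # Besides Application, Security and System, also pull the post-2003 style
    # "containers", e.g. the Windows PowerShell channel (channel name is an example)
    Query <QueryList> \
            <Query Id="0"> \
                <Select Path="Application">*</Select> \
                <Select Path="Security">*</Select> \
                <Select Path="System">*</Select> \
                <Select Path="Windows PowerShell">*</Select> \
            </Query> \
          </QueryList>
</Input>

<Output out>
    Module om_tcp
    Host logstash.example.local
    Port 3515
    # Convert each event to JSON before shipping it to logstash
    Exec to_json();
</Output>

<Route eventlog_to_logstash>
    Path eventlog => out
</Route>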

One point of interest in the above configuration is the event log input. By default NXLog will monitor the Application, Security, and System event logs; the configuration also monitors the post-2003 style event log "containers" where Windows now stores application-specific logs that are useful to watch.

Also note the use of the xm_json extension module, whose to_json() procedure converts the messages to JSON format. We will use this later when configuring logstash.

NXLog installation instructions on AlienVault:

One of the requirements of the environment where this forensic logging platform was installed was to integrate with the AlienVault Enterprise appliance, so that logs from these systems could also be sent to the elasticsearch nodes. Here are the steps that worked for us:

1. Installation did not work via the pre-built binaries. Luckily, building from source is very easy. Download the tar.gz source package from the NXLog community site.

2. Before proceeding, install dependencies and pre-requisites:

apt-get update
apt-get install build-essential libapr1-dev libpcre3-dev libssl-dev libexpat1-dev

3. Extract the tar.gz file, change directory into the extracted folder, and run the usual configure, make, make install:

./configure
make
make install

NXLog configuration (Linux):

The rest of the configuration for the AlienVault servers is the same as for a generic Linux host, with the exception that in the below config file we monitor the OSSEC logs; you may need to change this depending on what you would like to monitor.
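A minimal Linux NXLog configuration along these lines might look as follows. The OSSEC log path, destination host and port are assumptions:

<Extension json>
    Module xm_json
</Extension>

<Input ossec>
    Module im_file
    # OSSEC alerts log on an AlienVault sensor (adjust the path as needed)
    File "/var/ossec/logs/alerts/alerts.log"
    SavePos TRUE
</Input>

<Output out>
    Module om_tcp
    Host logstash.example.local
    Port 3515
    Exec to_json();
</Output>

<Route ossec_to_logstash>
    Path ossec => out
</Route>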

Logstash configuration on elasticsearch:

This leaves us with the logstash configuration necessary to receive and parse these events. As noted above, we first need to switch off the elasticsearch analyzers. There are a couple of ways to do this; the easiest way we found was to modify the index template [3] that logstash uses and switch off the analyzers there. This is very simple to do:

– First, change default template to remove analysis of text/string fields:

vim ~/logstash-1.4.2/lib/logstash/outputs/elasticsearch/elasticsearch-template.json

– Change the default "string" mapping to not_analyzed (line 14 in the default configuration file in v1.4.2):

analyzed –> not_analyzed

– Point the logstash configuration to the new template via the template option of the elasticsearch output (see the sketch after this list)

– If need be, delete any existing logstash indices / Restart logstash
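For illustration, the elasticsearch output in the logstash configuration might then look roughly like this. The host, protocol and template path are assumptions; template_overwrite simply makes logstash push the modified template on startup:

output {
  elasticsearch {
    host               => "localhost"
    protocol           => "http"
    # Use the modified template with not_analyzed string fields
    template           => "/etc/logstash/elasticsearch-template.json"
    template_overwrite => true
  }
}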

Also note the charset setting on the json codec of the logstash input. This has to do with the fact that we are converting messages into JSON format in the NXLog client. The logstash documentation [4] states that:

For nxlog users, you’ll want to set this to “CP1252”.
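A minimal logstash input for the NXLog TCP feed might therefore look like this (the port and type label are assumptions):

input {
  tcp {
    port  => 3515
    type  => "eventlog"
    # NXLog ships JSON encoded as CP1252, per the logstash docs [4]
    codec => json { charset => "CP1252" }
  }
}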

In a future article we'll go into a bit more depth on the above logstash configuration, and how we can use it to parse messages into meaningful data.

References:

[1] Elasticsearch Analyzers: http://www.elastic.co/guide/en/elasticsearch/reference/1.4/indices-analyze.html

[2] Elasticsearch relevance: http://www.elastic.co/guide/en/elasticsearch/guide/master/controlling-relevance.html

[3] Elasticsearch Index Templates: http://www.elastic.co/guide/en/elasticsearch/reference/1.x/indices-templates.html

[4] Logstash JSON documentation: http://logstash.net/docs/1.4.2/codecs/json

AlienVault: Monitoring individual sensor Events Per Second [EPS]

In a distributed AlienVault environment, it is important to be able to monitor each individual sensor's output. In our case, the requirement was to:

  • Monitor each sensor’s generated events over a configurable interval
  • If the number of generated events of a sensor goes below a configured threshold, then notify the user via email

There are several sensor monitoring options built right into AlienVault, including monitoring /var/log/alienvault/agent/agent.log, which contains "EPS" information. However, in this case we use the database to calculate, for each sensor, the number of generated events. We are not interested in exactly which AlienVault plugin generated the event, just the global number of events a particular sensor has generated.

The custom script can be run, without installing any prerequisites, on the central SIEM server which the sensors feed information into. The script (EPS_Script.py) depends on the following two configuration files:

  • /etc/ossim/ossim_setup.conf: this file already exists in a default AlienVault installation and should not be changed. EPS_Script.py uses it only to look up the database settings
  • /etc/ossim/eps_monitor.conf: this file must be created and is used to store specific settings for EPS_Script.py. A sample eps_monitor.conf file can be found below:
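The following sample is illustrative only; the section and option names are assumptions based on the description that follows:

; Sample eps_monitor.conf (illustrative; adjust names and values to taste)
[smtp]
server = smtp.example.local
port = 25
from = av-monitor@example.local
to = soc-team@example.local

[defaults]
; interval (in minutes) over which events are counted
interval = 60
; alert if a sensor generated fewer events than this in the interval
threshold = 100

[thresholds]
; optional per-sensor overrides
branch-office-sensor = 10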

The config file is simple. It contains SMTP server details which are used to send emails, and default settings like the interval over which to query the number of events, and the default threshold (number of events). If the number of events generated is below the threshold, the user gets alerted. In the [thresholds] section a user can also define thresholds for individual sensors, enabling further flexibility.

The actual EPS_Script.py code is shown below:

The script starts off by parsing its configuration files. Next, the script retrieves a list of configured sensors from the sensor table in the alienvault database. The IDs of these sensors are stored in binary form, so we convert them into hex. The main work of the script is performed in the for loop over the sensors. Each sensor can have multiple "devices" bound to it; from what we can tell, these devices correspond roughly to each OSSEC agent installed on the network (Environment > Detection > HIDS > Agents).
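As a rough guide, a hypothetical sketch of the approach described above could look like the following. The database, table and column names, the queries and the config keys are all assumptions rather than the original code:

#!/usr/bin/env python
# Hypothetical sketch of EPS_Script.py based on the description above.
# Database, table and column names and config keys are assumptions.
import smtplib
import ConfigParser
from email.mime.text import MIMEText

import MySQLdb

OSSIM_CONF = '/etc/ossim/ossim_setup.conf'
MONITOR_CONF = '/etc/ossim/eps_monitor.conf'


def db_connection():
    """Look up the database settings in ossim_setup.conf and connect."""
    ossim = ConfigParser.ConfigParser()
    ossim.read(OSSIM_CONF)
    return MySQLdb.connect(host=ossim.get('database', 'db_ip'),
                           user=ossim.get('database', 'user'),
                           passwd=ossim.get('database', 'pass'))


def main():
    cfg = ConfigParser.ConfigParser()
    cfg.read(MONITOR_CONF)
    interval = cfg.getint('defaults', 'interval')      # minutes
    default_threshold = cfg.getint('defaults', 'threshold')

    cur = db_connection().cursor()

    # Sensor IDs are stored in binary form, so convert them to hex in SQL
    cur.execute("SELECT hex(id), name FROM alienvault.sensor")
    sensors = cur.fetchall()

    alerts = []
    for sensor_id, sensor_name in sensors:
        # Map the "devices" (roughly, OSSEC agents) bound to this sensor
        cur.execute("SELECT id FROM alienvault_siem.device "
                    "WHERE hex(sensor_id) = %s", (sensor_id,))
        device_ids = [row[0] for row in cur.fetchall()]
        if not device_ids:
            continue

        # Count the events those devices generated in the last <interval> minutes
        placeholders = ','.join(['%s'] * len(device_ids))
        cur.execute("SELECT count(*) FROM alienvault_siem.acid_event "
                    "WHERE device_id IN (" + placeholders + ") "
                    "AND timestamp > now() - INTERVAL %s MINUTE",
                    device_ids + [interval])
        count = cur.fetchone()[0]

        # Per-sensor threshold override, falling back to the default
        threshold = default_threshold
        if cfg.has_option('thresholds', sensor_name):
            threshold = cfg.getint('thresholds', sensor_name)

        if count < threshold:
            alerts.append('%s generated %d events in the last %d minutes '
                          '(threshold: %d)' % (sensor_name, count,
                                               interval, threshold))

    if alerts:
        msg = MIMEText('\n'.join(alerts))
        msg['Subject'] = 'AlienVault sensor EPS below threshold'
        msg['From'] = cfg.get('smtp', 'from')
        msg['To'] = cfg.get('smtp', 'to')
        server = smtplib.SMTP(cfg.get('smtp', 'server'),
                              cfg.getint('smtp', 'port'))
        server.sendmail(msg['From'], [msg['To']], msg.as_string())
        server.quit()


if __name__ == '__main__':
    main()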

Note: in light of the above, the script may only be monitoring events generated by OSSEC agents. Further testing is required to confirm or deny this.

For each sensor retrieved from the previous query, the script retrieves the list of associated "devices" or agents. In a distributed environment with multiple sensors, different agents/devices can point to different sensors, so this step is important to "map" agents to the appropriate sensors, otherwise the EPS count will be skewed.

The script then counts all entries made by devices mapped to a specific sensor in the specified time interval and, if the count is below the specified threshold, sends an email to the appropriate recipients.

All that's left is to run the script periodically in a cron job. Normally the cron job interval should be the same as the one specified in the EPS monitoring script configuration file.
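For example, assuming an interval of 60 minutes in eps_monitor.conf and the script saved under /usr/local/bin (both assumptions), a matching /etc/crontab entry might be:

# Run the EPS monitor once an hour, matching the 60 minute interval in eps_monitor.conf
0 * * * * root /usr/bin/python /usr/local/bin/EPS_Script.py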
