David Vassallo's Blog

If at first you don't succeed; call it version 1.0

First Steps in applying machine learning to InfoSec – WEKA

The intersection between machine learning [ML] and information security [InfoSec] is currently quite a hot topic. The allure is easy to see: security analysts are drowning in alerts and data which need to be painstakingly investigated and, if necessary, acted upon. This is no easy process and, as was seen in the now infamous Target hack, more often than not alarms go by unnoticed. ML promises to alleviate the torrent of alerts and logs and (ideally) present to the analyst only those alerts which are really worth investigating.

This is by no means an easy task; however, the rise of several enabling factors has made this goal reachable for the average InfoSec professional:

  • Cloud Computing
  • Big Data technologies such as Hadoop
  • Python (and other language) libraries like Scikit-Learn [1] which abstract away the nuances of Machine Learning and Data Mining
  • Distributed Data/Log collection and search technologies such as ElasticSearch [2]

From personal experience, the process of learning about machine learning can be daunting, especially for those without a mathematical background. However, in this series of articles I plan to outline my learning process and enumerate the various excellent resources that are freely available on the internet to help anyone interested in getting started in this exciting field.

A good introduction to this field is a talk by @j_monty and @rsevey on “Using Machine Learning Solutions to Solve Serious Security Problems”, which can be found here:

https://youtu.be/48O6L_DfE2o

The talk really whets your appetite for the field. A small distinction worth pointing out is the difference between “machine learning” and “data mining”. Data mining is the process of turning raw data into actionable information, while machine learning is one of the many tools/algorithms that help in this process. The presenters mention using WEKA [3] to get started in the field and to get to grips with the data that will eventually power our algorithms. Before anything else, it is very useful to manually try some data mining techniques in order to understand our data, work out which algorithms give the best results on it, and appreciate the challenges and rewards of doing so. This will allow us to better understand which machine learning algorithms we can later apply to InfoSec-related data such as logs, pcaps and so on.

So it would seem WEKA is as good a place as any to get started! Some quick research turns up a hidden gem: an online course from the creators of WEKA on how to use the program:

https://weka.waikato.ac.nz/dataminingwithweka/preview

The course may not be open when you read this; however, the course videos are still available on YouTube and should be your first stop:

http://www.cs.waikato.ac.nz/ml/weka/mooc/dataminingwithweka/

Note: if you need the datasets the instructor is using (the WEKA installation from the Ubuntu repositories does not include these), you can find them here:

http://storm.cis.fordham.edu/~gweiss/data-mining/datasets.html
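
Once WEKA is installed and one of the course datasets (for example weather.nominal.arff) has been downloaded, a quick sanity check is to run a classifier from the command line. A minimal example, assuming weka.jar and the dataset are in the current directory:

java -cp weka.jar weka.classifiers.trees.J48 -t weather.nominal.arff

This trains the J48 decision tree on the toy weather dataset and prints the resulting tree along with cross-validation statistics.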

References

[1] Scikit-learn : http://scikit-learn.org/stable/

[2] ElasticSearch: https://www.elastic.co/

[3] WEKA: http://www.cs.waikato.ac.nz/ml/weka/


Building a Logging Forensics Platform using ELK (Elasticsearch, Logstash, Kibana)

During a recent project we were required to build a “Logging Forensics Platform”, which is in essence a logging platform that can consume data from a variety of sources such as Windows event logs, syslog, flat files and databases. The platform would then be used for queries during forensic investigations and to help follow up on Indicators of Compromise [IoC]. The amount of data generated is quite large, ranging into terabytes of logs and events. This seemed right up elasticsearch’s alley, and the more we use the system, the more adept at this sort of use case it turns out to be. This article presents some configuration scripts and research links that were used to provision the system, along with some tips and tricks learned during implementation and R&D.

Helpful Reading

The ELK stack is proving to be a very popular suite of tools, and good documentation abounds on the internet. The official documentation is extremely helpful and is a must read before starting anything. There are some additional links which are most definitely useful when using ELK for logging:

https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-4-on-ubuntu-14-04

http://edgeofsanity.net/article/2012/12/26/elasticsearch-for-logging.html

General Architecture

One of the main advantages of ELK is its flexibility. There are multiple ways to achieve the desired result, so the rest of this blog post must be taken in context and adapted to your own environment where appropriate. To give some context to the configuration files which follow, below is the high-level architecture which was implemented:

High-level architecture for the ELK forensic logging platform

There were a couple of design considerations that led to the above architecture:

1. In this particular environment, there were major discussions around reliability vs performance during log transport. As most of you will know, this roughly translates into a discussion of TCP vs UDP transport. UDP is a lot faster since there is less overhead, but that same overhead allows TCP to be far more reliable and prevents the loss of events/logs when there is a disconnection or problem. The resulting architecture uses both (see the logstash input sketch after this list): most syslog clients are network nodes like routers, switches and firewalls, which are very verbose, so for them performance is the bigger concern. On the actual servers, however, we opted for TCP to ensure absolutely no events are lost; servers also tend to be less verbose, so the reliability gains are worth it for high-value assets such as domain controllers.

2. The logstash default of creating a separate daily index in elasticsearch is actually the sanest setting we found, especially for backup and performance purposes, so we didn’t change it.

3. One of the nicest features of elasticsearch is its analyzers [1], which allow you to do “fuzzy searches” and return results with a “relevance score” [2]. However, when running queries against log data this can actually be a drawback: log messages are more often than not very similar to each other, so keeping the analyzers on returned too many results and made forensic analysis needlessly difficult. The analyzers were therefore switched off, as can be seen in the configuration files below. Add to this a slight performance bonus for switching off analyzers, and the solution was a very good fit.
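
To make point 1 concrete, a minimal logstash input along these lines would accept syslog over UDP from network devices and over TCP from servers. This is only a sketch; the ports and type names are assumptions, not the exact production values:

input {
  # Verbose network devices (routers, switches, firewalls) send syslog over UDP
  udp {
    port => 5514
    type => "syslog-network"
  }
  # High-value servers (e.g. domain controllers) send over TCP so no events are lost
  tcp {
    port => 5515
    type => "syslog-server"
  }
}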

NXLog Windows Configuration:

We ran across an issue when installing the NXLog client on Windows servers. After restarting the NXLog service we would see an error in the logs along the lines of:

apr_sockaddr_info_get() failed

I never figured out the root cause of this error; even inserting the appropriate hostnames and DNS entries did not help. However, using the below configuration and re-installing the client got rid of the error.

One point of interest in the above configuration is on line 30. By default NXLog will monitor the Application, Security, and System event logs. In this configuration sample you can also see the post-2003 style event log “containers” being monitored, where Windows now stores application-specific logs that are useful to collect.

Also note the use of to_json() (provided by the xm_json extension module), which converts the messages to JSON format. We will use this later when configuring logstash.
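
As a rough guide, a minimal NXLog configuration along these lines would look something like the sketch below. The output host, port and the extra event log channel are placeholders and should be adapted to your environment:

define ROOT C:\Program Files (x86)\nxlog

Moduledir %ROOT%\modules
CacheDir  %ROOT%\data
Pidfile   %ROOT%\data\nxlog.pid
SpoolDir  %ROOT%\data
LogFile   %ROOT%\data\nxlog.log

<Extension json>
    Module xm_json
</Extension>

<Input eventlog>
    Module im_msvistalog
    # Collect the classic Application/Security/System logs plus one example
    # post-2003 "container" channel (swap in whichever channels you need)
    Query <QueryList> \
            <Query Id="0"> \
              <Select Path="Application">*</Select> \
              <Select Path="Security">*</Select> \
              <Select Path="System">*</Select> \
              <Select Path="Microsoft-Windows-TaskScheduler/Operational">*</Select> \
            </Query> \
          </QueryList>
</Input>

<Output logstash>
    Module om_tcp
    Host   logserver.example.local
    Port   3515
    # Convert each event to JSON before shipping it to logstash
    Exec   to_json();
</Output>

<Route eventlog_to_logstash>
    Path eventlog => logstash
</Route>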

NXLog installation instructions on AlienVault:

One of the requirements of the environment where this forensic logging platform was installed was to integrate with the AlienVault Enterprise appliance, so that logs from these systems could be sent to the elasticsearch nodes. Here are the steps that worked for us:

1. Installation did not work via the pre-built binaries. Luckily, building from source is very easy. Download the tar.gz source package from the NXLog community site.

2. Before proceeding, install dependencies and pre-requisites:

apt-get update
apt-get install build-essential libapr1-dev libpcre3-dev libssl-dev libexpat1-dev

3. Extract the tar.gz file, change directory into the extracted folder, and run the usual configure, make, install:

./configure
make
make install

NXLog configuration (Linux):

The rest of the configuration for AlienVault servers is the same as for a generic Linux host, with the exception that in the config below we monitor OSSEC logs; you may need to change this depending on what you would like to monitor.
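
A minimal sketch of such a Linux configuration, tailing the OSSEC alerts log and shipping it as JSON over TCP, is shown below; the OSSEC path, host and port are assumptions to adapt as needed:

<Extension json>
    Module xm_json
</Extension>

<Input ossec_alerts>
    Module im_file
    # OSSEC alerts on an AlienVault appliance; adjust if OSSEC lives elsewhere
    File    "/var/ossec/logs/alerts/alerts.log"
    SavePos TRUE
</Input>

<Output logstash>
    Module om_tcp
    Host   logserver.example.local
    Port   3515
    Exec   to_json();
</Output>

<Route ossec_to_logstash>
    Path ossec_alerts => logstash
</Route>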

Logstash configuration on elasticsearch:

This leaves us with the logstash configuration necessary to receive and parse these events. As noted above, we first need to switch off the elasticsearch analyzers. There are a couple of ways to do this; the easiest we found was to modify the index template [3] that logstash uses and switch off the analyzers from there. This is very simple to do:

– First, change the default template to remove analysis of text/string fields:

vim ~/logstash-1.4.2/lib/logstash/outputs/elasticsearch/elasticsearch-template.json

– Change the default “string” mapping to not_analyzed (line 14 in the default template file in v1.4.2), as shown in the snippet after this list

analyzed -> not_analyzed

– Point the logstash configuration to the new template (see line 121 in the logstash sample configuration below)

– If need be, delete any existing logstash indices and restart logstash
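
For reference, the relevant part of the modified template ends up looking roughly like the snippet below (trimmed from the stock logstash 1.4.2 template; only the index setting for string fields changes):

{
  "template" : "logstash-*",
  "mappings" : {
    "_default_" : {
      "dynamic_templates" : [ {
        "string_fields" : {
          "match" : "*",
          "match_mapping_type" : "string",
          "mapping" : {
            "type" : "string",
            "index" : "not_analyzed",
            "omit_norms" : true
          }
        }
      } ]
    }
  }
}

Pointing logstash at the modified template is then a matter of setting the template options on the elasticsearch output; the path below is a placeholder:

output {
  elasticsearch {
    host => "localhost"
    # use our not_analyzed template instead of the bundled one
    template => "/etc/logstash/elasticsearch-template-notanalyzed.json"
    template_overwrite => true
  }
}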

Also note lines 27-32 of the full logstash configuration, which deal with the fact that we are converting messages into JSON format in the NXLog client. The logstash documentation [4] states that:

For nxlog users, you’ll want to set this to “CP1252”.
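
In practice this means the TCP input that receives the NXLog events should use the json codec with its charset set accordingly; a minimal sketch (the port is an assumption):

input {
  tcp {
    port  => 3515
    type  => "windows-eventlog"
    # NXLog ships JSON encoded as Windows-1252 rather than UTF-8
    codec => json { charset => "CP1252" }
  }
}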

In a future article we’ll go into a bit more depth on the above logstash configuration, and how we can use it to parse messages into meaningful data.

References:

[1] Elasticsearch Analyzers: http://www.elastic.co/guide/en/elasticsearch/reference/1.4/indices-analyze.html

[2] Elasticsearch relevance: http://www.elastic.co/guide/en/elasticsearch/guide/master/controlling-relevance.html

[3] Elasticsearch Index Templates: http://www.elastic.co/guide/en/elasticsearch/reference/1.x/indices-templates.html

[4] Logstash JSON documentation: http://logstash.net/docs/1.4.2/codecs/json
