ELK: exporting to CSV

Note: the following requires the "jq" command-line JSON processor, available from: http://stedolan.github.io/jq/

1. Run the desired query through the Kibana web UI

2. Expand the additional options pane by clicking the arrow beneath the graph, as shown in the screenshot below:

[Screenshot: blog_csv_export]

3. Select "Request" and copy the request displayed:

[Screenshot: blog_csv_export_2]

4. Open a Linux terminal and run the following command, pasting the copied request where indicated and changing the template name as appropriate:

curl -XGET 'http://192.168.12.68:9200/template_name-*/_search?pretty' -d '-----PASTE HERE-----' | jq -r '.hits.hits[]._source | del(.tags) | [to_entries[]] | map(.value) | @csv' > /tmp/output.csv

Note: the "size" field in the pasted request must be adjusted to match the number of records to be exported, since Elasticsearch returns at most that many hits.
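The jq filter can be tried without a live cluster. The sketch below pipes a mock Elasticsearch response (hypothetical hosts and values, not real data) through the same filter, showing how each hit's _source becomes one CSV row:

```shell
# Mock Elasticsearch response (hypothetical data) piped through the same
# jq filter used above: drop the "tags" field, then emit each _source's
# values as one CSV row.
echo '{"hits":{"hits":[
  {"_source":{"host":"web01","status":200,"tags":["prod"]}},
  {"_source":{"host":"web02","status":404,"tags":["prod"]}}
]}}' | jq -r '.hits.hits[]._source | del(.tags) | [to_entries[]] | map(.value) | @csv'
# → "web01",200
# → "web02",404
```

Note that @csv quotes string values but leaves numbers bare, and field order follows the order of keys in each _source object.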

The CSV output will be written to /tmp/output.csv, which can be downloaded via SFTP and manipulated as needed.

Note: since ELK deals with unstructured data, the CSV file may not have the same number of columns in every row, especially when the query returns different types of records. This is expected and is by design. To get the list of column names for each row, run:

curl -XGET 'http://192.168.12.68:9200/template_name-*/_search?pretty' -d '-----PASTE HERE-----' | jq -r '.hits.hits[]._source | del(.tags) | [to_entries[]] | map(.key) | @csv' > /tmp/output.csv

The above command generates the same /tmp/output.csv file, but each row contains the column headings for that record rather than the actual data.
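Using the same mock response as before (hypothetical data), the map(.key) variant emits the field names instead of the values, producing one header row per hit:

```shell
# Same mock response, but map(.key) prints each row's column names
# instead of its data.
echo '{"hits":{"hits":[
  {"_source":{"host":"web01","status":200,"tags":["prod"]}},
  {"_source":{"host":"web02","status":404,"tags":["prod"]}}
]}}' | jq -r '.hits.hits[]._source | del(.tags) | [to_entries[]] | map(.key) | @csv'
# → "host","status"
# → "host","status"
```

Pairing each header row with the corresponding data row from the first command makes it possible to line up values even when different record types have different column sets.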