Canary Files for Legitimate Access Abuse using WEF & ELK

Network security monitoring and endpoint security defenders face a monumental task in attempting to detect computer breaches. Many lack support from upper management, money, time, and physical resources; struggle with incorrect sensor placement; and face just about any other problem you can think of. On top of that, there are vulnerabilities in software completely out of their control (https://www.troyhunt.com/everything-you-need-to-know-about3/), along with devices that leave the network for extended periods of time (ie: users on vacation, traveling, or working remotely).

However, despite these disadvantages (and more later) there is hope in a simple yet effective solution! There is a builtin and free resource that will detect legitimate access abuse (ie: lateral movement, recon of networks/shares/files, etc...).
For this article I will treat an attacker and an insider threat as the same "threat", because an attacker will, at some point, gain legitimate credentials, just as an insider threat already has them.

**If you are already familiar with canary files and the necessity of them in computer security monitoring then skip to the 
Prerequisites & Solution sections**
Thus, introducing computer security "canary files" (for the purpose of this article I will only discuss canary files; however, there are many other computer security canaries you can use). The name for computer security canaries is based on the real-world canaries used in coal mines to detect carbon monoxide (https://arlweb.msha.gov/century/canary/canary.asp). In that real-world example, if a canary showed signs of distress it was a "clear signal" of danger (carbon monoxide).

Just as coal miners did not have the ability to detect the danger themselves (in their case a physical limitation of the human body -- carbon monoxide is colorless, odorless, and tasteless), many network/endpoint defenders face limitations, out of their control, in detecting breaches! With a canary file we assume that if the file is accessed in any way, it is of malicious/malign intent. This lets you perform your other duties while having something "watching your back", just as the coal miners could continue to work without having to worry about detecting carbon monoxide.

Were real canaries the only way to detect trouble/danger? Doubtful... Are computer security canaries the only way to detect a breach/compromise? Nope... The goal is to provide an easy win (Zero 2 Hero) in detecting a breach/compromise, especially if you face any of the following limitations:
  1. No internet access -- Most canaries require some sort of internet access. This article even covers the case where a device leaves the network and never connects to the internet, but a canary is accessed.
  2. No third party software allowed (or wanted, since it may increase attack surface) -- We will be using Windows logs builtin to the Windows operating system that use Kerberos/default Windows authentication (so it already exists in your network).
  3. You do not have a team of people monitoring your network 24x7, or you are one of just a few persons monitoring everything while also wearing 10 other hats.
  4. Detecting breaches does not make your company money (unless you are a re-seller/product/vendor -- obviously...) and the company's requirements are access and up-time. You have to assume your admins are performing all sorts of unwanted activities that may not be "malicious" but would trigger many alerts from other products (ie: AV). On a management network everything is an anomaly... everyone is installing software, everyone is troubleshooting issues at odd hours in odd locations of the world (ie: while on vacation).
There has been a lot of work done with canary files in computer security already. Specifically, https://github.com/thinkst/canarytokens has a large and great set of use cases for different types of canaries, and they have been discussing/using these for 3+ years. They even have tokens for HTTP URL, DNS, QR Code, and more. Also, there are already public discussions/blogs of canaries using builtin Windows resources, but these seem to be limited to ransomware (ie: https://www.eventsentry.com/blog/2016/03/defeating-ransomware-with-eventsentry-auditing.html).

So.. You may be asking yourself.. Nate why are you re-inventing the wheel.. and to that I say: I am not, I am just polishing the wheel someone already invented. Trust me, I would much rather use someone else's work due to my own limited time.

I am proposing/showing an alternate solution simply because one of the best existing solutions requires third party software, and most other discussions of using Windows logs center around ransomware. I want to show that this requires no software checklist approval, no money, and little to no resources. Leveraging something already builtin to the Microsoft Windows OS removes a huge hurdle when getting buy-in from your other IT departments (the ones who will probably have the permissions to deploy it), as well as if you are in a restrictive environment such as the Government :)

Proposed Solution

Using builtin Windows event logs (EventID:4663), deployed with a custom group policy and sent to the Elastic ELK stack (all free), we are able to create our own builtin canary files. These builtin Windows logs provide additional benefits (for the purpose of canary files) over Sysmon and other Windows logs (ie: 4688). Please note that Sysmon should always be used if you can use it... Just in the case of canaries, Windows EventID:4663 provides a broader scope of detection possibilities (shown later).

Prerequisites
  1. You already have a Windows Event Forwarding (WEF) server setup. If not then please see the following (and reach out with any questions):
    • https://medium.com/@palantir/windows-event-forwarding-for-network-defense-cb208d5ff86f
    • https://github.com/palantir/windows-event-forwarding/blob/master/group-policy-objects/README.md
    • https://docs.microsoft.com/en-us/windows/threat-protection/use-windows-event-forwarding-to-assist-in-instrusion-detection
    • https://blogs.technet.microsoft.com/jepayne/2015/11/23/monitoring-what-matters-windows-event-forwarding-for-everyone-even-if-you-already-have-a-siem/
    • https://mva.microsoft.com/en-US/training-courses-embed/event-forwarding-and-log-analysis-16506/Video-Audit-Policy-KBwQ6FGmC_6204300474
    • https://blogs.technet.microsoft.com/wincat/2008/08/11/quick-and-dirty-large-scale-eventing-for-windows/
    • https://msdn.microsoft.com/en-us/library/windows/desktop/bb870973(v=vs.85).aspx
    • http://syspanda.com/index.php/2017/03/01/setting-up-windows-event-forwarder-server-wef-domain-part-13/
    • https://www.root9b.com/sites/default/files/whitepapers/R9B_blog_005_whitepaper_01.pdf
    • https://github.com/defendthehoneypot ---- DoD STIG GPOs
  2. You already have an Elastic ELK setup. If not then please see the following (and reach out with any questions):
    • https://cyberwardog.blogspot.com/2017/02/setting-up-pentesting-i-mean-threat_98.html
    • https://github.com/rocknsm/rock
    • https://github.com/Cyb3rWard0g/HELK
    • https://github.com/philhagen/sof-elk
    • http://blog.securityonion.net/2017/12/security-onion-elastic-stack-beta-3.html
    • https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elk-stack-on-ubuntu-14-04
  3. You have canary files that may pique an unwanted entity's interest, resulting in them opening/copying/etc the file. For the purpose of this article and continuity we will use the file: "c:\users\public\documents\new-login-information.txt"... however, you may use any file that fits your purpose.

    You may accomplish deploying these files in many ways. I recommend creating a file and deploying/pushing it via a GPO (group policy objects) to all or targeted computers.

    Some examples would be files that seem to contain passwords, network diagrams, PEs/EXEs of interest (ie: psexec), and whatever else you can dream up.
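A quick local sketch of generating such a bait file before wrapping it in a GPO push (the filename matches the example above; the contents are made up for illustration):

```python
from pathlib import Path

# Hypothetical canary: looks like saved credentials, but is inert bait.
canary = Path("new-login-information.txt")
canary.write_text(
    "VPN portal: https://vpn.corp.example\n"
    "username: svc_backup\n"
    "password: Chang3Me!2018\n"
)
```

The more believable the contents, the more likely an attacker reads (and not just lists) the file, which matters for the AccessMask filtering later.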

Solution (overview)

First, I will show you which GPOs to deploy in order to enable the Windows event logs that will allow us to determine if these canary files have been accessed/read/etc...

Secondly, I will show how to create a WEF subscription to forward ONLY the canary-file events from the first step. This lets us limit event logs that may be high volume and/or costly in network bandwidth to ship/transfer, which is useful if you have many small locations with limited bandwidth and/or limited data storage.

Finally, I will show you what the events look like in ELK. I will also compare these events against Sysmon and other builtin Windows logs.

Solution (detailed)

  1. On your Active Directory (AD) server, create a group policy, named whatever you would like, applied to whichever (or all) computers/devices you want to perform the canary monitoring on.
    This group policy will enable Object Access File System Auditing and, via SACLs, define what causes the event to occur.
    We define Advanced Audit of Object Access for the File System. Then we define the SACLs in Global Object Access Auditing.

    SACL resource from Microsoft:
    I recommend NOT naming the GPO similar to my examples... ie: do not use "canary" or anything else that may tip your hand too easily.
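For reference, a sketch of where these settings live in the GPO editor (names vary slightly by Windows version; to test on a single host first, "auditpol /set /subcategory:"File System" /success:enable" flips the same audit subcategory locally, outside of any GPO):

```
Computer Configuration
  > Policies
    > Windows Settings
      > Security Settings
        > Advanced Audit Policy Configuration
          > Audit Policies > Object Access > Audit File System  (Success)
        > Global Object Access Auditing > File system
            SACL: Principal = Everyone, Type = Success, Permissions = Read
```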
  2. On your WEF server, create a subscription to look for EventID:4663 and the canary files you created in Prerequisite step 3.
    example: we will use "c:\users\public\documents\new-login-information.txt"
    Now we only want (for the sake of bandwidth and storage and event "overload") to get 4663 events that contain our canary files. Therefore, we will use an "advanced" XML WEF subscription (https://blogs.technet.microsoft.com/askds/2011/09/26/advanced-xml-filtering-in-the-windows-event-viewer/)
    as shown via the code here:
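A sketch of such a subscription query (the file path is our example canary from Prerequisite step 3; adjust the channel and path to your environment):

```xml
<QueryList>
  <Query Id="0" Path="Security">
    <Select Path="Security">
      *[System[(EventID=4663)]]
      and
      *[EventData[Data[@Name='ObjectName']='C:\Users\Public\Documents\new-login-information.txt']]
    </Select>
  </Query>
</QueryList>
```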

    You can still use any other existing subscriptions for 4663 that you have. This is only an example to collect 4663 events with specific canary files. Adding this subscription will not impact or degrade any 4663 subscriptions you already have in place.
    If you do not have logstash configs to pull and send the windows events or are specifically looking for ones for windows, I have some here that will get you started or use resources mentioned above in Prereq 2:
  3. Now let's see what the events look like in Elastic ELK. This assumes you have a Windows log forwarder, whether NXLog (https://nxlog.co/) or Winlogbeat (https://www.elastic.co/downloads/beats/winlogbeat), set up on your WEF server and sending to your ELK stack. An example NXLog setup would look like:
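(A sketch; the Logstash host/port below are hypothetical, and the exact module set should be checked against your NXLog version.)

```
## Ship the ForwardedEvents channel off the WEF server to Logstash as JSON.
<Extension json>
    Module  xm_json
</Extension>

<Input wef_events>
    Module  im_msvistalog
    Query   <QueryList><Query Id="0"><Select Path="ForwardedEvents">*</Select></Query></QueryList>
</Input>

<Output to_logstash>
    Module  om_tcp
    Host    10.0.0.5
    Port    3515
    Exec    to_json();
</Output>

<Route wef_route>
    Path    wef_events => to_logstash
</Route>
```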

In the Kibana (ELK) example I have applied a search to exclude "AccessMask:0x20000" in order to determine if the canary file was actually "read" versus just browsed in a directory (ie: a SACL check).
Also, because I have included all events containing the canary file's name -- I have excluded EventID:4656 due to its similar scenario as 4663.
I also show that Sysmon and Windows log 4688 will only log anything if a process is spawned to originate access to the canary file. For example, if someone opens up a command prompt first and then views/lists/etc the canary file, nothing will log that except 4663. Those other events only show that I accessed the canary file by opening it in notepad via explorer. However, here are all the ways I accessed the canary file that EventID:4663 showed:
  1. Opened in explorer via notepad.
  2. Accessed/read after internet explorer was already opened.
  3. Accessed/read after command prompt was already opened.
  4. Accessed/read after powershell was already opened.
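That AccessMask filtering can be sketched in a few lines (0x20000 is READ_CONTROL, the "permissions/attributes were checked" case, while 0x1 is ReadData, an actual content read; how you pull the mask out of the event depends on your pipeline):

```python
READ_CONTROL = 0x20000  # security descriptor/attributes read (directory browse, SACL check)
READ_DATA = 0x1         # ReadData/ListDirectory: file contents actually read

def is_real_read(access_mask):
    """Return True if a 4663 AccessMask string indicates the file contents were read."""
    mask = int(access_mask, 16)
    return bool(mask & READ_DATA)

print(is_real_read("0x20000"))  # browse/SACL check only
print(is_real_read("0x1"))      # actual read
```

Excluding pure 0x20000 events keeps Explorer's attribute checks out of your alerting while retaining actual reads.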


This event collection for canary file access will work even if a computer is offline when the file is accessed, because once the computer reconnects to your AD the log will be forwarded to WEF and thus into ELK. (Do not worry, a blog post will eventually be written on how to detect the use of mimikatz or some NSA tool to unlink/hide/delete/modify Windows events.)

False positives may include:
-a user just browsing out of curiosity but without malicious/malign intent.
-backup software? -- but this should be easy to whitelist via ProcessName != $BackupSoftware
-AV software? -- but this should be easy to whitelist via ProcessName != $AVSoftware
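For example, the whitelisting could be done in a Logstash filter (a sketch; the field names below are hypothetical and depend on how your shipper maps EventData, and $BackupSoftware/$AVSoftware stand in for your real process paths):

```
filter {
  if [event_id] == 4663 and [event_data][ObjectName] =~ /new-login-information/ {
    # Known-good readers: whitelist your backup/AV process paths here.
    if [event_data][ProcessName] =~ /BackupSoftware/ or [event_data][ProcessName] =~ /AVSoftware/ {
      drop { }
    } else {
      mutate { add_tag => [ "canary_file_access" ] }
    }
  }
}
```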

Additional information on EventID:4663


Windows Event Forwarding & .ETL (ETW)

What’s so useful about ETL (ETW)? 

One of the most useful (.etl) log files is: WMI-Activity/Trace. It is an Event Tracing for Windows (ETW) log which gets written to a ‘.etl’ file. The information inside of this log can be extremely useful to anyone wishing to monitor WMI, as it logs each query, new class, consumer, etc. We touched on this in our recent talk at Bloomcon (https://youtu.be/H3t_kHQG1Js?t=99)
Note: monitor this blog for a future post just about WMI events.

What’s the problem? 

Unfortunately, WEF (Windows Event Forwarding) and many other event forwarding solutions cannot subscribe directly to ‘.etl’ files. However, the ‘.etl’ file can be read and converted into a channel that WEF can subscribe to. To help facilitate this, I wrote a PowerShell script, available on github (https://github.com/acalarch/ETL-to-EVTX). Use at your own risk 😅!

The script in action!

In summary, the script queries whatever (.etl) file you give it every 15 seconds and writes those events to a new channel. It can do this for any (.etl) file! You just have to configure it to do so.
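If you just want to eyeball an ‘.etl’ manually (a one-off sketch; the trace file path here is an example and varies per system), PowerShell’s Get-WinEvent can read the file directly:

```
# -Oldest is mandatory when reading .etl files with Get-WinEvent.
Get-WinEvent -Path 'C:\Windows\System32\LogFiles\WMI\WmiTrace.etl' -Oldest |
    Select-Object -First 5 TimeCreated, Id, Message
```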

To prepare your .etl file for the script, all you have to do is change some of the channel options. Getting an error in Windows Event Viewer after you make these changes is normal; it does not like displaying an ETL channel set to "overwrite events as needed".

Settings for the ETL file/channel.

Other Solutions

My solution is kind of a poor-man’s solution to this problem. Here are some more:

You can read more about ETW here:

You can read a whole lot more about WEF and Windows Events by reading our slide deck or watching our talk:



Malicious [.reg] Files

The Problem

Criminals and red teams have been known to use .hta, .vbs, .vbe, .js, .jse, .html, .bat, and .cmd files to break into a computer/network. However, you don't hear too much about [.reg] files, which are interpreted by RegEdit to make changes to the registry. On a default installation of Windows, a user does not need special admin privileges to add keys to HKCU\Software\Microsoft\Windows\CurrentVersion\Run. So, if they receive an email and are tricked into running the [.reg] file, they could be adding an 'evil' key. Currently, Gmail is not blocking these files by default. You may want to check your email provider / gateway!
Here is an example of what a malicious [.reg] key might look like.. this one just launches calc. 
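A sketch of such a file (the mshta payload line is illustrative pseudocode, not a tested payload):

```
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run]
"Updater"="mshta.exe vbscript:Execute(\"CreateObject(\"\"Wscript.Shell\"\").Run \"\"calc.exe\"\":close\")"
```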
"Malicious" Reg Key, Adds key that will run calc.exe via mshta.exe when the victim logs in 


You can monitor this activity with Windows Event ID 4688 where the command line details for the event contain "regedit.exe" and end with ".reg". Additionally, you can always monitor run keys by enabling the "object access" policy or using a tool such as Sysmon. Also, you should always monitor events created by mshta, wscript, cscript, regsvr32.exe, and scrobj.dll, as these (an incomplete list, I'm sure) can be used to create persistence in run keys.
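That command-line check can be sketched as follows (matching logic only; how you extract the 4688 command line from your events depends on your pipeline and is assumed):

```python
import re

# Flag 4688 command lines where regedit is handed a .reg file.
REG_LAUNCH = re.compile(r'regedit(\.exe)?\b.*\.reg"?\s*$', re.IGNORECASE)

def is_suspicious_reg_launch(command_line):
    return bool(REG_LAUNCH.search(command_line.strip()))

print(is_suspicious_reg_launch(r'regedit.exe "C:\Users\bob\Downloads\evil.reg"'))
print(is_suspicious_reg_launch("notepad.exe C:\\notes.txt"))
```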

Adam Swan / @acalarch
Nate Guagenti / @neu5ron


VBA Obfuscation and Macro Obfuscation

Visual Basic Obfuscation via Line Continuation

Be careful while writing YARA signatures for Microsoft Office Macros. A simple technique used to bypass detection of “sub document_open()” for instance is to break it up with the VBA line continuation character “_” (underscore).

We’ve seen this break a few office malware signatures… so you may wish to check your vendor.

If your vendor is only looking for "document_open" or the equivalent VBA auto-open then you will be ok. However, if the vendor is looking for the surrounding parentheses or the preceding "sub" then you may want to double check.

Below are three examples:

**split among 3 lines

**split among several lines

**VirtualProtect (commonly used when executing shellcode) being imported from Kernel32 split among several lines
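A rough sketch of the first case (illustrative; an uncompressed VBA project would contain raw text along these lines, with " _" as the line-continuation character):

```
Sub _
 _
document_open()
    ' payload here
End Sub
```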

# Yara Rule
rule VBALineContinuationObfuscation
{
   meta:
      Author = "@acalarch, @neu5ron"
      Description = "Identifies potential VBA Obfuscation via empty line continuation, must provide yara an uncompressed vba project"
   strings:
      $a = { 20 5F 0D 0A 20 5F 0D 0A }
   condition:
      $a
}


Adam Swan / @acalarch
Nate Guagenti / @neu5ron


WEF Server, Add Missing Channels


In collaboration with Adam Swan (@acalarch), in our spare time, we have been setting up Windows Event Forwarding collections and looking at the thousands of Windows logs, rating their volume and their level of confidence/relevance to information security (as well as to Windows system admins and helpdesk). We have also been sharing this same information with Florian Roth (@cyb3rops), who is working on essentially the same thing.

This post assumes that you have set up the basics of Windows Event Forwarding / Windows Event Framework / Windows Event Collection / Windows Event Subscriptions. (relevant side note: Microsoft apparently hasn’t identified a common lexicon when talking about windows events & subscriptions). 

The Problem
While attempting to collect logs from certain clients, we noticed they had software whose Windows event log channels did not exist on the WEF server. When you create a subscription and go to select an event channel to pull from, the list of channels is populated by what is available on the subscriber (the server collecting the events). Therefore, if a channel exists on a client being collected from but not on the subscriber, the channel will not be available (see figure x). Also, performing a registry hack may sometimes cause instability (ie: in HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\WINEVT\Publishers; HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\WINEVT\Channels; HKLM\System\CurrentControlSet\Control\WMI\Autologger).
Client on the left, Sysmon channel available. Subscriber on the right, Sysmon Channel Not Available. How do I subscribe to a channel that doesn’t exist locally??

The Solution

The solution is to add the missing channels to the subscriber. You can do this by installing the manifest for the missing Windows event channel OR the much simpler way of adding the channel subscription in XML form. Also, https://blogs.technet.microsoft.com/russellt/2016/05/18/creating-custom-windows-event-forwarding-logs/ may be of help. First we’ll dig into a little background knowledge for your sanity, then we’ll provide a step-by-step on how to create a provider for either type.

Background on Windows Events

There are basically two ways to create a channel… Classic and Manifest-Based [https://msdn.microsoft.com/en-us/library/windows/desktop/aa364161(v=vs.85).aspx]. This post will describe how to create a channel of either type.

Creating a Classic Channel:

This will only work for logs that are NOT under the “Application and Services Logs” path.
This is an easy one-liner in PowerShell: New-EventLog -LogName "ThelognameIwanttocreate" -Source "TheSourceIwishTocreate"

Creating a Manifest Based Channel:

This Involves Two Steps:
  1. Collect the manifest off of a computer where the channel exists already or from the executable that creates the channel
  2. Install Manifest using wevtutil im manifest.name

Background for Manifest
The manifest for a channel is an XML document that describes a provider. The provider name becomes the name of your channel as you are used to seeing it in the Windows Event log. [https://msdn.microsoft.com/en-us/library/windows/desktop/dd996930(v=vs.85).aspx]

Obtaining the Manifest You Need
There are several ways of going about getting the manifest. It may be published online. In the case of sysmon, simply running “sysmon -m” will install the event manifest and nothing else. Other times it can be quite tricky to find. Here are some methods we’ve tested and had success with.

Windows PerfView

The easiest way is to try your luck with Microsoft’s PerfView [https://www.microsoft.com/en-us/download/details.aspx?id=28567].
PerfView has a command “DumpRegisteredManifest”, which will dump the manifest for the specified channel into the current working directory. This worked for most channels we tested.
Running “perfview /nogui /accepteula userCommand DumpRegisteredManifest [Channel-Name]” on a host to obtain the desired manifest.
Notepad++ (or any of your favorite IDEs/text editors)
Another way to obtain the Windows event manifest is to search for it inside the executable you believe contains it. Notepad++ has a decent search utility for this. Try keywords that should exist in the manifest for each of the executables associated with the channel, such as “eventman.xsd” (you may also want to try “e.v.e.n.t.m.a.n.\..x.s.d” as the manifest may be stored in Unicode).
Sysmon Manifest found within the executable.

Installing the Manifest

Luckily, installing the manifest is a simple one-liner. A “resources could not be found” error should be expected since we are installing the manifest without installing sysmon; the channel will still appear in the Windows Event Viewer.
“wevtutil im mymanifest.whatever”

Final Thoughts

It’d really be nice if Microsoft would make the manifests exportable without installing additional tools. Additionally, if you are a developer, be a scholar like Mark Russinovich (sysmon) and publish your manifest or make it easily installable.


Download all of Malware-Traffic-Analysis.net PCAPs

Download PCAPs from www.malware-traffic-analysis.net
http://www.malware-traffic-analysis.net/ is an excellent resource that a lot of people in the infosec community use. Hats off to @malware_traffic for creating a valuable resource for the community.

I have always wanted to download all the PCAPs from the site to run locally for different purposes. The PCAPs are useful for a variety of reasons, including replaying/re-running them to check your IPS and/or IDS, testing a passive DNS implementation, collecting more malware samples, training exercises, etc.

So I wrote a python script last night to do that. I was going to release the script online, but I thought "wellp, if a good amount of people run this script then it will cause a lot of unnecessary traffic to Brad's (@malware_traffic) site".
Instead of releasing the script I decided to just create a GitHub repo and upload all the PCAPs there.

Just run the following command to download all of the PCAPs.
git clone https://github.com/neu5ron/malware-traffic-analysis-pcaps.git

If anyone has any comments, expletives, or any other feedback then please comment.


Setup ElasticSearch Logstash and Kibana ELK with Bro

I would not follow this installation process anymore, but you may use it for a few notes: logstash-forwarder has changed file locations and TLS configuration, and Kibana has changed A LOT from v3 to v6.

This tutorial will install ELK stack and input Bro HTTP, SSL, Conn, DNS, Files, and DHCP logs with GeoIP and using Kibana over HTTPS.

This documentation is assuming you are using Ubuntu as the server. I was using a 64GB RAM server with 6 cores.

Server Installation:

#Install Java
sudo add-apt-repository -y ppa:webupd8team/java;
sudo apt-get update;
sudo apt-get -y install oracle-java7-installer;

#Install ElasticSearch
wget -O - https://packages.elasticsearch.org/GPG-KEY-elasticsearch | sudo apt-key add -;
echo 'deb http://packages.elasticsearch.org/elasticsearch/1.4/debian stable main' | sudo tee /etc/apt/sources.list.d/elasticsearch.list;
sudo apt-get update;
sudo apt-get -y install elasticsearch;

sudo vi /etc/elasticsearch/elasticsearch.yml
#Add the following line somewhere in the file, to disable dynamic scripts:
script.disable_dynamic: true
#Find the line that specifies network.host and uncomment it so it looks like this:
network.host: localhost
Save and exit elasticsearch.yml.

#Tune ElasticSearch
#Add to /etc/sysctl.conf
fs.file-max = 65536

#Add to /etc/security/limits.conf
* soft nproc 65535
* hard nproc 65535
* soft nofile 65535
* hard nofile 65535
elasticsearch - nofile 65535
elasticsearch - memlock unlimited

#Uncomment the following lines and change the values in /etc/default/elasticsearch:
#Set ES_HEAP_SIZE to half of your dedicated RAM, max 31GB

# Uncomment the line in "/etc/elasticsearch/elasticsearch.yml"
bootstrap.mlockall: true

sudo swapoff -a
#To disable it permanently, you will need to edit the /etc/fstab file and comment out any lines that contain the word swap.

#Reboot server
sudo shutdown -r now
#start elasticsearch:
sudo service elasticsearch start
#autostart elasticsearch:
sudo update-rc.d elasticsearch defaults 95 10

#Install Kibana
cd ~; wget https://download.elasticsearch.org/kibana/kibana/kibana-3.1.2.tar.gz;
tar -zxvf kibana-3.1.2.tar.gz;
#Open the Kibana configuration file for editing:

vi ~/kibana-3.1.2/config.js
In the Kibana configuration file, find the line that specifies the elasticsearch address, and replace the port number (9200 by default) with 80:

elasticsearch: "http://"+window.location.hostname+":80",

sudo mkdir -p /var/www/kibana;
sudo cp -R ~/kibana-3.1.2/* /var/www/kibana/;

#Install Nginx
sudo apt-get -y install nginx;
cd ~; wget https://gist.githubusercontent.com/thisismitch/2205786838a6a5d61f55/raw/f91e06198a7c455925f6e3099e3ea7c186d0b263/nginx.conf
Find and change the values of the server_name to your FQDN (or localhost if you aren't using a domain name) and root to the location where we installed Kibana, so they look like the following entries:
vi nginx.conf
 server_name           localhost;
 root /var/www/kibana;

sudo cp nginx.conf /etc/nginx/sites-available/default;
sudo apt-get install apache2-utils;
#replace $USERNAME with your username you want to use
sudo htpasswd -c /etc/nginx/conf.d/kibana.myhost.org.htpasswd $USERNAME;

#Make Kibana over SSL:
#generate certificate
sudo openssl req -x509 -sha512 -newkey rsa:4096 -keyout /etc/nginx/kibana.key -out /etc/nginx/kibana.pem -days 3560 -nodes

sudo vi /etc/nginx/sites-available/default
#change the listen on port to *:443
#and add to the file under the line that says "access_log            /var/log/nginx/kibana.myhost.org.access.log;":
 #Enable SSL
 ssl on;
 ssl_certificate /etc/nginx/kibana.pem;
 ssl_certificate_key /etc/nginx/kibana.key;
 ssl_session_timeout 30m;
 ssl_protocols TLSv1.2;
 ssl_prefer_server_ciphers on;
 ssl_session_cache shared:SSL:10m;
 ssl_stapling on;
 ssl_stapling_verify on;
 add_header Strict-Transport-Security max-age=63072000;
 add_header X-Frame-Options DENY;
 add_header X-Content-Type-Options nosniff;

#Change the line "elasticsearch: "http://"+window.location.hostname+":80"," in /var/www/kibana/config.js to
elasticsearch: "https://"+window.location.hostname+":443",

#Restart nginx
sudo service nginx restart;

#Setup GEOIP
sudo mkdir /usr/share/GeoIP; #Create location that we will use to store the GeoIP databases/information
sudo wget http://download.maxmind.com/download/geoip/database/asnum/GeoIPASNum.dat.gz; #IPv4 ASNumber Database
sudo wget http://download.maxmind.com/download/geoip/database/asnum/GeoIPASNumv6.dat.gz; #IPv6 ASNumber Database
sudo wget http://geolite.maxmind.com/download/geoip/database/GeoLiteCountry/GeoIP.dat.gz; #IPv4 GeoIP Country Code Database
sudo wget http://geolite.maxmind.com/download/geoip/database/GeoIPv6.dat.gz; #IPv6 GeoIP Country Code Database
sudo wget http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz; #IPv4 GeoIP City Database
sudo wget http://geolite.maxmind.com/download/geoip/database/GeoLiteCityv6-beta/GeoLiteCityv6.dat.gz; #IPv6 GeoIP City Database
sudo gzip -d Geo*; #Decompress all the databases
sudo mv Geo*.dat /usr/share/GeoIP/; #Move all the databases to the GeoIP directory

#Install LogStash

sudo apt-get install git;
echo 'deb http://packages.elasticsearch.org/logstash/1.4/debian stable main' | sudo tee /etc/apt/sources.list.d/logstash.list;
sudo apt-get update;
sudo apt-get -y install logstash;
sudo mkdir -p /etc/pki/tls/certs;
sudo mkdir /etc/pki/tls/private;
cd ~/ && git clone https://github.com/logstash-plugins/logstash-filter-translate.git;
sudo cp logstash-filter-translate/lib/logstash/filters/translate.rb /opt/logstash/lib/logstash/filters/translate.rb;
rm -rf logstash-filter-translate/;

#Server Config File
#Clone server config file
sudo apt-get install git;
git clone  https://github.com/neu5ron/siem-and-event-forwarding-configs.git;
sudo mv siem-and-event-forwarding-configs/logstash-server.conf /etc/logstash/conf.d/all_logstash.conf;

Client Installation:
wget -O - http://packages.elasticsearch.org/GPG-KEY-elasticsearch | sudo apt-key add -;
echo 'deb http://packages.elasticsearch.org/logstashforwarder/debian stable main' | sudo tee /etc/apt/sources.list.d/logstashforwarder.list;
sudo apt-get update;
sudo apt-get install logstash-forwarder;
cd /etc/init.d/; sudo wget https://raw.github.com/elasticsearch/logstash-forwarder/master/logstash-forwarder.init -O logstash-forwarder;
sudo chmod +x logstash-forwarder;
sudo update-rc.d logstash-forwarder defaults;
sudo mkdir -p /etc/pki/tls/certs;
cd /etc/pki/tls; sudo openssl req -x509 -batch -nodes -days 3650 -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
#Copy cert to logstash server
scp /etc/pki/tls/certs/logstash-forwarder.crt user@server_private_IP:/etc/pki/tls/certs/
#Copy key to logstash server
scp /etc/pki/tls/certs/logstash-forwarder.key user@server_private_IP:/etc/pki/tls/private/

#Client Logstash configuration file
#Clone server config file
git clone  https://github.com/neu5ron/siem-and-event-forwarding-configs.git
#Make sure you edit the file "logstash-bro_client.conf" to include the location of your bro logs and your $SERVERIP before moving the file.
sudo mv siem-and-event-forwarding-configs/logstash-bro_client.conf  /etc/logstash-forwarder

#Restart logstash
sudo service logstash-forwarder restart

#Now start logstash on the server
sudo service logstash restart


errors/logs for logstash
for server /var/log/logstash/logstash.log
for client /var/log/syslog
Logstash troubleshoot client
sudo /opt/logstash-forwarder/bin/logstash-forwarder -config=/etc/logstash-forwarder
Logstash troubleshoot server
sudo /opt/logstash/bin/logstash -f /etc/logstash/conf.d/all_logstash.conf --configtest
get list of indexes
curl -XGET 'http://localhost:9200/_aliases'
delete a specific index
curl -XDELETE 'http://{server}/{index_name}/{type_name}/'
example: curl -XDELETE 'http://localhost:9200/logstash-2014.11.18/palo_alto_traffic_log'
delete all database
curl -XDELETE 'http://localhost:9200/*'

Documentation Followed: