In my previous post, Learn More About Your Home Network with Elastic SIEM – Part 1: Setting Up Elastic SIEM, I explained how to set up Elastic SIEM on a Raspberry Pi[ad]. The next thing you will want to do is collect the logs from your firewall and analyze them. Before I jump into the technical details, I should warn you that you may not be able to follow the steps below if you rely on consumer products or use the equipment provided by your ISP.

Let me go on a short rant here! Every self-respecting router vendor should allow firewall logs to be sent to an external system. A common approach is to use the syslog protocol to ship the logs. If your router does not have this capability… well, I would suggest you buy a new, more advanced one.

I personally invested in a tiny Netgate SG-1100 box that runs the open-source pfSense router/firewall. You can, of course, install pfSense on your own hardware if you don’t want to buy a new device. pfSense allows you to configure up to three external log servers. Logstash, which we configured in the previous post, can play the role of a syslog server and forward the events to Elasticsearch. Here is how simple the pfSense log-shipping configuration looks:

The IP address 192.168.11.72 is the address of the Raspberry Pi where the ELK SIEM is installed, and 5140 is the port on which Logstash listens for incoming events. That is all you need to configure on the pfSense side to send the logs to the ELK SIEM.
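
For reference, a minimal Logstash pipeline that accepts these syslog events on port 5140 and writes them to the local Elasticsearch instance could look like the sketch below. The file name under /etc/logstash/conf.d/ and the index name are examples of mine; the Patrick Jennings project mentioned in the next paragraph provides a complete configuration that also parses the pfSense log format.

    # /etc/logstash/conf.d/10-pfsense-syslog.conf -- example file and index names
    input {
      tcp { port => 5140 type => "syslog" }
      udp { port => 5140 type => "syslog" }
    }
    output {
      elasticsearch {
        hosts => ["localhost:9200"]
        index => "logstash-pfsense-%{+YYYY.MM.dd}"
      }
    }

Restart the Logstash service after adding the file so it picks up the new pipeline.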

Our next step is to configure Logstash to collect the events from pfSense and feed them into an index in Elasticsearch. The following project from Patrick Jennings will help you with the Logstash configuration. If you follow the instructions, you will see the new index show up in Kibana like this:

The last thing we need to do is create a dashboard in Kibana to show the data collected from the firewall. Patrick Jennings’ project has pre-configured visualizations and a dashboard for the pfSense data. Unfortunately, when you import those, Kibana warns you that they need to be updated. The reason is that they use the old JSON format, while the latest Kibana versions require all objects to be described in the Newline Delimited JSON (NDJSON) format (for more details, visit ndjson.org). The updated pfSense dashboard and visualizations are available in my GitHub repository for Home SIEM.
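
If you prefer the command line over the Kibana UI, the NDJSON file can also be loaded through Kibana’s saved objects import API. The file name below is a placeholder for whichever dashboard export you downloaded, and the address is the Raspberry Pi from above:

    # import the NDJSON saved objects into Kibana (file name is a placeholder)
    curl -X POST "http://192.168.11.72:5601/api/saved_objects/_import?overwrite=true" \
      -H "kbn-xsrf: true" \
      --form file=@pfsense-dashboard.ndjson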

Now, keep in mind that the pfSense logs will not feed into the SIEM functionality of the Elastic Stack because they are not in the Elastic Common Schema (ECS) format. What we have created is just a dashboard that visualizes the firewall log data. Also, the dashboard and the visualizations are built using the pfSense data. If you use a different router/firewall, you will need to update the configuration to visualize its data, and things may not work out of the box. I would be curious to hear feedback on how other routers can send data to ELK.

In subsequent posts, I will describe how you can use Beats to get data from the machines connected to your local network and how you can dig deeper into the collected data.

Last night I had some free time to play with my network, and I ran tcpdump out of curiosity. For a while, I have been interested in analyzing what traffic goes through my home network, and the result of my test pushed me to get to work. I have a bunch of Raspberry Pi devices in my drawers, so the simplest thing I could do was grab one and install Elastic SIEM on it. For those of you who don’t know, SIEM stands for Security Information and Event Management. My hope was that with it, I would be able to get a better understanding of the traffic on my home network.

Installing Elastic SIEM on Raspberry Pi

The first thing I had to do was install the ELK stack on a Raspberry Pi. There are not too many good articles that explain how to set up Elastic SIEM on your Pi. According to Elastic, Elastic SIEM is generally available in Elastic Stack 7.6, so installing the Elastic Stack should do the job.

A couple of notes before that:

  1. The first thing to keep in mind is that 8GB is the minimum requirement for the ELK stack. You can get away with a 2GB Pi, but if you want to run the whole stack (Elasticsearch, Logstash, and Kibana) on a single device, make sure that you order a Raspberry Pi 4 Model B Quad Core 64 Bit with 4GB[ad]. Even this one is a stretch if you collect a lot of information. A good option would be to split the stack over two devices: one for Elasticsearch and Kibana, and another one for the Logstash service
  2. Elastic has no builds for Raspbian. Hence, in the instructions below, I will use the Debian packages and describe the steps to install those on the Pi. This will require some custom configs and scripts, so be prepared for that. Well, this article is about hacking the installation, and no warranties are provided 🙂
  3. You will not be able to use the ML functionality of Elasticsearch because it is not supported on the low-powered Raspberry Pi
  4. The steps below assume version 7.7.0 of the ELK stack. If you are installing a different version, make sure that you replace the version number accordingly in the commands below
  5. Last but not least (in the spirit of no warranties), Elasticsearch has a dependency on libc6 that will be ignored and will break future updates. You will have to deal with this at your own risk

Here are the steps to follow.

Installing Elasticsearch on Raspberry Pi

  1. Set up your Raspberry Pi first. Here are the steps to set up your Raspberry Pi. The Raspberry Pi Imager makes it even easier to set up the OS. Once again, I would recommend using a Raspberry Pi 4 Model B Quad Core 64 Bit with 4GB[ad] and a larger SD card[ad] to save logs for longer.
  2. Make sure all packages are up to date, and your Raspbian OS is fully patched.
    sudo apt-get update
    sudo apt-get upgrade
  3. Install the ZIP utility; we will need it later for the Logstash configuration.
    sudo apt-get install zip
  4. Then, install the Java JRE because the Elastic Stack requires it. You can use the open-source JRE to avoid licensing troubles with Oracle’s.
    sudo apt-get install default-jre
  5. Once this is done, go ahead and download the Debian package for Elasticsearch. Make sure that you download the package with no JDK in it.
    wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.7.0-no-jdk-amd64.deb
  6. Then install Elasticsearch using the package manager.
    sudo dpkg -i --force-all --ignore-depends=libc6 elasticsearch-7.7.0-no-jdk-amd64.deb
  7. Next, we need to configure Elasticsearch to use the installed JDK.
    sudo vi /etc/default/elasticsearch

    Set the JAVA_HOME to the location of the JDK. Normally, this is /usr/lib/jvm/default-java. You can also set JAVA_HOME to the same path in the /etc/environment file, but this is not required.

  8. The last thing you need to do is to disable the ML X-Pack feature for Elasticsearch. Change the access mode of the /etc/elasticsearch directory first, and then edit the Elasticsearch configuration file.
    sudo chmod g+w /etc/elasticsearch
    sudo vi /etc/elasticsearch/elasticsearch.yml

    Change the xpack.ml.enabled setting to false as follows:

    xpack.ml.enabled: false

The above steps install and configure the Elasticsearch service on your Raspberry Pi. You can start the service with:

sudo service elasticsearch start

Or check its status with:

sudo service elasticsearch status
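
If the service comes up cleanly, Elasticsearch should respond on its default port 9200 after a minute or two. A quick sanity check (assuming you kept the default network settings) is:

    # Elasticsearch answers with a JSON document describing the node and version
    curl http://localhost:9200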

Installing Logstash on Raspberry Pi

Installing Logstash on the Raspberry Pi turned out to be a bit more problematic than Elasticsearch. Again, Elastic doesn’t have a Logstash package that targets the ARM architecture, so some manual fixes are needed after the installation. StackOverflow posts and GitHub issues were particularly helpful here – I listed the two I used in the References at the end of this article. Here are the steps:

  1. Download the Logstash Debian package from Elastic’s repository.
    wget https://artifacts.elastic.co/downloads/logstash/logstash-7.7.0.deb
  2. Install the downloaded package using the dpkg package installer.
    sudo dpkg -i logstash-7.7.0.deb
  3. If you run Logstash at this point and encounter an error similar to logstash load error: ffi/ffi -- java.lang.NullPointerException: null, get Alexandre Alouit’s fix from GitHub:
    wget https://gist.githubusercontent.com/toddysm/6b4b9c63f32a3dfc476a725561fc23af/raw/06a2409df3eba5054d7266a8227b991a87837407/fix.sh
  4. Go to /usr/share/logstash/logstash-core/lib/jars and check the version of the jruby-complete-X.X.X.X.jar JAR
  5. Open the downloaded fix.sh, and replace the version of jruby-complete-X.X.X.X.jar on line 11 with the one from your distribution. In my case, that was jruby-complete-9.2.11.1.jar
  6. Change the permissions of the downloaded fix.sh script, and run it.
    chmod 755 fix.sh
    sudo ./fix.sh
  7. You can now run Logstash with:
    sudo service logstash start

You can check the Logstash logs in /var/log/logstash/logstash-plain.log to verify that Logstash started successfully.
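
For example, the following commands show the service state and the most recent entries in that log:

    # check that the service is running and inspect the latest log entries
    sudo service logstash status
    sudo tail -n 50 /var/log/logstash/logstash-plain.log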

Installing Kibana on Raspberry Pi

Installing Kibana came with different challenges. The problem is that Kibana requires an older version of NodeJS, and the one bundled with the Debian package doesn’t run on Raspbian. What you need to do is replace the NodeJS version after you install the Debian package. Here are the steps:

  1. Download the Kibana Debian package from Elastic’s repository.
    wget https://artifacts.elastic.co/downloads/kibana/kibana-7.7.0-amd64.deb
  2. Install the downloaded package using the dpkg package installer.
    sudo dpkg -i --force-all kibana-7.7.0-amd64.deb
  3. Move the redistributed NodeJS to another folder (or delete it completely) and create a new, empty node directory in the Kibana installation directory.
    sudo mv /usr/share/kibana/node /usr/share/kibana/node.OLD
    sudo mkdir /usr/share/kibana/node
  4. Next, download version 10.19.0 of NodeJS. This is the required version of NodeJS for Kibana 7.7.0. If you are installing another version of Kibana, you may want to check what NodeJS version it requires. The best way to do that is to start the Kibana service and it will tell you.
    wget https://nodejs.org/download/release/v10.19.0/node-v10.19.0-linux-armv7l.tar.xz
  5. Unpack the TAR and move its content to the node directory under the Kibana installation directory.
    sudo tar -xJvf node-v10.19.0-linux-armv7l.tar.xz
    sudo mv ./node-v10.19.0-linux-armv7l/* /usr/share/kibana/node
  6. You may also want to create symlinks for the NodeJS executable and its tools.
    sudo ln -s /usr/share/kibana/node/bin/node /usr/bin/node
    sudo ln -s /usr/share/kibana/node/bin/npm /usr/bin/npm
    sudo ln -s /usr/share/kibana/node/bin/npx /usr/bin/npx
  7. Configure Kibana to accept requests on any IP address on the device.
    sudo vi /etc/kibana/kibana.yml

    Set the server.host setting to 0.0.0.0 like this:

    server.host: "0.0.0.0"
  8. You can run Kibana with:
    sudo service kibana start
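
Kibana takes a few minutes to start on the Pi. Once it is up, you should be able to open http://<your-pi-address>:5601 in a browser; from the command line, a quick check against the status API (assuming the default port) looks like this:

    # returns a JSON document with the overall status once Kibana is ready
    curl -s http://localhost:5601/api/status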

Conclusion

Although not officially supported, you can run the complete ELK stack on a Raspberry Pi 4[ad] device. It is not the most trivial installation, but it is not that hard either. In the following posts, I will explain how you can use Elastic SIEM to monitor the traffic on your network.

References

Here are some additional links that you may find useful:

For a while, I’ve been planning to build a cybersecurity research environment in the cloud that I can use to experiment with and research malicious cyber activities. Well, yesterday I received the following message on my cell phone:

Hello mate, your FEDEX package with tracking code GB-6412-GH83 is waiting for you to set delivery preferences: <url_goes_here>

The temptation to follow the link was so strong that I said: “Now is the time to get this sandbox working!” I will write about the scam above in another blog post, but in this one, I would like to explain what I needed in the cloud and how I set it up.

Cybersecurity Research Needs

What do I (think I) need from this sandbox environment? I know that my requirements will change over time as I get deeper into the various scenarios, but for now, here is what I wanted to have:

  • First and foremost, a dedicated network where I can click on malicious links and browse dark web sites without the fear that my laptop or local network will get infected. I also need the ability to split the network into subnets for different purposes
  • Next, I needed pre-built VM images for various purposes. Here are some examples:
    • A Windows client machine to act as an unsuspecting user. Most probably, this VM will need to have Microsoft Office and other software like Acrobat Reader installed. Other, more advanced software that will help track URLs and monitor processes may also be required on this machine. I will go deeper into this in a separate post
    • A Linux machine with networking tools that will allow me to better understand and monitor network traffic. Kali Linux may be the right choice, but Ubuntu and CentOS may also be helpful as different agents
    • I may also need some Windows Server and Linux server images to simulate enterprise environments
  • Very important for me was the ability to create the VM I need quickly, do the work, and tear it down after that. Automating the process of VM creation and setup was high up on the list
  • Also, I wanted to make sure that if I forget a VM running, it will be shut down automatically to save money

With those basic requirements, I got to work setting up the cloud environment. For my experiments, I chose Microsoft Azure because I have a good relationship with the Azure team and also some credits that I can use.

Segregating Network Access

As mentioned in the previous section, I need a separate network for the VMs to avoid any possibility of infecting my laptop. Now, the question that comes to mind is: is a single virtual network with several subnets OK or not? Keeping in mind that I can destroy this environment at any time and recreate it (yes, this is the automation part), I decided to go with a single VNet and the following subnets in it:

  • Sandbox subnet to be used to spin up virtual machines that can simulate user behavior. Those will be either single VMs running a Windows client and Microsoft Office, or a set of those if I want to observe lateral movement within the network. I may also have a Linux machine with Wireshark installed on it to watch network traffic to the web sites that host the malicious pages
  • Honeypot subnet to be used to expose vulnerable devices to the internet. Those may be Windows Server Datacenter VMs or Linux servers with outdated patches and weak or compromised passwords
  • Frontend subnet to be used to host exploit frameworks for red team scenarios. One example can be the Social Engineering Toolkit (SET). Also, simple redirection services or other apps can be placed in this subnet
  • Public subnet that is required in Azure if I need to set up any load balancers for the exploit apps

With all this in mind, I needed to make sure that traffic between those subnets is not allowed. Hence, I set up a Network Security Group for each subnet to block inbound and outbound VNet traffic.
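
For those who want to script it, here is a minimal Azure CLI sketch of that layout. The resource group, names, address ranges, and rule priorities are placeholders I picked for illustration, and only the sandbox subnet’s NSG is shown; the other subnets would follow the same pattern.

    # resource group and a single VNet with the four subnets
    az group create --name secresearch-rg --location westus2
    az network vnet create --resource-group secresearch-rg --name secresearch-vnet \
      --address-prefixes 10.10.0.0/16 --subnet-name sandbox --subnet-prefixes 10.10.1.0/24
    for s in honeypot:10.10.2.0/24 frontend:10.10.3.0/24 public:10.10.4.0/24; do
      az network vnet subnet create --resource-group secresearch-rg \
        --vnet-name secresearch-vnet --name "${s%%:*}" --address-prefixes "${s##*:}"
    done

    # an NSG that denies traffic to and from the rest of the VNet, attached to the sandbox subnet
    az network nsg create --resource-group secresearch-rg --name sandbox-nsg
    az network nsg rule create --resource-group secresearch-rg --nsg-name sandbox-nsg \
      --name deny-vnet-inbound --priority 4000 --direction Inbound --access Deny \
      --protocol '*' --source-address-prefixes VirtualNetwork --source-port-ranges '*' \
      --destination-address-prefixes VirtualNetwork --destination-port-ranges '*'
    az network nsg rule create --resource-group secresearch-rg --nsg-name sandbox-nsg \
      --name deny-vnet-outbound --priority 4000 --direction Outbound --access Deny \
      --protocol '*' --source-address-prefixes VirtualNetwork --source-port-ranges '*' \
      --destination-address-prefixes VirtualNetwork --destination-port-ranges '*'
    az network vnet subnet update --resource-group secresearch-rg --vnet-name secresearch-vnet \
      --name sandbox --network-security-group sandbox-nsg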

Cybersecurity Virtual Machine Images

This can be an ever-growing list depending on the scenarios, but below are the first few I would like to start with. I will give them specific names based on the purpose I plan to use them for:

User VM

The purpose of the User VM is to simulate an office worker (think of a receptionist, accountant, or admin). Typically, such machines have a Windows client OS installed as well as other office software like Word, Excel, PowerPoint, Outlook, and Acrobat Reader. Different browsers like Edge, Chrome, and Firefox will also be useful for testing.

At this point, the question I need to answer is whether I would like to have any other software installed on this machine that will allow me to analyze memory, reverse-engineer binaries, or monitor network traffic on it. I decided to go with a clean user machine to be able to see the exact behavior the user sees and not impact it with the availability of other tools. Another reason I thought this would be a better approach is to avoid malware that checks for the existence of specialized software.

When I started building this VM image, I also had to decide whether I want any anti-virus software installed on it. And of course, the question is: “What anti-virus software?” My company is a Sophos partner, so this would be my obvious choice. I decided to build two VM images: one without anti-virus and one with.

User Analysis VM

This one is the same as the User VM but with added software for malware analysis. The minimum I need installed is:

  • Wireshark for network traffic analysis
  • Cygwin for the necessary Linux tools
  • A hex dump viewer/editor
  • A decompiler

I am pretty sure the list will grow over time, and I will keep track of what software I have installed or prefer to use. My intent is to build a Kali Linux type of VM, but for Windows 🙂

Kali VM

This one is the obvious choice. It can be used not only for offensive capabilities but also to cover your identity when accessing malicious sites. It has all the necessary tools for hackers and it is available from the Microsoft Azure Marketplace.

Tor VM

One last VM type I would like to have is a machine on which I can install the Tor browser for private browsing. Similar to the Kali VM, it can be used for hiding the identity of the machine and the user who accesses the malicious sites. It can also be used to gain access to Dark Web sites and forums for cybersecurity research purposes.

Those are the VM images I plan for now. Later on, I may decide on more.

Automating the Security Research VM Creation

Ideally, I would like to be able to create the whole security research environment with a single script. While this is certainly possible, I would not be able to do it within the hour I had to check the URL above. I will slowly implement the automation based on my needs going forward. However, one thing that I will do immediately is save regular snapshots of my VMs once I install new software. I will also need to version those.

I can use those snapshots to quickly spin up a new VM with the required software if the previous one gets infected (and this will certainly happen). So, for now, my automation will be limited to creating a VM from an Azure Disk snapshot.
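
As a starting point, here is roughly what that looks like with the Azure CLI. The snapshot, disk, and VM names, as well as the resource group and subnet, are placeholders carried over from the sketch above, not something prescribed by Azure:

    # snapshot the OS disk of a configured VM (run once after installing new software)
    az snapshot create --resource-group secresearch-rg --name uservm-clean-v1 \
      --source "$(az vm show -g secresearch-rg -n uservm-template \
        --query 'storageProfile.osDisk.managedDisk.id' -o tsv)"

    # later: create a fresh managed disk from the snapshot and boot a VM from it
    az disk create --resource-group secresearch-rg --name uservm-01-disk --source uservm-clean-v1
    az vm create --resource-group secresearch-rg --name uservm-01 \
      --attach-os-disk uservm-01-disk --os-type Windows \
      --vnet-name secresearch-vnet --subnet sandbox \
      --nsg '' --public-ip-address ''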

Shutting Down the Security Research VMs

My last requirement was to shut down the security research VMs when I don’t need them. Like every other person in the world, I forget things, and if I leave the VMs running, I can incur some expenses that I would not be happy to pay. While I am working on a full-fledged scheduling capability for Azure resources for customers, it is still in the works, and I cannot use it yet. Hence, I will leverage the built-in Azure functionality for Dev/Test workloads and schedule a daily shutdown at 10:00 PM PST. This way, if I forget to turn off a VM, it will not continue to run indefinitely.
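
The same setting can also be applied per VM from the Azure CLI; note that the time is specified in UTC, so 10:00 PM PST corresponds to 0600 UTC (the VM and resource group names are the placeholders used above):

    # schedule the built-in daily auto-shutdown for a VM (time is in UTC)
    az vm auto-shutdown --resource-group secresearch-rg --name uservm-01 --time 0600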

With all this, my plan is ready, and I can move on to building my security research environment.