If you have missed the news lately, cybersecurity is one of the most discussed topics nowadays. From supply chain exploits to data leaks to business email compromise (BEC), there is no break – especially during the pandemic. Many (if not all) of those attacks start with an account compromise. And if you ask any cybersecurity expert, they will tell you that the best way to protect your account is to use two-factor (or multi-factor) authentication. Well, let me tell you a secret – MFA sucks! Ask the Okta guys! Even they think MFA sucks. And they are an identity management company. Though, Randall and I have different motives for making that claim.

By the way: 2FA stands for “two-factor authentication”, while MFA stands for “multi-factor authentication”. I will use those two acronyms to save on some typing. And one more: TLA means “three-letter acronym”.

Randall goes on and on in his post about why MFA sucks. Most of his points are valid! It is an annoying and frustrating experience. I don’t know about slow, but I would argue against pointless – it serves a purpose, a very good purpose. Where he is mainly wrong is in thinking that the solution is yet another technology (well, the whole point of his post is to market Okta’s new technology, so he gets a pass for that 😉 ). This new technology will not address the source of the issue – that people are scared to use MFA. Take a look at the Twitter Account Security survey – why do you think only 2.3% (at the time of this writing) of all Twitter users have MFA enabled? Here are the reasons, as I see them:

  • Complexity and lack of understanding of the technology
  • Fear of losing access to the accounts

I believe people are smart enough to grasp the benefits without too many explanations. What they are not clear about is how to set it up and how to make sure they don’t lose access to their accounts. In general, my frustration is with how the technology vendors have implemented MFA – without any thought for the user experience. Let me illustrate what I mean with my own experience.

The Problem With Too Many MFAs

I have set up MFA on all my important accounts. The list is long: bank accounts, credit card accounts, stock brokerage accounts, government accounts (like taxes, DMV, etc.), email accounts (like Office 365 and Gmail), GitHub, Twitter, Facebook, you name it. I have been doing this for years, and the list continues to grow. I am also required by former and current employers to use MFA for my work accounts – also a long list. Here is a list of the MFA methods I use (and I don’t claim this to be a comprehensive list):

  • SMS
  • email (several different emails)
  • authenticator apps (here is where things get crazy)
    • Microsoft Authenticator
    • Google Authenticator (I started with this one)
    • Entrust
    • VIP Access
    • Lastpass Authenticator
  • other vendor-specific implementations (I really don’t know what to call those, but they have their own way of doing it)
    • Apple
    • WordPress
    • Facebook
  • Yubikey (I have three of those and I will ignore those in this post because the hardware key experience has been the same since the … 90s or so)

You think I am crazy, right? Why do I use so many authenticators and MFA methods? I don’t want to – the problem is that I have to!

First of all, this all evolved over the last 7-8 years since I started using MFA. It started with Google Authenticator; then Microsoft Authenticator and the Lastpass one; then a few banks added email and SMS (not very secure, but interestingly, some very big financial institutions still don’t offer other options!!!) while others struck deals with specific vendors to use their own authenticators – hence, I had to install one-off apps for those accounts. A couple of years ago I bought Yubikeys to secure my password managers and some important work and bank accounts.

At some point in time, I decided to unify on a single method! Or a single authenticator app and the Yubikey. That turned out to be impossible! Despite the fact that there is a standard, the authenticator vendors do not give you an easy way to transfer the seeds from one app to another. The only way to do that is to go over tens of accounts and register the new authenticator. Who wants to do that?! Also, a few of my financial institutions do not offer a way to use a different MFA method than the one they approved. So, I am stuck with the one-offs. Unless I want to change to different financial institutions. But… who wants to do that?!
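The standard in question is TOTP (RFC 6238): the “seed” each account gives you is just a shared secret, and any compliant app derives exactly the same six-digit codes from it. To show there is no vendor magic involved, here is a minimal sketch (my own illustrative function, not any vendor’s code):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, step=30, digits=6, now=None):
    """RFC 6238 TOTP: HMAC-SHA1 over the number of 30-second steps since epoch."""
    key = base64.b32decode(secret_b32.upper())
    counter = int((time.time() if now is None else now) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# Any app holding the same seed produces the same code for the same time window.
seed = base64.b32encode(b"12345678901234567890").decode()
print(totp(seed, now=59))  # RFC 6238 test vector at time 59s -> "287082"
```

Because every compliant authenticator computes exactly this function, moving accounts between apps is only a matter of moving the seeds – which is precisely the part the vendors make hard.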

There is one more problem with the use of so many tools. Very often, the systems that are set up with MFA ask you to “enter the code from your MFA app”. The question is: which MFA app have I registered for that system? There is an argument about whether revealing which app generates the code will compromise security. Some companies provide that information, while others don’t. If I were allowed to use a single authenticator app for all my accounts, I wouldn’t mind not knowing which tool I registered. But in the current situation, giving me a hint is a requirement for usability. Anyway, if my authenticator app is compromised, not telling me which app to use will make absolutely no difference.

Another problem with the authenticator apps is that (and this is prevalent in technology) the app thinks it knows best how to name things. If, for example, I have two accounts for GitHub and want to use MFA, the authenticators will show them both as GitHub. What if I have ten? Figuring out which code to use for which account is really hard.

The Problem With Switching Phones

Wait! Should I say: “The Problem With Losing Phones?” No, the problem with losing phones is next!

This is where the complexity of the MFA approach starts to show! I don’t think any of the above-mentioned vendors really thought the whole user experience through. I will add Duo to that list because I used it and it also sucks. Also, I will note here that colleagues recommended Twilio’s Authy, but I am already so deep in my current (diverse) ecosystem that I have no desire to try one more of those.

My wife (who is not in technology) has an iPhone 7, and she strongly refuses to change it because of all the trouble she needs to go through to set up her apps on a new phone. I spent a lot of time convincing her to set up at least SMS-based MFA for her bank and credit card accounts. And I think that is the extent she will go to. After I switched from the iPhone 7 to the latest (and greatest) iPhone 13, I completely understand her fear. (As a side note: changing your phone every year is really, really, really bad for the environment. Be environmentally friendly and use your gadgets for a longer time! I have set a goal to use my phones and other gadgets for at least 5 years.) It has been a few months already, and I continue to use some of the authenticators on my old phone because it is such a pain to migrate them to the new one.

Let me quickly go over the experience one by one.

Moving MFA Apps From Phone to Phone

Moving Apple MFA to my new phone was seamless. At the end of the day, they are the kings of user experience, and this was expected. Moving WordPress and Facebook was also relatively simple – as long as you manage to sign in to your WordPress and Facebook accounts on your new phone, you will start getting the prompts there.

Moving the Lastpass Authenticator should have been easy, but they really screwed up the flow between the actual Lastpass app and the authenticator app. I was clicking like crazy back and forth, going in circles for quite some time, until something magical happened and it started working on my new phone. For the accounts where I used Entrust, I had to go and register the new phone. Inconvenient, but at least it was self-service. The problems started when I got to VIP Access – I have to call my financial institution because they are the only ones who can register it. That will mean at least one hour on the phone.

Now, let’s get to the big ones!

Google Authenticator apparently has export functionality that allows you to export the seeds and import them on your new phone. If you know about it, it works like a charm too, but… I only recently learned about it from Dave Bittner and Joe Carrigan from The Cyberwire.

Microsoft Authenticator should have been the easiest one (they claim). As long as you are signed in with your Microsoft account, you should be able to get all the codes on your new phone. Well, kind of! This works for all MFA accounts except Microsoft work and school accounts. With all due respect to my colleagues from Azure Active Directory – the work and school account move sucks! You just need to go and register the new phone with those. Really disappointing!

“Insecure” MFA Methods When Switching Phones

Let me write about the non-secure ways to use MFA!

As I mentioned above, I also use email and SMS for certain accounts. The email experience also sucks! OK, I will be honest – this is certainly my own problem, and few people may have it, but this is my rant, so I will go with it. I have many email accounts collected over time (more about the reasons for that in some other post). One or another email account is used for MFA depending on what email address I used for registration on a particular website. Those emails are synchronized to a single email account but… the synchronization is on a schedule – about 30 mins or so. Now, every MFA code normally expires within 10 mins. Either I miss the time window to enter the code, or I need to log in to that particular email account to get the code, or I need to force the sync manually (yeah, I can do that, but it is annoying. Right, Randall?!). Switching my phone has nothing to do with my emails, so there is no impact there.

And the last one – SMS! Well, SMS doesn’t suck! You heard me! SMS doesn’t suck! … most of the time. Sometimes you will not get the text message on time due to networking issues but it works perfectly 99.999% of the time. Oh, and if I switch my phone, it continues to work without any additional configurations, QR codes, or calling my telco – like magic 😉

The Problem With Losing Your Phone

Now, here is where things get serious! Or at least serious if you use mobile apps for MFA. If you lose your phone, you are screwed. You will be locked out of all your accounts, or most of them.

Apple is fine if you are in the Apple ecosystem. I have a few MacBooks, an iPad, and an Apple Watch – at least one of them will get me the MFA code.

With Facebook, I am screwed unless I am signed in to one of my browsers. For a person like me who uses Facebook once in a blue moon, the probability is low, so if I lose my phone, I am screwed. (Maybe that will push me over the edge to finally stop using them 🙂 ). I assume the WordPress story will be the same as with Facebook. Oh, and have you ever tried to get Facebook support on the phone to help you unlock your account? 🙂

About the other ones…

Backup Codes (or The Problem With Backup Codes)

Well, most of the systems allow you to print a bunch of backup codes that you can store “safely” so that, if you get locked out, you can “easily” sign back in. I emphasize the words “safely” and “easily”. Here is why!

Storing Backup Codes Safely

Define “safely”! The experts recommend that you print your backup codes on paper and store them “safely” offline. I assume they mean putting them in my safe deposit box at home, right? Because I will need easy access to those when I get locked out. It is questionable how safe that is, because robberies are not uncommon. I had a colleague who got robbed, and he found his safe deposit box in the bushes behind his house – empty, of course! Also, most of those safe deposit boxes are not fire- and waterproof. So, you need to buy a fireproof safe deposit box and cement it in your basement so no grown person (I mean teenager or older) can take it out with a crowbar.

Or maybe they mean putting it in the safe deposit box in the bank, where there is security and the probability of robbery is minuscule. But then, I need to run to the bank every time I get locked out.

In both cases, this is not easy – from both a logistics point of view and a usability point of view. At the end of the day, what if I am on a trip and lose my phone (which is quite a realistic scenario)?

To avoid all this hassle, most of us find some workarounds. Here are a few for you (and be honest and admit that you do those too):

  • Saving the backup codes as files and putting them in a folder on your laptop.
  • Copying the backup codes and storing them in your password manager (together with your password – how secure is that? 🙂 )
  • Saving the backup codes as files and keeping them on a thumb drive in your drawer.
  • Saving them as files on your Dropbox, OneDrive, Google Drive, or another cloud drive.
  • You see where I am going with those…

To be honest, I didn’t even bother printing/saving the backup codes for some of my accounts (and not all of the systems offer that option), which I assume many of us do.

Even if I print them and store them in my safe, I need to note which account each set of codes belongs to, and if they get stolen, all my accounts will be compromised.

Storing the Seeds in the Cloud

Some of the authenticators, like Microsoft Authenticator, keep your seeds in the cloud. Authy was recommended to me for the same reason. The idea is that if you lose your phone, you can sign in to your authenticator on your new phone, and it will sync your seeds to the new phone. Magical, right? Yes, if you are able to sign in to the authenticator on your new phone… without your MFA code. So, you are caught in a vicious circle: if you lose your phone, you need an MFA code to sign in to your authenticator, but you have no way to get that MFA code.

The Solution (Backed by the Numbers)

What are you left with if you lose your phone? The only two MFA methods that survive a lost phone are email and SMS (because even if I lose my phone, I can easily keep my number). They are the most insecure ones, but they have the lowest risk of getting you locked out of your accounts.

I am not promoting the use of SMS and email as the second factor for authentication. But the numbers show that the majority of users who use MFA use SMS instead of an app or a hardware key (see the Twitter report). Let’s run the simple math:

  • Twitter has about 396.5M users.
  • 2.3% (at the time of this writing) use MFA for their Twitter account. This is ~9.12M MFA users.
  • 0.5% of those 9.12M use a hardware key. This is just ~45.6K hardware key users.
  • 30.9% of those 9.12M use an auth app. This is 2.82M auth app users.
  • 79.6% of those 9.12M use SMS. This is 7.26M SMS users.
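The list above can be checked with a quick back-of-the-envelope calculation (the percentages come from Twitter’s report; the multiplications are mine):

```python
# Back-of-the-envelope check of the Twitter MFA numbers.
total_users = 396.5e6
mfa_users = total_users * 0.023        # share of accounts with MFA enabled
hardware = mfa_users * 0.005           # hardware security keys
auth_app = mfa_users * 0.309           # authenticator apps
sms = mfa_users * 0.796                # SMS codes

print(f"MFA users:     {mfa_users / 1e6:.2f}M")  # ~9.12M
print(f"hardware keys: {hardware / 1e3:.1f}K")   # ~45.6K
print(f"auth apps:     {auth_app / 1e6:.2f}M")   # ~2.82M
print(f"SMS:           {sms / 1e6:.2f}M")        # ~7.26M
```

Note that the percentages add up to more than 100% because users can enable more than one MFA method on the same account.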

It would be nice if Twitter had a way to break those numbers down by occupation (although that would be a violation of privacy). I am pretty sure it would show that the majority of people who use an auth app or a hardware key work in technology. Normal users who deem their accounts important protect them with SMS, because SMS offers the easiest user experience.

One more thing about SMS. Because everybody is scared of locking themselves out of their accounts, people set up their authenticator app as the primary MFA tool but keep SMS as the backup. This way, if they lose their phone, they can still gain access to their account using SMS. But as we know, security is only as strong as the weakest link – in this case, the SMS. The authenticator app just gives a false sense of security.

Summary

Using more than one factor for authentication is a MUST. Using stronger authenticators would be nice, but with the current experience, that will be hard to achieve. To convince more people to do that, companies need to offer a much friendlier experience to their users:

  • Freedom to choose authenticator app
  • Self-service
  • Easy recovery

Without those, usage will stay at the current Twitter numbers.

MFA is yet another technology developed without the user in mind. But unfortunately, a technology that is at the core of cybersecurity. It is a shame that the security vendors continue to produce all kinds of new technologies (with fancy names like SOAR, SIEM, EDR, XDR, ML, AI) without fixing the basic user experience.

In my previous post, Learn More About Your Home Network with Elastic SIEM – Part 1: Setting Up Elastic SIEM, I explained how you could set up Elastic SIEM on a Raspberry Pi[ad]. The next thing you would want to do is collect the logs from your firewall and analyze them. Before I jump into the technical details, I should warn you that you may… not be able to do the steps below if you rely on consumer products or use the equipment provided by your ISP.

Let me go on a short rant here! Every self-respecting router vendor should allow firewall logs to be sent to an external system. A common approach is to use the syslog protocol to collect the logs. If your router does not have this capability… well, I would suggest you buy a new, more advanced one.

I personally invested in a tiny Netgate SG-1100 box that runs the open-source pfSense router/firewall. You can, of course, install pfSense on your own hardware if you don’t want to buy a new device. pfSense allows you to configure up to three external log servers. Logstash, which we configured in the previous post, can play the role of a syslog server and send the events to Elasticsearch. Here is how simple the configuration of the pfSense log shipping looks:

The IP address 192.168.11.72 is the address of the Raspberry Pi where the ELK SIEM is installed, and 5140 is the port on which Logstash listens for incoming events. That is all you need to configure pfSense to send the logs to the ELK SIEM.
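For reference, the receiving side is a plain syslog-style listener in Logstash. A minimal input block would look roughly like this (a sketch with an assumed file name; the complete pipeline, including the filters that parse the pfSense log format, comes from the project mentioned below):

```
# /etc/logstash/conf.d/01-pfsense.conf (hypothetical file name)
input {
  tcp {
    port => 5140
    type => "syslog"
  }
  udp {
    port => 5140
    type => "syslog"
  }
}
```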

Our next step is to configure Logstash to collect the events from pfSense and feed them into an index in Elasticsearch. The following project from Patrick Jennings will help you with the Logstash configuration. If you follow the instructions, you will see the new index show up in Kibana like this:

The last thing we need to do is create a dashboard in Kibana to show the data collected from the firewall. Patrick Jennings’ project has pre-configured visualizations and a dashboard for the pfSense data. Unfortunately, when you import those, Kibana warns you that they need to be updated. The reason is that they use the old JSON format used by Kibana, while the latest versions require all objects to be described in the newline-delimited JSON (NDJSON) format (for more details, visit ndjson.org). The pfSense dashboard and visualizations are available in my GitHub repository for Home SIEM.

Now, keep in mind that the pfSense logs will not feed into the SIEM functionality of the Elastic stack because they are not in the Elastic Common Schema (ECS) format. What we have created is just a dashboard that visualizes the firewall log data. Also, the dashboard and the visualizations are built using the pfSense data. If you use a different router/firewall, you will need to update the configuration to visualize the data, and things may not work out of the box. I would be curious to hear feedback on how other routers can send data to ELK.

In subsequent posts, I will describe how you can use Beats to get data from the machines connected to your local network and how you can dig deeper into the collected data.

Last night I had some free time to play with my network, and I ran tcpdump out of curiosity. For a while, I’ve been interested in analyzing the traffic going through my home network, and the result of my test pushed me to get to work. I have a bunch of Raspberry Pi devices in my drawers, so the simplest thing I could do was grab one and install Elastic SIEM on it. For those of you who don’t know what SIEM is, it stands for Security Information and Event Management. My hope was that, with it, I would be able to get a better understanding of the traffic on my home network.

Installing Elastic SIEM on Raspberry Pi

The first thing I had to do was install the ELK stack on a Raspberry Pi. There are not too many good articles that explain how to set up Elastic SIEM on your Pi. According to Elastic, Elastic SIEM is generally available in Elastic Stack 7.6. So, installing the Elastic Stack should do the job.

A couple of notes before that:

  1. The first thing to keep in mind is that 8GB is the minimum memory recommendation for the ELK stack. You can get away with a 2GB Pi, but if you want to run the whole stack (Elasticsearch, Logstash, and Kibana) on a single device, make sure that you order Raspberry Pi 4 Model B Quad Core 64 Bit with 4GB[ad]. Even this one is a stretch if you collect a lot of information. A good option would be to split the stack over two devices: one for Elasticsearch and Kibana, and another one for the Logstash service
  2. Elastic has no builds for Raspbian. Hence, in the instructions below, I will use Debian packages and will describe the steps to install those on the Pi. This will require some custom configs and scripts, so be prepared for that. Well, this article is about hacking the installation, and no warranties are provided 🙂
  3. You will not be able to use the ML functionality of Elasticsearch because it is not supported on the low-powered Raspberry Pi device
  4. The steps below assume version 7.7.0 of the ELK stack. If you are installing a different version, make sure that you replace it accordingly in the commands below
  5. Last but not least (in the spirit of no warranties), Elasticsearch has a dependency on libc6 that will be ignored and will break future updates. You have to deal with this at your own risk

Here are the steps to follow.

Installing Elasticsearch on Raspberry Pi

  1. Set up your Raspberry Pi first. Here are the steps to set up your Raspberry Pi. The Raspberry Pi Imager makes it even easier to set up the OS. Once again, I would recommend using Raspberry Pi 4 Model B Quad Core 64 Bit with 4GB[ad] and a larger SD card[ad] to save logs for longer.
  2. Make sure all packages are up to date, and your Raspbian OS is fully patched.
    sudo apt-get update
    sudo apt-get upgrade
  3. Install the ZIP utility; we will need it later on for the Logstash configuration.
    sudo apt-get install zip
  4. Then, install the Java JRE because the Elastic Stack requires it. You can use the open-source JRE to avoid licensing trouble with Oracle’s one.
    sudo apt-get install default-jre
  5. Once this is done, go ahead and download the Debian package for Elasticsearch. Make sure that you download the package with no JDK in it.
    wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.7.0-no-jdk-amd64.deb
  6. Next, install Elasticsearch using the package manager.
    sudo dpkg -i --force-all --ignore-depends=libc6 elasticsearch-7.7.0-no-jdk-amd64.deb
  7. Next, we need to configure Elasticsearch to use the installed JDK.
    sudo vi /etc/default/elasticsearch

    Set JAVA_HOME to the location of the JDK. Normally, this is /usr/lib/jvm/default-java. You can also set JAVA_HOME to the same path in the /etc/environment file, but this is not required.

  8. The last thing you need to do is disable the ML X-Pack for Elasticsearch. Change the access mode of the /etc/elasticsearch directory first, and edit the Elasticsearch configuration file.
    sudo chmod g+w /etc/elasticsearch
    sudo vi /etc/elasticsearch/elasticsearch.yml

    Change the xpack.ml.enabled setting to false as follows:

    xpack.ml.enabled: false

The above steps install and configure the Elasticsearch service on your Raspberry Pi. You can start the service with:

sudo service elasticsearch start

Or check its status with:

sudo service elasticsearch status

Installing Logstash on Raspberry Pi

Installing Logstash on the Raspberry Pi turned out to be a bit more problematic than Elasticsearch. Again, Elastic doesn’t have a Logstash package that targets the ARM architecture, so you need to patch the installation manually. StackOverflow posts and GitHub issues were particularly helpful for that – I listed the two I used in the References at the end of this article. Here are the steps:

  1. Download the Logstash Debian package from Elastic’s repository.
    wget https://artifacts.elastic.co/downloads/logstash/logstash-7.7.0.deb
  2. Install the downloaded package using the dpkg package installer.
    sudo dpkg -i logstash-7.7.0.deb
  3. If you run Logstash at this point and encounter an error similar to logstash load error: ffi/ffi -- java.lang.NullPointerException: null, get Alexandre Alouit’s fix from GitHub using:
    wget https://gist.githubusercontent.com/toddysm/6b4b9c63f32a3dfc476a725561fc23af/raw/06a2409df3eba5054d7266a8227b991a87837407/fix.sh
  4. Go to /usr/share/logstash/logstash-core/lib/jars and check the version of the jruby-complete-X.X.X.X.jar JAR
  5. Open the downloaded fix.sh, and replace the version of jruby-complete-X.X.X.X.jar on line 11 with the one from your distribution. In my case, that was jruby-complete-9.2.11.1.jar
  6. Change the permissions of the downloaded fix.sh script, and run it.
    chmod 755 fix.sh
    sudo ./fix.sh
  7. You can run Logstash with:
    sudo service logstash start

You can check the Logstash logs in /var/log/logstash/logstash-plain.log for information on whether Logstash started successfully.

Installing Kibana on Raspberry Pi

Installing Kibana presented different challenges. The problem is that Kibana requires an older version of NodeJS, but the one packed with the Debian package doesn’t run on Raspbian. What you need to do is replace the NodeJS version after you install the Debian package. Here are the steps:

  1. Download the Kibana Debian package from Elastic’s repository.
    wget https://artifacts.elastic.co/downloads/kibana/kibana-7.7.0-amd64.deb
  2. Install the downloaded package using the dpkg package installer.
    sudo dpkg -i --force-all kibana-7.7.0-amd64.deb
  3. Move the redistributed NodeJS to another folder (or delete it completely) and create a new empty directory node in the Kibana installation directory.
    sudo mv /usr/share/kibana/node /usr/share/kibana/node.OLD
    sudo mkdir /usr/share/kibana/node
  4. Next, download version 10.19.0 of NodeJS. This is the required version of NodeJS for Kibana 7.7.0. If you are installing another version of Kibana, you may want to check what NodeJS version it requires. The best way to do that is to start the Kibana service and it will tell you.
    wget https://nodejs.org/download/release/v10.19.0/node-v10.19.0-linux-armv7l.tar.xz
  5. Unpack the TAR and move the content to the node directory under the Kibana installation directory.
    sudo tar -xJvf node-v10.19.0-linux-armv7l.tar.xz
    sudo mv ./node-v10.19.0-linux-armv7l/* /usr/share/kibana/node
  6. You may also want to create symlinks for the NodeJS executable and its tools.
    sudo ln -s /usr/share/kibana/node/bin/node /usr/bin/node
    sudo ln -s /usr/share/kibana/node/bin/npm /usr/bin/npm
    sudo ln -s /usr/share/kibana/node/bin/npx /usr/bin/npx
  7. Configure Kibana to accept requests on any IP address on the device.
    sudo vi /etc/kibana/kibana.yml

    Set the server.host setting to 0.0.0.0 like this:

    server.host: "0.0.0.0"
  8. You can run Kibana with:
    sudo service kibana start

Conclusion

Although not supported, you can run the complete ELK stack on a Raspberry Pi 4[ad] device. It is not the most trivial installation, but it is not so hard either. In the following posts, I will explain how you can use the Elastic SIEM to monitor the traffic on your network.

References

Here are some additional links that you may find useful:

For a while, I’ve been planning to build a cybersecurity research environment in the cloud that I can use to experiment with and research malicious cyber activities. Well, yesterday I received the following message on my cell phone:

Hello mate, your FEDEX package with tracking code GB-6412-GH83 is waiting for you to set delivery preferences: <url_goes_here>

The temptation to follow the link was so strong that I said: “Now is the time to get this sandbox working!” I will write about the scam above in another blog post, but in this one, I would like to explain what I needed in the cloud and how I set it up.

Cybersecurity Research Needs

What do I (think I) need from this sandbox environment? I know that my requirements will change over time as I get deeper into the various scenarios, but for now, here is what I wanted to have:

  • First, and foremost, a dedicated network where I can click on malicious links and browse dark web sites without the fear that my laptop or local network will get infected. I also need to have the ability to split the network into subnets for different purposes
  • Next, I needed pre-built VM images for various purposes. Here are some examples:
    • A Windows client machine to act as an unsuspecting user. Most probably, this VM will need to have Microsoft Office and other software like Acrobat Reader installed. Other, more advanced software that helps track URLs and monitor processes may also be required on this machine. I will go deeper into this in a separate post
    • A Linux machine with networking tools that will allow me to better understand and monitor network traffic. Kali Linux may be the right choice, but Ubuntu and CentOS may also be helpful as different agents
    • I may also need some Windows Server and Linux server images to simulate enterprise environments
  • Very important for me was to be able to create the VM I need quickly, do the work I need, and tear it down after that. Automating the process of VM creation and set up was high up on the list
  • Also, I wanted to make sure that if I forget the VM running, it will be shut down automatically to save money

With those basic requirements, I got to work setting up the cloud environment. For my experiments, I chose Microsoft Azure because I have a good relationship with the Azure team and also some credits that I can use.

Segregating Network Access

As mentioned in the previous section, I need a separate network for the VMs to avoid any possibility of infecting my laptop. Now, the question that comes to mind is: is a single virtual network with several subnets OK or not? Keeping in mind that I can destroy this environment at any time and recreate it (yes, this is the automation part), I decided to go with a single VNet and the following subnets in it:

  • Sandbox subnet to be used to spin up virtual machines that can simulate user behavior. Those will be either single VMs running Windows client and Microsoft Office or set of those if I want to observe the lateral movement within the network. I may also have a Linux machine with Wireshark installed on it to watch network traffic to the web sites that host the malicious pages
  • Honeypot subnet to be used to expose vulnerable devices to the internet. Those may be Windows Server Datacenter VMs or Linux servers with outdated patches and weak or compromised passwords
  • Frontend subnet to be used to host exploit frameworks for red team scenarios. One example can be the Social Engineering Toolkit (SET). Also, simple redirection services or other apps can be placed in this subnet
  • Public subnet that is required in Azure if I need to set up any load balancers for the exploit apps

With all this in mind, I need to make sure that traffic between those subnets is not allowed. Hence, I had to set up a Network Security Group for each subnet to disable inbound and outbound VNet traffic.

Cybersecurity Virtual Machine Images

This can be an ever-growing list depending on the scenarios, but below are the first few I would like to start with. I will give them specific names based on the purpose I plan to use them for:

User VM

The purpose of the User VM is to simulate an office worker (think of a receptionist, an accountant, or an admin). Typically, such machines have a Windows client OS installed as well as other office software like Word, Excel, PowerPoint, Outlook, and Acrobat Reader. Different browsers like Edge, Chrome, and Firefox will also be useful to test.

At this point, the question I need to answer is whether I would like any other software installed on this machine that will allow me to analyze memory, reverse engineer binaries, or monitor network traffic. I decided to go with a clean user machine so I can see the exact behavior the user sees, without impacting it with the availability of other tools. Another reason I thought this would be the better approach is to avoid tipping off malware that checks for the existence of specialized software.

When I started building this VM image, I also had to decide whether I want any anti-virus software installed on it. And of course, the question is: “What anti-virus software?” My company is a Sophos Partner, so this was the obvious choice. I decided to build two VM images: one with anti-virus and one without.

User Analysis VM

This one is the same as the User VM but with added software for malware analysis. The minimum I need installed is:

  • Wireshark for network traffic capture and analysis
  • Cygwin for the necessary Linux tools
  • A hex dump viewer/editor
  • A decompiler

I am pretty sure the list will grow over time, and I will keep tabs on what software I have installed or prefer to use. My intent is to build a Kali Linux type of VM, but for Windows 🙂

Kali VM

This one is the obvious choice. It can be used not only for offensive capabilities but also to cover your identity when accessing malicious sites. It has all the necessary tools for hackers and it is available from the Microsoft Azure Marketplace.

Tor VM

One last VM type I would like to have is a machine on which I can install the Tor browser for private browsing. Similar to the Kali VM, it can be used for hiding the identity of the machine and the user who accesses the malicious sites. It can also be used to gain access to Dark Web sites and forums for cybersecurity research purposes.

Those are the VM images I plan for now. Later on, I may decide on more.

Automating the Security Research VM Creation

Ideally, I would like to be able to create the whole security research environment with a single script. While this is certainly possible, it is not something I can do in an hour, so I will slowly implement the automation based on my needs going forward. However, one thing that I will do immediately is save regular snapshots of my VMs every time I install new software. I will also need to version those snapshots.

I can use those snapshots to quickly spin up a new VM with the required software if the previous one gets infected (and this will certainly happen). So, for now, my automation will be limited to creating a VM from an Azure Disk snapshot.

Shutting Down the Security Research VMs

My last requirement was to shut down the security research VMs when I don’t need them. Like every other person in the world, I forget things, and if I leave the VMs running, I can incur expenses that I would not be happy to pay. While I am working on a full-fledged scheduling capability for Azure resources for customers, it is still in the works, and I cannot use it yet. Hence, I will leverage the built-in Azure functionality for Dev/Test workloads and schedule a daily shutdown at 10:00 PM PST. This way, if I forget to turn off a VM, it will not keep running indefinitely.
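Since the Dev/Test auto-shutdown schedule is expressed as a local time plus a time zone, it helps to know what 10:00 PM Pacific translates to in UTC on any given date. A quick sketch (nothing Azure-specific here, just the time-zone arithmetic):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Show what a 10:00 PM Pacific shutdown means in UTC on a given date.
# The offset shifts with daylight saving time, which is why the date matters.
def shutdown_in_utc(year: int, month: int, day: int) -> str:
    local = datetime(year, month, day, 22, 0,
                     tzinfo=ZoneInfo("America/Los_Angeles"))
    return local.astimezone(ZoneInfo("UTC")).strftime("%Y-%m-%d %H:%M UTC")

print(shutdown_in_utc(2022, 1, 15))  # winter (PST, UTC-8): 06:00 UTC next day
print(shutdown_in_utc(2022, 7, 15))  # summer (PDT, UTC-7): 05:00 UTC next day
```

This is worth knowing when correlating the shutdown with Azure activity logs, which are timestamped in UTC.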

With all this, my plan is ready, and I can move on to building my security research environment.

In my last post, I demonstrated how easy it is to create fake accounts on the major social networks. Now, let’s take a look at what we can do with those fake social network accounts. Also, let’s not forget that my goal here is to penetrate a specific individual’s home network (in this case, my own family’s home network) and gain access to files and passwords. To do that, I need to find as much information about this individual as possible, so I can discover a weak link to exploit.

1. Search Engines

The first and most obvious way to find anything on the web is, of course, Google. Have you ever Googled yourself? Searching for “Toddy Mladenov” on Google yields about 4,000 results (at the time of this writing). The first page gives me quite a few useful links:

  • My LinkedIn page
  • Some pages on sys-con.com (must be important if it is so highly ranked by Google). Will check it later
  • Link to my blog
  • Link to my Quora profile
  • Link to a book on Amazon
  • Two different Facebook links (interesting…)
  • Link to a site that can show me my email and phone number (wow! what more do I need 😀)
  • And a link to a site that can show me more about my business profile

The last two links look very intriguing. Whether we like it or not, there are thousands of companies that scrape the Web for information and sell our data for money. Of course, those two links point to such sites. Following them, I can learn quite a lot about myself:

  • I have (at least) two emails – one with my own domain toddysm.com and one with Gmail. I can get the full addresses if I register for free on one of the sites or start a free trial on the other (how convenient 😀)
  • I am the CTO and co-founder of Agitare Technologies, Inc.
  • I get immediate access to my company’s location and phone number (for free and without any registration)
  • Also, they give me hints about more emails, t***@***.com (I can get the full email if I decide to join the free trial)
  • Clicking on the link to the company information reveals additional details about the company – revenue and website, as well as names of other employees

OK, enough with those two sites. In just 5 minutes, I got enough to start forming a plan. But those links made me curious. What will happen if I search for “Toddy Mladenov’s email address” and “Toddy Mladenov’s phone number” on Google? That doesn’t give me too much but points me to a few sites that do specialized searches like people search and reverse phone lookup. Following some links, I land on truepeoplesearch.com, from where I can get my work email address. I can also learn that my real name is Metodi Mladenov. Another site that I learned about recently is pipl.com. A search on it, not surprisingly, returns my home address, my username (not sure for what), as well as the names of relatives and/or friends.

Doing an image search for “Toddy Mladenov” pulls up my picture as well as an old picture of my family.

Let’s do two more simple searches and be done with this. “Metodi Mladenov email address” and “Metodi Mladenov phone number” reveal my email addresses at Bellevue College and North Seattle College (NSC). I am sure that the NSC email address belongs to the target person because the picture matches. For the Bellevue College one, I can only assume.

Here is the useful information I have gathered so far with those searches:

  • Legal name
  • 3 email addresses (all work-related, 1 not confirmed)
  • 2 confirmed and 1 potential employer
  • Home address
  • Company name
  • Company address
  • Company phone number
  • Titles (at different employers)
  • Personal web site
  • Social media profiles (Facebook and LinkedIn so far)

Also, I have hints for two personal emails – one on toddysm.com and one on Gmail.

2. Social Media Sites

As you can assume, social media sites are full of information about you. If you haven’t realized it by now, their business model is to sell that information (no matter what Mark Zuckerberg tries to convince you of). Now, let’s see what my fake accounts will offer.

Starting with Facebook, I sign in with my fake female account (SMJ) and type in the URL of my personal Facebook page that Google returned. What I saw didn’t make me happy at all. Despite my ongoing efforts to lock down my privacy settings on Facebook, it turns out that anyone with a Facebook account can still see quite a lot about me. Mostly it was data from 2016 and before, but enough to learn about my family and some friends. Using the fake account, I was able to see pictures of my family and the conversations I had with friends around the globe. Of course, I was not able to see the list of my friends, but I was still able to click on some names from the conversations and photos and get to their profiles (that is smart, Facebook!). Not only that, but Facebook allowed the fake user to share these posts with other people, i.e., on her timeline (yet another smart move by the Facebook developers).

I was easily able to get to my wife’s profile (same last name, calling me “dear” in posts – easy to figure out). From the pictures I was able to see on my and my wife’s profiles, I was also able to deduce that we have three daughters and that one of them is on Facebook, although under a different last name. Unlike her parents (and like many other millennials), my daughter didn’t protect her list of friends – I was able to see all 43 of them without being her “friend.”

So far Facebook gave me valuable information about my target:

  • Family pictures
  • Wife’s name
  • Teenage daughter’s name
  • A couple of close friends

I also learned that my target and his wife are somewhat privacy-conscious, but thanks to the sloppy privacy practices that Facebook follows, I am still able to gather important intelligence.

As we all know, teens are not so much on Facebook but on Instagram (like there is a big difference). Facebook, though, was nice enough to give me an entry point to continue my research. On to Instagram. My wife’s profile is private; hence, I was not able to gather any more information about her from Instagram. It was a different case with my daughter – with the fake account, I was able to see all her followers and the people she follows. Crawling hundreds of followers manually is not something I want to do, so I decided to leave this research for later. I am pretty sure I will be able to get something more out of her Instagram account. For now, I will just note both Instagram accounts and move on.

Next is LinkedIn. I signed in with the male (RJD) fake account that I created. The first interesting thing I noticed is that eight people had already accepted my invite (yes, the one I sent by mistake). So, eight people blindly decided to connect with a fake account without a picture, job history details, or any other information whatsoever! For almost all of them, I was able to see a personal email address, and for some even a phone number, and occasionally an address and birth date – pretty useful information, isn’t it? That made me think: how much data could I collect if I built out a complete profile?

Getting back on track! Toddy Mladenov’s LinkedIn page gives me the whole work history, associations, and (oops) his birth date. The information confirmed that the emails found previously are most probably valid and can easily be used for a phishing campaign (you see where I am going with this 😀). Now, let’s look at what my wife’s page can give me. I can say that I am really proud of her because her contact info was completely locked down (no email, birth date, or any other sensitive information). Of course, her full resume and work history were freely available. I didn’t expect to see my daughter on LinkedIn, but it turned out she already had a profile there – searching for her name yielded only two results in the US and only one in the area where we live. Doing some cross-checking with the information on Instagram and Facebook made me confident that I had found her LinkedIn profile.

3. Government Websites

Government sites expose a lot of information about businesses and their owners. The Secretary of State site allows you to search for any company name and find not only details like legal name, address, and owners but also the owners’ personal addresses, emails, and sometimes phone numbers. The Department of Revenue website also lets you search for a company by address. Between those two, you can build a pretty good business profile.

Knowing the person’s physical address, the county’s site will give you a lot of details about their property, neighborhood, and neighbors (names included). Google Maps and Zillow are two more tools you can use to build a complete physical map of the property – something you can leverage if you plan to penetrate the WiFi network (I will come back to this in one of my later posts, where we will look at how to break into the WiFi network).

Closing Thoughts

I have always assumed that there would be a lot of information about me on the Internet, but until now I didn’t realize the extent to which this information can be used for malicious purposes. The other thing I realized is that although I tried to follow the guidelines from Facebook and other social media sites to “protect” my information, it is not protected enough, which gives me no confidence that those companies will ever take care of our privacy (no matter what they say). Like many other people nowadays, I post things on the Internet without realizing the consequences of my actions.

A couple of notes for myself:

  • I need to figure out a way to remove some of my personal information, like emails and phone numbers, from the web. Not an easy task, especially in the US
  • Facebook, as always, fails the privacy check. I have to find a way to hide those older posts from non-friends
  • I also need to educate my family on locking down their social media profiles (well, now that I have that information, I don’t want other people to use it)
  • Hide my birth date on my LinkedIn page

In my next post, I will start preparing a phishing campaign with the goal of discovering my home network’s router IP. I will also try to post a few suggestions on how you can restrict access to information in your social media profiles – something I believe everyone should do.

Disclaimer: This post describes techniques for online reconnaissance and cyber attacks. The activities described in this post are performed with the consent of the impacted people or entities and are intended to improve the security of those people or entities as well as to bring awareness to the general public of the online threats existing today. Using the above steps without consent from the targeted parties is against the law. The reader of this post is fully responsible for any damage or legal actions against her or him, which may result from actions he or she has taken to perform malicious online activities against other people or entities.

In my previous post, How Can I Successfully Hack My Home Network?, I set the stage for my “Hacking my Home” activities. A possible scenario here is that I am given the task to penetrate a high-profile target’s (i.e., my own 😀) home network and collect as much information as possible to use for malicious purposes. Before I start doing anything, though, I need to prepare myself.

Let’s think about what I will need for my activities. Laptop, Internet connection… Well, (almost) everybody has those nowadays. Let’s look at some not-so-obvious things.

First, I will need some fake online identities that I can use to send emails or connect with people close to my target. For this, of course, I will need one or more email addresses. In the past, creating a Gmail or Hotmail address was trivial. Now, though, even Yahoo requires a phone number for verification. That makes things a bit more complicated, but not impossible. A quick search on Google or Bing will give you a good list of burner phones that you can buy for as low as $10 at retailers like Fry’s or Walmart. However, one good option is the Burner app – for just $5/month you get unlimited calls, texts, and monthly number swaps. There are many uses for it:

  • Use it for general privacy – never give your real phone number on the web or anywhere else (just this is worth the money)
  • Use it for registering new email addresses or social media accounts
  • Make phone calls to avoid Caller ID (we all get those from marketers and scammers)
  • Most importantly, send text messages to avoid Caller ID

If you only need a masked phone number, you can also use the Blur app from Abine. There are many other options if you search the web, but be careful about their reputation. I decided to try both Burner and Blur in my next step.

Armed with burner phone numbers, the next step is to create email addresses that I can use for various registrations. Yahoo emails are frowned upon; that is why I decided to go with Gmail. I will need to create a complete online personality that I can use to send emails, engage in conversations, etc. Without knowing what will work, creating both a male and a female personality will be helpful.

The first step is to choose some trustworthy names. Although fancy, names like North West and Reality Winner 😀 will look more suspicious than traditional English names like John or Mary. Not surprisingly, a simple Google search gives you thousands of pages with names that inspire trustworthiness. So, I chose some of those to name my two personalities. Because I don’t want those names to be known, I will use their initials from now on to differentiate between the male (RJD) and the female (SMJ). Now, let’s go and create some e-mail addresses and social media profiles.

One complication while creating Gmail accounts is the verification process. Google requires your phone number for account creation, and after typing in any of the burner phone numbers, I got an error message that the number cannot be used for verification (kind of a bummer, I thought). Interestingly, though, after the initial verification (for which I used my real cell phone number), I was able to change the phone number AND verify it using the burner ones.

Next is Facebook. Despite all the effort Facebook puts into preventing fake accounts, creating a Facebook account doesn’t even require text verification. With just an email, you can easily go and create one. Of course, if you want to change your username, you will be required to verify your account via mobile phone. The burner phones work like a charm with Facebook.

After Facebook, creating an Instagram account is a snap. Picking a few celebrities to follow is, of course, essential to building up the profile. Be selective when choosing people to follow from the suggestion list; you don’t want to look like a bot by following every possible person. Later on, after I gather more intelligence on my targets’ interests, I will tweak the accounts I follow to better match those interests, but for now, this will be enough to start building a profile and getting access to information that is behind the Facebook and Instagram walls.

The Twitter registration process is as simple as Instagram. You don’t need a mobile phone for Twitter – e-mail is enough for the confirmation code. Once again, choose a few select accounts to follow to start building your online profile and reputation.

Creating a LinkedIn account is a bit more elaborate. Similar to Gmail, LinkedIn requires a real phone number for verification – both burner phones failed here. Of course, LinkedIn doesn’t immediately ask you to add your phone number for notifications. However, their account creation wizard requires at least one job title and company. Not unexpectedly, this stumped me a bit because I had to think about the best title and company to choose. LinkedIn has one of the creepiest experiences for account creation. The scariest part for me was its ability to pull all my contacts from… well, somewhere (I will need to dig into that later on). My recommendation is to do the registration on a burner laptop that has none of your personal information on it. The next creepy thing was the recommendations – I got some high-profile people recommended to me, and of course, I clicked on a bunch of them before I realized that this sends invitations to those people. This could have blown my cover because it recommended mostly people working in the company I chose as my employer. Surprisingly, though, just a minute later I had a request accepted, so I decided that maybe I shouldn’t worry so much. One important thing you need to do on your LinkedIn profile is to change the Profile viewing options in your Privacy settings to Private mode. If you don’t, people whose profiles you look up will get notified who you are, and you want to avoid that.

Although some people may accuse me of profiling, I have to say that I chose to use the female account for Facebook, Instagram, and Twitter, and the male one for LinkedIn initially. The reason is that I didn’t want to spend time doing all those registrations before knowing more about the targets.

Now that I have all those fake accounts created, I can move on to the next step: doing some online research about my target. For that, I don’t need fully established profiles, but I will try to generate some online activity in the meantime to make those accounts more credible.

In my next post, I will demonstrate how easy it is to gather basic social engineering intelligence using the above accounts as well as free web sites. Stay tuned…

Disclaimer: This post describes techniques for online reconnaissance and cyber attacks. The activities described in this post are performed with the consent of the impacted people or entities and are intended to improve the security of those people or entities as well as to bring awareness to the general public of the online threats existing today. Using the above steps without consent from the targeted parties is against the law. The reader of this post is fully responsible for any damage or legal actions against her or him, which may result from actions he or she has taken to perform malicious online activities against other people or entities.

This morning I was looking at our company’s e-mail gateway and cleaning up some of the quarantined messages when I was reminded that while my company’s digital infrastructure may be well protected with firewalls and e-mail gateways, my home network can be wide open and vulnerable to attacks. Like everyone else, I try not to spend too much time configuring my home network and rely on my ISP to “take care of it.” Well, this is a silly approach, because ISPs don’t care about our cybersecurity. It took me hours on the phone, two bricked routers, and a house visit (for which I paid, of course) to convince mine to replace their outdated router with a simpler gateway device so that I could use my own Eero as the main router and Wi-Fi access point. However, replacing an old router is not something that will solve my cybersecurity issues. Hence, I decided to stop procrastinating and take the first steps to execute on my idea to do some penetration testing on my home network. You will be able to find all my steps and (successful and failed) attempts in the Hack My Home series of posts, so let’s get started.

The first thing I need to start with is to decide what my goals are. The best way to do that is to put myself in the hacker’s shoes. If I am a black-hat hacker who wants to attack an Ordinary Joe, what would I like to get from him? Here are a few things that come to mind:

  • Like many of you, I have a file server or NAS device at home where my family stores a lot of information: pictures, tax returns, scanned personal documents, and who knows what else. Having access to this information may prove beneficial. Hacker’s goal #1: Get access to the file share!
  • Having access to personal information may be useful, but if I am looking for fast money or a way to do bigger damage, harvesting credentials may turn out better. There is a good chance I can find some passwords in a text file on the file share, but because I don’t save mine in plain text, I need to look for other options. Hacker’s goal #2: Steal a password of a family member!

Here is the moment for a disclaimer. Because this is my home, I believe I have full authority to hack into my own devices. If I discover a device vulnerability, I will follow responsible disclosure practices and will need to delay any posts that describe the approach for breaking into the device. Regarding the second goal, stealing a password, I have full (verbal) consent from my family to do that. I also have access to almost all of their passwords, so I don’t consider this an issue. However, if you are planning to follow my steps, please make sure that you get consent from your family – they may not be so receptive to the idea.

Next are some assumptions. The biggest one is to assume no knowledge of my home network. Initially, I thought I should start with a diagram of my network, but that would assume I already know the details. What I need to do is get to the details from the outside using public information. If you think about it, the information that hackers can easily (and legally) obtain is the following:

  • Domain name
  • IP address
  • Email address
  • Home address
  • Phone number
  • Social media profiles

This is an excellent set of starting points, isn’t it? Some of those things may be easier to obtain than others. Hence, I will need to do some research online to figure out everything I need. I will walk through each step in separate posts. For now, let’s figure out the ways I can digitally break into my home and define some simple next steps.

If I know the IP address of my router, I may be able to attack my home remotely over the Internet. For that, though, I first need to figure out that IP address. So, one of my next steps will be to find an approach to do that.

If I know the location of my home, I may try to attack my home Wi-Fi network and break through it. That is a more complicated approach because it requires me to be close to my home and to use specialized devices. There may be other wireless devices in my home that I may be able to get to, but those will again require some proximity to the home to exploit.

Of course, for my testing purposes, I would like to explore both approaches, but I need to start with one of them. Because I think the remote exploit has a higher chance of happening, I will start from there. As a next step, I need to figure out the entry point to my home from the open Internet, i.e., my router’s IP address.

In my next post, I will walk you through my thought process, the steps and the tools I can use to obtain my home’s IP address.