Learnings on Deploying Multi-Site Systems

Hey reader! First of all, a Merry Christmas to you and your family! 🎄🎉 :'D Thank you so much for visiting this site. Whew, I never thought this blog would contain this many posts :)) It was initially created to be a repository of stuff I discover and things I want to keep for future reference, but thanks to my CS 255 class, CodeSteak happenings, and my training at work, this blog already contains 28 posts and counting! So super thanks again! 😄 💕

Lately, we had one project that consists of multiple connected websites, which also means multiple EC2 instances with different repositories, running different processes for different purposes – some run Rails, some run Redis and Sidekiq, while some run Mongo. The diversity of this project comes with the need for a unified, consistent setting of environment variables across all the instances. In addition, there must be an easy, non-tedious way to run, terminate, and check each instance's processes. Also, for sustainability, a fast and easy way to create all the instances at once (and tear them all down at once too, especially after every QA session) would be ideal.

These diverse and flexible requirements for the instances may seem difficult to meet and maintain at first. But thankfully, AWS offers CloudFormation, where you can build stack templates to specify what type of instances you want to open, what security groups you want assigned, what AMIs you want the instances to be sourced from, what start-up scripts you want included, and a whole lot more. And from these stack templates, you can build your instances – infrastructure as code indeed!

In this blog post, we'll just be seeing an overview of AWS CloudFormation since its awesomeness deserves a whole post (and even more)!

The CloudFormation module can be seen in your account under management tools:

Screen Shot 2015-12-27 at 5.17.55 PM.png

A template for AWS CloudFormation looks like this:

{
 "Resources" : {
   "SampleCFInstance" : {
     "Type" : "AWS::EC2::Instance",
     "Properties" : {
       "SecurityGroupIds" : [ "sg-c52c20a1" ],
       "KeyName" : "adelen-key",
       "ImageId" : "ami-1e6f737f",
       "InstanceType" : "t2.micro",
       "Tags" : [ {"Key" : "Name",
                   "Value" : "SAMPLE-CLOUDFORMATION-INSTANCE"}
                ]
      }
    }
  }
}

What our JSON code above does is tell CloudFormation to create an EC2 instance of type t2.micro from the AMI with image ID ami-1e6f737f and include it in the sg-c52c20a1 security group. Afterwards, it tags it with the name "SAMPLE-CLOUDFORMATION-INSTANCE". Simple, isn't it? 🙂

Once our JSON code is ready, we can upload it via the CloudFormation wizard (by clicking Create Stack on the CloudFormation main page):

Screen Shot 2015-12-27 at 5.25.50 PM.png

 

After the upload is finished, you will be asked for a stack name and then redirected to the progress window, where you can watch your stack creation. From here, you can also edit and delete your stacks.

Screen Shot 2015-12-27 at 5.33.43 PM.png

So yay! With CloudFormation, it's easy to set up and tear down our project's whole setup. We can create the whole stack (and even build multiple stacks of the same application) for every QA session and tear it down right after to save on costs. Also, the use of CloudFormation itself is free! You only pay for the EC2 instances, load balancers, and other resources that you create through CloudFormation.
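If you prefer the command line over the wizard, the same create/tear-down cycle can be scripted with the AWS CLI (a sketch, assuming the CLI is installed and configured; the stack name and template filename here are made up):

aws cloudformation create-stack --stack-name qa-stack --template-body file://template.json
aws cloudformation delete-stack --stack-name qa-stack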

Now on to the challenge of accessing and controlling all our instances. Surely, accessing the instances one by one to start, check, or stop each of their respective processes is possible, but of course time-consuming. That time can instead be allocated to more productive tasks, such as building features.

So to save time, we used a Remote Control instance that serves as a jump-off point to the other servers. We can access the other instances, start and stop their processes, and check on them using only this Remote Control instance.

AWS-RC-DG - New Page.png

This Remote Control instance is equipped with scripts to pull from the repositories of each of the instances, prepare them (bundle, precompile, etc.), and start and stop them. Imagine the convenience of being able to access all 5 other instances from just one instance.

Now, we mentioned before that we can specify an AMI in our CloudFormation template from which our instances will be based. As I was building the AMI for the web instances, I encountered various scenarios; for future reference, as well as to help other developers who might encounter the same problems, here are some of the steps I took (and some of the things I learned):

On setting up the WEB Instance (from where we’ll capture our AMI):

  1. On a freshly opened instance, one of the first steps is to update the apt-get package lists and install git (since we will be pulling our repositories with it):
    sudo apt-get update
    sudo apt-get install git
  2. Make sure to add your public and private keys to your ~/.ssh directory. These will be needed when you clone your repositories or when you need to communicate with other instances without the .pem key file.
  3. When your keys (e.g. id_rsa) are too open, they will just be ignored. So make sure to give them the correct permissions:
    sudo chmod 600 ~/.ssh/id_rsa
    sudo chmod 644 ~/.ssh/id_rsa.pub

    Else you may encounter the following error:

    Permissions for ‘id_rsa’ are too open. It is required that your private key files are NOT accessible by others. This private key will be ignored.

  4. When you have set your keys in place, start cloning the repos and check out the branch that you will be running your applications from. After your repositories are in place, continue installing the other dependencies (e.g. Ruby, Rails, NodeJS, Nginx, etc.).
  5. Make sure that the environment variables needed by your application are set (e.g. RAILS_ENV=staging). One way to persist them is sketched after this list.
  6. Setup other configuration files as needed (i.e. mongoid.yml)
  7. And before you capture the AMI for this web instance, make sure that each of your sites is working, since this will be the base AMI from which all of your other instances will be born.
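One way to keep such environment variables set across reboots and new shells (a sketch; where they belong depends on how your processes are launched, and a process manager may need its own configuration) is to export them from the deploy user's ~/.profile:

export RAILS_ENV=staging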

On setting up Nginx (in Staging and Prod environments):

The first time I tried building this system two months ago, I had everything manual: no iptables, no Nginx, etc. Now I've learned that setting up the system is so much easier with these on hand.

While setting this up and figuring out how things work, I was able to access sites Rails A and Rails B at <IP>:8080, so I thought the setup might be working. But looking further into our configuration, that didn't seem right, since:

  • Nginx listens at port 1500
  • Nginx’s upstream is set at localhost:8080
  • Rails is running at port 8080

So if Nginx is running (and working properly), we should be able to access it at port 1500. And there seems to be a missing link: who/what forwards our requests from port 80 to our Nginx at port 1500? :O

Tinkering with ports and Nginx, I realized that what I was missing was an iptables rule that would redirect all traffic from port 80 to Nginx at port 1500.

sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 1500

I tried executing this command and then voila, Rails A and B are now accessible at port 80 (no need to specify the ports then!).
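One caveat: iptables rules added this way do not survive a reboot. Since /etc/rc.local is already used below for Nginx, one simple option is to re-add the redirect rule from there as well:

iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 1500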

Also, the Nginx process does not persist through instance stops and starts, so to make sure that Nginx automatically starts every time you turn on the instance, you may add the following line to your /etc/rc.local, which is always run at system boot:

sudo -u nginx /etc/init.d/nginx start

Also, by convention, we create a user nginx that owns our Nginx processes:

sudo adduser --system --no-create-home --disabled-login --disabled-password --group nginx

And then edit nginx.conf, set the user as nginx, and restart Nginx for our changes to take effect.
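For reference, the change is just the user directive near the top of /etc/nginx/nginx.conf (a sketch):

user nginx;

followed by a restart:

sudo service nginx restart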

Configuring the Remote Control Instance:

  1. Populate /etc/hosts with the IP addresses of the other instances (example entries are sketched after this list), so we can easily issue commands such as:
    ssh ubuntu@rails-a
  2. Set up the scripts that allow you to connect to the other instances, like the following:
    ./access.sh rails-a

    With access.sh as:

    #!/bin/bash
    ssh -o "StrictHostKeyChecking no" -t ubuntu@$1
    
  3. When accessing other instances, you might notice that the "Welcome to Ubuntu" message always appears and is included in your output. To disable this, just create a .hushlogin file in the home directory of your instances (you may also include this in your WEB AMI :D).
    touch ~/.hushlogin
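For reference, the /etc/hosts entries from step 1 would look something like this (the IP addresses here are made up):

10.0.0.11 rails-a
10.0.0.12 rails-b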
    
    

[OPTIONAL] CloudFormation CFN tools:

In our CloudFormation scripts, to ensure that an instance has been successfully created, after all the startup scripts have run, we instruct the instance to send a signal to CloudFormation to say that instance creation is done (via a CreationPolicy). To enable this communication between your instance and CloudFormation, install the cfn tools via the following commands:

sudo apt-get -y install python-setuptools 
mkdir aws-cfn-bootstrap-latest 
curl https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-latest.tar.gz | tar xz -C aws-cfn-bootstrap-latest --strip-components 1 
sudo easy_install aws-cfn-bootstrap-latest
which cfn-signal

You may read further on the CreationPolicy attribute of CloudFormation here.
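As a sketch of how that signal is sent (the resource name is from our sample template above; the stack name and region are made up), the last line of the instance's start-up script would call cfn-signal with the exit code of the setup:

cfn-signal -e $? --stack sample-stack --resource SampleCFInstance --region us-east-1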

So that's it! The past three days before Christmas have been intense :)) filled with so many discoveries and new learnings. 🙂

Again, Merry Christmas and a Happy New Year! 💕

 

shortssh: an SSH Shortcut with Bash

Hi there! Lately, I ran into a scenario where I need to create and delete a lot of AWS EC2 instances. This constant building and tearing down produces a lot of different IP addresses for the instances I want to access each time. In addition, these instances have different key files depending on which project I built them for.

For SSH-ing to the instances, I already have this usual template:

ssh -i ~/Downloads/adelen-key.pem -o "StrictHostKeyChecking no" <ip-address>

Surely, doing a reverse search for the last occurrence of this command on the command line is easy, but since I maintain multiple key files, I have to edit whatever the last command was to include the appropriate key file name and IP address. Because of this seemingly tedious process, I was inspired to write a simple bash script to serve as a shortcut to the ssh command above.


#!/bin/bash

if [[ $1 && $2 ]]; then

  KEY_DIRECTORY=`printenv SHORTSSH_KEY_DIRECTORY`

  if [[ $KEY_DIRECTORY ]]; then

    if [ -d $KEY_DIRECTORY ]; then

      KEY_FILE="$KEY_DIRECTORY/$1.pem"

      if [ -f $KEY_FILE ]; then
        ssh -i $KEY_FILE -o "StrictHostKeyChecking no" $2
      else
        echo "[+] File $1.pem not found in $KEY_DIRECTORY"
      fi
    else
      echo "[+] Cannot find directory: $KEY_DIRECTORY"
    fi
  else
    echo "[+] Set the location of your identity key files as SHORTSSH_KEY_DIRECTORY in your environment variables"
  fi
else
  echo "SAMPLE USAGE: shortssh key-file ubuntu@123.456.78.90"
fi

This is very simple but serves the purpose I needed a solution for. To connect to an EC2 instance, we can just issue the following command:

shortssh filename-of-key user-name@ip-address

So for example, we want to access our server at IP address 54.192.84.33:

shortssh adelen-key ubuntu@54.192.84.33

I also thought of having the ubuntu@ hard-coded in the script so we could just input the IP address, but I decided not to so that this can also be useful with other servers/EC2 instances not running Ubuntu (e.g. Amazon Linux with ec2-user@ip-address).

If you wish to use this, just follow the following steps:

  1. Download the file. Either copy the one from above or download the script from my GitHub repository. 🙂
  2. Set SHORTSSH_KEY_DIRECTORY as part of your environment variables. Set its value to the location of your key files. (I have mine at /Users/apfestin/Keys)
  3. Add the directory containing the shortssh file to your PATH variable (and make the script executable, e.g. chmod +x shortssh) so that you can issue the shortssh command from anywhere in your file system.

To accomplish the steps above, I added the following lines to my ~/.profile:

export PATH=$PATH:/Users/apfestin/Desktop/short-scripts
export SHORTSSH_KEY_DIRECTORY=/Users/apfestin/Keys

So there, a shortcut to my most commonly used SSH command. Hope you find this useful! 🙂

Thanks for reading! 🙂

 

 

Detecting Malicious Network Activity with Wireshark

In one of our previous posts, we saw Netcat, a tool dubbed the Swiss Army knife of security for its many uses: chats, file transfers, and remote shell handling, among others.

In this post, we'll see Wireshark, the tool dubbed the Swiss Army knife of network analysis, and how it can solve some of the various network problems we see every day.

Brief History

Wireshark is a free and open-source software for packet capture and analysis. It was previously named Ethereal but was renamed to Wireshark in 2006 due to trademark issues. There is also a command line counterpart for Wireshark, Tshark, which is free and open source as well.

Installation

For this post, I installed Wireshark on both my Mac running Yosemite and an Ubuntu VM. For the simulations, we will mostly use the Mac, but don't worry, as the interfaces are almost identical. Installation, on the other hand, differs a little: for the Mac, we will use the installer from the official site, while for Ubuntu, we will install from apt-get.

> Installing Wireshark on Mac OSX

Installing Wireshark on the Mac is easy as you can just download the installer from their site. Below is the homepage of Wireshark; just click the Download button and choose the installer suitable for your OS version.

Screen Shot 2015-12-19 at 12.46.30 AM.png

> Installing Wireshark on Ubuntu

On the other hand, for Ubuntu, we will install Wireshark and its dependency libcap2-bin from apt-get:

sudo apt-get install wireshark libcap2-bin

To run Wireshark as a non-root user, we add a new group named wireshark, add our user to it, and make it the group owner of the dumpcap binary:

sudo groupadd wireshark
sudo usermod -a -G wireshark $USER
sudo chgrp wireshark /usr/bin/dumpcap
sudo chmod 755 /usr/bin/dumpcap

Now, when you go and check the Wireshark GUI, no interfaces can be found. To make the interfaces visible, let's issue the following command (note that the group change above also requires logging out and back in to take effect):

sudo setcap cap_net_raw,cap_net_admin=eip /usr/bin/dumpcap

 

Let’s Get Started: Capturing Packets

  1. Once you open Wireshark, you will be presented with a GUI where you can select which interface you want to listen to. Wireshark hints at how much network traffic passes on each interface through a heartbeat-like line graph for each interface:
    Screen Shot 2015-12-18 at 11.25.00 PM.png
  2. When a capture is in progress, you can see the packets moving in realtime on your interface. At any time, you may start or stop your capture via the Capture menu. Note: once you have stopped your capture, whatever you capture next is considered a separate capture.

Screen Shot 2015-12-18 at 11.23.27 PM.png

Tip: In the Options entry of the Capture menu on the menu bar, make sure that you run Wireshark in promiscuous mode so you can capture all the packets in your network. By default, network interfaces only keep the packets addressed to them and ignore everything else. With promiscuous mode on, you can capture packets even if they are not directly addressed to you.

Screen Shot 2015-12-22 at 7.04.56 AM.png

Filters

For most of our use cases, we will be using filters. Filters are provided by Wireshark to help us isolate the packets we are truly interested in from all the packets we've captured (which are A LOT!). Isolating them helps us focus and follow machine conversations with ease.

Wireshark has two types of filters: capture filters and display filters. Capture filters don't keep or display packets that fail to match the filter at all (that data is already lost), while display filters only take effect while the filter is applied: unmatched packets are merely hidden, not discarded, and can be viewed again once the display filter is removed.

Here are some of the many possibilities on Wireshark filters:

Scenario 1:
We only want to see packets that were sent through a certain protocol.

How we can do it:
To filter packets by protocol, we can just type the name of the protocol we are interested in into the filter bar. Once you start typing, Wireshark auto-suggests keywords that closely resemble what you are typing, so there's not much worry if you only remember part of what you want to find.

Say we want to see only packets sent through the DNS (Domain Name System) protocol; we can do so by typing dns into the filter bar:

dns

Screen Shot 2015-12-18 at 11.40.16 PM.png

and there! Only DNS packets remain. As you can see from the packet numbers, some numbers were skipped, such as packets 1 and 2, since those packets were filtered out. But when we remove our filter, we can again see all the packets (including 1 and 2).

Screen Shot 2015-12-18 at 11.41.55 PM

Scenario 2:
We want to only see packets coming from or to an IP address

How we can do it:
We can easily do the following to filter packets by IP address:

Step 1: To filter packets coming from an IP address, we can use the ip.src filter:

ip.src == 192.168.15.1

Screen Shot 2015-12-19 at 12.04.30 AM.png

Step 2: To filter packets that are going to a certain IP address, we can use the ip.dst filter:

ip.dst == 192.168.15.1

Screen Shot 2015-12-19 at 12.05.16 AM

Step 3: Wireshark allows logical operators (i.e. logical OR (||), logical AND (&&)) to be used in our filter bar! 🙂 So if we want to see packets coming from or going to a certain IP (in our example, 192.168.15.1), we can combine the filters from Steps 1 and 2:

ip.src == 192.168.15.1 or ip.dst == 192.168.15.1

Or we could use the shortcut Wireshark provides for this filter:

ip.addr == 192.168.15.1

Screen Shot 2015-12-19 at 12.06.07 AM.png

Notes on Filter Combinations:

You can combine any filtering conditions with logical operators (i.e. OR (||) and AND (&&)), even if they are not the same kind of filter, to create more complex queries.

The following example shows how we can filter all ICMP (Internet Control Message Protocol) packets coming from 192.168.15.1:

ip.src == 192.168.15.1 && icmp

Screen Shot 2015-12-19 at 12.09.43 AM.png

Notes on Packet Types (SYN, SYN/ACK, RST, etc)

In the following sections, we'll encounter more terminology around packets, and it is integral that we get an understanding of TCP flags. Flags are like switches that you can turn on and off on packets depending on what purpose you intend the packet to have. Some of the available flags are the following:

  • SYN: short for SYNchronize. The SYN flag is used to initiate a connection.
  • ACK: short for ACKnowledge. The ACK flag, on the other hand, acknowledges a previously sent invitation to connect (a SYN packet).
  • RST: short for ReSeT. The RST flag is sent by an endpoint when it wants to abort the connection.

In a packet, one or more flags can be turned on. For example, you may encounter SYN only packets as well as SYN/ACK packets.

Each flag has a corresponding decimal value assigned to it:

  • SYN: 2
  • RST: 4
  • ACK: 16

So to calculate the decimal value of a SYN/ACK packet, we just add the value of SYN (2) to the value of ACK (16) and get 18. We can use these decimal values later when we filter packets by the flags that are turned on.
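For example, using the value we just computed, a display filter for SYN/ACK packets would be (Wireshark also accepts the hex form 0x012):

tcp.flags == 18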

For the other flags' decimal values, you may refer here.

Use Cases

1. Detecting Torrent Downloads in your Network

If you ever experience suddenly slow internet for a period of time with no obvious reason at all (the sun is out, your network provider has no problems, etc.), you may want to check who might be hogging your network capacity. There may be machines downloading via torrent.

For the purposes of our demonstration, let's download the free, legal torrent of A Christmas Carol by Charles Dickens (since it's almost Christmas :D) from LibriVox, with seeders from archive.org. As soon as your download is finished, open up your favorite torrent application. I currently have BitTorrent on my machine:

Screen Shot 2015-12-19 at 12.20.49 AM.png

Once our download starts, we can witness the live feed from Wireshark:

Screen Shot 2015-12-19 at 12.28.13 AM.png

As you can see from above, BitTorrent packets suddenly appeared in our feed. Filtering further to isolate bittorrent packets:
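The display filter here is just the protocol name, following the same pattern as the dns filter earlier:

bittorrent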

Screen Shot 2015-12-19 at 12.35.06 AM.png

With this, we can confirm that BitTorrent packets are present in our network, which suggests that somebody might indeed be torrenting on the network.

Apart from the BitTorrent packets, we can also see an influx of UDP protocol packets:

Screen Shot 2015-12-19 at 12.28.59 AM.png

And these are also caused by our torrent download. BitTorrent has an extension, DHT (distributed hash tables), that makes use of UDP packets to transport data.

Based on observation, at least for this example with LibriVox, BitTorrent packets were only used during the initial period when our machine was still connecting to the servers (the "handshake"); once the connection was established, data was transferred via UDP packets.

To see how much traffic the BitTorrent and UDP packets contribute, we can check the Protocol Hierarchy summary.

From the Statistics menu, choose the option Protocol Hierarchy:

Screen Shot 2015-12-19 at 12.39.05 AM

A screen will then pop up summarizing the total number of packets captured so far as well as their sizes.

Screen Shot 2015-12-19 at 12.48.59 AM.png

If, upon inspection, you weren't able to see BitTorrent or UDP packets but still suspect that the internet slowdown is due to heavy downloads, you may check the total network traffic that has been sent from/to a machine in your network.

You can go to Statistics > Endpoints

Screen Shot 2015-12-22 at 10.37.36 AM.png

Or if you want a more detailed view and want to know exactly which ports and IP addresses connections were made from/to in your network, you may go to Statistics > Conversations:

Screen Shot 2015-12-22 at 10.39.02 AM.png

On both Endpoints and Conversations, the columns are sortable. Just click the header of whichever column you want to sort by.

2. Detecting Port Scans

Have you ever connected to a public WiFi, maybe in your favorite coffee shop or restaurant, with the paranoia that somebody might actually be listening to the data you're sending to the internet, or that somebody is looking for loopholes in your system in hopes of gaining access to it?

Attackers might do this with a tool called Nmap, a network mapper that we saw in our previous Nmap post. In its most basic form, Nmap scans are done by sending SYN packets to ports of the target machine, to which a SYN/ACK is replied if the target machine's port is open; otherwise an RST is replied.

For our demo, we'll try a port scan on our Ubuntu VM from our Mac (with IP address 172.20.10.2):

Screen Shot 2015-12-21 at 11.59.07 PM.png

And our Ubuntu VM has IP address 172.20.10.3:

Screen Shot 2015-12-21 at 11.57.36 PM.png

When we try to do a default NMAP port scan on our target machine, Ubuntu VM:

nmap 172.20.10.3

Screen Shot 2015-12-22 at 1.13.00 AM.png

We get the above results: two of the top 1000 ports are open.

Now before we go and filter the Nmap scans, let's first figure out what happened on the wire when we did the scan. Inspecting our feed, we see that there were a lot of SYN packets sent by our Mac to our Ubuntu VM.

Filter used:

 ip.src == 172.20.10.2

You may filter further by restricting the results to have 172.20.10.3 as their destinations:

ip.src == 172.20.10.2 && ip.dst == 172.20.10.3

Screen Shot 2015-12-22 at 1.05.56 AM.png

There are a lot of SYN packets, and here we can see which ports they were sent from and to. From the screenshot above, we can infer that the top 1000 ports include 80, 113, 59000, 21, 53, 111, 8888 and so on. As we learned earlier, our Mac here is still doing the initial handshake step with the top 1000 ports.

Now, knowing that Nmap found port 21 to be open (i.e. 21 responded positively to the SYN packet sent), we can inspect how our Mac and port 21 (ftp) of our Ubuntu VM interacted. We can filter such packets by:

ip.addr == 172.20.10.3 && tcp.port == 21

Screen Shot 2015-12-22 at 12.01.53 AM.png

We see that after a SYN packet was sent by our Mac to our Ubuntu VM, a SYN/ACK was replied, confirming that port 21 is indeed open. After an ACK is sent (since we are in the default non-sudo port scan, where TCP connect() establishes a full connection), our Mac then sends an RST/ACK to immediately close the connection, as indicated in line 66.

On the other hand, closed ports respond with an RST/ACK packet indicating that they are closed. As we learned earlier, we can filter packets with only the RST and ACK flags turned on by using the decimal values assigned to each flag. Since we want RST/ACK, we use a total value of 20 (RST(4) + ACK(16)).

tcp and tcp.flags == 20

We can see that this filter includes both packets from our Mac and packets from our Ubuntu VM, like the one we saw earlier on port 21. We can refine this further by including only the packets that came from our Ubuntu VM, since its ports are the ones we are interested in.

tcp and tcp.flags == 20 and ip.src == 172.20.10.3

Screen Shot 2015-12-22 at 4.09.06 PM.png

The results show a flood of RST/ACK packets from closed ports.

So there: these symptoms on your network, as you can gather from Wireshark, can let you know if somebody is indeed scanning your ports. 🙂

What we saw in this blog post are just two of the many use cases for filters and the other features of Wireshark.

Thanks for reading!


NMAP: Your Ultimate Port Scanner!

In our previous article on Netcat, we were able to scan ports and see what services were running on a specific machine. Now we're going to see a more specialized tool for these purposes: Nmap, a free and open-source tool for network discovery and mapping.

In this blog post, we’ll be seeing how to install, use, and configure Nmap to meet our specific needs.

I. Nmap Installation

For installing Nmap, we have two options: one is to install from your operating system's package repository, and the other is to install from source. Most operating systems today already have nmap in their package repositories, but compiling from source is still an option if you want the latest version installed, though this may come with some minor bugs ("bleeding edge").

For the purposes of our demo, we will be using a Mac OSX Yosemite machine and an Ubuntu Server 14.04 in VirtualBox. Let's first go and install Nmap on our Ubuntu setup.

a. Installing from a repository

Nmap is already included in apt-get, Ubuntu's package repository. So to install, we just need to update the system and then install Nmap:

sudo apt-get update
sudo apt-get install nmap

 

b. Building from source

Note: For purposes of this demo on building from source, I provisioned another freshly installed Ubuntu server in VirtualBox.

Building from source takes more steps but is nonetheless simple and easy.

  1. First, we need to download and unzip the source code of Nmap. As of the time of writing, the latest Nmap version is 7.01, which is the one we will be downloading:
    wget http://nmap.org/dist/nmap-7.01.tar.bz2
    tar xvf nmap-7.01.tar.bz2
  2. After we have unzipped the file, we can now compile and install Nmap (note the cd into the extracted directory and the make step before installing):
    cd nmap-7.01
    ./configure
    make
    sudo make install
    Screen Shot 2015-12-17 at 1.25.00 PM.png

    RAWR: The famous Nmap dragon that appears every time it is compiled from source.

     

After installation via either of the options above, we can confirm that Nmap is indeed installed by issuing a simple help command:

nmap -h
Screen Shot 2015-12-19 at 10.50.30 PM.png

Nmap help option

II. Fundamentals

From our previous Netcat post, we were able to touch a little on port scanning. As we said before, ports are the gateways to your computer's processes. For each service, there are ports assigned by convention: 80 for HTTP, 443 for HTTPS, 21 for FTP, etc. Despite these conventions, processes can still be assigned to operate on a different port number (i.e. running HTTP on 14343). Ports below 1024 are the privileged, commonly used ones, which is why programs wanting to listen on these ports must be run as root.

Before we delve into port scanning, let's first see the two primary transport protocols in networking: TCP and UDP.

TCP and UDP

Services listening on ports can use either of these protocols, and usually they do. TCP is usually used for connections which need reliable service. It is also dubbed connection-oriented since it needs to complete a handshake before establishing a connection, unlike UDP, which just makes a best-effort delivery and guarantees no reliability. That is why UDP is also called the connectionless protocol. Since UDP is connectionless, it is much faster, as there is no connection overhead. This is why applications requiring fast responses, such as video and audio streaming, use UDP most of the time.

Three-way Handshake

Above we have touched on how TCP, being connection oriented, requires a successful handshake before connection can be established.

TCP has what we call the three-way handshake in which three passes of messages occur between sender and receiver. These messages include SYN (synchronize) and ACK (acknowledge) packets.

Let's illustrate this handshake. Say we have two machines, Bob and Alice. Bob wants to start communicating with Alice, so he first sends a SYN packet to Alice to initiate communication.

 

a.png

If Alice is ready and willing to communicate, she then sends a SYN/ACK pair (her own synchronize packet and an acknowledgement packet to the synchronize packet from Bob). Once Bob receives this SYN/ACK pair, he then sends an ACK to verify and confirm that connection is established.

One of the original and most common uses of Nmap is as a port scanner. In order for Nmap to determine if a port is open and/or a host is alive, it attempts to connect to target hosts and ports by sending, most of the time, parts of this handshake. Depending on the response of the target port/host, Nmap determines the status of that specific port.

 

III. Setup

A. Target / Simulation Environment

Scanning computers, especially those that aren't yours, may lead to abuse complaints (or worse), which is why we should be very careful about which target computers we choose to scan. One safe option is to set up virtual environments for you to tinker with.

In case you do not want to set up your own virtual machine or don't have other machines in the network, you may also use a free service provided by the Nmap team at http://scanme.nmap.org.

Screen Shot 2015-12-19 at 5.00.58 PM.png

With this free service, though, you won't be able to change the services running or the ports they are on, but for the purposes of scanning, it can already be enough.

For flexibility, we will be doing our scans for this demonstration on an Ubuntu Desktop 14.04 installed inside VirtualBox.

B. Scans

Default Scan

In its most basic form, Nmap scans the network with the default scan. When run with sudo, it uses the TCP SYN ("half-open") scan, which only makes half connections: once the target port has replied with a SYN/ACK, Nmap no longer replies with an ACK and instead just closes the connection. When not run with root privileges, it instead uses the connect() method, which establishes full connections.

The advantage of the SYN scan is that since no connection is actually established (the final ACK is never sent), it is able to escape many logging and detection systems. Detecting it is still possible; the chances are only minimized.
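If you want to choose the scan type explicitly instead of relying on the privilege-based default, both scans have flags of their own (a sketch; -sS is the half-open SYN scan and requires root, -sT is the full connect() scan):

sudo nmap -sS scanme.nmap.org
nmap -sT scanme.nmap.org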

To perform a default scan, we set up our Ubuntu machine, and from the ifconfig command, we learn that its IP is 192.168.1.101:

Screen Shot 2015-12-19 at 6.00.45 PM.png

One of our running processes is an Apache web server listening on port 80:

Screen Shot 2015-12-19 at 6.16.58 PM.png

Let's try doing a default port scan on our target, 192.168.1.101:

nmap 192.168.1.101

Screen Shot 2015-12-19 at 6.16.12 PM

Our default port scan discovered our running service on port 80. Yay!

But…

Looking further at our virtual machine, we discover that another service, redis-server, is also alive on port 6379.

Screen Shot 2015-12-19 at 6.11.18 PM.png

Why wasn’t this seen by our default port scan?

By default, Nmap port scans only query the top 1000 ports (not the first 1000, but the 1000 most used ports). This is why we didn't see Redis in our port scan.

To be able to also inspect other ports as well, especially those not belonging to the top 1000, Nmap provides a way to specify desired port ranges.

Specifying Scan Ranges
Services can be assigned to any port from 1 to 65,535, and system administrators sometimes place their services on high-numbered ports to avoid detection by normal scans. So scanning just the top 1000 ports may not be enough to find the "hiding" services; we need to specify a larger or more specific range that we are interested in:

a. Specific ports:

We can specify a specific port we want inspected with the -p flag:

nmap 192.168.1.101 -p 80
nmap 192.168.1.101 -p 6379

Screen Shot 2015-12-19 at 6.18.00 PM.png
Here, when we targeted port 6379, it was already seen as open by Nmap.

b. Specific Range of IP Addresses

We can also scan a range of IP addresses by specifying them in their CIDR form:

nmap 192.168.1.0/24 -p 6379

Screen Shot 2015-12-19 at 6.20.41 PM.png
Also, in case you need more information, such as MAC addresses, you may run the nmap command with root privileges:

sudo nmap 192.168.1.0/24 -p 6379

Screen Shot 2015-12-19 at 6.22.08 PM.png
c. Specific port range

When we aren't sure of the specific port our target process is running on, we may just specify a port range that we want to probe. This is also useful if we want a larger range of ports than the top 1000.

Specifying a port range is also done via the -p flag, immediately followed by the range you are interested in.

nmap 192.168.1.101 -p50-7000

Screen Shot 2015-12-19 at 6.18.33 PM.png

d. All ports

The Nmap team has also provided a shortcut, the -p- flag, if you want all ports scanned. Using this, though, might take more time since more ports are queried.

nmap 192.168.1.101 -p-

Screen Shot 2015-12-19 at 6.35.54 PM.png

Service Version Scan

In addition to knowing which ports are alive and open, it is also useful to know what exact services are running on these ports. This is especially useful if someone is running a service on a non-default port (i.e. http on 14343). Service version scans also tell us the versions of the running services, which is useful especially if one is looking for vulnerabilities that are only present in certain versions of the software: knowing the exact version tells you which vulnerabilities can be exploited.

Now let's run a service scan on our Ubuntu VM. We see that ports 21 and 80 are both open. Port 21 is usually used for FTP (File Transfer Protocol).

Screen Shot 2015-12-19 at 6.39.34 PM.png

Now let's further inspect this and determine what services are actually running on these ports:
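The exact command isn't shown in the screenshots; it was presumably something like the following, with -sV enabling service/version detection:

nmap -sV 192.168.1.101 -p 21,80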

Screen Shot 2015-12-19 at 6.39.41 PM.png

With our -sV flag on, we learned that it is actually a Redis server that is running on our port 21 and not an FTP server.

We can also notice that the service scan took longer (18.80 seconds vs 1.24 seconds) since it still grabbed the banners of the services it discovered.

Logging Scan: Logging Results to Logfiles

Surely, being able to scan networks in realtime is convenient. But sometimes we also need to keep a history of the network behaviour and states we have captured via Nmap. For this purpose, Nmap provides the -oA flag, followed by your preferred base filename.

nmap scanme.nmap.org -oA logname

With this command, three resulting files will be created:

1. <base_filename>.xml: results of the scan in XML format
2. <base_filename>.nmap: human-readable output
3. <base_filename>.gnmap: grep-able Nmap output

Screen Shot 2015-12-19 at 6.43.51 PM.png

Now that we have seen how to know which ports are open on a computer and which services are running on them, we answer one basic prerequisite question: how do we know which hosts are online?

If we don't have a specific IP address in mind whose ports we want to scan, fret not! Nmap provides commands to do ping sweeps. Given a range of IP addresses, it will determine which hosts are online (even if some hosts try to appear offline).

nmap -sn 192.168.1.0/24

Screen Shot 2015-12-19 at 6.45.23 PM.png

What ping sweeps do is send pings (ICMP echo requests), which machines are designed to respond to. Ping sweeps are more efficient than port scans if you're still identifying your target host, as Nmap only probes a few ports (not a whole range nor the top 1000) to determine if a host is up.

C. Timing and Optimization

Nmap also provides the option to adjust the timing of your scan with the -T flag followed by a number from 1 to 5, with 5 being the fastest scan. Port scans run at T3 by default.

Screen Shot 2015-12-19 at 10.35.06 PM.png

Image from Nmap Essentials by David Shaw

Surely, faster scans will provide quicker results and decrease waiting time, but this may also make your scan more vulnerable to detection by the target host. Also, some system administrators, as a security measure, deliberately make hosts respond more slowly to requests. Faster scans with shorter timeout allowances may overlook these actually-alive hosts.

To just see a comparison of the speed of these scans, let’s try to run a T1 (sneaky) and a T5 (insane) all-port scan on our Ubuntu VM.

nmap 192.168.1.104 -p- -T5

Screen Shot 2015-12-20 at 12.32.29 AM.png

nmap 192.168.1.104 -p- -T1

Screen Shot 2015-12-20 at 12.32.36 AM.png

We can see from these that a T5 scan is significantly faster than a T1 scan, which is at the other end of the speed spectrum. The T5 scan completed the scan of all ports in less than 5 seconds, as opposed to our T1 scan completing only 0.02% in 5 minutes.

D. OS Detection

Nmap also provides the capability to determine what OS is running on a certain host:

sudo nmap scanme.nmap.org -O -Pn -vv -n

The -O flag allows us to detect the operating system. When used with this flag, Nmap must be run with root privileges. Let's go and try to detect the operating system of scanme.nmap.org:

Screen Shot 2015-12-20 at 12.49.11 AM.png

Screen Shot 2015-12-20 at 12.49.35 AM.png

Since Nmap is not entirely sure what the OS is, it gives out possibilities with their respective probabilities, which is already very useful compared to not having any hints at all. 🙂

So there, we saw some of the basic scans and capabilities of Nmap. There is still so much more that Nmap has to offer, scripting engines and all, which deserves further reading! Check it out as well on the Nmap official website.

Lastly, we should not forget that, just like any tool, Nmap should be used with diligence and care. 🙂

Thanks for reading!


GDB: Digging Deep with the GNU Debugger

One of the things I like most about Ruby is the availability of gems that allow runtime debugging (i.e. pry). In all of my past projects, in school and at work, be it PHP or C, debugging a hard-to-catch error meant adding a lot of test prints after every block of code to check the states and values of my variables and to double-check that the program reached a certain part.

Upon reading about gdb, I got very excited and glad that there actually is a runtime debugger for C and C++ (*flashback to all my undergrad C machine problems and C++ thesis*)!!!

In this blog post, we’ll be having an overview of what GDB is and some of its basic capabilities.

gdb, or the GNU Debugger, is a debugging tool for several languages (including C, C++, Fortran, etc.) which can be used to inspect what a program is doing at certain points of execution. It is already installed by default in most operating systems (tried on OSX Yosemite and Ubuntu 14.04).

You can try to see if it already exists by doing a basic help command with gdb:

gdb -help

But in case your system does not have it yet, you may follow these installation instructions. Once gdb is installed in your system, we can start gdb-ugging!

  1. Let’s first start gdb on our command line by typing:
    gdb

    Screen Shot 2015-12-19 at 9.44.11 PM.png

  2. Now that our gdb console is ready, we can load the file that we want to inspect. I have the following file, prog.c, which just performs basic arithmetic using the four basic math operations.
    
    #include <stdio.h>

    int add(int x, int y) {
      return x + y;
    }

    int subtract(int x, int y) {
      return x - y;
    }

    int multiply(int x, int y) {
      return x * y;
    }

    double divide(int x, int y) {
      return (x * 1.0) / y;
    }

    int main() {
      int x, y;

      printf("Enter first number: ");
      scanf("%d", &x);

      printf("Enter second number: ");
      scanf("%d", &y);

      printf("Sum is: %d\n", add(x,y));
      printf("Difference is: %d\n", subtract(x,y));
      printf("Product is: %d\n", multiply(x,y));
      printf("Quotient is: %f\n", divide(x,y));
    }
    

    We save it and compile it with prog as the name of our output file.

    gcc -g prog.c -o prog

    You may notice the -g flag; this enables built-in debugging support, which gdb will need to execute some of its commands.

    Let's try running prog in our console just to make sure that it works and to see the results.

    Screen Shot 2015-12-19 at 8.39.03 PM.png

    Now that we've seen that our program runs fine, let's load it inside gdb.

    file prog

    Take note that what we are loading into our gdb is the executable file and not the uncompiled source code.

    Screen Shot 2015-12-19 at 8.37.56 PM.png

  3. Once our file is loaded, let's execute it in gdb by issuing the run command:
    run

    Screen Shot 2015-12-19 at 8.41.00 PM.png

    And there, our file ran without any errors and produced the same results as in our console, given the same pair of inputs.

  4. Let's say our file encountered a segmentation fault somewhere. We can force one by adding the following lines near the bottom, before we close the main() function:
    
    printf("\nForcing a segmentation fault: \n");
    *(char *)0 = 0;
    
    

    Let's see if this indeed results in a segmentation fault on the command line. You may notice that a warning was given when we compiled our program; since this is just a warning, we can proceed, as our code still compiled. Running ./prog, we now encounter a segmentation fault.

    Screen Shot 2015-12-19 at 8.43.30 PM.png

    Now let us load and run our file in gdb:

    Screen Shot 2015-12-19 at 9.56.06 PM.png

    gdb also encountered the same segmentation fault, but this time we are given more information (i.e. what line the segfault occurred at). This additional information can be truly useful in debugging and tracing errors in programs.

Now that we have seen a basic use case for gdb, let's delve deeper with breakpoints and watchpoints.

Breakpoints

Breakpoints are points you can set at which execution will pause to allow inspection of the states and values of variables at that point of execution. If the program halts or hits a segmentation fault before a certain breakpoint, the breakpoint won't be reached anymore.

Setting breakpoints is easy. You can set a breakpoint by specifying the line at which you want execution to pause:

break prog.c:31

Note: Be sure to compile your C program with the -g flag to enable the built-in debugging utilities. Also, if you make any changes to your file, be sure to recompile it and reload it in gdb.

Let's now set our breakpoint at line 31, right before the sum of the two numbers is printed.

vv.png

What we first did was load the program in (1) and then define a breakpoint at line 31 in (2). We then proceed with the program's execution inside gdb, and once we hit line 31, gdb notifies us of a breakpoint at (3). At that breakpoint, we can also issue commands to inspect the values of variables:

print x
print y

We can also set breakpoints at functions. So every time the function is called, a breakpoint is reached:

break divide

This adds a breakpoint every time our divide() function is called.

While at a breakpoint, it is also possible to alter variable values:

At the breakpoint in (3):

set variable x = 100
set variable y = 80

ee.png

What we did above was first enter the initial values for x and y as requested by scanf in (1). And then, since we set our breakpoint at divide(), a breakpoint was reached once divide() was called at (2). We first printed the current values of x and y at (2); they are 10 and 2 respectively.

In (3), we then proceeded to alter the variables' values, setting x and y to 100 and 80 respectively, and then issued the continue command. The continue command exits the breakpoint and continues execution of the program. We see in (4) that our changes to x and y indeed took effect, as instead of 5 (10 / 2), 1.25 (100 / 80) appeared as the quotient.

 

*Conditional Breakpoints

Breakpoints can also apply only if a certain condition is met. For this purpose, gdb provides conditional breakpoints: the if condition is just appended to the break statement form that we saw a while ago.

break prog.c:20 if y == 0

*Clearing Defined Breakpoints

If we want to remove our breakpoints, we just issue the clear command and our breakpoints will be reset.

clear

Screen Shot 2015-12-19 at 9.18.34 PM.png

Watchpoints

In contrast to breakpoints, gdb also provides watchpoints to watch variables. Every time the value of a watched variable changes, the program temporarily stops to print the old and new values. At that temporary stop, you enter a breakpoint-like console where you can inspect variable states and even change some of their values.

Let's say we change our main() function to the following. There are occasional changes to the values of x and y during execution (at some points they are multiplied by 2 and 3 and then increased by 5). We will see if execution is indeed interrupted by the watchpoints every time x and/or y changes value.


int main() {
  int x, y;

  printf("Enter first number: ");
  scanf("%d", &x);

  printf("Enter second number: ");
  scanf("%d", &y);

  printf("Sum is: %d\n", add(x,y));
  printf("Difference is: %d\n", subtract(x,y));

  x = x * 2;
  y = y * 3;

  printf("Product is: %d\n", multiply(x,y));

  x = x + 5;
  y = y + 5;

  printf("Quotient is: %f\n", divide(x,y));
}

In order for us to watch variables, the variables we are interested in should be in scope, so we set the watchpoints while the program is executing. For this, we first set a breakpoint on our main function (1) so that as soon as our program starts, we can assign watchpoints; the command sequence is sketched below.
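For reference, the command sequence looks roughly like this (a sketch; the exact prompts will differ):

break main
run
watch x
watch y
continue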

ff.png

Once main() is executed, our breakpoint is reached (2). We then tell gdb to watch x and y (3). Issuing the continue command resumes execution of the program. It asks for the first and second numbers, to which we respond:

Screen Shot 2015-12-19 at 9.29.38 PM.png

Remember that assigning values to our variables changes them, so our watchpoints were activated and the old and new values displayed. As with breakpoints, we can just issue the continue command to leave the watchpoint and continue execution.

And when our program reached one of the lines that mutate x's and y's values:

line 37: y = y * 3

Screen Shot 2015-12-19 at 9.30.14 PM.png

It then again printed the old and new values of y. Again, as with breakpoints, we can alter variable values; in this case, we set x = 100 and it took effect, as the product became 900.

So there! In this blog post, we were able to see some basic use cases for gdb and how we can use it to debug our programs.

Thanks for reading! 'Til next time! 🙂


Netcat: Security’s Swiss Army Knife

Every once in a while, we stumble onto tools that are simple and mundane yet actually very useful. One such tool is Netcat: released in 1995, it continues to be one of the favorite tools for security (ranking #8 in the SecTools list for 2015).

At its most basic, Netcat just establishes a connection between two computers and allows data to be written across the TCP and UDP transport layer protocols over IP. This behaviour opens up lots of possibilities! In this blog post, we will see what we can do with Netcat.

For our demonstrations, let's set up VirtualBox running an Ubuntu Server 14.04. Just like any tool, Netcat has many benefits if used properly but grave effects if used poorly. Be sure to use it with caution and don't use it for malicious acts; get consent first on machines that you'll be targeting if they're not yours.

A. Chat

Before we delve into how we can make chat servers with Netcat, let’s first see what makes up a chat scenario:

We first need a listener, which just waits for anybody who wants to speak with it, and then we have the one who makes the connection. The connection will be successful as long as there is someone waiting at the other side.

So how do we make one with Netcat? Easy!

  1. On our Virtual Box, let’s set up the listening part with the following command:
    nc -l -p 7174

    Dissecting this command, we have the -l and -p flags specified: -l instructs our machine to listen, and -p specifies the port on which it will listen. Take note that when a client disconnects, the server also stops listening. On Windows machines, the -L flag can be used instead of -l to keep the listener persistent even when clients disconnect.

    We now have our VM waiting for connections:

    Screen Shot 2015-12-14 at 12.28.28 AM.png

  2. Then on our host machine, let us connect to the listening VM via:
    nc 192.168.1.101 7174

    where 192.168.1.101 is our VM's IP, as we found out when we did an ifconfig, and 7174 is the port where it is listening.

  3. Once connected, let's try to send messages:
    Screen Shot 2015-12-14 at 12.31.41 AM.png
    Screen Shot 2015-12-14 at 12.31.31 AM.png

    Yay! As we see above, communication is two-way. Both the VM and our host machine can send messages to one another.

Note that when we try to create two connections from our host machine to the listening VM, our second connection does not push through.

B. File Transfer

Here, we are going to execute a basic file transfer. This could be useful in cases where our server doesn’t have any FTP utilities or other transfer mechanisms.

  1. Say we have a file secret.txt in our VM that contains the following information:
    Screen Shot 2015-12-14 at 12.43.57 AM.png
  2. And we want this transferred to our host machine. We can set up the passage for file transfer by issuing the following command to our VM:
    nc -l -p 7174 < secret.txt
  3. To download this on our host machine, we can issue the following command and wait for the download to finish.
    nc 192.168.1.101 7174 > secret.txt

    Once the download finishes, the connection automatically closes.

    We can also make Netcat more verbose by adding the -v flag:

    nc -v -l -p 7174 < secret.txt

    Screen Shot 2015-12-14 at 12.52.08 AM.png

  4. To check that the file has indeed been successfully transferred:
    Screen Shot 2015-12-14 at 12.58.04 AM.png

C. Banner Grabbing

Sometimes, we want to know what services (and their versions) are running on a specific port. Netcat doesn't alter the data stream, so we're sure we're not going to get unpredictable results.

  1. On our VM, we installed a web server at port 80. Let us keep the identity of the web server a secret for now. Let’s have Netcat tell us what it is.
    Screen Shot 2015-12-14 at 1.03.09 AM.png
    We just know that we have some service listening at port 80.
  2. On our host machine, let's try connecting to our VM's port 80 via Netcat:
    Screen Shot 2015-12-14 at 1.07.35 AM.png
    From here, we can see that it is Nginx (with version 1.4.6) that's running on our VM's port 80.
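To reproduce this, the exchange presumably looked something like the following: connect, type an HTTP request line, and press Enter twice; the Server header in the reply identifies the web server and its version.

nc 192.168.1.101 80
HEAD / HTTP/1.0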

D. Take Hold of a Remote Shell

In this section, we are going to see how we can obtain a remote shell on a target computer system. This is useful in case you want access to a remote computer.

Before we proceed, let's first get to know the versions of Netcat. When you issue the netcat command alone, you may see that there is another version of Netcat, netcat-traditional:

Screen Shot 2015-12-14 at 1.17.21 AM.png

By default, what we have is netcat-openbsd, which is a "safer" version as it does not include flags like -e that can execute programs on the remote machine.

For purposes of demonstration, we will be installing netcat-traditional:

sudo apt-get install netcat-traditional

Then we switch nc from netcat-openbsd to netcat-traditional:

sudo update-alternatives --config nc

Screen Shot 2015-12-14 at 1.25.07 AM.png

We now have netcat-traditional as our nc. We can check it by doing a netcat -h:

Screen Shot 2015-12-14 at 1.27.33 AM.png

Note that we have two additional flags, -c and -e, which are tagged as [dangerous!!].

Now we can prepare our listener! 🙂

  1. Set the listener on our VM
    sudo nc -lp 7174 -e /bin/bash

    With this command, we instruct Netcat to listen on port 7174 and then attach /bin/bash to the connection to give our host machine access to a remote shell.

  2. Then we connect to our VM from our host machine
    nc 192.168.1.101 7174

    We connect to our VM the same way as before.

  3. Not much feedback is given regarding the status of our connection, but trying a cd command, we get the following results:
    Screen Shot 2015-12-14 at 1.39.53 AM.png
  4. Let's try to make a directory and a file on our VM via Netcat:
    Screen Shot 2015-12-14 at 1.39.28 AM.png
  5. Verify on our VM that our folder was indeed created:
    Screen Shot 2015-12-14 at 1.40.59 AM.png
  6. This capability of Netcat is very powerful, as you almost have full control of the VM's command line. It is as if you are issuing commands on the actual VM itself. You can even create accounts with admin privileges, edit file permissions, etc.

E. Port Scanning

Knowing the IP address of a computer is not enough if you wish to know which services run on it. Ports define the gateways through which these processes communicate with other processes and the world. Think of IP addresses as buildings and ports as the specific doors in the building.

For this purpose, Netcat can be used to determine which ports are open and what services are running on them.

Say we have a target machine with IP address 192.168.0.84 running Redis on port 6379:

Screen Shot 2015-12-14 at 7.37.04 PM

From an attacker's perspective, we still don't know which ports are open. With Netcat, we can scan a range of ports in the hope that we hit an open one.

nc -zv -w 1 192.168.0.84 1-6390

For this command, we can see 2 new flags: -z and -w:

  • -z: Having this turned on puts Netcat in Zero-I/O mode, which makes scans relatively faster.
  • -w 1: This sets the connection timeout to 1 second (i.e. give the server up to that long to respond).

Our last parameter tells Netcat the range of ports to scan. Now let's run this, and we get the following results:

Screen Shot 2015-12-14 at 7.46.39 PM.png

Aha, port 6379 is open! We can also randomize the scan within the port range by adding an -r flag to our command above. This makes our scan less prone to detection than scanning consecutive port numbers (though some Intrusion Detection Systems can still detect port scans even if they are randomized!).

The command above assumes that we know the IP of our target machine, but what if we don't? With Netcat, we can still do a port scan across a range of addresses (with a little scripting):

for i in {82..84}; do nc -zv -w 1 192.168.0.$i 6378-6381; done

Screen Shot 2015-12-14 at 8.10.16 PM.png

For our example, we scanned the IP addresses 192.168.0.82, 192.168.0.83, and 192.168.0.84, and saw which hosts are listening and which ports are open on each.

 

With these 5 use cases, we saw how powerful and flexible Netcat is as a tool! But as with any tool, be sure to use it responsibly and diligently! 🙂

Thanks for reading!

 

Reference: Netcat Starter by K.C. Yerrid

NGINX: “A Server Built for a Modern World”

Cause life’s a constant change, and nothing stays the same~

– Constant Change, Jose Mari Chan

 

Hello there! Haha, so what's with the cheesy line from Jose Mari Chan's Constant Change (a song we also love in the family!)? Recently, I was able to read up on and get hands-on experience with Nginx.

Nginx is a powerful web server that surfaced in the early 2000s. Its development was motivated by the C10K problem: the struggle of a server to handle ten thousand simultaneous connections due to the operating system and software constraints of the time. This problem may be unimaginable nowadays, especially with our hi-tech software and hardware that can process a gazillion bits per minute. But back in the 1990s and the early 2000s, this was the case.

During that time, Apache was the most popular web server (actually, it still is; Nginx only comes second). Released in 1995, it is one of the forerunners that gave birth to and shaped the World Wide Web that we know today.

Nginx aimed to handle many concurrent connections, survive high traffic, and serve webpages extremely fast, goals that were not part of the original purpose of Apache's designers. Who would have thought that network traffic would boom this big?

Actually, while reading on its history, I came to remember one of our blog posts from before regarding the original intentions and purpose of TCP/IP and how time has also shaped its direction.

Nginx is relatively young and has fewer features than Apache, but as Clement Nedelcu says in his book Nginx HTTP Server, “it was intended for a more modern era”.

Armed with knowledge of the motivation behind Nginx, let us now see how we can get started with it and what basic configurations we can make to let it suit our needs. For this post, we'll be running Nginx on Ubuntu Server 14.04.

Nginx Installation

Install Nginx with apt-get.

Nginx can also be compiled from source; this is usually done on OS distributions that don't provide it in their package managers or whose packaged versions are already outdated. Compiling from source also gives greater flexibility, since you can specify which modules to include during compilation. But for our purpose, we can just use apt-get to install Nginx.

sudo apt-get update
sudo apt-get install nginx

Upon installation, Nginx automatically starts. To see this, you may do a

ps -ef | grep nginx

Screen Shot 2015-12-12 at 9.29.24 PM.png

We see that Nginx started one master process and 4 worker processes that are ready to accept connections. It is also noteworthy that the master process is owned by root while the worker processes are owned by www-data.

The main purpose of the master process is to read and evaluate the configuration file and to maintain the worker processes; the worker processes are the ones that handle actual requests. In case one worker dies, the master process ensures that another one is spawned.

Nginx Default Configuration

Once we have Nginx up and running, let’s now take a look at the default configuration of Nginx. Head over to the default configuration file at:

/etc/nginx/nginx.conf

Throughout the file, you may notice one-liner configurations consisting of an attribute name, a value, and a semi-colon to end the line; these are what we call directives. Directives make up the majority of Nginx configuration files.

Let's take a closer look at some of the parts:

user www-data;
worker_processes 4;
pid /run/nginx.pid;
  1. user: It's good practice to have a dedicated user for our worker processes rather than having them owned by root as well. Since the default already comes with a non-root user, we can leave this as it is, or, if you want, create a dedicated user of your own for the worker processes.
  2. worker_processes: Remember the 4 worker processes we found a while ago? This directive is the one that controls that number. It is usually set to match the number of cores in the computer for efficiency: specifying more than your actual number of cores may be detrimental to the system, while specifying fewer could under-utilize your computer's power. (See the quick check right after this list.)
    In case one of your worker processes dies, another one is spawned so that the number of workers specified in your configuration file is still met.
  3. pid: The file where the process id of the master Nginx process is stored.
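As a quick check for item 2, we can compare worker_processes against the machine's actual core count (a small sketch; both commands come standard on Ubuntu):

nproc                               # number of processing units available
grep -c ^processor /proc/cpuinfo    # the same count, straight from the kernel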

Throughout the configuration file, you may also see directives enclosed in curly braces ({}); we call these blocks. Directives are grouped together in a block when they are provided by the same module (such as events in our example below).

events {
 worker_connections 768;
 # multi_accept on;
}
  1. worker_connections: the maximum number of connections each worker process can accommodate. With our defaults of 4 workers and 768 connections each, that's roughly 4 × 768 = 3,072 simultaneous connections.

http {

 ...

 include /etc/nginx/mime.types;

 ...

 access_log /var/log/nginx/access.log;
 error_log /var/log/nginx/error.log;

 include /etc/nginx/conf.d/*.conf;
 include /etc/nginx/sites-enabled/*;
}

The location of logs can also be customized via the access_log and error_log directives. The defaults reside at /var/log/nginx/access.log for access logs and /var/log/nginx/error.log for error logs.

Nginx allows the inclusion of external files with the include directive. An included file's contents are processed exactly where the include directive appears.

The last line in the example above includes the files under the sites-enabled folder. This folder holds a config file for each of your sites, in case your server hosts multiple sites. By convention, one file contains the configuration of one site; this keeps configurations properly separated, and adjustments can be made easily since no two sites are tightly coupled to one another. By default, there is one config file under sites-enabled: default.

Opening it, we can see more configuration blocks and one of them is the following:

server {
 listen 80 default_server;
 listen [::]:80 default_server ipv6only=on;

 root /usr/share/nginx/html;
 index index.html index.htm;

 # Make site accessible from http://localhost/
 server_name localhost;

In our server block, we see that by default Nginx listens on port 80, the root is set to the /usr/share/nginx/html folder, and the default server name is localhost.

Before we try changing the defaults, let’s ensure that our setup works and can be accessed on a browser:

Screen Shot 2015-12-13 at 5.28.33 PM.png

Now, let's try to change some of these defaults:

a. Changing the default listening port

To change the port on which Nginx is listening, let us change the default port 80 on the server block.

listen 80 default_server;
listen [::]:80 default_server ipv6only=on;

Say we want it to listen on port 14344 instead; we now have:

listen 14344 default_server;
listen [::]:14344 default_server ipv6only=on;

To apply our changes, make sure to reload Nginx every time you edit the configurations:

sudo service nginx reload

In case the service doesn't want to restart or your changes weren't applied, check whether your configuration file is valid (i.e. correct syntax, no typographical errors, etc.). For this purpose, Nginx provides a way to test the configuration file's syntax:

sudo nginx -t

Screen Shot 2015-12-12 at 10.43.14 PM.png

Once we have successfully reloaded our Nginx config, let's check that it works. If we refresh our browser without specifying a port (80 being the browser default), our site is now nowhere to be found.

Screen Shot 2015-12-12 at 10.43.45 PM.png

But if we specify port 14344 then refresh, we can now see our site:

Screen Shot 2015-12-12 at 10.43.58 PM.png

b. Changing the root

Say we have our application directory at /srv/my-app, and our static pages to be served by Nginx are at /srv/my-app/public. To set the root from which Nginx will look for the files it renders (i.e. static pages), we change the root via:

root /srv/my-app/public;

With this as your root, Nginx by default also looks for your index file (index.html, index.htm, etc.) in this folder. Now when we refresh <ip>:14344, we can see the index page that I added at /srv/my-app/public.

Screen Shot 2015-12-12 at 11.08.24 PM.png

 

index.html at /srv/my-app/public:

<h1>Welcome to the index page!</h1>
<h4>Because all great things start with a single step, right?</h4>

c. Using Custom Error Message Pages (Error 500, 404, etc)

In relation to (b), where we changed the application root and saw a different index file, we can also specify custom error pages to be shown on our site. This way, we can customize them, perhaps to add further instructions on what users can do if their actions caused such errors.

We can customize such errors via the error_page directive in sites-enabled/default:

error_page 404 /my_404.html;

And add the file my_404.html to /srv/my-app/public:

I have this my_404.html file:

<div style="min-height: 100%; background-size: cover; background-image: url('traffic.jpeg');">
  <center style="padding: 150px; color: white; font-size: 40px;">
    <h1>404: Not Found</h1>
    <h4>But don't worry, not all who wander are lost. :)</h4>
  </center>
</div>

So when we go to an undefined page in our site, our customized 404 page is displayed.

Screen Shot 2015-12-12 at 11.56.42 PM.png

d. Customized Cache Headers from Nginx

Nginx delivers static pages really, really fast, as it accesses the file system directly rather than having the app server process them as requests. In addition to this, Nginx lets you customize caching headers, specifying which files you want clients to cache and for how long. For this purpose, we can add the following inside the server block of sites-enabled/default:

location ~* \.(jpg|jpeg|png|gif)$ {
 expires 30d;
}
location ~* \.(ico|css|js)$ {
 expires 5h;
}

What this does is set the expiry of files with jpg, jpeg, png, and gif extensions to 30 days, and of files with ico, css, and js extensions to 5 hours.

When we load our traffic.jpeg file and inspect its headers, the 30-day expiration we set is reflected.

Screen Shot 2015-12-13 at 1.31.41 AM.png
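If you prefer the command line over the browser's inspector, a quick curl check would look something like this (a sketch, assuming traffic.jpeg sits in our root and Nginx still listens on port 14344):

curl -I http://localhost:14344/traffic.jpeg
# Look for the Expires header and Cache-Control: max-age=2592000 (30 days in seconds)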

e. Using Nginx as a Reverse Proxy Server

A reverse proxy server acts as an intermediary between clients and your servers: it accepts requests on the servers' behalf and directs traffic to them. Reverse proxy servers provide an additional level of abstraction and control to ensure the smooth flow of traffic.

For this purpose, Nginx provides the upstream block, where we can specify the servers to which we want our traffic routed.

Say we have this Sinatra application running on localhost:4567.


require 'sinatra'

set :bind, '0.0.0.0'
set :port, 4567

get '/' do
  "Hello, world! I am from Sinatra ONE"
end

We can forward connections to it by adding the following upstream block to the file sites-enabled/default:

Screen Shot 2015-12-13 at 1.55.15 AM.png
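In text form, the upstream block looks like this (matching the final configuration at the end of this post, before we add a second server):

upstream app1 {
 server localhost:4567;
}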

And an inner location block to the server block:

Screen Shot 2015-12-13 at 6.19.41 PM.png
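In text form, a minimal version of that location block looks like this (again matching the final configuration shown at the end of this post):

location / {
 proxy_pass http://app1;
}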

What this does is forward requests matching / to the app1 server group we declared before.

Then, visiting port 14344 of our server in the browser, we see:

Screen Shot 2015-12-13 at 2.01.04 AM.png

Awesome, isn't it? 🙂

In addition to acting as a reverse proxy server, Nginx is also capable of being a load balancer. When you specify multiple servers in the upstream block, Nginx routes connections to them in a round-robin fashion by default (other routing options include least-connected and IP hash, the latter being best for cases where sessions must be persistent).

Let's try running a duplicate Sinatra application on port 4568. We can tweak the text a little bit so that it indicates it's the second Sinatra application. Let's also add an additional server directive to our upstream block:

Screen Shot 2015-12-13 at 1.55.56 AM.png
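In text form, our upstream block now carries both servers:

upstream app1 {
 server localhost:4567;
 server localhost:4568;
}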

 

Refreshing our page multiple times, we can see that some requests get forwarded to our first Sinatra application while some to the second one at port 4568.

output_yvMZwp.gif

In case we want more traffic to be redirected to one of our servers, we can add a weight attribute to the server directive in our upstream block:

server localhost:4567 weight=2;
server localhost:4568;

So with this, for every three requests, two go to port 4567 and one goes to port 4568.

f. Specifying Path for ELB Health Check

In our previous post on load balancers, we talked about health checks, which load balancers use to determine whether our server is still up. In one of our examples, we set our health check to be the following:

Screen Shot 2015-12-13 at 6.09.39 PM.png

This is OK but could still be optimized. We can tell Nginx to return a success status to AWS immediately instead of passing the request to our server to be processed and loaded (i.e. / loads the homepage, which could pull in image, CSS, JS, and other files) before a response makes it back to AWS. Loading the whole page just adds unnecessary load to our server.

We can tell Nginx to immediately return an OK status by adding the following location block inside the server block of our sites-enabled/default file:

location /elb-status {
 return 200 'Alive!';
 add_header Content-Type text/plain;
}

What we did above is add a specific location block so that when health checks are made at /elb-status, a 200 is immediately returned. For the ELB to reach this, make sure you set the Health Check Ping Path to /elb-status and the Port to 14344 in your ELB configuration.
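Before pointing the ELB at it, we can verify the endpoint from the server itself (a quick sketch using curl):

curl -i http://localhost:14344/elb-status
# Expect an HTTP/1.1 200 OK response with the body 'Alive!'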

In adding location blocks, we must be careful about their sequence. If multiple regular-expression location blocks (like our ~* ones above) match a certain request, only the first one is considered and the succeeding ones are ignored.

For reference, here is the final sequence of blocks that we built up in this tutorial:

upstream app1 {
 server localhost:4567;
 server localhost:4568;
}

server {
 listen 14344 default_server;
 listen [::]:14344 default_server ipv6only=on;

 root /srv/my-app/public;
 # root /usr/share/nginx/html;
 index index.html index.htm;
 # Make site accessible from http://localhost/
 server_name 54.201.203.28;

 location /elb-status {
 return 200 'Alive!';
 add_header Content-Type text/plain;
 }

 location ~* \.(jpg|jpeg|png|gif)$ {
 expires 30d;
 }


 location / {
 # First attempt to serve request as file, then
 # as directory, then fall back to displaying a 404.
 # try_files $uri $uri/ =404;
 # Uncomment to enable naxsi on this location
 # include /etc/nginx/naxsi.rules
 proxy_pass http://app1;
 }

 # Only for nginx-naxsi used with nginx-naxsi-ui : process denied requests
 #location /RequestDenied {
 # proxy_pass http://127.0.0.1:8080;
 #}

 error_page 404 /my_404.html;
 error_page 500 /50x.html;

...

So there, we saw what Nginx is and the motivations behind its birth, as well as some of the configurations we can change to better suit our needs. Thank you so much for reading! 🙂

P.S. Recently, I've been very privileged to get to know a lot of technologies and write posts about them (hehe, you may have noticed the influx of technical posts these past weeks). Haha, those wouldn't have been possible without the guidance and mentorship of the super awesome Joshua Lat! Super thank you, Josh, for all the guidance and support! 🙂
Thanks again, dear reader and ’til next time!

Sources:

  1. Nginx HTTP Server, 2nd edition by Clement Nedelcu
  2. DigitalOcean: Custom Nginx Error Pages on Ubuntu
  3. DigitalOcean: Nginx Configuration Optimization
  4. DigitalOcean: Reverse Proxy vs Load Balancer
  5. DigitalOcean: Reverse Proxy Server